Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I’m trying to migrate services from my NAS (currently docker) to this machine.
How should Jellyfin be set up, LXC or VM? I don’t have a preference, but I do plan on using several Docker containers (assuming I can get this working within 28 days) in case that makes a difference. I tried WunderTech’s setup guide, which used one LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn’t working for me: curl doesn’t work on my machine, most install scripts fail, nano edits crash, and mounts are inconsistent.
My Synology NAS is mounted to the host, but making mount points to the lxc doesn’t actually connect data. For example, if my NAS’s media is in /data/media/movies or /data/media/shows and the host’s SMB mount is /data/, choosing the lxc mount point /data/media should work, right?
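(For anyone else hitting this: a host-to-LXC bind mount is normally added from the Proxmox host with `pct`, not from inside the container. A minimal sketch, assuming container ID 101 and that `mp0` is unused; adjust both to your setup:)

```shell
# Run on the Proxmox host, not inside the container.
# Bind-mounts the host path /data/media to /data/media inside CT 101.
pct set 101 -mp0 /data/media,mp=/data/media

# Verify the entry was written to the container config:
grep mp0 /etc/pve/lxc/101.conf

# Restart the container so the mount takes effect:
pct reboot 101
```

If the SMB share isn't mounted on the host *before* the container starts, the bind mount will look empty, which matches the "mount points don't actually connect data" symptom.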
Is there a way to enable iGPU to pass to an lxc or VM without editing a .conf in nano? When I tried to make suggested edits, the lxc freezes for over 30 minutes and seemingly nothing happens as the edits don’t persist.
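(One way to avoid nano entirely: on Proxmox VE 8 and newer, `pct set` can pass device nodes into an unprivileged LXC directly. A sketch, assuming CT ID 101; the group IDs are common Debian defaults, so check them inside your container first:)

```shell
# Run on the Proxmox host. Check the group IDs inside the container with:
#   getent group render video
# 104 (render) and 44 (video) are typical Debian defaults, not guaranteed.
pct set 101 --dev0 /dev/dri/renderD128,gid=104
pct set 101 --dev1 /dev/dri/card0,gid=44

# Confirm the devN lines landed in the config, then restart the container:
cat /etc/pve/lxc/101.conf
pct reboot 101
```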
Any suggestions for resource allocation? I’ve been looking for guides or a formula to follow for what to provide an lxc or VM to no avail.
If you suggest command lines, please keep them simple as I have to manually type them in.
Here’s the hardware:
- Intel i5-13500
- 64GB Crucial DDR5-4800
- ASRock B760M Pro RS
- 1TB WD SN850X NVMe


Great!
Transcoding we should be able to sort out pretty easily. How did you make the lxc? Was it manual, did you use one of the proxmox community scripts, etc?
For transferring all your JF goodies over, there are a few ways you can do it.
If both are on the NAS - I believe you said you have a Synology - you can go to the browser at http://nasip:5000/ and just copy around what you want, if it’s stored on the NAS as a mount and not inside the container. If it’s inside the container only, it’s going to be a bit trickier: something like mounting the host as a volume on the container, copying to that mount, then moving things around. Even Jellyfin says it’s complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that could be rough.
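If it does turn out the config lives inside the Docker container's volume, one rough sketch (all paths and the target IP are assumptions; find the real volume path with `docker inspect`, and note the Debian-package Jellyfin keeps its data under /var/lib/jellyfin):

```shell
# On the Synology (old Docker host). Find the real config volume path first:
#   docker inspect jellyfin --format '{{ json .Mounts }}'
# Stop the container so the SQLite databases aren't mid-write:
docker stop jellyfin

# Copy the config over to the new LXC (example path and IP; assumes SSH
# access to the LXC as root at 192.168.1.50):
rsync -a /volume1/docker/jellyfin/config/ root@192.168.1.50:/var/lib/jellyfin/
```

Per the Jellyfin migration doc above, expect to fix up ownership (`chown -R jellyfin:jellyfin /var/lib/jellyfin`) and possibly paths afterward.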
The other option is to bring your Docker container over to a new VM, but then you’ve got a new complication: you’d need to pass through the GPU entirely, rather than just giving the LXC access to the host’s resources, which is much simpler IMO.
I used the community script’s lxc for jelly. With that said, the docker compose I’ve been using is great, and I wouldn’t mind just transferring that over 1:1 either…whichever has the best transcoding and streaming performance. Either way, I’m unfortunately going to need a bit more hand-holding
LXC is going to be better, IMO. And we can definitely get hardware acceleration going.
So first, let’s do this from the console of the lxc:
ls -la /dev/dri/

Is there something like card0 and renderD128 listed?
LXC is fine with me, the “new Jellyfin” instance is mostly working anyway. It just has a few issues:
And yes, I see card0 and renderD128 entries. ‘vainfo’ shows VA-API version: 1.20 and Driver version: Intel iHD driver…24.1.0
Ok, let’s start with that rendering - seeing those entries is good! You should only need to add some group access, so run this:

groups jellyfin

The output should just say “jellyfin” right now. That’s the user that runs the Jellyfin service. So let’s go ahead and add it to the GPU groups:

usermod -a -G video,render jellyfin
groups jellyfin

You should now see the jellyfin user as a member of jellyfin, video, and render. That gives the jellyfin user access to the GPU for hardware acceleration.
Now restart that jellyfin and try again!
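(To restart it from the LXC console - assuming the community-script install, which runs Jellyfin as a systemd service:)

```shell
# Inside the Jellyfin LXC console:
systemctl restart jellyfin

# Confirm it came back up cleanly:
systemctl status jellyfin --no-pager
```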
Ok, consider it done! My concern is this section of the admin settings:
I followed Intel’s decode/encode specs for my CPU, but there’s no feedback on my selection. I’m still getting “Playback failed due to a fatal player error.”
What do you have above that?
There should be a hardware acceleration dropdown, and then a device below that. Since you have /dev/dri/renderD128, that should be in the “device” field, and the Hardware Acceleration dropdown should be QSV or VAAPI (if one doesn’t work, do the other)
QSV and ‘/dev/dri/renderD128’. I’ll switch to VAAPI and see… Edit: no luck, same error
Just checked one of mine, VAAPI is where I’m set, with acceleration working. 7th or 8th gen or so on that box, so VAAPI should do the trick for you.