I am running Ubuntu with CasaOS. I was previously using an Intel iGPU for transcoding (the exact model escapes me; I'll update the post when I can). Recently I got a GTX 1650 that I installed for NVENC transcoding. All the proper drivers seem to be installed, but my Jellyfin container still fails playback any time hardware transcoding is turned on.
I have reinstalled the container with the NVIDIA device variable set, and no dice. I have also tried installing the NVIDIA Container Toolkit, but that didn't work either. I am at a loss for how to get NVENC working.
Any help is appreciated!
EDIT: Here is the FFmpeg log file
EDIT 2: It was a problem with my Docker Compose file! I hadn't listed the devices required by the Jellyfin documentation. I thought the container was detecting the GPU, but it wasn't. docker exec <container-name> nvidia-smi is your friend!
EDIT 3: It no longer kicks me out saying playback failed, but 4K media now plays as just a black screen.
EDIT 4: Never mind, I just didn't have the proper Jellyfin transcoding settings set lol
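For anyone landing here with the same problem, here is a minimal sketch of the GPU-related part of a Jellyfin compose file, assuming the NVIDIA Container Toolkit is installed and the nvidia runtime is configured; it is not necessarily the exact stanza from the Jellyfin docs, and the names are illustrative:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    runtime: nvidia                  # requires the NVIDIA Container Toolkit on the host
    environment:
      NVIDIA_VISIBLE_DEVICES: all    # expose all GPUs to the container
      NVIDIA_DRIVER_CAPABILITIES: all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Once the container is up, docker exec <container-name> nvidia-smi should print the same table as on the host; if it errors out, the GPU is not actually being passed through.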
So it’s a container on Ubuntu? Official, linuxserver, or hotio? What version of Ubuntu and Jellyfin? Does the GPU work on the host? Have you followed this guide? https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia#configure-with-linux-virtualization
Yes, it is a container on Ubuntu; it seems to be the official container from the CasaOS store. It is labeled "Jellyfin (Nvidia GPU)". The GPU seems to be working fine on the host. Ubuntu 20.04.
Yeah, I have already tried following that guide.
Their wiki says it’s the linuxserver image, so you should follow their guide: https://hub.docker.com/r/linuxserver/jellyfin
But I don’t know why it would be labeled “Nvidia GPU” if it’s just the standard lsio image. You might want to try a CasaOS support channel. It’s weird to me that an official image would complain about missing libraries.
Does nvidia-smi work on the host? Does nvidia-smi work in the container (docker exec <container_id> nvidia-smi)?
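Roughly, the two checks look like this (the container name/ID is a placeholder):

```bash
# on the host
nvidia-smi

# inside the container (replace with your container name or ID)
docker exec <container_id> nvidia-smi
```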
Output of nvidia-smi on the host:
Output of nvidia-smi in the container:
OCI runtime exec failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown
EDIT:
I tried it again with the Jellyfin docker compose command, and this is what I get:
Failed to initialize NVML: Unknown Error
EDIT 2: I fixed it! I am now getting the same output for both.
What's the output of nvidia-smi on the host? What is "casa os"? Do you have the NVIDIA container runtime installed? What does your docker-compose.yml look like?
For me on Debian it worked pretty much out of the box: I just installed the CUDA toolkit, blacklisted nouveau, and added this to docker-compose.yml:
```yaml
environment:
  NVIDIA_DRIVER_CAPABILITIES: all
  NVIDIA_VISIBLE_DEVICES: all
deploy:
  resources:
    reservations:
      devices:
        - capabilities: [gpu]
```
It works both in Immich and Jellyfin.
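In case it helps, blacklisting nouveau is usually done with a modprobe config like the one below plus a reboot (the file name is just a convention):

```bash
# blacklist the open-source nouveau driver so the NVIDIA driver can take over
printf 'blacklist nouveau\noptions nouveau modeset=0\n' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf

# rebuild the initramfs so the blacklist takes effect at boot, then reboot
sudo update-initramfs -u
sudo reboot
```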
CasaOS is just a server UI for Ubuntu that manages Docker and other settings:
Here is the output of nvidia-smi (sorry for the picture of my monitor; it's late, I'm blanking on my Lemmy password, and I can't sign in on the PC itself to send a screen cap lol):
I have tried several different Jellyfin images posted in the CasaOS app store, along with the official NVIDIA documentation. It still does not want to work:
```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    network_mode: 'host'
    volumes:
      - /DATA/AppData/jellyfin/config:/config
      - /DATA/AppData/jellyfin/cache:/cache
      - /DATA/AppData/jellyfin/media:/media
      - /mnt/drive1/media:/mnt/drive1/media
      - /mnt/drive2/Jellyfin:/mnt/drive2/Jellyfin
      - /mnt/drive3:/mnt/drive3
      - /mnt/drive4/media:/mnt/drive4/media
      - /mnt/drive5/jellyfin:/mnt/drive5/jellyfin
      - /mnt/drive6/jellyfin:/mnt/drive6/jellyfin
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
            - /dev/nvidia-caps:/dev/nvidia-caps
            - /dev/nvidia0:/dev/nvidia0
            - /dev/nvidiactl:/dev/nvidiactl
            - /dev/nvidia-modeset:/dev/nvidia-modeset
            - /dev/nvidia-uvm:/dev/nvidia-uvm
            - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
              count: all
              capabilities: [gpu]
```
Have you tried removing all that stuff?
I’m talking about:
```yaml
- driver: nvidia
- /dev/nvidia-caps:/dev/nvidia-caps
- /dev/nvidia0:/dev/nvidia0
- /dev/nvidiactl:/dev/nvidiactl
- /dev/nvidia-modeset:/dev/nvidia-modeset
- /dev/nvidia-uvm:/dev/nvidia-uvm
- /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
count: all
```
And just leave it with:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - count: all
          capabilities: [gpu]
```
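Those raw /dev/nvidia* entries aren't valid inside deploy.resources.reservations; if you ever do want to pass the device nodes directly instead of relying on the nvidia runtime, they would go under the service-level devices: key, roughly like this (sketch only, whether you need it at all depends on your setup):

```yaml
services:
  jellyfin:
    # ... image, volumes, etc.
    devices:
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia-uvm:/dev/nvidia-uvm
```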
Finally, make sure you have the NVIDIA Container Toolkit installed.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
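The gist of the install on Ubuntu, assuming the NVIDIA apt repository has already been added as described in that guide:

```bash
# install the toolkit from NVIDIA's repository
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# register the nvidia runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```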
Make sure to then test with:
```bash
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
As per:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html
EDIT: Never mind, I see you fixed it! Could you post your final docker-compose.yml for reference? I would like to dig in and figure out why some people need to add more to their docker-compose.yml and some don't.