Full GPU passthrough for any CUDA-capable workload inside Docker running in an unprivileged LXC.

Tested: Proxmox 9 / Debian Bookworm
Applies to: any NVIDIA GPU
- Host owns NVIDIA kernel driver and devices
- LXC installs user-space driver only
- Docker exposes GPU to containers
- `no-cgroups = true` is REQUIRED for unprivileged LXC
- `/dev/nvidia-uvm` MUST exist or CUDA workloads fail
Docker GPU access is provided via NVIDIA Container Toolkit, which dynamically exposes devices and driver libraries at runtime ([NVIDIA Developer][2])
```
apt install -y dkms pve-headers build-essential libvulkan1

cat <<EOF > /etc/modprobe.d/blacklist-nvidia-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF

update-initramfs -u
reboot
```

NVIDIA recommends package-managed installs where possible. This guide uses `.run` for consistency across Proxmox and mixed LXC environments.
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/580.142/NVIDIA-Linux-x86_64-580.142.run
chmod +x NVIDIA-Linux-x86_64-580.142.run
./NVIDIA-Linux-x86_64-580.142.run --dkms
```

Selections:

- No 32-bit
- No X
- DKMS = YES

Verify:

```
dkms status
nvidia-smi
```

```
# REQUIRED for CUDA workloads (creates /dev/nvidia-uvm)
modprobe nvidia_uvm
```

Optional (safe but not required):

```
modprobe nvidia
modprobe nvidia_modeset
modprobe nvidia_drm
```

```
cat <<EOF > /etc/systemd/system/nvidia-persistenced.service
[Unit]
Description=NVIDIA Persistence Daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/nvidia-persistenced --user root
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now nvidia-persistenced
```

```
# Ensure required devices exist BEFORE generation
modprobe nvidia_uvm
```
```
echo "lxc.cgroup2.devices.allow: c 226:* rwm"
echo "lxc.cgroup2.devices.allow: c 195:* rwm"
echo "lxc.cgroup2.devices.allow: c 509:* rwm"
echo "lxc.cgroup2.devices.allow: c 234:* rwm"
echo "lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir"

ls -l /dev/nvidia* | awk '/^crw/ {
  dev=$NF; gsub(/.*\//,"",dev);
  print "lxc.mount.entry: /dev/"dev" dev/"dev" none bind,optional,create=file"
}'
```

Note: majors 195 (nvidia) and 226 (drm) are fixed, but the nvidia-uvm and nvidia-caps majors (509 and 234 here) are allocated dynamically; confirm yours with `ls -l /dev/nvidia*` on the host.

Copy the output into `/etc/pve/nodes/pve/lxc/<CTID>.conf`.
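For reference, the generated block pasted into the container config looks roughly like this (majors 509 and 234 come from this particular host; substitute the values your own `ls -l /dev/nvidia*` reports):

```
# /etc/pve/nodes/pve/lxc/<CTID>.conf — GPU lines only, majors are host-specific
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```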
Required device nodes inside the container:

```
/dev/nvidia0
/dev/nvidiactl
/dev/nvidia-modeset
/dev/nvidia-uvm
/dev/nvidia-uvm-tools
/dev/dri
```

- Missing `/dev/nvidia-uvm` → Docker GPU access fails, even if `nvidia-smi` works
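A quick way to check all of them at once from inside the container — a minimal sketch (`check_devs` is a hypothetical helper, not part of any NVIDIA tooling; it takes an optional `/dev` root so it can also be exercised against a mock directory):

```shell
# check_devs: report any GPU device node missing under the given /dev root
check_devs() {
  base="${1:-/dev}"
  missing=0
  for d in nvidia0 nvidiactl nvidia-modeset nvidia-uvm nvidia-uvm-tools dri; do
    if [ ! -e "$base/$d" ]; then
      echo "MISSING: $base/$d"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all GPU device nodes present"
  return "$missing"
}
```

Run `check_devs` with no argument inside the LXC; any `MISSING:` line (especially `nvidia-uvm`) means the bind mounts or the host-side `modprobe` need fixing.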
```
apt install -y libvulkan1 curl gpg

wget https://us.download.nvidia.com/XFree86/Linux-x86_64/580.142/NVIDIA-Linux-x86_64-580.142.run
chmod +x NVIDIA-Linux-x86_64-580.142.run
./NVIDIA-Linux-x86_64-580.142.run --no-kernel-module
```

The user-space driver version must match the host kernel driver exactly.

```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

apt update
apt install -y \
  nvidia-container-toolkit \
  nvidia-container-toolkit-base \
  libnvidia-container-tools \
  libnvidia-container1
```

```
sed -i 's/^#\?no-cgroups.*/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml
```

- MUST be set before configuring the runtime
- Required due to LXC cgroup limitations
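After the `sed`, the relevant section of `/etc/nvidia-container-runtime/config.toml` should read as follows (the stock toolkit config keeps this key under `[nvidia-container-cli]`):

```
[nvidia-container-cli]
no-cgroups = true
```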
```
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
```

Verify:

```
docker info | grep -i runtime
nvidia-smi
docker run --gpus all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi
```

In Compose files, use:

```
gpus: all
```

Fallback only:

```
runtime: nvidia
```

Do not mix the two arbitrarily.
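In a Compose file the two options look like this — a minimal sketch (the service name is illustrative; the service-level `gpus` attribute requires a recent Compose version):

```
services:
  cuda-app:
    image: nvidia/cuda:12.6.1-base-ubuntu24.04
    command: nvidia-smi
    gpus: all
    # Fallback only, for older Compose versions:
    # runtime: nvidia
```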
GPU selection is handled via Docker/NVIDIA runtime (--gpus) or NVIDIA_VISIBLE_DEVICES ([NVIDIA Docs][3])
- Do NOT install kernel modules inside the LXC
- Driver versions must match between host and LXC
- Always generate the LXC config dynamically
- `/dev/nvidia-uvm` is required for CUDA
- `no-cgroups = true` is mandatory
- Docker automatically mounts the required NVIDIA driver libraries
If GPU access fails:

```
modprobe nvidia_uvm
```

Verify:

```
ls -l /dev/nvidia-uvm
```

A working system must have:
- GPU visible on host
- GPU visible in LXC
- Docker `--gpus all` works
- Containers can use CUDA
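One closing note: the `modprobe nvidia_uvm` fix does not persist across host reboots on its own. A minimal persistence sketch, assuming the standard systemd `modules-load.d` mechanism on the host:

```
# /etc/modules-load.d/nvidia-uvm.conf — load nvidia_uvm at every boot
nvidia_uvm
```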