Plex GPU Transcoding with Nvidia in Docker - The Right Way

A practical guide to enabling Nvidia GPU hardware transcoding in Plex Media Server running in Docker

I've been deep into self-hosting for about 5 years now, and Plex has always been the crown jewel of my setup. My old rig was this tiny mini PC with an Intel i3 (4 cores) and 16GB of RAM, running everything through Proxmox - two VMs, one handling all my Docker containers including Plex. Worked like a charm for what I needed.

But last year I finally treated myself to a proper PC with an Nvidia GPU. I knew I was probably signing up for some Linux driver headaches, especially since I had my heart set on a Hyprland setup. Still, the promise of hardware transcoding was too tempting.

That's when the real fun began.

I'd been running Plex in Docker for ages - honestly, it's the only sane way to manage it. But getting that shiny new GPU to actually do transcoding work? That nearly drove me to tears.

The old Intel setup just chugged along with CPU transcoding, and I didn't think much of it. But now I had this powerful GPU sitting there doing absolutely nothing while my CPU was still melting trying to transcode a 4K movie to my phone.

I wasted several evenings following guides that told me to use privileged: true or manually mount /dev/nvidia* devices. Half of them didn't work at all; the other half basically gave root access to everything. Great security model there.

The worst part? The GPU would show up inside the container - you'd run nvidia-smi and think "awesome, it's there!" - but Plex would still be grinding away with CPU transcoding like it was 2010.

Running this on Arch with Hyprland because I apparently enjoy making my life harder. The method should work on any Linux distro where you can get proper Nvidia drivers running, though - Ubuntu users, you're not missing out on the pain.


What was actually going wrong

Coming from my old Intel setup where CPU transcoding "just worked" (if you can call waiting a bit for a stream to start "working"), I thought GPU transcoding would be straightforward. Boy, was I wrong.

After banging my head against this for way too long, I realized most tutorials fall into the same traps:

Option 1: privileged: true - Sure, you get all the device files, but the Nvidia userspace libraries? Nowhere to be found.

Option 2: Manual device mounting - Same problem, except now you're also hardcoding device paths like /dev/nvidia0. What happens when you add another GPU? Exactly.

Both approaches give you a container where /dev/nvidia* exists and nvidia-smi runs, but Plex can't actually talk to the GPU because the driver libraries aren't there. It's like having a Ferrari in your garage but the keys are locked inside the house.

Going from my reliable old Intel setup to this was frustrating - I went from "slow but works" to "fast hardware but completely broken."


The solution that finally worked

After diving way too deep into Docker's GPU support documentation (and seriously considering going back to my old Intel setup), I figured out you need exactly three things:

  1. Nvidia Container Toolkit installed and properly configured (this is the important bit)
  2. Docker's deploy.resources.reservations syntax for GPU passthrough
  3. The right environment variables in your container

Getting your system ready

Install the packages

On Arch (because of course):

# Nvidia drivers - skip this if you already have them
sudo pacman -S nvidia nvidia-utils

# The magic sauce
sudo pacman -S nvidia-container-toolkit
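
On Debian or Ubuntu the toolkit package has the same name, but it lives in Nvidia's own apt repository rather than the default repos. A rough sketch, assuming you've already added that repository per the official toolkit docs mentioned just below:

# assumes Nvidia's apt repository is already configured (see the toolkit docs)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit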

Check out the official Nvidia Container Toolkit docs. They've got instructions for pretty much every package manager.

Make Docker play nice with Nvidia

You need to tell Docker about the Nvidia runtime. Edit or create /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
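
If you'd rather not hand-edit that file, recent versions of the toolkit ship a helper that writes the same runtime entry for you:

# generates/updates /etc/docker/daemon.json with the nvidia runtime entry
sudo nvidia-ctk runtime configure --runtime=docker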

Restart Docker (yeah, I know, restarting services is annoying):

sudo systemctl restart docker

Quick sanity check

Before we go any further, make sure everything's working:

# Check if Docker knows about the Nvidia runtime
docker info | grep -i runtime
# You want to see: Runtimes: nvidia runc io.containerd.runc.v2

# Make sure your GPU is still alive
nvidia-smi

If nvidia-smi isn't working, fix that first. No point in debugging Docker if your drivers are broken.
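
If you want to prove the whole Docker-to-GPU path before touching Plex at all, a throwaway container is a nice extra sanity check - any small image will do, since the toolkit injects nvidia-smi and the driver libraries for you (assuming the daemon.json change above took):

# should print the same nvidia-smi table you see on the host
docker run --rm --gpus all ubuntu nvidia-smi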


The Docker Compose setup

Alright, here's the configuration that finally worked. I'm using the deploy.resources.reservations approach because it's the modern way to do this:

plex:
  image: lscr.io/linuxserver/plex:latest
  container_name: plex
  restart: unless-stopped
  ports:
    - "32400:32400/tcp"
    - "3005:3005/tcp"
    - "8324:8324/tcp"
    - "32469:32469/tcp"
    - "1900:1900/udp"
    - "32410:32410/udp"
    - "32412:32412/udp"
    - "32413:32413/udp"
    - "32414:32414/udp"
  volumes:
    - plex:/config
    - /path/to/media:/media
    - plex-transcode:/transcode
  environment:
    - TZ=Europe/Bucharest
    - PUID=0
    - PGID=0
    - VERSION=docker
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=video,compute,utility
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [video,compute,utility]

The ports section is the standard Plex setup - nothing special there. The volumes are where I mount my media and config. But the real magic happens in those environment variables and the deploy section.
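
One note if you copy this fragment as-is: plex and plex-transcode are named volumes, so they also need to be declared at the top level of your compose file (or swapped out for bind mounts) - something like:

volumes:
  plex:
  plex-transcode: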

Why this works (the technical bit)

This deploy.resources.reservations section is what finally made everything click:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [video,compute,utility]

When Docker sees this, it actually uses the Nvidia Container Runtime instead of the default one. This runtime does three critical things:

  • Mounts all the Nvidia userspace libraries into the container
  • Exposes the GPU devices at /dev/nvidia*
  • Sets up the proper permissions for video encoding/decoding

Important gotcha: Don't use capabilities: [gpu] like some guides suggest. I spent an hour debugging why that didn't work - turns out you need [video,compute,utility] for transcoding workloads specifically.


Double-checking the GPU works

Once your container is up, run these commands to make sure everything's actually working:

# Are the Nvidia devices there?
docker exec plex ls -la /dev/nvidia*

# This is the critical one - are the libraries mounted?
docker exec plex bash -c "ldconfig -p | grep -i nvidia"

# Can we talk to the driver?
docker exec plex cat /proc/driver/nvidia/version

The second command is the one that matters most. If ldconfig -p | grep nvidia comes back empty, the Nvidia Container Runtime isn't being used properly. Go back and check your daemon.json setup.
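
For a more targeted version of that check: the video capability is what pulls the NVENC/NVDEC libraries (libnvidia-encode and libnvcuvid) into the container, and those are the ones Plex's transcoder actually loads. So this is the grep I'd trust most, assuming your container is named plex like above:

# both libraries should show up once the Nvidia runtime is doing its job
docker exec plex bash -c "ldconfig -p | grep -E 'libnvidia-encode|libnvcuvid'"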


Making Plex use the GPU

The Plex side is pretty straightforward:

  1. Open the web UI at http://your-server:32400/web
  2. Hit Settings > Transcoder
  3. Check "Use hardware acceleration when available"
  4. Check "Use hardware-accelerated video encoding"
  5. Save it

Nothing fancy here. Plex should automatically detect the GPU once the container has proper access.
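
If you'd rather confirm the setting stuck without clicking through the UI, Plex stores it in Preferences.xml inside the config volume. Treat the attribute names here as an assumption on my part - they look like HardwareAcceleratedCodecs and HardwareAcceleratedEncoders, but they can shift between Plex versions:

# prints the hardware-acceleration flags from Plex's config (attribute names may vary by version)
docker exec plex grep -o 'HardwareAccelerated[^ ]*' "/config/Library/Application Support/Plex Media Server/Preferences.xml"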


Testing if it's actually working

Now for the moment of truth. Start playing something that needs transcoding - maybe force a lower quality stream on your phone or tablet - and run:

watch -n 1 nvidia-smi

Here's what you want to see:

  • GPU power state drops from P8 (basically sleeping) down to P1 or P2 (actually working)
  • Power draw jumps from the idle ~14W up to 30-50W range
  • Encoder and decoder utilization show something other than zero
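
If you want just those encoder/decoder numbers without the full nvidia-smi dashboard, the device monitoring mode works nicely too (the column layout can differ a bit between driver versions):

# the enc and dec columns show NVENC/NVDEC utilization, refreshed every second by default
nvidia-smi dmon -s u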

In Plex's dashboard, any active transcoding sessions will have (hw) next to them. That's your confirmation that it's using the GPU instead of grinding away on the CPU.

The first time I saw that "(hw)" tag, I actually did a little victory dance. After all those evenings of frustration and missing my simple Intel setup, it was finally working better than I'd imagined.


When things go sideways

GPU shows up but no libraries

What you see: /dev/nvidia* devices exist, but ldconfig -p | grep nvidia returns nothing.

What's wrong: Docker isn't using the Nvidia Container Runtime. Check that you have the deploy.resources.reservations section in your compose file, not just environment variables.

"could not select device driver nvidia"

This error message is basically Docker's way of saying "I don't understand what you want."

Fix: Change capabilities: [gpu] to capabilities: [video,compute,utility]. The generic gpu capability doesn't work for transcoding.

GPU works but Plex ignores it

Most frustrating scenario. Your nvidia-smi shows the GPU, all the libraries are there, but Plex keeps hammering the CPU anyway.

Check these things:

  1. Hardware transcoding is actually enabled in Plex settings (sounds obvious, but I forgot this once)
  2. The codec you're trying to transcode actually supports hardware acceleration
  3. Plex logs at /config/Library/Application Support/Plex Media Server/Logs/ - they'll tell you exactly why it's falling back to software transcoding
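
For that last one, grepping the main server log is usually quicker than scrolling through it. The exact log strings vary between Plex versions, so treat this pattern as a starting point rather than gospel:

# look for transcoder decisions and any mention of nvenc in the current log
docker exec plex grep -iE 'nvenc|hardware transcoding' "/config/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"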

What not to do (lessons from my failures)

  • Skip privileged: true - It's a security nightmare and doesn't even solve the library mounting problem
  • Don't manually mount GPU devices - Use the proper runtime instead of hardcoding /dev/nvidia* paths
  • Avoid runtime: nvidia - Works but is legacy. The new deploy.resources.reservations approach is cleaner

I tried all of these before finding the right way. Save yourself the time.


That's it. No more CPU transcoding, no more overheated servers, no more three-minute waits for a 30-second preview to load. My new setup finally lives up to the hardware upgrade.

The GPU transcoding setup looks complicated when you're staring at failed configs at 11 PM, missing the simplicity of your old Intel setup. But once you understand what Docker actually needs to make the Nvidia runtime work, it's pretty straightforward.

Now go enjoy your hardware-accelerated Plex setup. Trust me, after dealing with software transcoding for years, you've more than earned this upgrade.


If you found this useful, you might want to check out some of my other Docker and self-hosting guides.