Running GUI apps from Linux containers

I was trying to run Kodi and some other GUI apps from LXD containers on Arch Linux but ran into a few small problems, so I decided to document the process in case I need it again in the future. The same approach should work for Docker and other types of Linux containers.

UID mapping

When it comes to running containers, it is recommended to use user namespace remapping (see man subuid) so that different containers get different UID ranges on the host. This way container processes are better isolated.

In cases where you use containers simply to run different software versions and don’t care about the added security of namespace remapping, you can allow reusing the same UIDs inside the container. This makes it easy to mount host directories inside the container without any additional work to deal with file permissions.

So if you want to reuse the same user ID, append additional mapping lines to the subuid/subgid files:

echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
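The line appended above grants root one extra single-ID mapping: your own UID/GID. A sketch of what the files end up containing, run against temporary copies so the real /etc files stay untouched (assuming $UID is 1000 and the default LXD range of 100000:65536):

```shell
# Work on temporary copies instead of /etc/subuid and /etc/subgid
subuid=$(mktemp)
subgid=$(mktemp)
echo "root:100000:65536" > "$subuid"   # default range LXD already uses
echo "root:100000:65536" > "$subgid"

# Same append as above, pointed at the copies
echo "root:1000:1" | tee -a "$subuid" "$subgid" > /dev/null

cat "$subuid"
# root:100000:65536
# root:1000:1
```

The second line means root may now also map host UID 1000 into containers.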

Create container

lxc launch images:archlinux/current/amd64 gui

If you decided to use specific UID mapping, configure the container to use it:

lxc config set gui raw.idmap "both $UID 1000"
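With the idmap in place, host directories can be shared directly and file ownership just works; for example (device name and paths here are hypothetical):

```shell
# Share ~/Videos from the host as /media/Videos inside the container;
# with matching UIDs the container user owns the files as-is
lxc config device add gui videos disk source=/home/me/Videos path=/media/Videos
```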

For Kodi to use hardware-accelerated graphics, we want to share the GPU with the container:

lxc config device add gui mygpu gpu
lxc config device set gui mygpu uid 1000
lxc config device set gui mygpu gid 1000

In my case the host is running an X server, so we have to share the Xorg socket with the container to be able to draw on it:

lxc config device add gui X0 proxy connect=unix:/tmp/.X11-unix/X0 listen=unix:/tmp/.X11-unix/X0 bind=container uid=1000 gid=1000 mode=0666

Permanently add the DISPLAY variable to the container’s environment, then restart the container:

lxc config set gui raw.lxc 'lxc.environment = DISPLAY=:0'

You must install the same version of the graphics drivers in the container as on the host. When both host and container are Arch Linux, or when open source drivers are used, this is taken care of automatically; but if you want to use an Ubuntu container and you have an Nvidia card with proprietary drivers, you have to install them in the container yourself:

sudo apt-get install ubuntu-drivers-common software-properties-common
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo ubuntu-drivers devices
sudo apt-get install nvidia-driver-450

Allow access to Xorg on host

Not being able to access the host’s Xorg was the reason it didn’t work for me from the start. You have to allow connecting to X on the host even when using the Unix socket:

xhost +local:gui

Here gui is the hostname of the container (note that with the local: family xhost effectively allows all local non-network connections, regardless of the name given).

This has to be run each time X is restarted (unless configured permanently).
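One way to make it permanent, assuming you start X with startx, is to run it from ~/.xinitrc (display-manager setups would use their own autostart mechanism instead):

```shell
# Append the xhost call to ~/.xinitrc unless it is already there
grep -qs 'xhost +local:' ~/.xinitrc || echo 'xhost +local:gui' >> ~/.xinitrc
```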

Using xhost + disables permission checking altogether and can be useful while testing. Run man xhost to see other configuration options.

Add audio support

Mount the PulseAudio socket into the container to be able to play audio:

lxc config device add gui PASocket proxy connect=unix:/run/user/1000/pulse/native listen=unix:/tmp/.pulse-native bind=container uid=1000 gid=1000 mode=0666

Install PulseAudio in the container:

sudo pacman -S pulseaudio

Inside the container, make PulseAudio use the already mounted socket:

sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf
echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile
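The same append can be made idempotent so repeated runs don’t stack duplicate lines; a sketch against a scratch file (point PROFILE at the real profile instead):

```shell
# Append the export once, skipping it when the exact line already exists
PROFILE=$(mktemp)
LINE='export PULSE_SERVER=unix:/tmp/.pulse-native'

grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"   # no-op on the second run

. "$PROFILE"
echo "$PULSE_SERVER"
# unix:/tmp/.pulse-native
```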

Now pactl info inside the container should show that it is connected.

Test it

Connect to the container and make sure the /tmp/.X11-unix/X0 socket is available inside. Then try running some program.
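For example, something like this (assuming a container user named user with UID 1000, and glxgears from the mesa-demos package installed in the container):

```shell
lxc exec gui -- sudo -u user env DISPLAY=:0 glxgears
```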

If you get this:

No protocol specified
Error: couldn't open display :0

… then you have to adjust the permissions with xhost on the host.


This is probably going to be impossible with Wayland out of the box, but it is not like Wayland is really production ready, even in 2020. šŸ™‚


How to easily run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

LXD on ArchWiki

Routing traffic through Wireguard peers

Wireguard is an amazing VPN solution that in 10 years’ time will be the go-to VPN solution for most. If you haven’t heard of it yet, go check it out.

Common scenario

I had a situation where clients want to access an internal network which is behind NAT, and no incoming ports can be opened to the outside world.

Solution approach

The solution was simple – set up Wireguard on a server elsewhere, make clients connect to that Wireguard server, and tell it to route traffic through one of the peers inside the aforementioned LAN.

Intricate details

While it seems simple, I initially misconfigured a parameter, which stopped the setup from working as intended. The culprit was the AllowedIPs parameter on the server.

After the internal address of the peer on the AllowedIPs line, you can append multiple subnets that can and will be routed through that peer. Adding allows routing any traffic through it, but it also makes wg-quick create a default route that might have to be removed.

Without adding these subnets to the config, Wireguard won’t accept packets for those subnets even if they are routed to the interface manually.
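This is Wireguard’s cryptokey routing: AllowedIPs acts both as an inbound source filter and as the per-peer outbound routing table. As an illustration (example subnets: for the Wireguard network, for the LAN), a client that should reach the LAN through the server would carry something like:

```
[Peer]
PublicKey = <server public key>
AllowedIPs =,
Endpoint = server.example.com:41234
```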

Then I set up a separate routing table (privatenet) and used it to route traffic coming from the other peers. This simplifies things.

Here is the server config (addresses and subnets are examples: for the Wireguard network, the LAN router peer at, the LAN itself at

[Interface]
Address =
SaveConfig = false
PostUp = iptables -A FORWARD -i %i -j ACCEPT; ip rule add from lookup privatenet; ip route add default via table privatenet; ip route del default dev wg1 table 51820
PostDown = iptables -D FORWARD -i %i -j ACCEPT; ip rule del from lookup privatenet; ip route del default via table privatenet
ListenPort = 41234
PrivateKey = ...

# Router behind NAT
[Peer]
PublicKey = ..
PresharedKey = ..
AllowedIPs =,
PersistentKeepalive = 25

# Client
[Peer]
PublicKey = ..
PresharedKey = ..
AllowedIPs =

And here is the config of the peer inside the LAN, with the same example addressing:

[Interface]
PrivateKey = ..
SaveConfig = false
Address =

[Peer]
PublicKey = ..
PresharedKey = ..
AllowedIPs =
Endpoint = server.example.com:41234
PersistentKeepalive = 25

Note how I use PersistentKeepalive to make sure the tunnel stays up. Wireguard is a silent protocol: unless some traffic is being sent over the interface, it won’t transmit anything.

To keep the configuration cleaner, I keep PostUp and PostDown commands in separate shell scripts.
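A sketch of what such a script can look like (the path is hypothetical; the privatenet table is assumed to be declared in /etc/iproute2/rt_tables, and the subnets are the same examples as above: for the Wireguard network and the LAN router peer at

```shell
#!/bin/sh
# /etc/wireguard/postup.sh -- referenced from the config as:
#   PostUp = /etc/wireguard/postup.sh %i
# Receives the Wireguard interface name as $1.
set -e
IFACE="$1"

iptables -A FORWARD -i "$IFACE" -j ACCEPT
ip rule add from lookup privatenet
ip route add default via table privatenet
ip route del default dev "$IFACE" table 51820
```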

Of course, packet forwarding also needs to be enabled, and some forward/NAT rules set up on the peer behind the firewall.
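On that peer the usual sketch looks like this (assuming eth0 is its LAN-facing interface and is the Wireguard subnet):

```shell
# Enable forwarding and masquerade traffic arriving over Wireguard
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -i wg0 -j ACCEPT
iptables -A FORWARD -o wg0 -j ACCEPT
iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE
```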

Dropping the Wireguard default route and some other steps could maybe be avoided by interfacing with Wireguard directly instead of using wg-quick and config files, but I really like having those config files.