I slimmed down my workflow for Linux container development upon the release of Windows 11 – I took the release as an opportunity to clean up my local PC and start from scratch. This won’t be a typical workflow for users performing a standard upgrade.

Before beginning, make sure you understand the caveats and limitations of using this method. Notably, local bind mounts for Docker volumes become quite a bit more complex.

Docker under WSL2

Microsoft has made big improvements in WSL2 with the release of Windows 11. Check out the documentation here.

Bootstrapping your WSL2 experience

Getting started, there are no pre-installed Linux distributions for WSL2. I chose Debian, as it’s a stable, well-known distro that will serve just fine for running non-interactive tasks. The default version offered for WSL is out of date, but we’ll address that ourselves.

Instead of managing this yourself, you may wish to look at Distrod, which provides a full install experience for your preferred Linux distro and also runs systemd within your WSL2 instance.

If you really want just systemd, there is Genie, which creates a new PID namespace so that systemd runs as PID 1 instead of the WSL init process.

To see a list of pre-built distributions to choose from, open a command line and run:

wsl --list --online

Install and setup the initial distribution

wsl --install -d Debian

Follow the prompts to finish setting up the distribution.

If you need to log in again, you should see an icon in your Start menu, or alternatively just type bash into a command prompt. Typing bash will launch your default Linux distribution. A tab will also automatically be created in Windows Terminal.
If you’re going to use multiple WSL distros, consider planning out your UID/GIDs here so that file permission management is easier.
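One way to plan this out is to run identical user-creation commands in every distro. A minimal sketch follows; the user name devuser and UID/GID 1000 are assumptions, and the helper only prints the commands so you can review them before running each as root in the respective distribution:

```shell
# Sketch: emit identical user-creation commands so UIDs/GIDs match in every
# distro. "devuser" and 1000 are assumed values; review before running as root.
make_user_cmd() {
  local name="$1" uid="$2"
  printf 'groupadd -g %s %s && useradd -u %s -g %s -m %s\n' \
    "$uid" "$name" "$uid" "$uid" "$name"
}

make_user_cmd devuser 1000
# -> groupadd -g 1000 devuser && useradd -u 1000 -g 1000 -m devuser
```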

Get your installation up to date. Here are the quick steps for upgrading Debian to bullseye:

sudo apt update && sudo apt upgrade -y
sudo apt dist-upgrade
sudo tee /etc/apt/sources.list <<-EOF
	deb https://deb.debian.org/debian bullseye main
	deb https://deb.debian.org/debian bullseye-updates main
	deb https://deb.debian.org/debian-security bullseye-security main contrib
	deb https://deb.debian.org/debian bullseye-backports main
EOF
sudo apt update
sudo apt upgrade --without-new-pkgs
sudo apt full-upgrade

Full instructions are here.

Once done, you should have an up-to-date Debian version:

> cat /etc/debian_version
11.1

> cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Setup Docker under your Linux distro

One of the big changes that I made to my workflow is the elimination of Docker Desktop for Windows. I spend most of my time at the command line working with Linux containers, so it was a natural fit to simply use Docker inside WSL2.

For Debian, here are the steps:

  1. Install Docker/Docker-Compose

    sudo apt install docker.io docker-compose
  2. The Docker service startup script needs to read /etc/fstab, which likely doesn’t exist under WSL; create it:

    sudo touch /etc/fstab
  3. To continue, start Docker manually; then, to ensure that the Docker daemon starts on login, add it to one of your login scripts (.bashrc, .bash_profile, or .profile, depending on your distribution):

    cat <<-EOF >> .profile
    	sudo mkdir -p /sys/fs/cgroup/systemd
    	sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd >/dev/null 2>&1
    	sudo service docker start
    EOF

    You may be able to omit the service docker start command if you’re running the latest WSL2 version via Windows Insiders; check here under User Preview Options.

  4. In preparation for other Linux distros to be able to use this Docker installation as a docker host, update the Docker startup process to ensure that the Docker daemon is listening on the standard Docker API service port. This will be distribution specific – for Debian:

    echo 'DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375"' | sudo tee -a /etc/default/docker
    All WSL guests share a network namespace, so listening ports on localhost are made available to all WSL2 guests. They are also made available to the Windows host loopback address. No port proxying is needed.
    Only do this on your local development workstation, and only on trusted networks. Ensure your Docker socket is protected for any non-local development or production use.
  5. Check that Docker is running using docker info. You should see something similar to the following:

    Client:
     Context:    default
     Debug Mode: false  
    
    Server:
     Containers: 0
      Running: 0
      Paused: 0
      Stopped: 0
     Images: 2
     Server Version: 20.10.5+dfsg1
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
     Logging Driver: json-file
     Cgroup Driver: cgroupfs
     Cgroup Version: 1
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: 1.4.5~ds1-2
     runc version: 1.0.0~rc93+ds1-5+b2
     init version:
     Security Options:
      seccomp
       Profile: default
     Kernel Version: 5.10.60.1-microsoft-standard-WSL2
     Operating System: Debian GNU/Linux 11 (bullseye)
     OSType: linux
     Architecture: x86_64
     CPUs: 12
     Total Memory: 31.32GiB
     Name: Persephone
     ID: JIEE:XE7Z:P2LD:E2DD:S7NB:DBOQ:6B5F:KTYT:A6GI:OHWI:MUZA:PW6Z
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false    
    
    WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
             Access to the remote API is equivalent to root access on the host. Refer
             to the 'Docker daemon attack surface' section in the documentation for
             more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
    WARNING: No blkio throttle.read_bps_device support
    WARNING: No blkio throttle.write_bps_device support
    WARNING: No blkio throttle.read_iops_device support
    WARNING: No blkio throttle.write_iops_device support
    
  6. Make sure your user ID is a part of the docker group to avoid having to use sudo all the time (log out and back in for the group change to take effect):

    sudo usermod -aG docker <userid>

Building images for other CPU architectures

As we didn’t install Docker Desktop to manage our Docker environment, building container images for CPU architectures other than x86/x64 requires a few more steps. With the rise of ARM processors (Apple M1, AWS Graviton, OCI Ampere, and Raspberry Pi, for example), this becomes more relevant with each image you build.

Refer to this Docker documentation for general steps.

As we have our Docker daemon running in our Debian instance, here are the high-level steps:

  1. Install qemu-user-static

    sudo apt install qemu-user-static
  2. Register qemu with binfmt_misc for each of the CPU architectures you wish to support. This should be done any time the Debian WSL instance starts up, so add it to your .profile before the Docker daemon startup. Restart the daemon as necessary if this is the first time installing qemu.

    # Enable linux/arm64 and linux/arm/v6,v7
    sudo update-binfmts --enable qemu-aarch64
    sudo update-binfmts --enable qemu-arm

  3. Check that you can use BuildKit and buildx

    docker buildx ls
    
    NAME/NODE DRIVER/ENDPOINT  STATUS  PLATFORMS
    wsl2 *    docker
      wsl2    wsl2             running linux/amd64, linux/arm64, linux/386, linux/arm/v7, linux/arm/v6
        

    If you get the response docker: 'buildx' is not a docker command. then you may be missing the buildx plugin in your CLI installation or are not running your CLI in experimental mode. Try export DOCKER_CLI_EXPERIMENTAL=enabled or install the buildx plugin using the directions here.

  4. Check that your Docker server can run images with other CPU architectures:

    docker run --rm arm64v8/alpine uname -a

    Note that uname reports aarch64 as the architecture, not x86_64.

    Unable to find image 'arm64v8/alpine:latest' locally
    latest: Pulling from arm64v8/alpine
    Digest: sha256:c74f1b1166784193ea6c8f9440263b9be6cae07dfe35e32a5df7a31358ac2060
    Status: Downloaded newer image for arm64v8/alpine:latest
    WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
    
    Linux 99733ea38e6e 5.10.60.1-microsoft-standard-WSL2 #1 SMP Wed Aug 25 23:20:18 UTC 2021 aarch64 Linux

Optional – Add GPU support to Docker

With WSL2 in Windows 11, GPU access is possible for containers. Here’s how to set up access for NVidia GPUs:

  1. Update to the latest graphics drivers: https://developer.nvidia.com/cuda/wsl/download. At the time of this writing the version is 510.06

    Supposedly these drivers are distributed to PCs enrolled in Windows Insiders automatically through Windows Update. I’ve elected to keep my PC on the consumer track so I’m installing these manually.
  2. Validate that the appropriate libraries are injected into your Linux instance by checking /usr/lib/wsl. For NVidia, you can quickly check that your WSL2 distribution can access the video card by typing nvidia-smi. Make sure the returned driver version matches your installed drivers:

    > nvidia-smi
    
    Tue Nov 23 13:22:56 2021
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 510.00       Driver Version: 510.06       CUDA Version: 11.6     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ...  On   | 00000000:06:00.0  On |                  N/A |
    |  0%   48C    P8    14W / 185W |   2547MiB /  8192MiB |     N/A      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
  3. Follow the NVidia guide to install their container toolkit based on your installed distribution.

    Upon completion of the installation and restart of the Docker service, running docker info should show that the nvidia runtime has been added to the Docker daemon:

    docker info | grep nvidia

    Runtimes: nvidia runc io.containerd.runc.v2 io.containerd.runtime.v1.linux

  4. Validate that running nvidia-smi in a container shows the correct driver versions – the output should match the output from nvidia-smi above:

    docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
  5. Test running a sample container from the Nvidia Container Catalog

    docker run --gpus all --rm -it nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.2.1
    [Vector addition of 50000 elements]
    Copy input data from the host memory to the CUDA device
    CUDA kernel launch with 196 blocks of 256 threads
    Copy output data from the CUDA device to the host memory
    Test PASSED
    Done
    

A note on the WSLG NVidia CUDA libraries

Every time ldconfig is run within your WSL distribution, you may see the following error message pop up:

ldconfig: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

See the discussion on this GitHub issue for more insight.

In short – as annoying as the message is, it’s harmless.

Creating a user distribution – Arch Linux

While it is absolutely possible to continue using the latest stable Debian distribution as your daily driver, I prefer using Arch Linux.

Arch Linux is not provided as a Windows Store application, so to utilize Arch we need to set it up ourselves. Now that we have Docker up and running, this is easy to accomplish.

Following the MS Guide, utilize the Arch container image archlinux/archlinux:base-devel to build the system.

  1. Create a local directory to store the base filesystem for your distribution:

    mkdir C:\WSL2\Arch
  2. Pull the docker image

    docker pull archlinux/archlinux:base-devel

  3. Create a temporary container based on the pulled image. This container is only created so that there is an exportable target we can use. We’re going to do some quick maintenance and pre-install some missing packages in the container prior to importing it.

    docker run --name=arch_setup -it archlinux/archlinux:base-devel bash

    In the container:

    # Update System and install packages
    pacman -Syu
    pacman -S docker git inetutils openssh python reflector
    reflector --save /etc/pacman.d/mirrorlist --threads 10 -c ca -c us -n 10 -p http
    
    # Create user
    useradd -G wheel -m <userid>
    passwd <userid>
    
    # Configure WSL defaults
    echo -e "[user]\ndefault=<userid>" >> /etc/wsl.conf
    
    # Make sure Docker context is set up
    su - <userid>
    docker context create WSL2 --description "WSL2 on Debian" --docker "host=tcp://127.0.0.1:2375"
    docker context use WSL2
    
    # Perform other bootstrap tasks here
    ...

  4. Export the container to a tar archive

    dockerContainerID=$(docker container ls -a --filter name=arch_setup -q)
    docker export $dockerContainerID > /mnt/c/Temp/arch_base_image.tar
  5. Import the distribution into WSL.

    wsl --import Arch C:\WSL2\Arch C:\Temp\arch_base_image.tar
  6. Launch the new distro, and set it as the default if desired

    rem Optional
    wsl -s Arch
    wsl -d Arch

Connecting to your Docker Instance from other WSL2 instances

Connecting from other WSL2 distributions is accomplished by adding a new context to your docker client:

Learn more about docker contexts here: https://docs.docker.com/engine/reference/commandline/context/

docker context create WSL2 --description "WSL2 on Debian" --docker "host=tcp://127.0.0.1:2375"
docker context use WSL2

Once the docker context is set, you should be able to call docker info and see the same values as were returned when it was called from the Debian instance.

If you wish to utilize the standard Docker socket at /var/run/docker.sock, there are a few different ways to accomplish that.

  1. You can forward a local UNIX socket to a TCP endpoint via socat. Doing this will allow programs to connect to the Docker daemon via /var/run/docker.sock as if it were running locally.

    sudo socat UNIX-LISTEN:/var/run/docker.sock,fork TCP4-CONNECT:127.0.0.1:2375 &
    sudo chown root:docker /var/run/docker.sock
    sudo chmod 660 /var/run/docker.sock
  2. If you read ahead, you’ll see that the /mnt/wsl folder is a tmpfs filesystem that WSL2 shares between all distributions. If you’d like to avoid having the docker daemon listen on the localhost address, you can instead add an additional socket listener via -H unix:///mnt/wsl/docker.sock and create a context that connects through it. Be aware, though, that your GID must match across distributions for convenient access via the docker CLI.

  3. Follow the previous step, but instead of creating a separate context you may instead choose to create a symlink from /var/run/docker.sock to /mnt/wsl/docker.sock
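For options 2 and 3 above, the daemon side can be sketched as a single /etc/default/docker entry that adds the shared-tmpfs socket to the listeners configured earlier (treat this as an assumption to adapt, not a drop-in config):

```ini
# Sketch of /etc/default/docker on the Debian host: local socket, shared
# loopback TCP port, plus a socket on the /mnt/wsl tmpfs shared by all distros
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 -H unix:///mnt/wsl/docker.sock"
```

Clients in other distributions can then create a context with --docker "host=unix:///mnt/wsl/docker.sock", or symlink /var/run/docker.sock to /mnt/wsl/docker.sock as in option 3.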

Cross-distribution file access

As mentioned at the top of the article, Docker remote contexts don’t allow you to bind mount local directories/files into your containers.

The /mnt/wsl directory is a tmpfs filesystem that WSL mounts into each distribution and can be used as a cross-distribution mount point.

Using this shared directory lets you “bypass” the inability to mount local files/directories into our pseudo-remote docker context.

Be warned – all the dangers of shared filesystems are present when bind mounting between distributions. You may also find that your UIDs/GIDs are mismatched between distributions.

From your working distribution (Arch, in my case), bind mount your working directories to /mnt/wsl:

# Create the shared mount point, then bind mount /projects into /mnt/wsl
sudo mkdir -p /mnt/wsl/projects
sudo mount -o bind /projects /mnt/wsl/projects

Once they are mounted from your working distribution, there are two choices:

  1. Work from within the /mnt/wsl directory for any container work that requires mounting local files or directories

  2. Create a symlink in the distribution hosting the Docker daemon to match your working distribution

    sudo ln -sv /mnt/wsl/projects /projects

Now, when creating containers, you can use the -v /projects/localfile:/container/directory/localfile command-line option (or the equivalent volume entries in your docker-compose.yml files) to give containers running on your Docker distribution access to files on your working distribution.
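For instance, a sketch of a compose service using the shared path (the service name, image, and /projects/myapp path are all illustrative); the mount is resolved by the Docker-hosting distribution, so the path must exist there:

```yaml
version: "3.8"
services:
  app:
    image: alpine:latest
    command: ls /work
    volumes:
      # Resolved on the Docker-hosting distro via the /mnt/wsl bind mount
      - /projects/myapp:/work
```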

Connecting to your Docker Instance from Windows

As we’ve avoided installing Docker Desktop, we now need to set up Windows to work with our Docker WSL2 host.

As mentioned above, mounting local files and directories from Windows isn’t possible in the traditional manner as we’re using a remote Docker context. HOWEVER – if you transform your file paths to their WSL equivalents, file access works as expected:

C:\Temp\File.txt <-> /mnt/c/Temp/File.txt
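As a sketch of that transformation (inside WSL the real work is done by the built-in wslpath utility; this little helper only illustrates the mapping):

```shell
# Minimal shell sketch of the C:\... -> /mnt/c/... mapping. Inside WSL, prefer
# the built-in wslpath utility; this helper only illustrates the transformation.
win2wsl() {
  local p="$1"
  local drive rest
  drive=$(printf '%s' "${p%%:*}" | tr 'A-Z' 'a-z')   # drive letter, lowercased
  rest=$(printf '%s' "${p#?:}" | tr '\\' '/')        # after "C:", \ becomes /
  printf '/mnt/%s%s\n' "$drive" "$rest"
}

win2wsl 'C:\Temp\File.txt'   # -> /mnt/c/Temp/File.txt
```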

If you prefer, or have legacy compose stacks that expect, Windows drives mounted at a different location, edit the automount stanza in /etc/wsl.conf to your preference. Microsoft documentation can be found here.
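As a hedged sketch, a stanza that remounts Windows drives at the filesystem root (so C:\ appears at /c rather than /mnt/c) might look like this; the option values shown are illustrative:

```ini
# Sketch of /etc/wsl.conf: remount Windows drives at / instead of /mnt
[automount]
enabled = true
root = /
options = "metadata,umask=22,fmask=11"
```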

Docker command line binaries can be installed from Docker here: https://download.docker.com/win/static/stable/x86_64/

Choose the latest version, extract the contents, and ensure that the docker.exe executable is located in your %PATH%.

Once the Docker command line tools are installed, create a context as described above and test connectivity by running docker info.

Remember, listening TCP ports from WSL are also exposed to Windows, so the TCP address is the same: tcp://127.0.0.1:2375

Issues with WSL2 overlapping or otherwise using inappropriate network address ranges

The WSL network interface attempts to choose an apparently unused network address range at startup; however, this may lead it to pick a range that is later assigned by, for example, a VPN provider.

Microsoft thus far has not provided a way to deterministically set the assigned address range for the WSL network interface.

If you wish to attempt to control the address assignment yourself, take a look here.

If you find that there are frequent overlaps between the default docker bridge and your WSL2 network interface, you may wish to change the way the Docker daemon creates the default bridge network. See here for how to configure the Docker daemon.
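When debugging such collisions, it helps to confirm whether two ranges actually overlap. A small shell sketch (the helper names and example subnets are illustrative, not taken from any real setup):

```shell
# Sketch: test whether two IPv4 CIDR ranges overlap, e.g. the WSL interface
# subnet vs. a VPN-assigned range. Helper names and subnets are illustrative.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_overlap() {
  local a b bits mask
  a=$(ip2int "${1%/*}")
  b=$(ip2int "${2%/*}")
  # Compare under the shorter prefix: ranges overlap iff one contains the other
  bits=$(( ${1#*/} < ${2#*/} ? ${1#*/} : ${2#*/} ))
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( a & mask )) -eq $(( b & mask )) ]
}

cidr_overlap 172.17.0.0/16 172.17.64.0/24 && echo overlap || echo disjoint
# -> overlap
```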

Attaching USB Devices to your WSL2 instance

If you would like to be able to utilize USB attached devices within your WSL2 instance (for example, a Yubikey), there are some additional steps that need to be performed both in the WSL2 instance and on the Windows host. The majority of these steps can be found at USBIPd wiki.

  1. Install the USB over IP device host on Windows from here

  2. Install the linux guest tools in your preferred operating system. For our Debian 11 system:

    sudo apt install usbip hwdata usbutils
  3. To attach a device to WSL2, run the following command template in an Administrative command prompt on the Windows host:

    usbipd wsl list  
    usbipd wsl attach --busid <busid> --distribution <name>
    Devices are attached via usbip commands in a single guest distro, but because all WSL2 distributions share a kernel, attached devices are available to every distribution that is running.
  4. Once the devices are attached, you can see them in the guest distribution by running a simple lsusb command.

    Once devices are attached to a WSL guest, keep in mind the following caveats:

    • You may require udev rules to perform configuration of your device
    • The WSL kernel is monolithic and doesn’t have every module compiled in. If you require specific kernel modules to support your device you must compile a custom kernel. Kernel sources from Microsoft can be found here and instructions on how to configure WSL to use a custom kernel can be found here
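If you do end up building a custom kernel, WSL is pointed at it from the Windows side. A sketch of %UserProfile%\.wslconfig (the kernel path is illustrative):

```ini
# Sketch of %UserProfile%\.wslconfig: boot WSL2 with a custom-built kernel
[wsl2]
kernel = C:\\WSL2\\bzImage
```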

Convenience Tips

Some CLI programs will attempt to spawn a browser window, commonly via xdg-open. WSL translates your Windows PATH entries and appends them to the Linux PATH, but if your executable’s path contains spaces and you don’t have a symlink set up (without the .exe), xdg-open will fail.

I use Firefox, so I create a symlink in my ~/.local/bin directory (make sure this directory is in your PATH):

cd ~/.local/bin
ln -sv "/mnt/c/Program Files/Mozilla Firefox/firefox.exe" firefox

And in .bashrc I added the BROWSER environment variable so that xdg-open uses Firefox:

export BROWSER=firefox

Work with local Hyper-V instances?

If you also use local VMs with the built-in Hyper-V system that WSL2 uses, and use the default network adaptor for those VMs, you’ll find that you’ll be unable to communicate between your WSL2 instances and your Hyper-V instances.

This is because both the WSL virtual network adaptor and the default Hyper-V adaptor are classified as “Internal” Hyper-V networks.

To communicate between them, IP forwarding needs to be configured on the network interfaces.

This is as simple as running the following command from an elevated PowerShell prompt:

Get-NetIPInterface | where {$_.InterfaceAlias -eq 'vEthernet (WSL)' -or $_.InterfaceAlias -eq 'vEthernet (Default Switch)'} | Set-NetIPInterface -Forwarding Enabled