
Read this article at https://www.pugetsystems.com/guides/1527
Dr Donald Kinghorn (Scientific Computing Advisor)

NVIDIA Docker2 with OpenGL and X Display Output

Written on July 11, 2019 by Dr Donald Kinghorn

Introduction

Docker is a great workstation tool. It is mostly used for command-line applications or servers, but what if you want to run an application in a container AND use an X Window GUI with it? What if you are doing development work with CUDA that includes OpenGL visualization? You CAN do that!

Two years ago, when NVIDIA first released nvidia-docker to provide GPU support for containers, I wrote a series of posts about setting up docker and nvidia-docker on a workstation. That series included Docker and NVIDIA-docker on your workstation: Using Graphical Applications, which showed how to run X-based applications and OpenGL displays using the original version 1 of nvidia-docker. In 2018, after NVIDIA released the excellent NGC container registry, I wrote another series of posts about using docker and nvidia-docker, this time with version 2. There were significant changes to how nvidia-docker was implemented in version 2, including how OpenGL is handled, but in that second series of posts I did not discuss using graphical applications.

A colleague recently asked me about building a CUDA application with OpenGL support in an nvidia-docker2 container. I tried it and ran into difficulty. After a lot of reading and experimenting, I was able to get it all working nicely.

This post is a guide to working with OpenGL and X-Window applications from a docker container running on a Workstation with the NVIDIA runtime.

Setting up Docker and NVIDIA-docker2 on your Workstation (references)

Docker together with the NVIDIA "runtime" (nvidia-docker) is very useful for starting up various applications and environments without having to do direct installs on your system. Setting up docker and nvidia-docker is one of the first things I do after an install on a Linux workstation.

I have written many posts about using docker and nvidia-docker. If you go to the Puget Systems HPC Blog and search for "docker" you will find nearly 70 posts! The top posts should be How-To's and Guides. The most recent install and setup post about docker and nvidia-docker was How To Install Docker and NVIDIA-Docker on Ubuntu 19.04. That guide is concise and equally applicable to Ubuntu 18.04 (recommended). It also contains references to other posts that will give you more detailed information if you want to dig deeper.

For what follows I assume you have docker and the nvidia-docker runtime installed and configured on your Workstation.
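A quick way to check that the runtime is working before going further is to run nvidia-smi from a throwaway container:

```shell
# Should print the same GPU table you get from nvidia-smi on the host
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```

If that prints your GPU(s), the nvidia runtime is wired up correctly.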

Command Line arguments needed for X and OpenGL with NVIDIA-docker2

There are 4 extra docker "run" arguments that are needed to use your X Window display and OpenGL with nvidia-docker2,

docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all nvidia/cuda

Let's go through this command-line in some detail.

The first part is my normal start-up for an nvidia-docker2 container,

docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects

If you have read any of my TensorFlow GPU testing posts you have probably seen this before. It is just "docker run" with,

  • "--runtime=nvidia" sets the nvidia runtime,
  • "--rm" removes the container instance on exit (optional),
  • "-it" (or "-i -t") gives an interactive session with a pseudo-TTY (terminal),
  • "-v $HOME/projects:/projects" binds the directory "projects" from my home directory to "/projects" in the container. That's where I keep what I'm working on.

The second part is what's needed for an X Window and OpenGL display when running a program in an nvidia-docker2 container,

-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all

  • "-v /tmp/.X11-unix:/tmp/.X11-unix" binds your X11 socket into the same location in the container. The container needs that socket to access your display.

The next 3 items are environment variables to be set in the container.

  • "-e DISPLAY" makes your DISPLAY environment variable available in the container. (That's usually set to something like ":0".)
  • "-e XAUTHORITY" passes the location of your "MIT-MAGIC-COOKIE" file (used by xauth) into the container, giving it permission to use your X session. (That's usually .Xauthority in your home directory.) With this you shouldn't need to do anything with "xhost" to set display permissions.
  • "-e NVIDIA_DRIVER_CAPABILITIES=all" is the biggest change from version 1 of nvidia-docker. By default the container environment variable NVIDIA_DRIVER_CAPABILITIES does not include all of the capabilities of your driver and GPU.
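Before starting the container you can sanity-check these values on the host. A quick look (the values in the comments are typical, yours may differ):

```shell
echo $DISPLAY         # typically :0 or :1
echo $XAUTHORITY      # may be empty; X clients then default to ~/.Xauthority
ls /tmp/.X11-unix/    # the X socket(s) that get bind-mounted, e.g. X0
```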

Here is a list of values that can be assigned to NVIDIA_DRIVER_CAPABILITIES,

The following list is from the nvidia-container-runtime documentation on GitHub

https://github.com/NVIDIA/nvidia-container-runtime


NVIDIA_DRIVER_CAPABILITIES

This option controls which driver libraries/binaries will be mounted inside the container.

Possible values

  • compute,video,graphics,utility …: a comma-separated list of driver features the container needs.
  • all: enable all available driver capabilities.
  • empty or unset: use default driver capability: utility.

Supported driver capabilities

  • compute: required for CUDA and OpenCL applications.
  • compat32: required for running 32-bit applications.
  • graphics: required for running OpenGL and Vulkan applications.
  • utility: required for using nvidia-smi and NVML.
  • video: required for using the Video Codec SDK.
  • display: required for leveraging X11 display.
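If you would rather not turn on everything with "all", a narrower capability set should also work for a CUDA + OpenGL application. Here is a sketch based on the list above (I used "all" in this post; adjust to your needs):

```shell
docker run --runtime=nvidia --rm -it \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display \
  nvidia/cuda
```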

Example: Compile "nbody" from the CUDA Samples using an NVIDIA CUDA docker image and run it with OpenGL display

One of the first things I do to check a CUDA set-up is to compile the (optional) "Samples" code. There is a great selection of sample code for various features/aspects of CUDA programming. A favorite is the "nbody" sample. I'll do a build of that code using a docker container from the NVIDIA repository on DockerHub using the nvidia-docker runtime. This nbody code has a nifty OpenGL display that looks like a "big-bang" star formation.

Step 1)

Get the CUDA samples for the latest version of CUDA

Go to the NVIDIA CUDA download page and click the buttons until you get to your distribution, i.e. Linux - x86_64 - Ubuntu - 18.04 - runfile(local). You can download the .run file from your browser, or right-click on the Download button, copy the link location, and then use wget,

wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.168_418.67_linux.run

(That was the current version when I wrote this post.)

We will only install the Samples from this, NOT the CUDA Toolkit!

To use the .run file to install (only) the samples, go to the directory where you downloaded the .run file and do,

sh cuda_10.1.168_418.67_linux.run --silent --samples --samplespath=~/projects/

I had the .run script install the samples into the "projects" directory in my home directory. You should now have a directory named "NVIDIA_CUDA-10.1_Samples". There is a lot of good stuff in there! ... and you can compile it from a docker container.
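You can confirm the samples landed where expected (path assumes the --samplespath used above):

```shell
# Should list the sample categories, e.g. 0_Simple, 5_Simulations, ...
ls ~/projects/NVIDIA_CUDA-10.1_Samples
```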

Step 2)

Start the docker container for the latest CUDA release image on DockerHub

docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all nvidia/cuda

That is the full command-line, see the previous section for a description. That container is maintained by NVIDIA and the default "tag" is "latest" so it should be in sync with what is available on the CUDA download page.
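Once you are at the container prompt, a couple of quick checks confirm that the GPU, toolkit, and display settings made it inside:

```shell
nvidia-smi       # GPUs visible through the nvidia runtime
nvcc --version   # CUDA compiler from the container's toolkit
echo $DISPLAY    # should match your host DISPLAY, e.g. :0
```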

Step 3)

Install the dependencies for OpenGL

The container does not have all the development libraries we need to build an OpenGL application. Fortunately we can get everything we need by installing one package,

apt-get update
apt-get install freeglut3-dev

Step 4)

Compile the nbody code

The container you started above is a base Ubuntu 16.04 image with the CUDA 10.1 Toolkit and tools installed. From the container command prompt, cd to the nbody source directory and type "make",

cd /projects/NVIDIA_CUDA-10.1_Samples/5_Simulations/nbody

make

Step 5)

Run nbody and marvel at the spectacle of a wonderful OpenGL CUDA application running on your display from a docker container!

./nbody
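nbody also takes a few command-line flags that are fun to try (this list is from ./nbody -h; flag names as of the CUDA 10.x samples):

```shell
./nbody -fullscreen                    # OpenGL display, full screen
./nbody -fp64                          # double-precision simulation
./nbody -benchmark -numbodies=256000   # no display; reports GFLOP/s
```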

[Image: nbody OpenGL display from docker]

Happy computing! --dbk @dbkinghorn



Tags: NVIDIA-docker, Docker, CUDA
Suryadiputra Liawatimena

I followed every step.
$ xhost +
$ echo "export DISPLAY=:0.0" >> ~/.bashrc
$ source ~/.bashrc
$ sudo nvidia-docker run --runtime=nvidia -it -v $HOME/d/learn:/learn -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all nvidia/cuda

I installed some libraries in the docker container, exited, and then started the container again. Why are the installed libraries gone, as if starting from scratch? I already removed the --rm option.

How can I solve this? Thank you very much in advance.

Posted on 2019-08-26 07:46:00
Donald Kinghorn

Sorry I'm late with a reply ...
I think you want to do a "docker commit". A new "docker run" just gives you a fresh instance of the image, so any changes made in a previous container are not there (independent of --rm).

This is one of the things about docker that is both good and bad. On the plus side, when you do a commit you can set a new container tag and it's just the changes that get added, i.e. the whole thing is not copied.
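A minimal sketch of that commit workflow (the container and image names here are hypothetical):

```shell
docker ps -a                                     # find the name/ID of your exited container
docker commit my-container mycuda:gl             # save its changes as a new image
docker run --runtime=nvidia --rm -it mycuda:gl   # run the new image
```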

Hope this helps --Don

Posted on 2019-09-06 16:41:30