Writing an updated answer since most of the answers already present here are obsolete as of now.

Versions earlier than Docker 19.03 used to require nvidia-docker2 and the --runtime=nvidia flag. Since Docker 19.03, you need to install the nvidia-container-toolkit package and then use the --gpus all flag.

Install the nvidia-container-toolkit package as per the official documentation on GitHub.

For Redhat based OSes, execute the following set of commands:

$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit

For Debian based OSes, execute the following set of commands:

# Add the package repositories
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

Running docker with GPU support:

$ docker run --name my_all_gpu_container --gpus all -t nvidia/cuda

Please note, the flag --gpus all is used to assign all available GPUs to the docker container.

To assign a specific GPU to the docker container (in case multiple GPUs are available in your machine):

$ docker run --name my_first_gpu_container --gpus device=0 nvidia/cuda

or

$ docker run --name my_first_gpu_container --gpus '"device=0"' nvidia/cuda

(See the short command sketches at the end of this post for a quick verification run, multi-GPU selection, and the legacy pre-19.03 invocation.)

Regan's answer is great, but it's a bit out of date: the correct way to do this is to avoid the lxc execution context, since Docker dropped LXC as the default execution context as of Docker 0.9. Instead, it's better to tell Docker about the nvidia devices via the --device flag and just use the native execution context rather than lxc.

These instructions were tested on an AWS GPU instance running Ubuntu 14.04 with CUDA 6.5.

Install the nvidia driver and cuda on your host. See "CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04" to get your host machine set up.

Install Docker:

$ sudo sh -c "echo deb <docker-apt-repository-url> docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker

Find your nvidia devices:

$ ls -la /dev | grep nvidia
crw-rw-rw- 1 root root 195,   0 Oct 25 19:37 nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw- 1 root root 251,   0 Oct 25 19:37 nvidia-uvm

Run a Docker container with the nvidia driver pre-installed. I've created a docker image that has the cuda drivers pre-installed; the Dockerfile is available on Docker Hub if you want to know how this image was built. You'll want to customize this command to match your nvidia devices. Here's what worked for me:

$ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash

Install the CUDA samples (this should be run from inside the docker container you just launched):

$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.*.run -noprompt -cudaprefix=/usr/local/cuda-6.5/

Build the deviceQuery sample:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery

If everything worked, you should see the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520

Ok, I finally managed to do it without using the --privileged mode. I'm running on ubuntu server 14.04 and I'm using the latest cuda (6.0.37 for linux 13.04 64 bits).

Install the nvidia driver and cuda on your host. ATTENTION: it's really important that you keep the files you used for the host cuda installation.

We need to run the docker daemon using the lxc driver to be able to modify the configuration and give the container access to the device (it can be a little tricky, so I suggest you follow this guide).
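
A quick way to confirm that the Docker 19.03+ --gpus path above is working is to run nvidia-smi inside a throwaway container. A minimal sketch, assuming the nvidia-container-toolkit install succeeded; the image tag nvidia/cuda:11.0-base is only an assumption, so substitute whatever CUDA base tag is current:

# Restart the daemon so it picks up the newly installed NVIDIA runtime hook
# (only needed if Docker was already running during the toolkit install).
$ sudo systemctl restart docker

# Smoke test: the NVIDIA container runtime injects the host driver's nvidia-smi
# into the container, so this should print the usual GPU status table.
$ docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi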
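
The answer above shows --gpus device=0 for a single GPU; the flag also accepts a plain count or a comma-separated device list. A small sketch along the same lines (the container names are just illustrative):

# Ask Docker for any two GPUs:
$ docker run --name my_two_gpu_container --gpus 2 -t nvidia/cuda

# Pin the container to GPUs 0 and 1; the outer single quotes stop the shell from
# stripping the inner quotes around the comma-separated device list:
$ docker run --name my_multi_gpu_container --gpus '"device=0,1"' -t nvidia/cuda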
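
For anyone still on an engine older than 19.03, the nvidia-docker2 route mentioned above selects the NVIDIA runtime explicitly instead of using --gpus. A sketch assuming nvidia-docker2 is installed and registered as a runtime:

# Legacy form for Docker < 19.03:
$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi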