How to Install TensorFlow with GPU Support on Windows 10 (Without Installing CUDA) UPDATED!

This post is the needed update to a post I wrote nearly a year ago (June 2018) with essentially the same title. This time I have included more detail to help you avoid many of the “gotchas” that tripped people up with the old guide. This is a detailed guide for getting the latest TensorFlow working with GPU acceleration without needing to do a CUDA install.
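
Once TensorFlow is installed this way, a quick sanity check from Python is a good first step. The sketch below is not part of the guide itself; it assumes a TensorFlow 1.x build with GPU support and uses only the standard 1.x API calls for listing devices.

```python
# Quick sanity check that TensorFlow can see the GPU after an install
# that did not require a separate CUDA toolkit setup.
# Assumes a TensorFlow 1.x build with GPU support.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("TensorFlow version:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())

# List every device TensorFlow can use; a working GPU setup shows a /device:GPU:0 entry.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```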

How To Install CUDA 10 (together with 9.2) on Ubuntu 18.04 with support for NVIDIA 20XX Turing GPUs

NVIDIA recently released version 10.0 of CUDA. This is an upgrade from the 9.x series and adds support for the new Turing GPU architecture. This CUDA version has full support for Ubuntu 18.04 as well as 16.04 and 14.04. The CUDA 10.0 release is bundled with the new 410.x display driver for Linux, which will be needed for the 20XX Turing GPUs. If you are doing development work with CUDA or running packages that require the CUDA toolkit, then you will probably want to upgrade. I’ll go through how to install CUDA 10.0 either by itself or alongside an existing CUDA 9.2 install.
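
As a rough illustration of how side-by-side toolkits end up laid out, here is a small sketch that lists the versioned install directories and reports which nvcc is active on PATH. It assumes the standard /usr/local/cuda-X.Y locations used by NVIDIA's packages and is not taken from the guide itself; adjust the paths if your layout differs.

```python
# Sketch: report which CUDA toolkits are installed side by side and which one
# is currently active on PATH. Assumes the standard /usr/local/cuda-X.Y
# install locations; adjust the glob if yours differ.
import glob
import subprocess

# Each toolkit installs into its own versioned directory, so 9.2 and 10.0 can coexist.
installed = sorted(glob.glob("/usr/local/cuda-*"))
print("Installed CUDA toolkits:", installed or "none found")

# The active toolkit is whichever nvcc is first on PATH (usually via the
# /usr/local/cuda symlink or an explicit PATH entry).
try:
    print(subprocess.check_output(["nvcc", "--version"], text=True))
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvcc not on PATH; add /usr/local/cuda-10.0/bin (or -9.2/bin) to PATH.")
```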

Install TensorFlow with GPU Support on Windows 10 (without a full CUDA install)

In this post I’ll walk you through the best way I have found so far to get a good TensorFlow work environment on Windows 10, including GPU acceleration. I’ll go through how to install just the needed libraries (DLLs) from CUDA 9.0 and cuDNN 7.0 to support TensorFlow 1.8. I’ll also go through setting up Anaconda Python, creating an environment for TensorFlow, and making that environment available for use with Jupyter notebooks. As a “non-trivial” example of using this setup, we’ll go through training LeNet-5 with Keras using TensorFlow with GPU acceleration. The result is a setup that is 18 times faster than using the CPU alone.
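
To give a flavor of that example, here is a minimal LeNet-5-style training script on MNIST using the Keras API. The exact architecture and hyperparameters in the full walk-through may differ; this is only a sketch of the kind of job that shows off the GPU speed-up.

```python
# Minimal sketch of training a LeNet-5-style network on MNIST with Keras
# (TensorFlow backend). The exact model in the full walk-through may differ.
import tensorflow as tf
from tensorflow import keras

# Load and normalize MNIST; add a channel dimension for the conv layers.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# LeNet-5-style stack: two conv/pool blocks followed by dense layers.
model = keras.Sequential([
    keras.layers.Conv2D(6, 5, activation="relu", padding="same", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(16, 5, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(120, activation="relu"),
    keras.layers.Dense(84, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# On a working GPU setup this trains far faster than on CPU alone.
model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
```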

Docker and NVIDIA-docker on your workstation: Using Graphical Applications

You can use graphical applications with Docker and NVIDIA-Docker by attaching your X-Window server socket to a container, and it can be done in a relatively safe and secure way. I will take advantage of the Docker security and usability enhancements from the User-Namespaces configuration we set up in the previous post and show you how to run a CUDA application with OpenGL output support.
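
The core of the technique is bind-mounting the host's X socket into the container and passing through the DISPLAY variable. The sketch below builds such a docker run command from Python; the image name is a placeholder and the nvidia runtime flag is an assumption about how your Docker/NVIDIA-Docker is configured, so adjust both to match the setup from the previous post.

```python
# Sketch of launching a container with the host X socket attached so a GUI/OpenGL
# application inside it can draw on the host display. The image name is a
# hypothetical placeholder; the runtime flag depends on your NVIDIA-Docker setup.
import os
import subprocess

display = os.environ.get("DISPLAY", ":0")

cmd = [
    "docker", "run", "--rm", "-it",
    "--runtime=nvidia",                      # or launch via the nvidia-docker wrapper
    "-e", f"DISPLAY={display}",              # tell X clients in the container where to draw
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",   # bind-mount the host X-Window server socket
    "some-cuda-opengl-image",                # hypothetical image with a CUDA + OpenGL app
]

subprocess.run(cmd, check=True)
```

Depending on your X server and the user-namespace remapping, you may also need to grant the container's user access to the display (for example with xhost) before the window will appear.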