Read this article at https://www.pugetsystems.com/guides/1184
Dr Donald Kinghorn (Scientific Computing Advisor )

How to install CUDA 9.2 on Ubuntu 18.04

Written on June 15, 2018 by Dr Donald Kinghorn

If you want to use Ubuntu 18.04 and also want a CUDA install, this post should help you get that working.

I was surprised when NVIDIA did not include an installer for Ubuntu 18.04 when they launched CUDA 9.2. The new Ubuntu had been out for a while and it seemed like "everybody" already supported it. It was working with Docker, NVIDIA-Docker, TensorFlow, VirtualBox, Anaconda Python, etc. I hadn't found anything that was not working fine. It's not that much different, at a system level, from Ubuntu 17.10 or Fedora 27, which are both supported for CUDA 9.2. I'm still not sure exactly why they don't have an installer link for it, but after doing my own install I have some suspicions of where they may have run into trouble with Ubuntu 18.04. However, ...

... I didn't have any serious problems installing CUDA 9.2 on Ubuntu 18.04! I did it using the ".run" file for Ubuntu 17.10 together with the GeForce runtime driver rather than the Tesla driver that comes with the CUDA 9.2 install.

In my recent post The Best Way To Install Ubuntu 18.04 with NVIDIA Drivers and any Desktop Flavor I went through how I've been doing installs for the latest Ubuntu. If you are thinking about installing 18.04 you might want to look at that post especially if you have had any trouble with the install. I usually include a CUDA install when I do a post like that but in that one I decided to look at it separately since NVIDIA had not released an official installer for it. Well, this is the post where I do the (unofficial) CUDA install.

Disclaimer: What follows is my own personal hack to get CUDA installed and running on Ubuntu 18.04. It is not supported by anyone, not even me!


Steps to install CUDA 9.2 on Ubuntu 18.04

Step 1) Get Ubuntu 18.04 installed!

Fortunately, this "shouldn't" be too hard. See my recent post on doing that.

Step 2) Get the "right" NVIDIA driver installed

If you followed my instructions for installing Ubuntu 18.04 you would have installed the driver nvidia-390 from the graphics-drivers ppa. That is the current "long term" driver and it supports cards up to and including the Titan V. However, if you jumped ahead and did the CUDA toolkit install as I describe in the later steps, then compiled and ran the deviceQuery code, you would see a message like,

kinghorn@u18:~/projects/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL

That is a runtime version mismatch! The NVIDIA display drivers come with a CUDA runtime library. That's so you can run CUDA accelerated programs without having CUDA installed on your system. That's usually just what you want. But if you are doing CUDA dev work you need to have your runtime and development libraries in sync! The nvidia-390 driver is not recent enough for CUDA 9.2.

The CUDA installers contain a display driver. For CUDA 9.2 that driver is currently version 396.26. That is the Tesla driver! [ If you go to the NVIDIA driver download page and select "Product Type:" Tesla you will get to the 396.26 driver. ] The equivalent driver for GeForce or Titan is 396.24. That's the one that you want. 396.24 is the current "short term" driver and it is available on the graphics-drivers ppa. However, ...

Ubuntu 18.04 quirk: You can't install nvidia-396 from the graphics drivers ppa using apt-get!

I don't understand why, but if you have the graphics-drivers ppa configured for apt and try to install nvidia-396 you will get a "package not found" error, even though you can go to the ppa page and see the deb file sitting there for 18.04. apt-get will only "see" up to version nvidia-390 from the ppa. This is an unsolved puzzle for me.
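
If you want to see this quirk for yourself, you can ask apt what it thinks is available. This is just a diagnostic sketch, nothing here changes your system,

apt-cache search ^nvidia-39     # lists the 390 series packages the ppa exposes
apt-cache policy nvidia-390     # the newest driver apt will offer from the ppa
apt-cache policy nvidia-396     # comes back empty, matching the "package not found" error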

Work-around: use the "Software & Updates" "Additional Drivers" GUI

[Image: the "Software & Updates" Additional Drivers GUI]

The Software & Updates utility does list the nvidia-396 driver. Go ahead and select that and "Apply" it. Reboot and you will be running the needed 396.24 driver.

Step 3) Install CUDA "dependencies"

There are a few dependencies that get installed when you run the CUDA deb file but, since we are not going to use the deb file, you will want to install them separately. It's simple since we can get what's needed with a single install of four packages,

sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev

Those packages provide the needed GL, GLU, Xi, and Xmu libraries, plus several other libraries that get pulled in as dependencies.
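
If you like, you can double check that those packages landed with something like this (a minimal sanity check, assuming the apt-get install above finished cleanly),

dpkg -l freeglut3 freeglut3-dev libxi-dev libxmu-dev | grep ^ii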

Step 4) Get the CUDA "run" file installer

Go to the CUDA Zone and click the Download Now button. Then click the link buttons until you get the following,

[Image: CUDA 9.2 Linux runfile download page]

Download both of those, the base installer and the cuBLAS patch.
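
Optionally, before running anything, verify the downloads against the checksums NVIDIA lists on the download page. The file names below are the ones used later in this post; yours should match what you downloaded,

md5sum cuda_9.2.88_396.26_linux.run cuda_9.2.88.1_linux.run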

Step 4) Run the "runfile" to install the CUDA toolkit and samples

This is where we get the CUDA developer toolkit and samples onto the system. Just use sh to run the shell script (runfile),

sudo sh cuda_9.2.88_396.26_linux.run

You will be asked several questions. Here are my answers, (after accepting the EULA),

You are attempting to install on an unsupported configuration. Do you wish to continue?
(y)es/(n)o [ default is no ]: y

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 396.26?
(y)es/(n)o/(q)uit: n

Install the CUDA 9.2 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
 [ default is /usr/local/cuda-9.2 ]:

Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y

Install the CUDA 9.2 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
 [ default is /home/kinghorn ]: /usr/local/cuda-9.2

The most important part of those answers was saying "No" to installing the Driver. The default is "yes".
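
As an aside, the runfile also takes command-line flags that should let you answer those questions non-interactively. I did the interactive install above, so treat the line below as an untested sketch and check sh cuda_9.2.88_396.26_linux.run --help for the authoritative flag list,

# skip the bundled 396.26 driver, install only the toolkit and samples
sudo sh cuda_9.2.88_396.26_linux.run --silent --toolkit --samples \
    --samplespath=/usr/local/cuda-9.2 --override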

Step 5) Install the cuBLAS patch

The runfile for the cuBLAS patch just copies the fixed version into the CUDA install directory.

sudo sh cuda_9.2.88.1_linux.run

Step 6) Setup your environment variables

There are two good ways to setup your environment variables so you can use CUDA.

  • Setup system environment
  • Setup user environment

In the past I have usually been doing installs for any number of users on a system, so I would do a system-wide environment configuration. You can do this even for a single-user workstation, but you might prefer to just create a couple of small scripts that set things up only for the terminal you are working in when you need it.

System-wide alternative

  • To configure the CUDA environment for all users (and applications) on your system create the file (use sudo and a text editor of your choice)
    /etc/profile.d/cuda.sh
    
    with the following content,
    export PATH=$PATH:/usr/local/cuda/bin
    export CUDADIR=/usr/local/cuda
    
    Also create the file,
    /etc/ld.so.conf.d/cuda.conf
    
    and add the line,
    /usr/local/cuda/lib64
    
    Then run
    sudo ldconfig
    

The next time you log in your shells will start with CUDA on your path, ready to use. If you want to load that environment in a shell right now without logging out then just do source /etc/profile.d/cuda.sh.
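
A quick way to confirm the system-wide setup took effect (run this in a new shell, or after sourcing the file as above),

which nvcc                      # should print /usr/local/cuda/bin/nvcc
echo $CUDADIR                   # should print /usr/local/cuda
ldconfig -p | grep libcudart    # should list the runtime library from /usr/local/cuda/lib64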

User per terminal alternative

If you want to be able to activate your CUDA environment only when and where you need it then this is the way to do it. You might prefer this method over a system-wide environment.

  • For a localized user CUDA environment create the following simple script. You don't need to use sudo for this and you can keep the script anywhere in your home directory. You will just need to "source" it when you want a CUDA dev environment.

I'll create the file with the name cuda9.2-env. Add the following lines to this file,

export PATH=$PATH:/usr/local/cuda-9.2/bin
export CUDADIR=/usr/local/cuda-9.2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.2/lib64

Note: I explicitly used the full named path to version 9.2, i.e. /usr/local/cuda-9.2, rather than the symbolic link /usr/local/cuda. You can use the symbolic link path if you want. I just did this in case I want to install another version of CUDA and make another environment script pointing to the different version.

Now when you want your CUDA dev environment just do source cuda9.2-env. That will set those environment variables in your current shell. (you could copy that file to your working directory or else give the full path to it when you use the source command.)
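
A typical session with the per-terminal script might look like this (assuming you left cuda9.2-env sitting in your home directory),

source ~/cuda9.2-env
nvcc --version            # should report "release 9.2"
echo $LD_LIBRARY_PATH     # should now include /usr/local/cuda-9.2/lib64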

Step 7) Test CUDA by building the "samples"

Let's make sure everything is working correctly. Copy the CUDA samples source directory to someplace in your home directory

mkdir cuda-testing

source cuda9.2-env

cp -a /usr/local/cuda/samples  cuda-testing/

cd cuda-testing/samples

make -j4

Running that make command will compile and link all of the source examples as specified in the Makefile. (The -j4 just means run 4 "jobs"; make can build objects in parallel, so you can speed up the build time by using more processes. The systems I was testing this on had 4 CPU cores.)

After everything finishes building you can cd to bin/x86_64/linux/release/ and see all of the sample executables. I had 165 programs built without error. I believe that is the entire sample set. I ran several of the programs and they were working as expected, including the ones that use OpenGL graphics.
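
Running a couple of the classic samples from that release directory makes a reasonable smoke test, for example,

cd bin/x86_64/linux/release
./deviceQuery       # should finish with "Result = PASS"
./bandwidthTest     # should also finish with "Result = PASS"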

Just because the samples built OK doesn't mean there aren't any problems with the install, but it is a really good indication that you can proceed with confidence with your development work!


I hope this post helps you with your CUDA projects!

I want to finish by saying that I personally am starting to use Docker containers when I need something like a CUDA dev environment. I did that for building TensorFlow from source a few weeks ago. I have written a series of posts on using Docker on your workstation (along with lots of other stuff!) which you can find on the Puget Systems HPC blog.

Happy computing! --dbk

Tags: Ubuntu 18.04, CUDA
Mark Hughes

Great post, thanks! When I get to the testing step 7), and try source cuda-9.2-env from my home directory, I get that there is no such file or directory.

I used the system wide alternative for the environment variables, and am running Ubuntu 18.04. Is it likely I'm missing a step? I'm new to Ubuntu, so my apologies if I'm missing something obvious.

Posted on 2018-06-18 22:59:55
The Gotamist

If you did the system wide alternative, then there is no need to source cuda-9.2-env as that file was created for the per-terminal alternative.

Posted on 2018-08-21 19:28:22
shivang patel

Lots of thanks. This post solved my issue with CUDA on my new laptop by correcting the CUDA path setup.

Posted on 2018-06-19 06:51:41
cce62

Great post!

I initially tried to install without having the nvidia-396 drivers. After running

sudo add-apt-repository ppa:graphics-drivers/ppa

sudo apt upgrade

sudo apt-get install nvidia-driver-396

I was able to get the nvidia-396 and successfully compile and run the sample code.

Have you tried running PyTorch using 9.2?

Posted on 2018-06-26 12:33:03
Brian Tuomanen

Thanks for posting this, it was extremely helpful. In my case, for whatever reason, I found it necessary to edit the nvcc configuration in "/usr/local/cuda-9.2/bin/nvcc.profile", and then set up the "INCLUDES" variable to be INCLUDES += "-I/usr/local/cuda-9.2/include" $(_SPACE_) , as well as append "-L/usr/local/cuda-9.2/lib" to the "LIBRARIES" variable. (The LD_LIBRARY_PATH and cuda.conf options just weren't working for me, for whatever reason... some mysterious quirk in the universe I suppose.)

Posted on 2018-06-28 00:17:53
Disrupt IT

Nvidia released a new driver, 396.37 (09/07), currently available only as an rpm and not on the graphics-drivers ppa. That makes it impossible to compile CUDA 9.2, because the old version was removed and everything is now based on the new driver.

Posted on 2018-07-11 20:57:22
Masinde Mtesigwa

Thanks!!

Posted on 2018-07-31 19:36:38
Tahereh Toosi

Thank you for this incredibly helpful post.

Posted on 2018-08-09 14:46:03
Roman Ring

Very nice post, thank you!

Posted on 2018-08-12 15:28:24
Agung Wahyudiono

Great Post.
I was trying to install using local deb, but always come up with error. But when I try to use the local run, it solved.

Posted on 2018-08-16 02:16:53
Jay Duff

I have a "virgin" 18.04 installation in that I just installed 18.04 from DVD (complete format on a new SSD). It's an older Dell XPS 8100 and this computer has a GT 730 card.
I think I'm following your instructions accurately but I keep getting an error:
ERROR: An NVIDIA kernel module 'nvidia-drm' appears to already be loaded in
your kernel. This may be because it is in use (for example, by an X
server, a CUDA program, or the NVIDIA Persistence Daemon), ...

Posted on 2018-08-25 21:58:28
Donald Kinghorn

Hi Jay, I'll warn you up front that the GT730 is a real problematic card for Linux in general. There is usually some non-standard firmware on the cards that doesn't play nice. That may or may not be what is giving you trouble, but if you can, I'd recommend replacing that card. (I've had hit and miss luck with those cards!)
I'm really not sure about the error you are seeing. Here is something to check,

If you install Ubuntu on a system that has MS secure boot enabled strange things happen. Ubuntu has a signed cert from MS so the install goes OK but NVIDIA driver installs fail with strange messages.

I don't know if this is what is giving you trouble but it's worth a check. Your BIOS should be set to "Other OS" in the secure boot settings. BIOS settings vary so you may see something else ... but you get the idea ...

Posted on 2018-08-27 16:12:18
Jay Duff

Thanks for your post and your answer Donald. I got really frustrated, went for a walk, then realized 18.04 wasn't really that important to me. I rebuilt the system on 16.04 and everything worked perfectly. I'm using a GT730 because it's an old computer and it's not worth putting good stuff in it. Right next to my 10 year old computer is my destination - a Jetson TX2 - in a beautiful Puget Systems case. Thanks for your help and I'll look for other Puget Systems products.

Posted on 2018-08-29 00:22:48
Donald Kinghorn

Ha ha! I like that ... the frustration and then go for a walk thing is something I do a lot :-) You are right of course, 16.04 is pretty solid. I'm just a junkie for updates. The thing with 18.04 that I was most happy about really was that all of the function keys on my laptop worked without any special effort. I do like to update but for your case staying with something that is working well is definitely the best thing.

I have a TX2 myself and it has been haunting me because I haven't set it up yet. I have some machine learning applications I'd like to try with it.

Best wishes --Don

Posted on 2018-08-29 23:39:40
Anders Pedersen

Hi Donald,

I'm using CUDA for the first time on Ubuntu 18.04.01. And I'm clueless as to what I'm really doing with the samples.

Can you elaborate on the final part with the sample executables - does it "work" if the executables are in the /release folder (got 166), or should I be able to execute the files? I try to execute them, ./matrixMul for example, but I get:

[Matrix Multiply Using CUDA] - Starting...
CUDA error at ../../common/inc/helper_cuda.h:1147 code=35(cudaErrorInsufficientDriver) "cudaGetDeviceCount(&device_count)"

Posted on 2018-09-17 15:06:56
Donald Kinghorn

This is the problem: code=35(cudaErrorInsufficientDriver). You need to check the NVIDIA driver that you have installed. Try running nvidia-smi, that should list it. If you are running anything older than nvidia-396, CUDA 9.2 will complain. If you run nvidia-smi and it tells you no driver found then you need to go back to the section on getting the driver installed and then be sure to reboot your system so that driver module gets built and starts up. ( make sure you have dkms installed too i.e. do sudo apt-get install dkms ) If things are still not working post back and myself or someone else may have more ideas...

If you are new to CUDA and GPU computing and want an easier introduction to getting some code running on the GPU I highly recommend PyTorch. My last 4 posts were using that. It's a machine learning framework but really it's a great general purpose scientific computing tool. It's very similar to using numpy. It makes calls to CUDA libs through cuBLAS and MAGMA. Python with PyTorch (or something like it) is the way to go in my opinion. If you need to bring some GPU support into C++ or something then I recommend starting by making library calls, i.e. cuBLAS or whatever. Low level CUDA programming is challenging, too much so for me really. I'm having a blast coding with PyTorch. I'll probably try (re-try) Julia too but I strongly recommend PyTorch

While I'm rambling on ... if you want to try PyTorch and you're not sure how to setup a good Python dev environment then check out some of my posts on setting up for TensorFlow. The setup in those posts using Anaconda Python will be what you want for PyTorch too. Take care --Don

Posted on 2018-09-18 15:29:10
Disrupt IT

I invite you to this blog post: http://www.disruptit.be/?p=123, where you have an explanation of how to install the driver, the nvidia software, the compilation of the CUDA samples and the compilation of TensorFlow used as a backend for the Keras library. It was strongly influenced by the post of Donald.

Posted on 2018-09-19 08:25:08
Donald Kinghorn

Thank you! :-)

Posted on 2018-09-19 15:25:59
Hypersphere

Thank you for a most helpful post. Puget Systems is the best computer vendor, and it has the most intelligent people providing technical advice!

Posted on 2018-09-19 19:51:06
Lawrence Barras

Hi Don, this was a helpful guide and really made things easier! However, I discovered something really odd with the Titan V. I set all this up in Ubuntu 18.04 with driver 396.54. I replaced 2x GTX 1080 Ti with 2 Titan V cards over the weekend and discovered that for some reason the Titans' GPU clocks are being restricted. I set up nvidia-container and NGC with the latest TensorFlow and found the Titan V clocks are limited to 1200-1335 MHz. I monitored with nvidia-smi dmon and confirmed it. Even after setting a higher clock with nvidia-smi, it shows the clock changing at idle, and as soon as TensorFlow starts up the clocks are immediately dropped.

This oddly, is not the case with the 1080ti cards. They run at full boost with the workload, throttling only with thermal or power limits, as expected.

I found this because I was surprised at the relatively low performance I was getting on the Titan V and finally figured out the clocks are throttled when TensorFlow runs in the NGC container. Haven't installed it natively, but I can't see any reason it would be different.

Posted on 2018-09-24 16:39:57
Donald Kinghorn

Hi Lawrence, that is really interesting! It is puzzling... I can't really think of anything that would cause that ??? I don't know if it will help or not but I did a series of posts on setting up docker for NGC. In the last post in the series I went through some performance tuning. You might want to have a look at that.
https://www.pugetsystems.co...
I'm not sure that any of those changes would help (and you may have already done them)

I really like the Titan V and I've been taking advantage of FP64 on them; I've been getting amazing performance. I have to say though, the 1080Ti is also a great card! It's the best value for stuff like TensorFlow using FP32. The Titan V should be giving better performance but not by a large margin.

P.S. I just fired up tensorflow on NGC ... I see they have a CUDA 10 build on there already :-) It requires having driver 410 or later and I haven't updated my system yet... I'm working on a post today about setting up CUDA 10 :-) --Don

Posted on 2018-09-25 20:48:07
Lawrence Barras

Did some more checking after swapping cards back and forth a few times. The Titan-V appears to be clock restricted to 1335 mhz, if a computing load is running, period. However, it will support all boost clock speeds on graphics only loads. You can monitor it with "nvidia-smi dmon". My EVGA 1080ti hybrid-cooled will run at full boost clocks, limited only by thermal or power throttling, and actually performs better on some workloads. I would be really curious if you see the same results on Titan-V and possibly the Quadro V100 or Tesla V100 cards.

Can't wait to see CUDA 10! It is such a pain to update the NVIDIA drivers, especially if they aren't on PPA already.

Posted on 2018-09-26 17:27:54
Donald Kinghorn

I'm writing the CUDA 10 post now. It should go up Thur. I'm doing it on a system that has CUDA 9.2 already installed as in this post. The end result will be a setup with both 9.2 and 10.0

Fortunately the install of 10.0 from the "deb" works as it should. The 410 driver installed nicely from the CUDA 10 repo, so until the graphics-drivers ppa gets updated the cuda 10 install will be the easiest way to get the new driver installed :-)

Everything seems to be working perfectly. I can build and run code with both 9.2 and 10.

I will be doing a round of hardware testing in a couple of weeks when we get the 2080Ti's in (we have 2080's in testing but I'm going to wait for the Ti) When I do this testing I'll be sure to look at what is going on with the clocks [ I should be comparing 1080Ti, Titan V and 2080Ti ]

I saw this in the CUDA 10 release notes ... it might be interesting for the clock thing :-)

Added the ability to lock clocks in nvidia-smi and NVML (nvmlDeviceSetGpuLockedClocks and nvmlDeviceResetGpuLockedClocks APIs). The following commands can be used in nvidia-smi:

$ nvidia-smi -lgc/--lock-gpu-clock <minGpuClock,maxGpuClock>

$ nvidia-smi -rgc/--reset-gpu-clock

Posted on 2018-09-26 19:02:40
Dotan

Indeed great post. I've tried several and this one really solved how to install on ubuntu!

Posted on 2018-09-28 17:41:49
Donald Kinghorn

I'm glad it helped! I have posted an update for installing CUDA 10 ... which is actually supported on Ubuntu 18.04! In that post I still point people back to this one for doing 9.2 since that may be the best choice for a while. I did the 10 install on a base system that had 9.2 installed and added 10 while keeping 9.2 available.

Posted on 2018-10-01 15:24:08
David Clement

Thank you for this. Your efforts have saved us so much time.

Posted on 2018-10-03 21:17:50
Dave Krolik

Hi there, I'm currently trying to install CUDA on my Ubuntu 18.04 LTS laptop with a GTX 960M GPU. CUDA 10 did not work so I'm now trying CUDA 9.2. I have followed your instructions, however when I run the deviceQuery sample I get the following:

CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 30
-> unknown error
Result = FAIL

Also when I use in nvidia-smi in the terminal I get the following error:

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

This is strange as I have selected the driver you have in the Software & Updates Additional Drivers GUI and rebooted.

Would you happen to have a any suggestions for me to try? Thank you for your help in advance.

Posted on 2018-10-07 17:56:18
Donald Kinghorn

The NVIDIA driver is not loading for some reason. Installing the way you did should have done the right things like blacklisting nouveau and such. It looks like you are coming up on the Intel GPU instead. This might be from some laptop specific thing. I hope someone else will comment here because I'm not completely sure how to get around that. I have a gaming laptop that I use sometimes and it only ever sees the 1070 that's in it. If you don't care about the power saving from using the Intel GPU then you might be able to find something in the BIOS to turn it off, or at least set the priority to the 960M.

You might also want to look at /var/log/Xorg.0.log and see if there are any strange errors in there.

I don't have a good direct answer but you're not going to get anywhere until you can get the machine to come up on the 960m. I think this fairly common with laptops so you may find something good with a search ... thinking something like "how to force my laptop to use nvidia card instead of on-board Intel graphics"

I hope that points you in the right direction! --Don

Posted on 2018-10-08 20:18:45
Ankita Joshi

This is the only post that helped me with my installation, thank you so much Dr. Kinghorn.
However, I had two GPUs on my machine, a GTX 1080 and a TITAN X. I had to remove the 1080 to get the installation to work. I am not sure why the installation of drivers failed when the system had two GPUs in it. Would you have any idea on how to get two GPUs to work with Ubuntu 18.04 and CUDA 9.0? I have to stick to CUDA 9.0 (the only CUDA version pytorch 0.4.0 is compiled against, and the project I am working on requires me to use pytorch 0.4.0).
Thanks!

Posted on 2018-10-09 05:08:32
Donald Kinghorn

That is odd but we do see things like that occasionally. It often has to do with the BIOS PCIe bus configuration. The primary order may not be what you think it is?? In any case it is most likely a motherboard quirk. I would check the BIOS settings and try different orderings of the cards in the slots. I can't think of any reason on the software side that things would not work ... Sometimes hardware is just crazy! You could see about doing a BIOS update too but keep a copy of the old BIOS just-in-case ...

I'm wondering if there is another way to get your 0.4.0 pytorch environment ... let me check something ... yes, there are builds of pytorch 0.4.0 with cuda 9.0 in the official pytorch tree on Anaconda cloud. I think you could create an env for an older version ... let me try it ...yup.... this will do it

conda create --name pytorch40 python=3.6 pytorch=0.4.0

That creates the environment with everything you need,

The following NEW packages will be INSTALLED:

blas: 1.0-mkl
ca-certificates: 2018.03.07-0
certifi: 2018.8.24-py36_1
cffi: 1.11.5-py36he75722e_1
cudatoolkit: 9.0-h13b8566_0
cudnn: 7.1.2-cuda9.0_0
intel-openmp: 2019.0-118
libedit: 3.1.20170329-h6b74fdf_2
libffi: 3.2.1-hd88cf55_4
libgcc-ng: 8.2.0-hdf63c60_1
libgfortran-ng: 7.3.0-hdf63c60_0
libstdcxx-ng: 8.2.0-hdf63c60_1
mkl: 2019.0-118
mkl_fft: 1.0.6-py36h7dd41cf_0
mkl_random: 1.0.1-py36h4414c95_1
nccl: 1.3.5-cuda9.0_0
ncurses: 6.1-hf484d3e_0
ninja: 1.8.2-py36h6bb024c_1
numpy: 1.15.2-py36h1d66e8a_1
numpy-base: 1.15.2-py36h81de0dd_1
openssl: 1.0.2p-h14c3975_0
pip: 10.0.1-py36_0
pycparser: 2.19-py36_0
python: 3.6.6-h6e4f718_2
pytorch: 0.4.0-py36hdf912b8_0
readline: 7.0-h7b6447c_5
setuptools: 40.4.3-py36_0
sqlite: 3.25.2-h7b6447c_0
tk: 8.6.8-hbc83047_0
wheel: 0.32.1-py36_0
xz: 5.2.4-h14c3975_4
zlib: 1.2.11-ha838bed_2

Posted on 2018-10-09 16:23:02
Donald Kinghorn

of course this implies using anaconda python ... but with this you probably wouldn't even need to install cuda since it's packaged with pytorch in that environment

Posted on 2018-10-09 16:26:24
Douglas R Jones

Dr Kinghorn,
I have used your installing Ubuntu 18.04 post twice now. Twice because I scrubbed my system last Friday as I could not get OpenCV 3.4.3 or OpenCV 4.0 to build with CUDA. It doesn't seem to matter which CUDA (I currently have 9.2 installed). DLib built fine. Have you ever successfully built OpenCV 3.4.1, 3.4.3 or 4.0.0 on Ubuntu 18.04 with CUDA? I can build without GPU support but I cannot get the darn thing to build with GPU support.

Thanks,
Douglas R. Jones

Posted on 2018-11-06 21:13:21
Donald Kinghorn

I did build it once a while back (don't remember version) on 16.04. I did have trouble with it! I was messing with Darkflow for image classification and tracking in video and needed a build with streaming support or something like that, that wasn't in the default builds. I eventually found someone else's build that did what I needed. I don't remember any details and I didn't save the work! (and I didn't write it up) so ... I'm not much help :-) I do sympathize with you though! Best wishes -Don

Posted on 2018-11-08 03:11:13
jBoy210

Hi. First, great post.

Unfortunately I got stuck when I look at the "Additional Drivers" tab for my GTX1070. It does not show nvidia-driver-396. Instead it says
NVIDIA Corporation: GP104[GeForce GTX 1070]
"This device is using the recommended drivers"
The only options are the selected one, "Using NVIDIA driver metapackages from nvidia-driver-390 (proprietary, tested)" and unselected "Using X.Org X server - Nouveau ...."

Any idea what I am doing wrong?

Posted on 2018-11-09 00:24:01
Donald Kinghorn

That's interesting that it didn't have a newer driver listed there ... I suggest you install the driver from the ppa. When I wrote this the driver packaging was messed up for some reason. The "graphics-drivers" ppa is at https://launchpad.net/~grap...

You'll need to do this,
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-396

The 396 driver is a long term support driver and should work well. The ppa does have the 410.73 driver now too. 410 would be needed for the new RTX GPU's and also for CUDA 10. That should work OK too but it is not as stable a driver (the recent .73 patch did fix some issues I had with a 2080Ti)

Posted on 2018-11-09 01:32:35
jBoy210

Thanks! I figured it out.

When I installed Ubuntu I installed the Nvidia driver to the GTX 1070. That is why my system showed "Using NVIDIA driver metapackages from nvidia-driver-390 (proprietary, tested)", the key point being (proprietary, tested). Since this was a brand new system, I just reinstalled 18.04 from scratch and followed your steps. Everything worked as documented. So, thanks again, for your great tutorial!

Now on to Pytorch 1.0 and FastAI.

Posted on 2018-11-09 15:42:56
Donald Kinghorn

Great! and a big thumbs up for PyTorch and FastAI! I want to work through their course, they've done some really nice work

Posted on 2018-11-09 17:22:24