Puget Systems HPC Blog
In this second post on Docker and NVIDIA-Docker we will walk through installing and setting up both on an Ubuntu 16.04 workstation.
Docker containers, together with NVIDIA-Docker, can provide relief from the dependency and configuration difficulties of setting up GPU-accelerated machine learning environments on a workstation. In this post I discuss the motivation for considering this approach.
NVIDIA has released the Quadro GP100, bringing Tesla P100 Pascal performance to your desktop. This new card gives you the compute performance of the NVIDIA Tesla P100 together with Quadro display capability. That means the full double precision floating point capability of the P100, plus NVLink for multi-card configurations.
Does PCIe x16 give better performance than x8 for training models with Caffe when using cuDNN? Yes, but not by much!
OK, Intel's 7th-generation Core i7 Kaby Lake is out. Of course the first thing I want to do is drop a Core i7 7700K in a new Z270-based motherboard, install Linux, and run a Linpack benchmark. You know, GFLOP/s and all that. We installed Ubuntu 16.10 with a recent release of Intel MKL and fired up a few Linpack job runs. Read on for the not-so-dramatic results.
NVIDIA's Pascal GPUs have twice the computational performance of the last generation. A great use for this compute capability is training deep neural networks. We have tested NVIDIA DIGITS 4 with Caffe on one to four Titan X and GTX 1070 cards. Training was for classification of a million-image data set from ImageNet. Read on to see how it went.
You got your wonderful new NVIDIA Pascal GPU ... maybe a GTX 1080, 1070, or Titan X(P) ... and you want to set up a CUDA environment for some dev work, or maybe try some "machine learning" code with your new card. What are you going to do? At the time of this writing CUDA 8 is still in RC, and the deb and rpm packages have drivers that don't work with Pascal. I'll walk through the tricks you need to do a manual setup of CUDA 7.5 and 8.0 on top of Ubuntu 16.04 or 14.04 that will work with the new Pascal-based GPUs.
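Once a manual install like that is in place, it's worth a quick sanity check that the runtime actually loads and sees the card. A minimal sketch, assuming libcudart.so ended up somewhere your linker can resolve (e.g. via /etc/ld.so.conf.d or LD_LIBRARY_PATH):

```python
# Quick post-install sanity check: load the CUDA runtime with ctypes and
# ask how many devices it can see. Assumes libcudart.so is resolvable.
import ctypes

cudart = ctypes.CDLL("libcudart.so")

version = ctypes.c_int()
cudart.cudaRuntimeGetVersion(ctypes.byref(version))
print(f"CUDA runtime version: {version.value}")  # e.g. 8000 for CUDA 8.0

count = ctypes.c_int()
err = cudart.cudaGetDeviceCount(ctypes.byref(count))
if err != 0:
    print(f"cudaGetDeviceCount failed with error {err} -- check the driver")
else:
    print(f"GPUs visible to CUDA: {count.value}")
```

If the load itself fails, the toolkit isn't on the library path; if the device count call returns an error, the problem is usually the kernel driver rather than the toolkit.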
I have a Titan Black, a Titan X (Maxwell), and a new Titan X (Pascal) in one system for a quick CUDA performance test. The install is Ubuntu 16.04 with CUDA 8.0rc. We'll look at nbody from the CUDA samples code and NAMD molecular dynamics. It is stunning to see how much CUDA performance has increased on these wonderful GPUs in just three years.
Intel's Xeon E5 v4 processors are available, and there are lots of them! The changes from the v3 Haswell parts are mostly small clock adjustments and increases in core count. You can now get an E5-2699 v4 with 22 cores; in a dual socket system that's 44 cores to work with. If the programs you want to run scale well with thread count, that could be a great processor for you. However, if your parallel scaling is not near linear, it may not be the best value. We have a dynamic chart of performance based on Amdahl's Law that may help you decide which processor is best for your uses.
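For reference, the chart is built on Amdahl's Law: speedup S(n) = 1 / ((1 - p) + p/n), where p is the parallel fraction of the workload and n is the core count. A minimal Python sketch of why a 44-core box can disappoint (the core count comes from the dual E5-2699 v4 example above; the parallel fractions are illustrative):

```python
# Amdahl's Law: the speedup ceiling for a workload with parallel fraction p.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative parallel fractions on the 44-core dual E5-2699 v4 example.
for p in (0.50, 0.90, 0.95, 0.99):
    print(f"p = {p:.2f}: {amdahl_speedup(p, 44):5.1f}x on 44 cores")
```

Even at 95% parallel code, 44 cores deliver only about a 14x speedup, which is exactly the point the chart makes.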
The new NVIDIA GeForce GTX 1080 and GTX 1070 GPUs are out, and I've received a lot of questions about NAMD performance. The short answer is: performance is great! I've got some numbers to back that up below. We've got new Broadwell Xeon and Core i7 CPUs thrown into the mix too. The new hardware refresh gives a nice step up in performance.
I just got an NVIDIA GTX 1080 for testing. I hacked up an install with Ubuntu 16.04 and CUDA 7.5, along with a beta display driver that works! The first run of nbody after compiling the CUDA samples gave 5816 GFLOP/s! A GTX 980 on the same system does 2572 GFLOP/s. However, it's not all good news ...
The Intel Xeon E5 2600 v4 Broadwell processors are finally available. My first Linpack testing with an E5-2687W v4 shows a greater than 35% performance increase over the v3 Haswell version! And it's the same price as the v3 version! That's significantly better than expected.
You can try Intel Python from your Anaconda install using conda!
If you are happy to use Ubuntu 14.04 LTS (Ubuntu MATE in our case), then setting up a system with the NVIDIA DIGITS software stack is simple. I'll give you some guidance on getting everything working, from the Linux install to the DIGITS web interface.
A brief description of graphics driver Timeout Detection and Recovery, why it can be problematic for intensive GPU codes, and how to work around it so that Windows can be a viable GPU computing platform.
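For the impatient, the usual workaround is to lengthen the TDR timeout via the registry. A minimal sketch using Python's winreg and Microsoft's documented TdrDelay value (run elevated and reboot for the change to take effect; the 60-second figure is just an example):

```python
# Raise the GPU watchdog timeout so long-running kernels aren't killed.
# TdrDelay lives under the documented GraphicsDrivers key; REG_DWORD, seconds.
# Requires an elevated (Administrator) Python process and a reboot to apply.
import winreg

TDR_KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, TDR_KEY) as key:
    winreg.SetValueEx(key, "TdrDelay", 0, winreg.REG_DWORD, 60)
```

Setting the companion TdrLevel value to 0 disables detection entirely, but then a hung kernel freezes the display, so a longer delay is usually the safer choice.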
I was preparing a Puget Systems Traverse Skylake-based laptop for GPU-accelerated molecular dynamics demos at the upcoming ACS meeting and decided to see if I could get the Ubuntu 16.04 beta working with NVIDIA CUDA 7.5. It worked!
Don and I sit down after a holiday hiatus to talk about an interesting development. Intel has announced they are going to maintain a build of Python. Good news for a language that has been around for a long time. We also discuss our time at Supercomputing 2015 and FPGAs.
Intel E5 v3 processors will run at "All Core Turbo" under load if properly cooled. This clock measurement is a better predictor of theoretical performance than the base clock. We present a table of CPU performance at all-core turbo using different parallel scaling factors from Amdahl's Law. We also have a dynamic graph that shows how much performance you lose when your parallel scaling is less than perfect. Just because your dual socket 16-core system shows all 32 cores at 100% doesn't mean your problem is running 32 times faster!
Machine Learning is getting a lot of attention these days and with good reason. There are mountains of data to work with and computing resources to handle the problems are easily attainable. Even a single GPU accelerated workstation is capable of serious work.
Molecular dynamics programs can achieve very good performance on modern GPU-accelerated workstations, giving job performance that only a few years ago required a CPU compute cluster. The group at UIUC working on NAMD were early pioneers of using GPUs for compute acceleration, and NAMD gets very good acceleration from NVIDIA CUDA. We show you how good that performance is on modern NVIDIA GPUs.
Intel Skylake Core i7 CPU: a 256 GFLOP/s Linpack result with Intel Parallel Studio XE 2016 and MKL 11.3, vs. 200 GFLOP/s using Intel Parallel Studio XE 2015 and MKL 11.2!
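To put 256 GFLOP/s in context, theoretical double precision peak is cores x clock x FLOPs per cycle. A back-of-the-envelope sketch, assuming a 4-core Skylake Core i7 with AVX2 FMA, i.e. 16 DP FLOPs per cycle per core (the specific chip model and clock are my assumptions, not stated above):

```python
# Theoretical double precision peak = cores x clock (GHz) x FLOPs/cycle.
# Skylake with AVX2: two 256-bit FMA units = 2 x 4 doubles x 2 ops = 16/cycle.
cores = 4            # assumed 4-core Core i7 (e.g. a 6700K-class part)
clock_ghz = 4.0      # assumed sustained clock under load
dp_flops_per_cycle = 16

peak_gflops = cores * clock_ghz * dp_flops_per_cycle
print(f"Theoretical DP peak: {peak_gflops:.0f} GFLOP/s")  # -> 256
```

A well-tuned MKL Linpack typically lands at 85-95% of that figure, so a measured 256 GFLOP/s suggests the chip was running above the 4.0 GHz assumed here during the run.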
I have done a little informal testing with the new i7 and i5 processors, running the Linpack benchmark and a NAMD MD simulation. Mixed results!