We have a new collection of GPU accelerated Molecular Dynamics benchmark packages put together for GROMACS, NAMD 2, and NAMD 3-alpha10. (The benchmark packages will be available to the public soon.) In this post we present results for:
- 3 applications: GROMACS, NAMD 2, and NAMD 3-alpha10
- 8 MD simulations
- 12 different NVIDIA GPUs
- 96 total results
NAMD Custom Build for Better Performance on your Modern GPU Accelerated Workstation -- Ubuntu 16.04, 18.04, CentOS 7
Written on July 20, 2018 by Dr Donald Kinghorn
In this post I will be compiling NAMD from source for good performance on modern GPU accelerated workstation hardware. Doing a custom NAMD build from source code gives a moderate but significant boost in performance. This can be important considering that large simulations over many time-steps can run for days or weeks. I wanted to do some custom NAMD builds to ensure that modern workstation hardware was being well utilized. I include some results for the STMV benchmark showing the custom build performance boost on NVIDIA 1080Ti and Titan V GPU's, as well as an "experimental" build using an Ubuntu 18.04 base.
One of the questions I get asked frequently is "how much difference does PCIe X16 vs PCIe X8 really make?" Well, I got some testing done using 4 Titan V GPU's in a machine that supports four cards at X16. I ran several TensorFlow jobs with the GPU's running at both X16 and X8. Read on to see how it went.
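For anyone who wants a quick feel for what lane width changes, here is a minimal sketch (not the TensorFlow benchmark jobs used for this testing) that times host-to-device copies, which is the traffic PCIe lane width directly affects. It assumes PyTorch and a CUDA-capable GPU:

```python
# Minimal sketch (not the benchmark from the post): time host-to-device
# copies with PyTorch to get a rough feel for PCIe transfer bandwidth.
import time
import torch

def h2d_bandwidth_gbs(size_mb=256, repeats=20):
    # Pinned host memory gives the most representative transfer numbers.
    x = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8).pin_memory()
    gpu = torch.device("cuda:0")
    x.to(gpu)                      # warm-up so context creation isn't timed
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        x.to(gpu, non_blocking=True)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return (size_mb / 1024) * repeats / elapsed  # GB/s

if __name__ == "__main__":
    print(f"Host-to-device: {h2d_bandwidth_gbs():.1f} GB/s")
```

An X8 link has half the peak transfer bandwidth of X16, so a copy-timing test like this shows the worst case; whether a full training job slows down depends on how much of its time is spent moving data.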
I attended the Microsoft Build 2018 developers conference this week and really enjoyed it. I wanted to share my "big picture" feelings about it and some of the things that stood out to me. I'm not going to give you a "reporter's" view or repeat press-release items. This is just my personal impression of the conference.
I have been qualifying a 4 GPU workstation for Machine Learning and HPC use. The last confirmation testing I wanted to do was running TensorFlow benchmarks on 4 NVIDIA Titan V GPU's. I have that system up and running and the multi-GPU scaling looks very good.
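The testing used standard TensorFlow benchmark scripts. As a rough illustration of the idea, here is a minimal multi-GPU sketch using tf.distribute.MirroredStrategy, a Keras-era API newer than the TensorFlow version from that testing; the model and data here are placeholders:

```python
# Minimal sketch, not the benchmark scripts used in the post: replicate a
# small Keras model across all visible GPUs with tf.distribute.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(1024,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Synthetic data; the global batch is split across the replicas.
x = np.random.rand(8192, 1024).astype("float32")
y = np.random.randint(0, 10, size=(8192,))
model.fit(x, y, batch_size=256 * strategy.num_replicas_in_sync, epochs=2)
```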
GPU Memory Size and Deep Learning Performance (batch size) 12GB vs 32GB -- 1080Ti vs Titan V vs GV100
Written on April 27, 2018 by Dr Donald Kinghorn
Batch size is an important hyper-parameter for Deep Learning model training. When using GPU accelerated frameworks for your models the amount of memory available on the GPU is a limiting factor. In this post I look at the effect of setting the batch size for a few CNN's running with TensorFlow on 1080Ti and Titan V with 12GB memory, and GV100 with 32GB memory.
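As a simple illustration, here is a minimal Keras sketch (not the actual CNN configurations from the testing) where batch_size is the knob that determines how much GPU memory a training step needs:

```python
# Minimal sketch: batch_size in model.fit() controls how many images are
# resident on the GPU per step, so larger GPU memory (e.g. 32GB on GV100
# vs 12GB on 1080Ti) allows larger batches before an out-of-memory error.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, activation="relu",
                           input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1000),
])
model.compile(
    optimizer="sgd",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

x = np.random.rand(1024, 224, 224, 3).astype("float32")
y = np.random.randint(0, 1000, size=(1024,))

# Try increasing batch_size until the GPU runs out of memory.
model.fit(x, y, batch_size=64, epochs=1)
```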
Tensor-cores are one of the compelling new features of the NVIDIA Volta architecture. In this post I discuss some thoughts on mixed precision and FP16 related to Tensor-cores. I have some performance results for large convolutional neural network training that make a good argument for trying to use them. Performance looks very good.
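As a rough sketch of the idea, casting matmul operands to FP16 is the most direct way to make a Volta GPU eligible to use its Tensor-cores; this example uses current TensorFlow APIs, not the exact framework version from these benchmarks:

```python
# Minimal sketch: Volta Tensor-cores handle FP16 matrix math, so casting
# matmul operands to float16 is the simplest way to exercise them. A full
# mixed-precision training setup would also keep an FP32 master copy of
# the weights for numerical stability.
import tensorflow as tf

a = tf.random.normal([4096, 4096], dtype=tf.float32)
b = tf.random.normal([4096, 4096], dtype=tf.float32)

# FP16 inputs; eligible for Tensor-core execution on Volta.
c_fp16 = tf.matmul(tf.cast(a, tf.float16), tf.cast(b, tf.float16))

# FP32 reference for comparison.
c_fp32 = tf.matmul(a, b)
diff = tf.reduce_max(tf.abs(tf.cast(c_fp16, tf.float32) - c_fp32))
print("Max abs difference vs FP32:", diff.numpy())
```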
NVIDIA's GPU Technology Conference (GTC) is probably my all-time favorite conference. It's an interesting blend of "Scientific Research meeting" and Trade-Show. It's put on by a hardware vendor but still feels like a scientific meeting. It's not just a "Kool-Aid" fest! In this post I present some of my thoughts about this year's conference.
This post will look at the molecular dynamics program, NAMD. NAMD has good GPU acceleration but is heavily dependent on CPU performance as well. It achieves best performance when there is a proper balance between CPU and GPU. The system under test has 2 Xeon 8180 28-core CPU's. That's the current top of the line Intel processor. We'll see how many GPU's we can add to those Xeon 8180 CPU's to get optimal CPU/GPU compute balance with NAMD.
TensorFlow Scaling on 8 1080Ti GPUs - Billion Words Benchmark with LSTM on a Docker Workstation Configuration
Written on March 2, 2018 by Dr Donald Kinghorn
In this post I present some Multi-GPU scaling tests running TensorFlow on a very nice system with 8 1080Ti GPU's. I use the Docker Workstation setup that I have recently written about. The job I ran for this testing was the "Billion Words Benchmark" using an LSTM model. Results were very good and better than expected.
The Intel CPU flaw and the Meltdown and Spectre security exploits are causing a lot of concern. There is a possibility of application slowdown from the kernel patches to mitigate the exploits. This slowdown is a particular concern for GPU accelerated applications because of the system calls they require for moving data between CPU and GPU memory space. I did some testing on a couple of large TensorFlow and Caffe machine learning jobs along with the creation of an LMDB database from 1.3 million images.
NVIDIA's Pascal GPU's have twice the computational performance of the last generation. A great use for this compute capability is for training deep neural networks. We have tested NVIDIA DIGITS 4 with Caffe on 1 to 4 Titan X and GTX 1070 cards. Training was for classification of a million image data set from ImageNet. Read on to see how it went.
You got your new wonderful NVIDIA Pascal GPU ... maybe a GTX 1080, 1070, or Titan X(P) ... And, you want to set up a CUDA environment for some dev work or maybe try some "machine learning" code with your new card. What are you going to do? At the time of this writing CUDA 8 is still in RC and the deb and rpm packages have drivers that don't work with Pascal. I'll walk through the tricks you need to do a manual setup of CUDA 7.5 and 8.0 on top of Ubuntu 16.04 or 14.04 that will work with the new Pascal based GPU's.
I have a Titan Black, Titan X (Maxwell) and a new Titan X (Pascal) in a system for a quick CUDA performance test. Install is on Ubuntu 16.04 with CUDA 8.0rc. We'll look at nbody from the CUDA samples code and NAMD Molecular Dynamics. It is stunning to see how much the CUDA performance has increased on these wonderful GPU's in just 3 years.
Intel's Xeon E5 v4 processors are available and there are lots of them! The changes from the v3 Haswell are mostly small clock changes and increases in core count. You can now get an E5-2699v4 with 22 cores. In a dual socket system that's 44 cores to work with. If the programs you want to run scale well with thread count then that could be a great processor for you. However, if your parallel scaling is not near linear then it may not be the best value. We have a dynamic chart of performance based on Amdahl's Law that may help you decide which processor is best for your needs.
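For reference, the chart is built on Amdahl's Law: with parallel fraction p and n cores, the best-case speedup is 1 / ((1 - p) + p / n). A minimal sketch of the formula:

```python
# Amdahl's Law: for parallel fraction p and n cores, the best-case
# speedup over a single core is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Example: compare 44 cores (dual E5-2699v4) against a hypothetical
# 16-core part for two parallel fractions.
for p in (0.95, 0.99):
    print(f"p={p}: 16 cores -> {amdahl_speedup(p, 16):.1f}x, "
          f"44 cores -> {amdahl_speedup(p, 44):.1f}x")
```

At p = 0.95 the jump from 16 to 44 cores only raises the speedup from about 9x to 14x, which is why core count alone isn't a good buying guide unless your code's parallel fraction is very high.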
The new NVIDIA GeForce GTX 1080 and GTX 1070 GPU's are out and I've received a lot of questions about NAMD performance. The short answer is -- performance is great! I've got some numbers to back that up below. We've got new Broadwell Xeon and Core-i7 CPU's thrown into the mix too. The new hardware refresh gives a nice step up in performance.
Just got a NVIDIA GTX 1080 for testing. I hacked up an install with Ubuntu 16.04 and CUDA 7.5 along with a beta display driver that works! The first run of nbody after compiling the CUDA samples gave 5816 GFLOP/s! A GTX 980 on the same system does 2572 GFLOP/s. However, it's not all good news ...
The Intel Xeon E5 2600 v4 Broadwell processors are finally available. My first Linpack testing with a E5-2687W v4 shows a greater than 35% performance increase over the v3 Haswell version! And, it's the same price as the v3 version! It's significantly better than expected.
I was preparing a Puget Systems Traverse Skylake based laptop for GPU accelerated molecular dynamics demos at the upcoming ACS meeting and decided to see if I could get Ubuntu 16.04 beta working with NVIDIA CUDA 7.5. It worked!
Molecular Dynamics programs can achieve very good performance on modern GPU accelerated workstations, delivering job performance that just a few years ago was only achievable on CPU compute clusters. The group at UIUC working on NAMD were early pioneers in using GPU's for compute acceleration, and NAMD achieves very good acceleration using NVIDIA CUDA. We show you how good that performance is on modern NVIDIA GPU's.
Intel Skylake Core-i7 CPU -- 256 GFLOP/s Linpack result with Intel Parallel Studio XE 2016 and MKL 11.3 vs 200 GFLOP/s using Intel Parallel Studio XE 2015 and MKL 11.2!
Another amazing deal on Xeon Phi from Intel! This time you can get a 90% discount on a Phi 5110p and get the Intel Parallel Studio Cluster edition with a 1 year license for free.