GTC 2015 Deep Learning and OpenPOWER

Written on April 6, 2015 by Dr. Donald Kinghorn
First, GTC is a great meeting! If you are doing anything involving GPU programming you owe it to yourself to go to this meeting. It is very well organized and the technical content is tremendous. As a perk, NVIDIA knows how to party too!
Check out Jen-Hsun Huang’s keynote to get a feel for the tone of the conference.
NVIDIA very wisely and kindly records the talks at the conference and they are now available. I highly recommend you check them out. You are sure to find something interesting and useful.
Puget Systems was there showing off a Peak Mini running benchmarks on a Titan X and a K40. We had only had the Titan X for a couple of days, and it had just been officially announced at the meeting, so we could show it off. There were no Windows drivers yet, but it worked perfectly under Linux (we were running Kubuntu 14.04). It was only identified in the system as "Graphics Device".
Here are the numbers we were showing off:
nbody -benchmark -numbodies=256000 (single precision)

Titan X -- 3.6 TFLOP/s
Tesla K40 -- 1.7 TFLOP/s
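For reference, here is a sketch of how a figure like that is derived from an all-pairs n-body run. The CUDA nbody sample counts roughly 20 floating-point operations per body-body interaction in single precision, and an all-pairs step does N×N interactions. The iteration count and wall-clock time below are illustrative assumptions, not measured values from our demo:

```python
# Sketch: TFLOP/s accounting for an all-pairs n-body benchmark.
# Assumes 20 FLOPs per body-body interaction (the convention used by
# the CUDA nbody sample for single precision) and N*N interactions
# per iteration.

def nbody_tflops(num_bodies, iterations, seconds, flops_per_interaction=20):
    """Return sustained TFLOP/s for an all-pairs n-body run."""
    total_flops = float(num_bodies) ** 2 * iterations * flops_per_interaction
    return total_flops / seconds / 1e12

# Illustrative numbers: 256,000 bodies at ~3.6 TFLOP/s would complete
# 10 iterations in roughly 3.6 seconds.
print(round(nbody_tflops(256000, 10, 3.64), 1))
```

The point of the exercise: at N = 256,000, each iteration is about 1.3 trillion floating-point operations, which is why a big single-precision GPU like the Titan X shines on this benchmark.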
The Titan X is fantastic for single precision workloads … like Machine Learning and MD simulations!
We also had one of our Traverse Pro laptops there with a GTX 970m running gromacs connected to VMD for real-time visualization of the molecular dynamics run. This was all running on the 970m and performance was great. See my post on how to get a modern laptop to work right with CUDA.
Deep Learning, Machine Learning, Deep Neural Networks, Artificial Neural Networks, …
All of these phrases are mostly referring to the same basic idea and “Artificial Neural Networks” is the most descriptive term. The media buzz phrase du jour is “Deep Learning”.
"Deep Learning" (Artificial Neural Networks) is the killer app for GPU computing. It works really well! Check out these two excellent GTC keynote talks from Google Senior Fellow Jeff Dean and Baidu Chief Scientist Andrew Ng.
With a properly configured system, … (yes, we do that :-) … even a single machine can do serious work. A system with four Titan X cards and some of the publicly available data sets (or private data sets) provides opportunity for many interesting research projects. I worked with Artificial Neural Networks in the early 1990's, but back then there just wasn't much easily available training data or computing capability to do much with. Now there is plenty of both! If I can find the time I may revive some of the ideas I had way back then. Anyone who worked with Neural Networks in the early days should probably be dusting off their notes!
Coinciding with the 2015 GTC meeting was the first OpenPOWER Summit. Most of these talks are online now too. The OpenPOWER opening sessions were just right. They had all the exciting news and all the boring news. The boring news was about the infrastructure: APIs and compliance. That's really important stuff! They said all the right things to indicate that the infrastructure was in place and well managed. The exciting news was that it is all real, now! There are reference systems and full-on products happening. Expect to have OpenPOWER-based systems competing with Intel-based boxes by the end of the year. For my take on the Intel vs OpenPOWER situation see my post on the matter.
OpenPOWER will likely be the platform of choice for Pascal, the next generation of NVIDIA compute GPUs. Pascal will be the first NVIDIA GPU to utilize NVLink. Using NVLink means that it won't be on the PCIe bus … which means it will be on a board that supports the NVLink connector and a CPU with NVLink support, i.e. OpenPOWER! There will probably be some version of the GPU intended for PCIe use in existing systems, or at least systems using the next PCIe version (version 4). I don't see them abandoning their very strong gaming PC market.
I barely scratched the surface of all the interesting stuff that happened at GTC this year. If you missed it you really should try to go next year!
Happy computing! --dbk