AMD 3900X (Brief) Compute Performance: Linpack and NAMD

I was able to spend a little time with an AMD Ryzen 3900X. Of course the first thing I wanted to know was the double precision floating point performance. My two favorite applications for a "first look" at a new processor are Linpack and NAMD. The Ryzen 3900X is a pretty impressive processor!

PyTorch for Scientific Computing – Quantum Mechanics Example Part 4) Full Code Optimizations — 16000 times faster on a Titan V GPU

This post covers the code optimizations that deliver the 16000 times speedup for the scientific computing with PyTorch Quantum Mechanics example. The following quote says a lot:

“The big magic is that on the Titan V GPU, with batched tensor algorithms, those million terms are all computed in the same time it would take to compute 1!!!”
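To give a rough feel for that claim, here is a minimal sketch (not the code from the post) that times a single batched matrix multiply in PyTorch for increasing batch sizes; on a large GPU like the Titan V the runtime grows far more slowly than the batch count until the device saturates. The matrix size and batch sizes below are arbitrary illustration values.

```python
# Minimal sketch: timing one batched matmul for several batch sizes.
# Not the post's code; sizes chosen only to illustrate the batching idea.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def time_batched_matmul(batch_size, n=32):
    # One batched call covers `batch_size` independent matrix products.
    A = torch.randn(batch_size, n, n, device=device)
    B = torch.randn(batch_size, n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    C = torch.matmul(A, B)          # all batch elements computed in one kernel launch
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.time() - t0

for bs in (1, 1_000, 100_000):
    print(f"batch {bs:>7}: {time_batched_matmul(bs):.4f} s")
```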

PyTorch for Scientific Computing – Quantum Mechanics Example Part 3) Code Optimizations – Batched Matrix Operations, Cholesky Decomposition and Inverse

An amazing result in this testing is that “batched” code ran in constant time on the GPU. That means that doing the Cholesky decomposition on 1 million matrices took the same amount of time as it did with 10 matrices!

In this post we start looking at performance optimization for the Quantum Mechanics problem/code presented in the first two posts. This is the start of delivering on the promise to make the code over 15,000 times faster! I still find the speedup hard to believe, but it turns out little things can make a big difference.
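For reference, here is a minimal sketch of the batched Cholesky idea behind the result quoted above, written against the current torch.linalg API rather than the code actually used in the post. One call factorizes (and inverts) every matrix in the batch; the batch size and matrix dimension are arbitrary illustration values.

```python
# Minimal sketch (assumed shapes, not the post's code): batched Cholesky
# factorization and inverse in PyTorch.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

n_batch, n = 10_000, 32                       # batch of 10,000 SPD matrices
A = torch.randn(n_batch, n, n, device=device)
S = A @ A.transpose(-2, -1) + n * torch.eye(n, device=device)   # make them SPD

L = torch.linalg.cholesky(S)      # one call factorizes the whole batch
S_inv = torch.cholesky_inverse(L) # batched inverse from the factor (recent PyTorch versions)

# Sanity check on one matrix from the batch
err = (S[0] @ S_inv[0] - torch.eye(n, device=device)).abs().max()
print(f"max error on first matrix: {err:.2e}")
```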

GTC 2015 Deep Learning and OpenPOWER

Another great GTC meeting. NVIDIA does this right! The most interesting aspects for me this year were the talks on “Deep Learning” (Artificial Neural Networks) and OpenPOWER. I have some observations and links to recordings of the keynotes and talks. Enjoy!