Machine Learning is getting a lot of attention these days, and with good reason: there are mountains of data to work with, and the computing resources to handle the problems are easily attainable. Even a single GPU-accelerated workstation is capable of serious work.
5 Ways of Parallel Programming
Modern computing hardware is all about parallelism. This is because we essentially hit the wall several years ago on increasing core clock frequency to speed up serial code execution. Transistor counts have continued to follow Moore's Law (doubling every 1.5-2 years), but those transistors have mostly gone into multiple cores, vector units, memory controllers, and other components on a single die. To utilize this hardware, software needs to be written to take advantage of it, i.e. you have to go parallel.
NVIDIA CUDA GPU computing on a (modern) laptop
Modern high-end laptops can be treated as desktop system replacements, so it's expected that people will want to do some serious computing on them. GPU-accelerated computing on a laptop is possible, and performance can be surprisingly good with a high-end NVIDIA GPU [I'm looking at the GTX 980M and 970M]. However, first you have to get it to work! Optimus technology can present serious problems for someone who wants to run a Linux-based CUDA laptop computing platform. Read on to see what worked.
Xeon E5 v3 Haswell-EP Performance — Linpack
The Intel Xeon E5 v3 Haswell-EP processors are here, and their floating point performance is outstanding. We run the Linpack benchmark on a dual Xeon E5-2687W v3 system and show how it stacks up against several other processors.
Memory Performance for Intel Xeon Haswell-EP DDR4
Memory bandwidth is often an important factor for compute- or data-intensive workloads. The STREAM benchmark has been used for many years as a measure of this bandwidth. We present STREAM results for the new Xeon E5 v3 Haswell processor with DDR4 memory and compare them with a Xeon E5 v2 Ivy Bridge system.
Linpack performance Haswell E (Core i7 5960X and 5930K)
The new Intel desktop Core i7 processors are out: Haswell-E! We look at how the Core i7 5960X and 5930K stack up against some other processors for numerical computing, using the Intel-optimized MKL Linpack benchmark.
POV-ray on Quad Xeon and Opteron
POV-ray is an open source ray tracing package with a long history. It has been a favorite system performance testing package since its inception because of the heavy load it places on the CPU. It has had an SMP parallel implementation since the mid-2000s and is often used as a multi-core CPU parallel performance benchmark on both Linux and Windows.
So let's try it on our quad-socket many-core systems!
Hyper-Threading may be Killing your Parallel Performance
Hyper-Threading (hyperthreading, or just HT for short) has been around on Intel processors for over a decade, and it still confuses people. I'm not going to do much to help with the confusion; I just want to point out an example from some recent testing with the ray-tracing application POV-ray that surprised me: Hyper-Threading dramatically lowered performance on a multi-core test system running Windows when POV-ray ran in parallel.
NVIDIA GPU Starter DevKit with OpenACC
NVIDIA Tesla K20 plus PGI Accelerator compilers with OpenACC, in a package deal with a system. Yes, it's official. If you've wanted to do some development work with OpenACC on Tesla, this is a nice way to get started: a heavily discounted K20 and the PGI compiler package preloaded on a Peak Mini.
NVIDIA HPC future directions
Where is NVIDIA heading with High Performance Computing hardware? Ever since Intel announced Xeon Phi Knights Landing as a stand-alone processor, integrated at the board level as a full compute unit, I've been wondering what NVIDIA would do along these lines. It just makes sense that they would do something similar, since getting the GPU off of the PCIe bus and tightly integrated with plentiful system memory would be a huge step forward for usability and performance. Here's my guess about where NVIDIA is heading.