This is a rewrite of an older post that was done as a guide to the (then new) Xeon E5 v3 processors. In this remake all of the data has been updated to use all-core-turbo as the CPU clock speed for the theoretical performance calculations. That changes things significantly. All-core-turbo is a much better performance measure than CPU base-clock frequency!
I used to conservatively use Intel's CPU base-clock in performance estimates, but then started noticing that with newer Intel processors I was ALWAYS seeing all-core-turbo CPU frequencies in /proc/cpuinfo when I had a system under heavy load. (… with the exception of laptops. There you see thermal-induced frequency throttling under load, as expected.) Check out Matt's interesting article on thermal throttling!
An Intel Haswell based workstation with proper power and cooling WILL run at all-core-turbo clock frequency under full load. So why doesn’t Intel just report all-core-turbo as the processor clock frequency? I really don’t know but I’m guessing it has something to do with Engineering and Marketing butting heads. It’s actually even difficult to find all-core-turbo frequency information. Most spec lists just report base and max-turbo clock frequencies, both of which are not very useful. The official Intel document with the good information is here (pdf file), Intel Xeon Processor E5 v3 Product Family Processor Specification Update August 2015. That is a really interesting document!
The most important consideration when configuring a system for optimal parallel performance is the process scaling of your program. You need to have some idea of how many cores can be effectively utilized in order to make an informed decision about your system configuration. If you only want to run one job at a time using all cores on your system, then you need to know how many processes your program will scale to before parallel scaling degradation limits your performance gains. If you know that your code only scales well to 8 processes, you need to decide whether to configure an 8-core machine or a machine that will let you run several of these 8-core jobs at the same time. … see "Other Considerations".
To get the idea of Amdahl's Law, consider this: if you have a single-threaded program and you can find a section of the code that accounts for 90% of the run time, and you can make that part of the code run in parallel (perfectly), then even though that sounds good, your program will never be more than ten times faster, no matter how many cores you use! [You might want to take a look at Matt's article about estimating performance with Amdahl's Law.]
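A minimal sketch of that 90% example in Python (the speedup formula is the standard Amdahl's Law expression given below; the sample core counts are just illustrative):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the runtime parallelized (P = 0.9):
for n in [4, 16, 256, 100000]:
    print(f"{n:>6} cores: {amdahl_speedup(0.9, n):.2f}x")
# The speedup approaches, but never exceeds, 1/(1 - 0.9) = 10x,
# no matter how many cores you throw at it.
```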
If your code scaling is not great then you are likely better off with fewer cores running at higher clock frequencies. If your code scales really well then you will likely benefit from a higher core count.
The following chart shows the Amdahl’s Law curves up to 36 cores for 7 different parallel fractions, P, ranging from 1 to 0.95, i.e. from perfect linear scaling to 95% of execution time in parallel (maximum speedup = 20).
speedup = 1/( (1-P) + P/n ) where P is the parallel fraction and n is the number of processes (cores)
Notice how the speedup falls off with increasing core count. Just because your program runs almost 4 times faster with 4 cores does not mean it will run 36 times faster with a dual 18-core system.
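To see those diminishing returns concretely, here is a quick sketch; the 0.98 parallel fraction is an illustrative assumption chosen to give near-linear speedup at 4 cores:

```python
# Near-linear speedup at low core counts does not extrapolate.
def speedup(p, n):
    """Amdahl's Law speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.98
print(f" 4 cores: {speedup(p, 4):.2f}x")   # close to 4x
print(f"36 cores: {speedup(p, 36):.2f}x")  # far short of 36x
```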
To evaluate processor performance under the influence of Amdahl's Law, observe that the speedup is the "effective" core count. If we calculate the theoretical performance of a system using this "effective core count" we get a much better picture of potential "real world" performance.
performance = "effective core count" * all-core-turbo clock (GHz) * special ops, i.e. AVX2, FMA3 (16)

For a dual E5-2699 v3 system with perfect parallel scaling, P = 1, that would be:

performance = 36 * 2.8 * 16 = 1612.80 GFLOPS

Now at a parallel fraction of 0.95, Amdahl's Law gives us:

effective number of cores = 1/( (1 - 0.95) + 0.95/36 ) = 13.1

This gives a performance at P = 0.95 of:

performance(P = 0.95) = 13.1 * 2.8 * 16 = 586.47 GFLOPS
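The same calculation can be sketched in a few lines of Python. The 2.8 GHz all-core-turbo, 36 cores, and 16 FLOPs/cycle (AVX2 + FMA3) are the dual E5-2699 v3 figures from the text:

```python
ALL_CORE_TURBO_GHZ = 2.8   # E5-2699 v3 all-core-turbo
FLOPS_PER_CYCLE = 16       # AVX2 + FMA3 double-precision
CORES = 36                 # dual 18-core CPUs

def effective_cores(p, n):
    """Amdahl's Law speedup, treated as an 'effective' core count."""
    return 1.0 / ((1.0 - p) + p / n)

def peak_gflops(p, n=CORES):
    return effective_cores(p, n) * ALL_CORE_TURBO_GHZ * FLOPS_PER_CYCLE

print(f"P = 1.00: {peak_gflops(1.0):7.2f} GFLOPS")   # 1612.80
print(f"P = 0.95: {peak_gflops(0.95):7.2f} GFLOPS")  # 586.47
```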
In the following chart, 27 E5 v3 processors are listed in decreasing cost order (cost of two 26xx CPUs or one 16xx CPU). The bar length corresponds to the theoretical peak performance UNDER THE INFLUENCE OF AMDAHL'S LAW!
When I first looked at this chart I was shocked! It doesn't tell the whole story though. There are other general usage considerations. Also note that some of the processors have a larger "smart cache" per core, and that can have a big influence on codes that are slowed down by cache misses. I have a table of the processors with some of their features listed at the bottom of this post.
The three primary ways to utilize a multi-core system are:
- Run single parallel jobs with all available cores
- Take advantage of the increased core count to facilitate larger problem sizes
- Run multiple, “single” or “few” process jobs
These three use cases are governed by the following:
- Parallel performance characterized by Amdahl’s Law ( we looked at this above)
- Parallel performance according to Gustafson’s Law
- Efficient job scheduling
You can treat a high core count workstation as a replacement for a small cluster. Set it up with a job scheduler, create a queue, and load up your jobs. You may have some jobs that run single-threaded code and some that really can't take advantage of more than a couple of parallel processes. Let the scheduler balance the load. This can be a great way to get good utilization out of your system. In this case your choice of processors may be dictated more by your budget than anything else. If you have the resources you can go with dual 18- or 14-core processors and get to work. Modern job schedulers are "parallel aware", so you can run mixed job types. Setting up a job scheduler is not always trivial but can certainly be worth the effort. Examples include SLURM, Grid Engine, Torque, and PBS.
The next case that can be facilitated by a many-core workstation is running “larger” problems than you could with a less capable system. This is the realm of Gustafson’s Law.
The ideal case for Gustafson's Law on a workstation is when having twice as many cores means you can run a job that is twice the size in the same amount of time. (You will likely need at least twice the memory too!) You have to be careful when considering this type of scaling (weak scaling) on a single-node workstation, since larger problems can be limited by memory performance. On a cluster, distributing a larger parallel job over several nodes spreads the work more evenly across caches and memory controllers. That can sometimes make a big difference in parallel performance, and can occasionally result in "super-linear" scaling because of the better memory utilization. On a single-node many-core workstation you do get the extra cache associated with the cores, but the number of memory controllers is fixed.
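For contrast with the Amdahl's Law formula above, a rough sketch of Gustafson's scaled (weak-scaling) speedup, where the problem size grows with the core count. The 5% serial fraction below is just an illustrative assumption:

```python
# Gustafson's Law: scaled_speedup = n - s * (n - 1),
# where s is the serial fraction of the (scaled) workload.
def gustafson_speedup(s, n):
    return n - s * (n - 1)

# With a 5% serial fraction, speedup stays close to linear
# as the problem grows with the core count:
print(f"18 cores: {gustafson_speedup(0.05, 18):.2f}x")  # 17.15
print(f"36 cores: {gustafson_speedup(0.05, 36):.2f}x")  # 34.25
```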
On a many-core workstation you are more likely to be limited by the Amdahl’s Law performance of your code regardless of the problem size. However, if you are looking to increase the size of the problems that you look at, lots of cores and lots of memory are your friends!
* Price from Intel ARK
Happy computing! –dbk