It's the end of the 2010's and start of 2020's. Time to reflect …
What I am going to write is from my own opinions, feelings, observations and memory. You may/will have a different viewpoint! I hope this post will give you a moment to pause and think back on your own experience of what an incredible decade it has been for computing.
Note: there are probably some inaccuracies in the numbers and details but you will get the ideas … Also, these are my own personal views! They do not necessarily reflect the views of any other individuals, businesses or organizations.
Category: “Tech Companies that have made the most significant advances in computing in the past decade!” … and the Nominees are …
… (crickets) … well, I guess that's it…
Okay, Yes, that is a joke, I'm not being fair to Intel and all of the many others that did outstanding work over the past ten years! Intel made some really significant advances. But, NVIDIA was just amazing! I'll get back to them later.
But What About the Others!!??
Well, let's think back.
What did we get from Intel since 2010?
- AVX, AVX2 with FMA3, AVX512: The AVX vector unit was a significant computing advance. The decade started off with the introduction of AVX in Sandy-Bridge(core). That was a big numerical compute boost over SSE4.2. The processors based on Sandy-Bridge(core) are mostly gone now, but AVX2 in Haswell(core) and AVX512 in Skylake(core) are still going strong.
- Xeon Phi: That looked promising but it turned out to be as much work (or more) to code for than NVIDIA GPU's with CUDA. And, the performance was just not as good. I have a warm spot in my heart for the Phi, but it just didn't happen.
- Xeon EP and Core "i" series Haswell(core): Haswell(core) with AVX2 and FMA3 came out in the middle of the decade. I'm referring to the actual CPU core design, not specific code names that use this basic core. They are still using that core! The current i9 9900K is just the latest in a long list of "refreshes" on that core design. They have all had compute optimizations up to AVX2.
- Xeon SP and Core-X series Skylake(core): A couple of years ago Intel gave us Skylake(core). [Along with well over 50 SKU's to sort out!] This was a nice compute performance improvement over Haswell(core). It, along with the Xeon Phi, launched the AVX512 vector units. Note: don't confuse the "Core i" series refresh that was called "Skylake" with Skylake(core). The "Core i" Skylake was basically a Haswell(core).
- Headaches!: Some of the things Intel gave us were not so great. I'll let you make your own list of those! There was "the good, the bad, and the ugly" for Intel.
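To put those vector-unit advances in perspective, here is a back-of-the-envelope sketch of per-core peak fp32 throughput. The 3.0 GHz clock and the two-FMA-port assumption are illustrative round numbers (typical of many Haswell and Skylake parts), not the specs of any particular SKU:

```python
# Rough per-core peak fp32 throughput for the vector units discussed above.
# Assumes 2 FMA-capable vector ports per core -- a sketch, not exact specs
# for any one processor.

def fp32_lanes(vector_bits: int) -> int:
    """Number of 32-bit float lanes in one vector register."""
    return vector_bits // 32

def peak_gflops_per_core(clock_ghz: float, vector_bits: int,
                         fma: bool = True, ports: int = 2) -> float:
    """clock * lanes * (2 flops per FMA) * ports, in GFLOPS."""
    flops_per_cycle = fp32_lanes(vector_bits) * (2 if fma else 1) * ports
    return clock_ghz * flops_per_cycle

# At a nominal 3.0 GHz:
print(peak_gflops_per_core(3.0, 128, fma=False))  # SSE-era 128-bit, no FMA: 24.0
print(peak_gflops_per_core(3.0, 256))             # AVX2 + FMA3: 96.0
print(peak_gflops_per_core(3.0, 512))             # AVX512: 192.0
```

The doubling of vector lanes at each step, plus FMA, is why each generation roughly doubled per-core numerical throughput at similar clocks.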
Intel is a great company! They are starting the 2020's with a new CEO, and maybe a little humility. They have a lot of excellent folks working on very interesting projects. I expect them to make significant advances in computing over the coming years. Intel is a big, important company, with enormous resources and potential. They dominated the high-end CPU market in the 2010's with some really great processors. I'm looking forward to seeing what they do next! … Xe, FPGA's, ML/AI chips, new CPU's, developer "ecosystem" expansion, diversification …
AMD is back for HPC and looking fantastic! … but, sadly, they were "missing in action" for most of the 2010's.
In the 2000's AMD was making innovative advances for the x86 architecture. AMD Opteron was the best platform for HPC. It was the best platform for the rapidly growing Linux cluster movement. They had a CPU platform that would scale to 8 sockets with good performance. Intel was far behind and scrambling to catch up … and then at the end of the 2000's Intel released Nehalem. Nehalem was competitive with Opteron and was probably the most important architecture change ever for Intel. Linux clusters for super-computing were going in everywhere and folks started shifting to Intel. AMD just kind of vaporized in that market. They stopped innovating and the Opteron stagnated. I can't say much about AMD on the high-end of computing for most of the 2010's, because not much was going on until the end of the decade.
Now at the end of the 2010's AMD is back to their former glory. They are innovating again and they have a formidable road-map. The Zen 2 core Ryzen, Threadripper and EPYC processors are looking very strong. If they can keep to their road-map for Zen 3 and beyond they may very well dominate the CPU market again. AMD should have a very good 2020's decade.
A few highlights of what we got from AMD in the (late) 2010's:
- Lisa Su: She revived AMD! She brought them back to a point where they are innovating again. She is a "real" techie, an electrical engineer with formidable business leadership talent. It's hard to imagine a better CEO for AMD.
- Multi-Chip-Module design: When I first tested the Zen 1 32-core Threadripper, I really felt like I was sitting behind a quad-socket Opteron machine. They have made big improvements in this design with the new Zen 2 based processors. This is just brilliant! You can expect Intel and NVIDIA to follow AMD's lead.
- Hope: It is great to have AMD back as a strong competitor in the CPU market. They will help keep everyone honest and push the entire industry forward in a more healthy way.
ARM, IBM POWER (OpenPOWER), …
There were other big developments during the 2010's. IBM's OpenPOWER licensing initiative looked like it might take off a few years ago, but it just didn't get much attention. Then there's ARM. There were ARM designs that moved into the serious HPC computing domain. Interest in ARM for the high end has been maturing cautiously over the past decade, and it has a chance to become much larger in the years ahead, but we'll have to wait and see about that.
… And the Winner is … NVIDIA!
Some may not realize how much NVIDIA has done for Scientific Computing, especially for Machine Learning and AI. After all, doesn't NVIDIA make video cards for gamers? Well, yes they do that. But in the mid-to-late 2000's people (scientists) started looking at how fast GPU's were getting at drawing and transforming data on displays. They started looking at what a "shader language" was doing and realized it was just short-vector math. Everyone was looking for interesting chips that could be used as "accelerators" for matrix-vector operations. NVIDIA took note of the interest and in 2007 released CUDA (Compute Unified Device Architecture) and the Tesla C870 GPU compute card. CUDA allowed for a relatively simple way to write computational kernels that could be off-loaded to a GPU. It worked! And, it worked on inexpensive everyday NVIDIA gaming GPU's too! (That's more important than NVIDIA ever wants to admit.)
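To give a feel for why CUDA lowered the barrier so much, here is a conceptual sketch in plain Python of the classic SAXPY (y = a*x + y) kernel pattern. This is not NVIDIA's API: the kernel body is written for a single element, and the `launch` loop stands in for the grid of thousands of GPU threads that CUDA would run in parallel.

```python
# Conceptual sketch of the CUDA kernel idea (plain Python, not NVIDIA's API).
# In CUDA C you write the body for ONE element; the hardware runs it across
# thousands of threads. Here a simple loop plays the role of the thread grid.

def saxpy_kernel(i, a, x, y):
    # What a single GPU "thread" would do: handle element i.
    y[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA kernel launch over n threads.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
launch(saxpy_kernel, len(x), 2.0, x, y)
print(y)  # [12.0, 24.0, 36.0]
```

The appeal was exactly this: anyone comfortable with C could write the per-element body and let the GPU handle the parallelism.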
In 2008 (or 9) there was a meeting at the National Center for Supercomputing Applications (NCSA) which I had the pleasure of attending. It was a gathering of folks who were working on accelerated computing with various types of processors (remember the IBM Cell BE that went into the Sony PlayStation?). There were great talks and discussions. The big take-away at the end of that meeting was that the way forward was to use NVIDIA's CUDA libraries and GPU's. By 2010 the GPU acceleration (GPGPU) era was in full swing. Then, what NVIDIA did was relentlessly brilliant …
Here is some of what NVIDIA gave us in the 2010's:
- Visionary Leadership: In the 2000's NVIDIA was making large profits from gaming GPU's and when they saw there was interest in doing serious computing on GPU's they funneled those profits into developing hardware and software resources that supported those efforts.
- CUDA: CUDA took a higher-level approach to accessing the GPU hardware where others were looking at low-level control. CUDA made it possible for anyone with some C programming background and a little understanding of how GPU's were laid out to write code. AND, it worked on regular GeForce GPU's too. Everyone could do it …
- Developer Ecosystem: Using the CUDA tools, people started writing useful programs and NVIDIA created a gathering place for everything that was being done. It was a great resource. They promoted applications and supported development and education. They constantly worked on providing development tools to the community. NVIDIA set the bar high for the idea of a "Developer Ecosystem" and they have never let up on their support!
- The Rebirth of AI: Thanks to stubborn persistence by Prof. Geoffrey Hinton and others, the flame of AI using neural networks was kept alive. In 2012 Hinton's group smashed the record on the "ImageNet Large Scale Visual Recognition Challenge" using a Convolutional Neural Network called AlexNet. They ran that "training" calculation on two NVIDIA GTX 580 GPU's! (It's a great paper if you want to read it.) That kick-started the whole massive flurry of research that is still on-going for ML/AI. GPU's + CUDA-based libraries and frameworks have turned out to be the perfect hardware/software combination for ML/AI work. NVIDIA has put extraordinary effort and resources into this field and now even employs world-class researchers directly. This is changing the world!
- Massive Performance for Scientific Computing: If you look at the Top500 supercomputer list you will see that most of the computational capability in the world today is being provided by NVIDIA GPU's. It's not unusual for Science to be driven forward by advances in computing hardware. Even a humble Workstation with a single RTX Titan GPU exceeds the compute performance of the fastest supercomputer in the world from 2000. (at least for single precision)
- Amazing Performance Increases: Here's a table of single-precision (fp32) performance for notable GeForce and Titan GPU's. (There are rough performance equivalents among Tesla compute-specific GPU's and Quadro workstation GPU's, but I won't list those.)
| Year | GPU | fp32 GFLOPS | Notes |
|------|-------------|-------------|-------|
| 2010 | GTX 480 | 1350 | |
| 2012 | GTX 680 | 3090 | |
| 2014 | Titan Black | 5120 | |
| 2015 | Titan X | 6600 | |
| 2017 | Titan Xp | 11370 | |
| 2018 | Titan V | 14900 | 7500 GFLOPS in fp64!! Introduced fp16 Tensor Cores. My favorite computing hardware of all time! |
| 2019 | Titan RTX | 16300 | Tensor Cores + ray-tracing acceleration |
Even the performance of the GTX 480 is impressive and the latest GPU's exceed that by over a factor of ten!
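Circling back to AlexNet for a moment: the workhorse operation inside that kind of Convolutional Neural Network is the 2D convolution, a small filter slid across an image, multiply-accumulating as it goes. Here is a minimal pure-Python sketch of the idea (real frameworks reduce this to huge batched matrix math, which is exactly what GPU's and Tensor Cores are built for):

```python
# Minimal 2D convolution (strictly, cross-correlation) of the kind a CNN
# layer performs -- a pure-Python sketch, not a framework implementation.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height (no padding, stride 1)
    ow = len(image[0]) - kw + 1   # output width
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    s += image[r + i][c + j] * kernel[i][j]
            out[r][c] = s
    return out

# A tiny vertical-edge detector on a toy "image":
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[1, -1]]
print(conv2d(img, edge))  # each row: [0.0, -1.0, 0.0] -- the edge pops out
```

A network like AlexNet runs millions of these multiply-accumulates per image, which is why massively parallel GPU hardware turned out to be such a good fit.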
NVIDIA had their bad moments too but, overall … NVIDIA made a HUGE difference in Scientific Computing!
Thank You to all of the countless individuals, companies and organizations that made the 2010's great! Best wishes for all in the coming years!
Happy computing! –dbk @dbkinghorn