
Read this article at https://www.pugetsystems.com/guides/1087
Dr Donald Kinghorn (Scientific Computing Advisor )

Don's Computing Technology Predictions for 2018

Written on December 28, 2017 by Dr Donald Kinghorn

Predictions for 2018. Sure, why not? 2017 was an interesting year in computing, just like every year since the mid-80s. I have stronger expectations for 2018 than I did for 2017.

At the beginning of 2017 I thought IBM's OpenPOWER architecture would take off and become a rival to Intel, even in the workstation market. I expected ARM to disappear from the server market. I also expected much more rapid progress with cloud services directed at end users -- especially from Microsoft Azure. I was wrong.

I also expected Machine Learning and AI to grow at a tremendous rate, driven by GPU acceleration from NVIDIA. I expected Python to become the dominant programming language (at least for machine learning). I was sure Docker would continue to take over the data center and cause a shift from monolithic applications to microservices. I was right.

This year I'm going to write down some of my predictions so I can see how well (or badly) I did at the end of 2018. It may just be a testament to what can happen to a person during the short rainy days of early Winter in the Northwest after too many late nights working jigsaw puzzles and eating too much Texas pecan fruit-cake ... followed by trying to counteract it all with too much espresso during the day.

1) Microsoft will have its own Linux distribution

When pigs fly, you say! Why not? Amazon did it, Oracle did it, Google kind of did it (Android). Why not Microsoft? I hear they love Linux! Why not show that love by making their own Linux distribution ... they could do it, you know ... but, no, they probably won't brand it Microsoft Linux. Instead they will work closely with Canonical for the Ubuntu server customizations they need for optimization on Azure. I expect a more completely integrated "Linux" running on Windows-Subsystem-for-Linux on Windows 10. I put "Linux" in quotes because it will be everything except the Linux kernel -- WSL sits on top of the NT kernel. By "integrated" I mean it will have at least full network hardware access through the NT kernel services. It has already come a long way, and it's one of the most interesting things Microsoft has ever done, in my opinion. I expect to see full Linux docker support. No Hyper-V needed ... we'll see.
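That "everything except the Linux kernel" point is easy to demonstrate, because WSL gives itself away in the kernel release string (it contains "Microsoft", since syscalls are being translated onto the NT kernel rather than handled by a real Linux kernel). As a rough sketch -- the function name and heuristic are my own, not anything official from Microsoft:

```python
import platform

def looks_like_wsl() -> bool:
    """Heuristic WSL check: WSL's reported kernel release contains
    'Microsoft' (e.g. '4.4.0-43-Microsoft') because there is no real
    Linux kernel underneath -- the NT kernel services the syscalls."""
    return "microsoft" in platform.uname().release.lower()

print(looks_like_wsl())  # False on a stock Linux box, True inside WSL
```

On a normal Ubuntu install `uname -r` shows a plain kernel version, so this returns False there.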

2) NVIDIA's dominance in accelerated computing will decline

For many years I have been a strong NVIDIA GPU computing advocate. I have a long list of blog posts that I plan to write about how to use some of the great new hardware and tools they have developed. However, there was a very grave disturbance in the force this past week that has given many people working in the open source machine learning community pause to wonder if they have been lured into the dark side. It started with an announcement from a Japanese datacenter services provider saying that they were no longer offering access to some of the hardware platforms used for machine learning, because those systems had NVIDIA GeForce video cards in them. This was caused by this juicy tidbit in one of the EULAs in the NVIDIA software stack concerning GeForce cards:

"No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted."

They can't dictate what you do with the hardware you buy, but they can make it a license violation to use the software drivers needed for that hardware, depending on where you put that hardware and what kinds of calculations you run on it.

Despite the wonderful ecosystem that NVIDIA has built up around CUDA development, and the nice open source contributions they have made, it's little gems like that EULA update that remind the development community that they are using proprietary software for critical aspects of their research and open source projects. NVIDIA holds all the cards. They are effectively saying that you can't use GeForce cards in a data center unless you are doing coin mining! That means no machine learning development with GeForce for acceleration unless it's on a box sitting next to your desk. Hummm! So what are the data centers doing all of this great machine learning and AI work supposed to do if they have racked up some nodes with GeForce cards? Well, they are supposed to buy NVIDIA Tesla hardware to achieve the same performance level for machine learning applications at anywhere from 5 to 10 times the cost. Now, before I say anything more, I have been advising this for years! My advice has always been to develop on GeForce, and if you are deploying in a large cluster then go ahead and pay the extra money for the "top bin" components with nice things like ECC memory and direct support from NVIDIA for that use case. But that's not really the point. The point is that they are trying to make it a legal requirement depending on the type of calculation you are doing.
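For anyone suddenly needing to audit a rack for GeForce versus Tesla boards, the driver will happily tell you what's installed. A quick sketch (the helper function is hypothetical, my own wrapper, but `nvidia-smi --query-gpu=name` is the real query interface) that degrades gracefully on machines without NVIDIA tools:

```python
import shutil
import subprocess

def installed_gpu_names():
    """Return the GPU product names reported by nvidia-smi
    (e.g. ['GeForce GTX 1080 Ti', 'Tesla V100-PCIE-16GB']),
    or None when the NVIDIA driver tools are not installed."""
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except subprocess.CalledProcessError:
        return None  # driver present but not responding
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

print(installed_gpu_names())
```

Grepping that list for "GeForce" is about as far as an automated license audit can go; the EULA language itself is what decides whether a given deployment is affected.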

I'm hoping that this is all some misunderstanding or a mistake of some kind. If not then it will be a PR problem for NVIDIA when people come back to work after New Years and find this little Christmas present waiting for them.

In 2018, if this wake-up call turns out to be true, many people working on important research software that depends on CUDA and NVIDIA GPU acceleration are going to be looking for alternatives. It's going to be hard to find anything that's as good.

3) Intel will drop the Xeon Phi

Sorry, Intel. Even though the Xeon Phi Knights Landing is a vast improvement in usability over Knights Corner, the whole thing just didn't come together as a developer ecosystem. The new Knights Mill will mostly go unnoticed. It's interesting to note that, historically, it's been developments in Intel CPUs that have been the demise of special hardware compute accelerators. "Regular old Intel CPUs" have always seemed to catch up with, and pass, the performance of anything that came out (the big exception to this has been NVIDIA GPUs for compute). My guess is that the Xeon Scalable CPUs, a.k.a. Purley, a.k.a. Skylake-SP (and Skylake-W and Skylake-X on the desktop), are going to make Phi mostly irrelevant. And Intel will do their own special purpose accelerators that will outperform the Phi.

4) Intel will pull together a machine learning development ecosystem and special purpose hardware that will rival the current state of the art

This is a long shot but possible. Intel has all the pieces: great compilers and compiler developers, Nervana, and FPGAs. The acquisition of the AI software company Nervana Systems last year has resulted in a new chip for machine learning acceleration. It's new, and I haven't seen it or any real performance numbers. However, Intel has the resources to do something significant if they can get the right people involved and pull everything together.

5) AMD will gain significant CPU market share but will be derailed by ARM in the datacenter

It wasn't that long ago that the AMD Opteron was the CPU of choice for HPC. The current fastest supercomputer in the US is running 18,688 16-core Opterons (along with 18,688 NVIDIA Tesla K20X GPUs). It took Intel a while to catch up to the Opteron, but for the last 8 years or so Intel has dominated in HPC. And AMD did nothing! That has changed. AMD is back. The EPYC processors are getting a lot of interest. I have not had a chance to try them yet, but I'm really looking forward to it. At the SC17 supercomputing conference in November, AMD was getting a lot of attention. Vendors I talked with were having difficulty keeping up with demand for test systems for their big customers. Supply has been sparse and slow coming, but that should change in 2018, and I think we can expect to hear about some large deployments.

AMD will have one major problem, and it won't come from Intel. It will come from ARM! There are new high-performance ARM processors from Cavium and Qualcomm that will likely derail what could have been a massive win for AMD in the datacenter in 2018.

6) ARM will make significant gains in the datacenter and in HPC

ARM seemed to be the hottest topic at SC17. Every talk that had anything to do with ARM was packed. That may be because the Qualcomm Centriq 2400 processor was announced, and early benchmark numbers were released, a couple of weeks before the meeting. That, and Cray was showing off their new XC series supercomputer design based on the Cavium ThunderX2. See my recent post "ARM for Supercomputing -- a view from SC17" for more information. Low power consumption, high performance, flexible possibilities -- yes, ARM will get some major deployments in 2018. We may even see some workstation configurations.

7) Microsoft will release another ARM-based system and this time it will be successful

I suppose we already know that Microsoft will make another attempt at an ARM-based computing device. It will probably be another laptop/tablet, but it would be more interesting if it were more of a desktop system. My surprise prediction is that it will be successful. The reason it may work this time is that they will implement a low-level x86 emulation layer, which means it will be able to run "real" Windows applications. However, it could come to a crashing halt if Intel decides to sue to prevent the use of x86 emulation. I'm looking forward to seeing how this works out.

8) Microsoft Azure will gain market parity with Amazon AWS

This one is a real long shot! However, Microsoft has made amazing progress with their Azure cloud services. I expect to see several surprise announcements about Azure in 2018. One that was already announced at SC17 was "supercomputing as a service": they are going to attach a Cray supercomputer to Azure and basically rent it out through the Azure interface. That just makes me smile :-) One of the things about Microsoft at SC17 was that they were involved with the community. They almost seemed naive in that HPC crowd, but they were there at BOF meetings and panel discussions. The other cloud vendors were there too, but they seemed to keep to themselves. It's hard to explain -- Amazon and Google had a significant presence at the meeting, but Microsoft just seemed, to me at least, more engaged and eager somehow. It's actually pretty unlikely that anyone will catch up to Amazon AWS, but Microsoft seems to be the strongest competition and they are growing rapidly.

9) The USA will lose its technology dominance to China

The USA will fire up the big DOE supercomputers in 2018 and they will be great. However, China will likely fire up something that will again knock the USA out of the top spot on the Top500 list. It looks like machine learning, AI, and robotics will get huge support from the Chinese government in 2018. It will probably have a nationalistic feel to it in China, like the moon landing did for the US. I also expect them to do this on their own hardware designs. They will have their own CPUs and accelerators, and they will be really good.

10) Apple will bring out a wonderful modular Mac Pro and no one will care

This last one is kind of sad. But, just like AMD with the EPYC processors, Apple is 2 years too late updating their serious workstation computing platforms. The "creative" community, a large percentage of whom were die-hard Apple fans, finally gave up on Apple because Windows-based workstations were just so compelling ... and up-to-date! The new iMac Pro with the Intel Xeon Skylake-W processor and overall great components looks really good. If they follow up with a modular design for the Mac Pro (and not a dumpster instead of a trash-can) then they may win back some of the folks that walked away from them over the last couple of years. But my feeling is that they are too late for a comeback. Also, why should they even bother? They make buckets of money from their phones. High-performance workstations would be just a small niche for them.


OK, there you have my rants and predictions for computing in 2018. I will probably have a 50% success rate with these predictions. Maybe. In any case, I'm looking forward to the new year and seeing how things actually play out. One thing is for sure: it will be another crazy and interesting year for computing. I wish every company I have mentioned a great year ahead. Of course, I wish all of you the same best wishes!

Happy New Year and Happy Computing --dbk

Tags: 2018, Predictions
Jack

Regarding NVIDIA, I disagree for a number of reasons. First, I think NVIDIA is doing the right thing by all their customers. Gaming cards are not designed for data center use: they are not designed for 24/7 usage; they lack, as you pointed out, ECC VRAM; they have much smaller frame buffers than the cards meant for data center usage; and they lack tested and certified drivers. Using them in scenarios where they are not meant to be used can only hurt NVIDIA's reputation. When the gaming cards fail, due to memory corruption, out-of-memory errors, or hardware failures from nonstop use, the data center customers are not going to blame themselves; they are going to blame NVIDIA and drive up NVIDIA's support costs.

But let's assume that data center customers do decide that they will not be using NVIDIA anymore -- who will they turn to? It's not like there are any other players in this data center market segment. AMD is an also-ran; if they offered a compelling alternative, NVIDIA wouldn't have made the billions they have, and AMD wouldn't have barely scraped into the black for the first time in nearly a decade. Intel can barely compete, but in all honesty, for certain workloads it's NVIDIA or it's nothing.

Lastly, NVIDIA is looking out for the gaming community that it targets with its GeForce branded cards. If data centers gobble those cards up, they shrink the supply for gamers and raise prices through the roof, just like the crypto mining craze did for certain cards.

I support NVIDIA in what they are doing, and I think it will have a minimal impact on their bottom line.

Posted on 2017-12-31 06:09:29
Donald Kinghorn

I know the NVIDIA one was harsh, but I was not too happy that they dropped that little gem two days before Christmas or something. It was a wake-up call for some. I love NVIDIA, really -- they are doing great stuff! It turns out they are not being overly strict about that. I've burned up GeForce cards with compute, but not for a long time. I really love the new Titan V -- it's a great workstation card! ... been doing a bunch of testing again ...

They are starting to get some pressure from Intel, Google, and the FPGA community, but the CUDA ecosystem is really solid and GPUs are hard to beat.

I've historically advocated for Tesla in clustered systems but my old grad-student/post-doc frugality thanks them for awesome compute capability on GeForce.

Posted on 2018-05-08 15:33:37
J. Erickson

I have chuckled a little reading this. Merry Christmas everyone.

Posted on 2017-12-31 08:26:41
Jortiz3

Wonderful to see your predictions. Many people love to speculate, but to put hard-set predictions out for all the world to see takes more rigor (the next step would be to place money on those predictions!). Would love to see updates as the year progresses on the status of these predictions.

Posted on 2018-05-04 22:10:40
Donald Kinghorn

I haven't looked back at this for a while. It looks like I did pretty well with these! :-) By the end of the year it will be pretty solid, I think. I might take your suggestion and do a mid-year review in a month or so.

Posted on 2018-05-08 15:35:52