In this post I address the question that’s been on everyone’s mind: Can you run a state-of-the-art Large Language Model on-prem? With *your* data and *your* hardware? At a reasonable cost?
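Just to make the question concrete: a minimal sketch of what "on-prem" inference can look like with the Hugging Face Transformers library, assuming a model whose weights already live on your own disk (the path below is a placeholder, not anything from this post):

```python
# Minimal sketch of on-prem text generation with Hugging Face Transformers.
# Because the model path points at local weights, no prompt or document data
# leaves your machine.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/path/to/local-model",  # hypothetical local path; substitute your own download
    device_map="auto",             # spread layers across whatever local GPUs you have
)

print(generator("Summarize our internal report:", max_new_tokens=64)[0]["generated_text"])
```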
TensorFlow Performance with 1-4 GPUs — RTX Titan, 2080Ti, 2080, 2070, GTX 1660Ti, 1070, 1080Ti, and Titan V
I have updated my TensorFlow performance testing. This post contains up-to-date versions of all of my testing software and includes results for 1 to 4 RTX and GTX GPUs. It gives a good comparative overview of most of the GPUs that are useful in a workstation intended for machine learning and AI development work.
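The testing software itself is in the post; purely as a rough illustration of what a 1-to-4-GPU throughput test looks like, here is a toy example assuming TensorFlow 2.x with Keras and `tf.distribute.MirroredStrategy` (the model and batch size are illustrative, not my benchmark configuration):

```python
# Toy multi-GPU throughput test: a small CNN on synthetic data, replicated
# across all visible GPUs with MirroredStrategy. Global batch size scales
# with the number of replicas so each GPU sees the same per-device batch.
import time
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1000),
    ])
    model.compile(
        optimizer="sgd",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

batch = 64 * strategy.num_replicas_in_sync
images = tf.random.uniform((batch, 224, 224, 3))
labels = tf.random.uniform((batch,), maxval=1000, dtype=tf.int32)
ds = tf.data.Dataset.from_tensors((images, labels)).repeat(100)  # 100 steps

start = time.time()
model.fit(ds, epochs=1, verbose=0)
print(f"{100 * batch / (time.time() - start):.1f} images/sec")
```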
Intel Xeon W-3175X and i9 9990XE Linpack and NAMD on Ubuntu 18.04
There are two recent Intel processors that are really strange: the 28-core Xeon W-3175X and the overclocked 14-core Core i9 9990XE. I was able to get a little time in on these processors. I ran a couple of numerical compute performance tests with the Intel MKL Linpack benchmark and NAMD. I used the same system image that I had used recently to look at three Intel 8-core processors, so I will include those results here as well. **There will be results for the W-3175X, 9990XE, 9800X, W-2145, and 9900K.**
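For context on what the Linpack numbers mean: Linpack times the solution of a large dense linear system Ax = b and converts that time to GFLOPS using the standard operation count of (2/3)N³ + 2N². A rough NumPy sketch of the same measurement follows; this is not Intel's optimized Linpack binary, which uses tuned MKL kernels and will report considerably higher figures:

```python
# Linpack-style measurement: time a dense NxN solve and report GFLOPS using
# the conventional Linpack FLOP count of (2/3)N^3 + 2N^2.
import time
import numpy as np

n = 10_000
rng = np.random.default_rng(42)
a = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.time()
x = np.linalg.solve(a, b)          # LU factorization plus triangular solves
elapsed = time.time() - start

flops = (2 / 3) * n**3 + 2 * n**2
print(f"N={n}: {elapsed:.2f} s, {flops / elapsed / 1e9:.1f} GFLOPS")
print("residual:", np.linalg.norm(a @ x - b) / np.linalg.norm(b))
```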
RTX Titan TensorFlow performance with 1-2 GPUs (Comparison with GTX 1080Ti, RTX 2070, 2080, 2080Ti, and Titan V)
I’ve done some testing with two NVIDIA RTX Titan GPUs running machine learning jobs with TensorFlow. The RTX Titan is a great card, but there is good news and bad news.
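Before a 2-GPU run it is worth confirming that TensorFlow actually sees both cards and won't grab all 24 GB on each at startup; a small sketch, assuming the TF 2.x config API:

```python
# Confirm both RTX Titans are visible and enable memory growth so TensorFlow
# allocates GPU memory on demand instead of reserving it all up front.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
print(f"{len(gpus)} GPU(s) visible:", [g.name for g in gpus])
```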