Read this article at https://www.pugetsystems.com/guides/2170
Dr Donald Kinghorn (Scientific Computing Advisor)

NVIDIA 3080Ti Compute Performance ML/AI HPC

Written on June 18, 2021 by Dr Donald Kinghorn

For computing tasks like Machine Learning and some Scientific computing, the RTX3080Ti is an alternative to the RTX3090 when its 12GB of GDDR6X is sufficient (compared to the 24GB available on the RTX3090). 12GB is in line with earlier NVIDIA GPUs that were "workhorses" for ML/AI, like the wonderful 2080Ti.
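
A quick way to check how much memory a card actually exposes before settling on a 12GB part (a minimal sketch using PyTorch, my illustration rather than anything from the article):

    # Minimal sketch (not from the article): list each GPU's total memory
    # to judge whether 12GB is enough for your models.
    import torch

    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB")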


Read this article at https://www.pugetsystems.com/guides/1983
Dr Donald Kinghorn (Scientific Computing Advisor)

Quad RTX3090 GPU Power Limiting with Systemd and Nvidia-smi

Written on November 24, 2020 by Dr Donald Kinghorn

This is a follow-up post to "Quad RTX3090 GPU Wattage Limited "MaxQ" TensorFlow Performance". This post will show you a way to have GPU power limits set automatically at boot by using a simple script and a systemd service unit file.
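
The post itself uses a shell script plus a systemd unit; purely as an illustrative Python equivalent of what such a boot script does (the power values are the ones from the companion post), it might look like:

    #!/usr/bin/env python3
    # Illustrative sketch only: the article uses a shell script invoked by
    # a systemd service unit. This sets the same kind of per-GPU limits.
    # Needs root, just like the nvidia-smi commands it wraps.
    import subprocess

    POWER_LIMIT_W = 280   # per-GPU cap used in the companion post
    NUM_GPUS = 4

    def set_power_limits():
        # Persistence mode keeps the driver loaded so the limits stick.
        subprocess.run(["nvidia-smi", "-pm", "1"], check=True)
        for gpu in range(NUM_GPUS):
            subprocess.run(
                ["nvidia-smi", "-i", str(gpu), "-pl", str(POWER_LIMIT_W)],
                check=True,
            )

    if __name__ == "__main__":
        set_power_limits()

A systemd service of Type=oneshot with ExecStart pointing at the script runs it once per boot.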


Read this article at https://www.pugetsystems.com/guides/1974
Dr Donald Kinghorn (Scientific Computing Advisor)

Quad RTX3090 GPU Wattage Limited "MaxQ" TensorFlow Performance

Written on November 13, 2020 by Dr Donald Kinghorn

Can you run four RTX3090s in a system under heavy compute load? Yes, by using nvidia-smi I was able to reduce the power limit on all four GPUs from 350W to 280W and achieve over 95% of maximum performance. The total power load "at the wall" was reasonable for a single power supply and a modest US residential 110V, 15A power line.
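
As a back-of-the-envelope check on why 280W caps make that work (my arithmetic, not measurements from the article):

    # Rough circuit math (illustrative; not measured values from the
    # article). US practice is to load a circuit to ~80% continuously.
    GPUS = 4
    LIMIT_W = 280                # per-GPU cap set with nvidia-smi
    CIRCUIT_W = 110 * 15         # 1650 W nominal for a 110V, 15A line
    SAFE_W = CIRCUIT_W * 0.80    # ~1320 W continuous budget

    gpu_load = GPUS * LIMIT_W    # 1120 W for the GPUs alone
    print(f"GPU load: {gpu_load} W of a {SAFE_W:.0f} W continuous budget")
    print(f"Headroom for the rest of the system: {SAFE_W - gpu_load:.0f} W")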


Read this article at https://www.pugetsystems.com/guides/1958
Dr Donald Kinghorn (Scientific Computing Advisor)

RTX3070 (and RTX3090 refresh) TensorFlow and NAMD Performance on Linux (Preliminary)

Written on October 29, 2020 by Dr Donald Kinghorn

The GeForce RTX3070 has been released. The RTX3070 is loaded with 8GB of memory, making it less suited for compute tasks than the 3080 and 3090 GPUs. We have some preliminary results for TensorFlow, NAMD and HPCG.


Read this article at https://www.pugetsystems.com/guides/1902
Dr Donald Kinghorn (Scientific Computing Advisor)

RTX3090 TensorFlow, NAMD and HPCG Performance on Linux (Preliminary)

Written on September 24, 2020 by Dr Donald Kinghorn

The second new NVIDIA RTX30 series card, the GeForce RTX3090, has been released. The RTX3090 is loaded with 24GB of memory, making it a good replacement for the RTX Titan... at significantly less cost! The performance for Machine Learning and Molecular Dynamics on the RTX3090 is quite good, as expected.


Read this article at https://www.pugetsystems.com/guides/1885
Dr Donald Kinghorn (Scientific Computing Advisor)

RTX3080 TensorFlow and NAMD Performance on Linux (Preliminary)

Written on September 17, 2020 by Dr Donald Kinghorn

The much-anticipated NVIDIA GeForce RTX3080 has been released. How good is it with TensorFlow for machine learning? How about molecular dynamics with NAMD? I've got some preliminary numbers for you!


Read this article at https://www.pugetsystems.com/guides/1551
Dr Donald Kinghorn (Scientific Computing Advisor)

2 x RTX2070 Super with NVLINK TensorFlow Performance Comparison

Written on August 14, 2019 by Dr Donald Kinghorn

This is a short post showing a performance comparison with the RTX2070 Super and several GPU configurations from recent testing. The comparison is with TensorFlow running ResNet-50 and Big-LSTM benchmarks.
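
As a generic illustration of the kind of two-GPU data-parallel ResNet-50 run being measured (a sketch in current TensorFlow, not the benchmark scripts behind the article's numbers):

    # Generic multi-GPU data-parallel ResNet-50 sketch (illustrative only;
    # not the exact benchmark code used for the article's numbers).
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # replicate across all GPUs
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        model = tf.keras.applications.ResNet50(weights=None, classes=1000)
        model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

    # Synthetic data just to exercise the GPUs, as throughput benchmarks do.
    images = tf.random.uniform((64, 224, 224, 3))
    labels = tf.random.uniform((64,), maxval=1000, dtype=tf.int32)
    ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(32).repeat()

    model.fit(ds, steps_per_epoch=100, epochs=1)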


Read this article at https://www.pugetsystems.com/guides/887
Dr Donald Kinghorn (Scientific Computing Advisor)

PCIe X16 vs X8 for GPUs when running cuDNN and Caffe

Written on January 16, 2017 by Dr Donald Kinghorn

Does PCIe X16 give better performance than X8 for training models with Caffe when using cuDNN? Yes, but not by much!
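
To see which link width your cards are actually negotiating, nvidia-smi can report it directly; a minimal sketch:

    # Minimal sketch: report current vs. maximum PCIe link width per GPU,
    # so you can tell whether a card is running at X16 or X8.
    import subprocess

    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,name,pcie.link.width.current,pcie.link.width.max",
         "--format=csv,noheader"],
        text=True,
    )
    for line in out.strip().splitlines():
        idx, name, cur, mx = (f.strip() for f in line.split(","))
        print(f"GPU {idx} ({name}): running x{cur} of x{mx}")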


Read this article at https://www.pugetsystems.com/guides/870
Dr Donald Kinghorn (Scientific Computing Advisor)

NVIDIA DIGITS with Caffe - Performance on Pascal multi-GPU

Written on December 23, 2016 by Dr Donald Kinghorn

NVIDIA's Pascal GPUs have twice the computational performance of the last generation. A great use for this compute capability is training deep neural networks. We have tested NVIDIA DIGITS 4 with Caffe on 1 to 4 Titan X and GTX 1070 cards. Training was for classification of a million-image dataset from ImageNet. Read on to see how it went.


Read this article at https://www.pugetsystems.com/guides/825
Dr Donald Kinghorn (Scientific Computing Advisor)

Install Ubuntu 16.04 or 14.04 and CUDA 8 and 7.5 for NVIDIA Pascal GPU

Written on August 29, 2016 by Dr Donald Kinghorn

You got your wonderful new NVIDIA Pascal GPU ... maybe a GTX 1080, 1070, or Titan X(P) ... and you want to set up a CUDA environment for some dev work, or maybe try some "machine learning" code with your new card. What are you going to do? At the time of this writing CUDA 8 is still in RC, and the deb and rpm packages have drivers that don't work with Pascal. I'll walk through the tricks you need to do a manual setup of CUDA 7.5 and 8.0 on top of Ubuntu 16.04 or 14.04 that will work with the new Pascal-based GPUs.
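
Once a manual install like this is done, a quick sanity check helps; a sketch, assuming the standard tool locations (e.g. /usr/local/cuda/bin on PATH):

    # Sanity-check sketch for a manual CUDA install (illustrative only):
    # confirm the driver sees the GPU and the toolkit compiler is found.
    import shutil
    import subprocess

    def check(tool, args):
        path = shutil.which(tool)
        if path is None:
            print(f"{tool}: not found -- is /usr/local/cuda/bin on PATH?")
            return
        out = subprocess.check_output([tool] + args, text=True)
        print(f"--- {tool} ({path}) ---")
        print(out.strip())

    check("nvidia-smi", ["-L"])      # lists the GPUs the driver can see
    check("nvcc", ["--version"])     # reports the installed toolkit version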


Read this article at https://www.pugetsystems.com/guides/832
Dr Donald Kinghorn (Scientific Computing Advisor)

NVIDIA Titan GPUs (3 generations) - CUDA 8 rc performance on Ubuntu 16.04

Written on August 12, 2016 by Dr Donald Kinghorn

I have a Titan Black, a Titan X (Maxwell), and a new Titan X (Pascal) in a system for a quick CUDA performance test. The install is on Ubuntu 16.04 with CUDA 8.0rc. We'll look at nbody from the CUDA samples code and NAMD Molecular Dynamics. It is stunning to see how much CUDA performance has increased on these wonderful GPUs in just 3 years.
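
For reference, the nbody sample has a built-in benchmark mode; a small runner sketch (assuming the sample is built and sits at ./nbody, with one test per card):

    # Illustrative runner for the CUDA samples nbody benchmark. Assumes
    # the sample is built and the binary is at ./nbody; -benchmark,
    # -numbodies and -device are stock flags of the sample.
    import subprocess

    for device in range(3):  # Titan Black, Titan X (Maxwell), Titan X (Pascal)
        result = subprocess.run(
            ["./nbody", "-benchmark", "-numbodies=256000", f"-device={device}"],
            capture_output=True, text=True,
        )
        # The sample prints its GFLOP/s figure near the end of its output.
        for line in result.stdout.splitlines():
            if "GFLOP/s" in line:
                print(f"device {device}: {line.strip()}")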