Quad RTX3090 GPU Wattage Limited “MaxQ” TensorFlow Performance

Can you run 4 RTX 3090s in a system under heavy compute load? Yes. Using nvidia-smi, I was able to reduce the power limit on all 4 GPUs from 350W to 280W and still achieve over 95% of maximum performance. The total power draw “at the wall” was reasonable for a single power supply and a modest US residential 110V, 15A power line.
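
As a point of reference, here is a minimal sketch of applying that kind of power cap from a script (run as root). The 280W target and the four GPU indices match the setup described above, so adjust both for your own hardware:

```python
# Minimal sketch: cap the power limit on four GPUs with nvidia-smi.
# Requires root. 280 W (down from the RTX 3090 default of 350 W) and the
# GPU indices 0-3 match the setup described above; adjust for your system.
import subprocess

POWER_LIMIT_W = 280
GPU_INDICES = range(4)

# Persistence mode keeps the driver initialized so the setting applies cleanly.
subprocess.run(["nvidia-smi", "-pm", "1"], check=True)

for idx in GPU_INDICES:
    # nvidia-smi -i <gpu> -pl <watts> sets the board power cap in watts.
    subprocess.run(
        ["nvidia-smi", "-i", str(idx), "-pl", str(POWER_LIMIT_W)],
        check=True,
    )
```

Note that a limit set with -pl does not survive a reboot, so it needs to be reapplied, for example from a startup script.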

How to Install TensorFlow with GPU Support on Windows 10 (Without Installing CUDA) UPDATED!

This post is a needed update to a post I wrote nearly a year ago (June 2018) with essentially the same title. This time I have included more details to help you avoid many of the “gotchas” that some people hit with the old guide. It is a detailed guide for getting the latest TensorFlow working with GPU acceleration without needing to do a CUDA install.
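
The snippet below is not from the guide itself; it is just a quick sanity check you can run afterwards to confirm that the TensorFlow build in your environment actually sees the GPU (shown in TensorFlow 2.x style; the 1.x releases current at the time used tf.test.is_gpu_available() instead):

```python
# Post-install sanity check: does this TensorFlow build see the GPU?
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```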

RTX 2080Ti with NVLINK – TensorFlow Performance (Includes Comparison with GTX 1080Ti, RTX 2070, 2080, 2080Ti and Titan V)

More machine learning testing with TensorFlow on NVIDIA’s RTX GPUs. This post adds dual RTX 2080 Ti cards with NVLINK and the RTX 2070 to the other testing I’ve recently done. TensorFlow performance with 2 RTX 2080 Ti cards is very good! Also, the NVLINK bridge between the two cards gives nearly 100 GB/sec of bidirectional bandwidth!
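
For readers who just want to see how two GPUs get used from TensorFlow, here is an illustrative tf.distribute.MirroredStrategy sketch (TF 2.x style) with a toy Keras model. It is not the benchmark code behind the numbers in the post:

```python
# Illustrative only: replicate a toy Keras model across two GPUs with
# tf.distribute.MirroredStrategy. Not the benchmark code used in the post.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(devices=["/GPU:0", "/GPU:1"])
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Synthetic data, just enough to exercise both GPUs.
x = np.random.rand(2048, 32).astype("float32")
y = np.random.randint(0, 10, size=(2048,)).astype("int64")
model.fit(x, y, batch_size=256, epochs=1)
```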

NVLINK on RTX 2080 TensorFlow and Peer-to-Peer Performance with Linux

NVLINK is one of the more interesting features of NVIDIA’s new RTX GPUs. In this post I’ll take a look at NVLINK performance between 2 RTX 2080 GPUs, along with a comparison against the single-GPU testing I’ve recently done. The testing will be a simple look at the raw peer-to-peer data transfer performance and a couple of TensorFlow job runs with and without NVLINK.
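
Precise peer-to-peer numbers come from dedicated tools (the CUDA samples include p2pBandwidthLatencyTest for exactly this), but as a rough illustration, the sketch below times repeated GPU-to-GPU copies from TensorFlow. Treat the result as an effective copy rate, not a true NVLINK bandwidth measurement:

```python
# Rough illustration: time repeated GPU0 -> GPU1 tensor copies in TensorFlow.
# This gives an effective copy rate only; a proper peer-to-peer measurement
# would use something like the CUDA p2pBandwidthLatencyTest sample.
import time
import tensorflow as tf

MB = 256                              # size of each copy in MiB
ITERS = 20
elems = MB * 1024 * 1024 // 4         # float32 elements

with tf.device("/GPU:0"):
    src = tf.random.uniform([elems], dtype=tf.float32)

# Warm-up so allocation and peer setup stay out of the timed loop.
with tf.device("/GPU:1"):
    warm = tf.identity(src)
_ = tf.reduce_sum(warm).numpy()

start = time.perf_counter()
for _ in range(ITERS):
    with tf.device("/GPU:1"):
        dst = tf.identity(src)        # cross-device copy
    _ = tf.reduce_sum(dst).numpy()    # block until the copy has landed
elapsed = time.perf_counter() - start

print(f"~{MB * ITERS / elapsed / 1024:.1f} GiB/s effective GPU0 -> GPU1")
```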