Redshift 2.6.11 Multi-GPU Performance Scaling
Written on June 18, 2018 by William George
Redshift is a production-quality, GPU-accelerated renderer. Traditionally this type of rendering was done on CPUs, but graphics processors (GPUs) are ideal for highly parallel tasks like rendering - and it is easier to boost performance by fitting multiple video cards in a single computer than by fitting multiple CPUs.
Speaking of multiple cards, how well does rendering speed scale across multiple GPUs in Redshift? Are there diminishing returns as more cards are added? We are putting Redshift 2.6.11 to the test, looking at scaling from one to four video cards in a single workstation.
To see how increasing the number of video cards in a system affects performance in Redshift, we ran the benchmark included in the demo version of Redshift 2.6.11 with 1, 2, 3, and 4 NVIDIA GeForce GTX 1080 Ti video cards. This benchmark uses all available GPUs to render a single, still image. For animations, there are also methods to assign a different frame to each video card - which may be more efficient in some situations, but is outside the scope of the benchmarking tool Redshift provides.
On the hardware side, we wanted to use a high clock speed processor so that the video cards could really shine. We also needed a platform that would support as many video cards as possible in a large tower workstation. Given that combination of goals, the configuration which made the most sense was Intel's Xeon W - specifically, the W-2125 processor on a Gigabyte MW51-HP0 board. That provided the right PCI-Express slot layout for up to four GPUs, and the Xeon W-2125 runs fast: 4.0GHz base and up to 4.5GHz turbo.
If you would like full details on the hardware configuration we tested on, they are listed below.
CPU: Intel Xeon W-2125 4.0GHz (4.5GHz Turbo) 4 Core
RAM: 8x Kingston DDR4-2666 32GB ECC Reg (256GB total)
GPU: 1 - 4x NVIDIA GeForce GTX 1080 Ti 11GB
Hard Drive: Samsung 960 Pro 1TB M.2 PCI-E x4 NVMe SSD
OS: Windows 10 Pro 64-bit
PSU: EVGA SuperNova 1600W P2
Software: Redshift 2.6.11 Demo Benchmark (Age of Vultures scene)
Here are the Redshift 2.6.11 benchmark render times with 1, 2, 3, and 4 of the GeForce GTX 1080 Ti 11GB graphics cards:
Looked at another way, here is how adding video cards increased rendering performance - shown as a percentage of the speed of a single card:
As demonstrated above, rendering performance in Redshift scales very well as additional video cards are added. It isn't quite perfect, linear scaling - there is some level of diminishing returns - but it is still more than enough to justify multi-GPU workstations.
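One way to put a number on "not quite perfect scaling" is scaling efficiency: the speedup over a single card, divided by the number of cards. A minimal sketch of that calculation - the render times below are hypothetical placeholders, not our measured benchmark results:

```python
# Scaling efficiency = (single-card time / n-card time) / n.
# These render times are made-up placeholders, not our benchmark numbers.
render_times = {1: 600.0, 2: 310.0, 3: 215.0, 4: 165.0}  # seconds per GPU count

baseline = render_times[1]
for gpus, t in sorted(render_times.items()):
    speedup = baseline / t          # how many times faster than one card
    efficiency = speedup / gpus     # 1.0 would be perfectly linear scaling
    print(f"{gpus} GPU(s): {speedup:.2f}x speedup, {efficiency:.0%} efficiency")
```

With numbers like these, efficiency drifts down as cards are added - which is exactly the diminishing-returns pattern described above.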
Performance in Redshift scales very well across multiple GPUs - but that statement alone can lead to incorrect conclusions. Doubling the number of video cards in a system almost doubles rendering performance, but does *not* double the price of the computer. Much of the workstation stays the same as more video cards are added, so the percentage increase in price for an additional card is usually smaller than the percentage increase in Redshift performance it provides. When looking at the total price of a system, a few lower-cost cards can often outpace one or even two top-end GPUs - so multiple video cards are the way to go for the best value in Redshift.
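The value argument above can be sketched with simple arithmetic: compare whole-system cost against rendering speedup as cards are added. The prices and speedup figures below are hypothetical placeholders chosen only to illustrate the relationship, not quotes or measurements:

```python
# Hypothetical numbers: a base workstation cost, a per-GPU cost, and
# near-linear speedups as placeholders (not our benchmark results).
base_cost = 3000.0   # workstation without any GPUs (assumed)
gpu_cost = 700.0     # per additional GTX 1080 Ti (assumed)
speedups = {1: 1.00, 2: 1.94, 3: 2.79, 4: 3.64}

single_gpu_price = base_cost + gpu_cost
for gpus, speedup in sorted(speedups.items()):
    total = base_cost + gpus * gpu_cost
    price_ratio = total / single_gpu_price  # vs. the single-GPU system
    print(f"{gpus} GPUs: {speedup:.2f}x the speed for {price_ratio:.2f}x the price")
```

Because only the GPU line item grows while the rest of the system cost stays fixed, the price ratio rises far more slowly than the speedup - which is why adding cards tends to improve value per dollar.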