NVIDIA’s GeForce RTX 3080 and 3090 launched earlier this fall, and now the RTX 3070 has joined its siblings. How does it compare to the bigger RTX 30 Series cards? And how do they all stack up against the previous generation? In this article we take a look at how well they all fare in GPU-based rendering engines like Maxon Redshift.
NVIDIA’s GeForce RTX 30 Series cards are here, with NVIDIA boasting significant performance gains over the previous generation. The RTX 3080 launched last week, and now with the RTX 3090 released today we can compare these models to each other as well as to the older 20 Series to see how they stack up in GPU-based rendering engines like Maxon Redshift.
The RTX 3000 series cards are here, with NVIDIA boasting significant performance gains over the previous generation. With the RTX 3080 now launched, we can find out how large those gains are in GPU-based renderers like Maxon Redshift.
This second entry in a series of short articles on the best computer system specs for a variety of popular applications focuses on Redshift by Maxon.
Redshift is a GPU-based rendering engine, now owned by Maxon and available bundled with Cinema 4D – as well as in the form of plug-ins for other 3D applications. It was written to use NVIDIA’s CUDA parallel computing platform, and since NVIDIA recently refreshed their GeForce series with new 2060, 2070, and 2080 “SUPER” cards, we thought it would be a good time to re-test the whole RTX lineup.
Redshift is a GPU-based rendering engine built on NVIDIA’s CUDA parallel computing platform. We recently saw how GeForce RTX cards perform in this renderer, but now the Titan RTX is out with a staggering 24GB of memory onboard. That sounds great for rendering complex 3D scenes, but how does it actually perform? And are there any caveats?
GPU-based renderers like OctaneRender and Redshift use the video cards in a computer to process ray tracing and other calculations in order to create photo-realistic images and videos. The performance of an individual video card, or GPU, is known to impact rendering speed – as is the number of video cards installed in a single computer. But what about the connection between each video card and the rest of the system? This interconnect is called PCI Express and comes in a variety of speeds. In this article, we will look at how benchmarks for these programs perform across PCI-E 3.0 and 2.0 with x1, x4, x8, and x16 lanes.
We found previously that stacking multiple RTX 2080 video cards next to each other for multi-GPU rendering led to overheating and significant performance throttling, due to the dual-fan cooler NVIDIA has adopted as the standard on this generation of Founders Edition cards. Now that manufacturers like Asus are putting out single-fan, blower-style cards we can repeat our testing to see if the throttling issues are resolved and find out how well these video cards scale when using 1, 2, 3, or even 4 of them for GPU-based rendering in OctaneRender and Redshift.
Redshift is a GPU-based rendering engine, and the latest version 2.6.22 is compatible with NVIDIA’s Turing graphics architecture in the GeForce RTX 2080 and 2080 Ti cards. Let’s take a look at how these new GeForce models compare to the previous generation.
The new GeForce RTX series cards perform well in GPU-based rendering as individual cards, and they have great potential for the future thanks to their new RT cores. However, when stacking them together to measure multi-GPU scaling, we ran into some serious problems.