Unreal Engine 4.25 - NVIDIA GeForce RTX 3070, 3080 & 3090 Performance

Written on October 29, 2020 by Kelly Shipman
TL;DR: NVIDIA GeForce RTX 3070, 3080 & 3090 performance in Unreal Engine
In virtually every test we performed, the RTX 3090 and 3080 outperformed both the Titan RTX and 2080 Ti by a wide margin while costing significantly less. At 4K, the 3090 averaged 74% higher FPS than the Titan RTX and was about 15% faster than the 3080, which roughly lines up with the CUDA core count of each card. Some may look at the price difference between the 3080 and 3090 and question whether the performance warrants it; for creative professionals, the bigger deciding factor will be whether they need the extra VRAM. Meanwhile, the 3070 is neck and neck with the 2080 Ti, as long as VRAM is not a limiting factor, while costing less than half as much.
On September 1st, NVIDIA announced the new GeForce RTX 30 Series, touting major advancements in performance and efficiency. While gaming was a focus of the launch, applications like Unreal Engine have moved on from being “just a game engine” to become an important tool across multiple industries. As such, we’ll take a look at the performance of scenes tailored to Architecture, Cinematic Rendering, and Virtual Production.
If you want to see the full specs for the new GeForce RTX 3070, 3080, and 3090 cards, we recommend checking out NVIDIA's page for the new 30 Series. But at a glance, here are what we consider to be the most important specs:
| Card | VRAM | CUDA Cores | Boost Clock | Power | MSRP |
|---|---|---|---|---|---|
| RTX 2070S | 8GB | 2,560 | 1.77 GHz | 215W | $499 |
| RTX 3070 | 8GB | 5,888 | 1.70 GHz | 220W | $499 |
| RTX 2080 Ti | 11GB | 4,352 | 1.55 GHz | 250W | $1,199 |
| RTX 3080 | 10GB | 8,704 | 1.71 GHz | 320W | $699 |
| Titan RTX | 24GB | 4,608 | 1.77 GHz | 280W | $2,499 |
| RTX 3090 | 24GB | 10,496 | 1.73 GHz | 350W | $1,499 |
While not labeled as a “Titan” product, NVIDIA's presentation positioned the 3090 as a replacement for the Titan RTX. Given the amount of VRAM available, this makes sense: 24GB is more than any game would need, but it has many professional uses. Why NVIDIA added this card to the GeForce line rather than the Titan line is anyone's guess. The 3080 and 3070 both bring significant hardware upgrades over their previous generation counterparts, with double the CUDA core count yet roughly the same amount of VRAM.
We are still very interested in how Ampere-based Quadro cards will perform, and when we are able to test them we will post follow-up articles with the results.
Unreal Engine Workstations
Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.
Listed below are the specifications of the system we used for our testing:
| Component | Hardware / Software |
|---|---|
| CPU | AMD TR 3970X 32 Core |
| CPU Cooler | Noctua NH-U14S TR4-SP3 |
| Motherboard | Gigabyte TRX40 AORUS PRO WIFI |
| RAM | 4x DDR4-2933 16GB (64GB total) |
| Video Card | Gigabyte GeForce RTX 3080 OC 10GB |
| | NVIDIA Titan RTX 24GB |
| | NVIDIA GeForce RTX 2080 Ti 11GB |
| | NVIDIA GeForce RTX 2080 SUPER 8GB |
| | NVIDIA GeForce RTX 2070 SUPER 8GB |
| | NVIDIA GeForce RTX 2060 SUPER 8GB |
| Hard Drive | Samsung 960 Pro 1TB |
| Software | Windows 10 Pro 64-bit (Ver. 2004) |
| | Unreal Engine (Ver. 4.25.3) |
*All the latest drivers, OS updates, BIOS, and firmware applied as of September 7th, 2020
Big thank you to Gigabyte for providing the GeForce RTX™ 3080 GAMING OC 10G used in our testing!
To test each GPU, we used the “typical system used at Epic”, specifically one built around the AMD Threadripper 3970X. Thanks to its speed at compiling shaders, building lighting, etc., Threadripper has become the go-to CPU for Unreal development.
For the testing itself, we used four sample scenes from the Marketplace, with some modifications to make testing easier. These sample scenes are a much better representation of what someone in Virtual Production or Architecture may work with than a video game. Eventually we would like to have custom maps for these tests, but given the time constraints of this launch, these scenes work just fine. I’ll detail each scene and any modifications below.
Overall Unreal Engine Performance Analysis
With Unreal Engine, GPU performance is one of the key metrics regardless of industry; the draw of game engines is their real-time nature. The game industry is concerned with how many frames per second a title can maintain, while filmmakers want to know how many effects and large textures they can throw at a scene while holding their target 24 or 30 frames per second. These are two sides of the same coin. Many of the examples below run at frame rates that would be unacceptable to gamers, but the key takeaway is how much of an improvement the new cards do, or do not, provide.
These graphs average the FPS for a specific resolution across the various scenes, then normalize the results to the 2080 Ti. To get the average FPS for a scene, a camera sequence was scripted to cover a range of lighting scenarios and polygon counts. The sequence auto-starts when selecting "Play in Editor" at the desired resolution, and a script counts how many frames were rendered over the duration of the test, giving us the average FPS. We’ll go into more detail on each specific test later. This gives us a broad look at what kind of performance increase to expect.
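The math behind these graphs can be sketched roughly as follows. This is a minimal illustration only: the actual test runs as a scripted camera sequence inside Unreal Editor, and the function names and FPS numbers here are hypothetical.

```python
def average_fps(frames_rendered: int, sequence_seconds: float) -> float:
    """Average FPS = total frames rendered / sequence duration."""
    return frames_rendered / sequence_seconds

def normalize_to_baseline(avg_fps: dict, baseline: str = "RTX 2080 Ti") -> dict:
    """Express each card's average FPS as a percentage of the 2080 Ti (= 100%)."""
    base = avg_fps[baseline]
    return {card: round(100 * fps / base, 1) for card, fps in avg_fps.items()}

# Example: a 60-second sequence that rendered 2,430 frames averages 40.5 FPS.
print(average_fps(2430, 60))  # -> 40.5

# Hypothetical per-card averages, normalized so the 2080 Ti = 100%.
print(normalize_to_baseline({"RTX 2080 Ti": 40.5, "RTX 3080": 68.0, "RTX 3090": 78.3}))
```

Counting frames over a fixed scripted sequence, rather than sampling instantaneous FPS, smooths over the lighting and polygon-count variation the camera path deliberately introduces.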
As you can see, the new NVIDIA RTX 3090 and 3080 show significant improvements across the board, especially at higher resolutions. Not only are they faster than the Titan RTX, but they cost much less. Meanwhile, the 3070 pushes 2080 Ti levels of performance at less than half the price. Unreal Engine will use every bit of power you give it, so this kind of performance improvement is in line with what we expected.
The first scene we will look at is “Virtual Studio” created by Epic. Virtual Production in News and Sports broadcast has exploded recently. This represents a fairly typical broadcast setup, with a virtual set, video wall, and hooks for a live camera feed. For this test, I did not connect a live camera as that has the potential to introduce issues outside of the scope of what we are testing. By default this scene does not use Ray Tracing, however I ran the tests both with and without so we could see how well the cards perform in both situations.
Since this is a fairly basic scene, playback at 1080p with ray tracing disabled is bottlenecked by the CPU. A note about the 1080p results: due to the staggered launch of these cards, each launch came with a new driver. There appears to be a performance improvement from these drivers, but with only two days between receiving the driver from NVIDIA and publication, there wasn't enough time to retest all the older cards. I suspect that once I am able to, the 1080p scores will be roughly equal again.
Once we add in ray tracing and increase the resolution, we see the 3090 and 3080 take a commanding lead. At 4K with ray tracing enabled, the 3090 nearly doubles the FPS of the Titan RTX, and the 3080 similarly leads the previous generation by a significant margin. Keep in mind that the Titan has an MSRP more than three times that of the 3080. The 3070 just edges out the 2080 Ti.
The next scene is the Abandoned Apartment from Quixel. Epic acquired Quixel last year and brought their library of photogrammetry-based materials to Unreal. This scene uses numerous 8K and 4K textures as well as some high poly count models, and is an example of what someone in Virtual Production or Cinematic Rendering may use. It wouldn’t be a stretch to extrapolate these results to an Architecture Visualization workflow, since they’d be using similar texture sizes; they just tend to want the apartment to look new and less abandoned.
For this, I only rendered with ray tracing enabled, as that is the big draw of Unreal for the above use cases. Again, the new 30 Series cards take a commanding lead, with almost double the frame rate at 4K over the Titan RTX and 2080 Ti. The 3090 does hold the lead here, but not by as much as may have been expected.
In addition to the FPS test, I rendered out a 4K cinematic to see how these cards perform when not trying to be “real time.” As you can see, the results are interesting.
It appears that the render time for this 43-second video clip plateaued at 3:40. I suspect a platform limitation is at play, and I’ll dig into it more when I do CPU testing. A quick look at Task Manager during this test shows only a single CPU thread active, so there may be something to this.
We have another scene from Quixel, their Goddess Temple. Much like Abandoned Apartment, this scene features numerous large textures. One major difference is this scene has a lot of overlapping shadow casting lights and particle effects, which provide their own unique challenges.
Previously we saw the 3080’s lead narrow; here the 3090 continues to hold its lead at 4K. That said, the 3080 is still edging out a video card that costs more than three times as much. The interplay between processing power and VRAM becomes visible in the 4K graph: wherever we see a big jump in performance, there is also a big jump in VRAM within that architecture. The 3070 has only 8GB of VRAM, just like the previous generation’s 2070 and 2080, so it takes a performance hit at 4K. If you are working with a large number of very high resolution textures and trying to output at 4K or greater, the extra VRAM of the 3090 will really be useful.
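To get a rough sense of why 8GB fills up quickly with scenes like these, here is a back-of-the-envelope estimate. Note the assumptions: this counts uncompressed RGBA textures, while in practice Unreal streams mips and uses block compression, which shrinks these numbers considerably.

```python
def texture_bytes(width: int, height: int, bytes_per_pixel: int = 4,
                  mip_chain: bool = True) -> int:
    """Uncompressed size of a 2D texture in bytes.

    A full mip chain adds roughly one third on top of the base level
    (the series 1/4 + 1/16 + ... converges to 1/3).
    """
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mip_chain else base

# An uncompressed 8K RGBA texture is 256 MB before mips...
print(texture_bytes(8192, 8192, mip_chain=False) // 2**20)  # -> 256

# ...so only a couple dozen of them would exhaust an 8GB card, before
# counting geometry, render targets, and ray tracing acceleration structures.
print(texture_bytes(8192, 8192) // 2**20)  # -> 341
```

Even with compression cutting these figures by 4-8x, a scene stacked with 8K and 4K materials plus a 4K output resolution leaves little headroom on an 8GB card.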
Once again, cinematic rendering times seem to be hitting some limitation other than raw GPU performance. The times line up closer to VRAM amount than anything else.
Our last scene is the ArchViz Interior from Epic. The goal with Architecture Visualization is photorealistic materials and lighting. Once again, there are a lot of high resolution textures in use, reflective surfaces, and a higher than normal Global Illumination ray count. This scene is very demanding even for the most high-end video cards. I ran this both with and without ray tracing.
The 3090 once again leads in every single test, followed by the 3080. Running this scene at 4K with ray tracing enabled takes over 16GB of VRAM, causing every card to crash except the Titan RTX and 3090 with their 24GB of VRAM. At this point it is only two data points, and the numbers are so low that it's hard to determine an exact relationship, but hopefully once we can do some Quadro testing we’ll have a better understanding of these results.
For rendering out a 4K cinematic, only the 3090 and Titan RTX were able to complete the test without crashing. As you can see, there is a stark difference between the two. Again, hopefully we'll see more cards with more VRAM soon in the Quadro line.
How well do the NVIDIA GeForce RTX 3070, 3080, and 3090 perform in Unreal Engine?
In virtually every test we performed, the RTX 3090 and 3080 outperformed both the Titan RTX and 2080 Ti by a wide margin while costing significantly less. At 4K, the 3090 averaged 74% higher FPS than the Titan RTX and was about 15% faster than the 3080, which roughly lines up with the CUDA core count of each card. Some may look at the price difference between the 3080 and 3090 and question whether the performance warrants it; for creative professionals, the bigger deciding factor will be whether they need the extra VRAM. On the lower end, the RTX 3070 performs roughly on par with a 2080 Ti as long as VRAM isn’t an issue.
For users that don’t have a specific need for Quadro for things like Genlock, the RTX 3090 and 3080 are monsters in Unreal Engine and have easily taken the crown as the best option. We are anxiously waiting for Ampere Quadro cards and will get you test results as soon as they are available.