Dissecting the Unreal Engine 5 Demo

This week we got a glimpse of what the next generation of game engines will be capable of when Epic revealed Unreal Engine 5. A lot of amazing things were shown in action, but some of the talking points left me with a few questions. Let's dive into the reveal and break it down.

PlayStation 5 = AMD

First, and this kinda flew under the radar for most people, they say this demo is running on a PlayStation 5, which is 100% AMD hardware, both CPU and GPU. Last year Nvidia began pushing real-time ray tracing pretty hard. While not explicitly stated, this looks to be AMD’s response. There’s still no word on whether the PS5 has dedicated ray tracing hardware like Nvidia does, or if Epic is doing this all in software. It does show that the industry as a whole is embracing ray tracing (as it should). I’m looking forward to hearing from AMD what they contributed and what they have in store for their next generation of video cards.

Nanite

“There are over a billion triangles of source geometry in each frame that Nanite crunches down losslessly to around 20 million drawn triangles.” They then show a visualization of how many triangles are on screen, but it’s hard to tell the exact count from it. They say some triangles are the size of pixels, so let’s take a 4K screen: that’s 3840 x 2160, or 8,294,400 pixels. I’m guessing it’s dynamically reducing the triangle count based on distance from the camera, something that is pretty common with terrain now, just applied to everything in the scene. If this works as they say, I’m curious how low-end hardware will handle it. Will it just be more aggressive, drawing even fewer triangles? Will artists have any control over the process? One of the key goals when making a traditional LoD is to make sure it maintains the correct silhouette and that key details remain readable. The last thing you want is a hero object off in the distance getting chopped up in a way you didn’t intend.
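
For reference, here’s a minimal sketch of the traditional distance-based LoD selection I’m comparing it to (the terrain-style approach, not Nanite’s actual algorithm, which Epic hasn’t detailed): pick a coarser mesh when the object’s on-screen footprint shrinks. The mesh names and pixel thresholds are made up for illustration.

```python
import math

# Hypothetical LoD table: (max screen-space size in pixels, mesh variant).
# A smaller on-screen footprint lets us draw a coarser mesh.
LOD_TABLE = [
    (50, "statue_lod3"),        # tiny on screen -> a few hundred triangles
    (200, "statue_lod2"),
    (600, "statue_lod1"),
    (math.inf, "statue_lod0"),  # full-detail mesh
]

def projected_size_px(object_radius, distance, fov_y_rad, screen_height_px):
    """Approximate how many pixels tall the object's bounding sphere appears."""
    if distance <= 0:
        return float("inf")
    angular_size = 2.0 * math.atan(object_radius / distance)
    return angular_size / fov_y_rad * screen_height_px

def pick_lod(object_radius, distance, fov_y_rad=math.radians(60), screen_height_px=2160):
    size_px = projected_size_px(object_radius, distance, fov_y_rad, screen_height_px)
    for max_px, mesh in LOD_TABLE:
        if size_px <= max_px:
            return mesh
    return LOD_TABLE[-1][1]

# A 2 m statue viewed from 5 m vs. 100 m away:
print(pick_lod(object_radius=1.0, distance=5.0))    # statue_lod0
print(pick_lod(object_radius=1.0, distance=100.0))  # statue_lod3
```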

This shows how many triangles are on screen. It's a lot.

Later in the demo, they show a statue imported directly from Zbrush with more than 33 million triangles. Then they add, “no baking normal maps, and no authored LoDs.” My first thought is, “are you sure you don’t want any normal maps?” Normal maps do way more than just make a low poly object appear to be high poly. Sure, I could see the base geometry being dense enough to not need a normal map, but what about the fine-grained surface detail? Let’s say you take the time to sculpt all the fine details, down to the tiny cracks, the dirt buildup, and all that. You show the final product to your art director, and they say, “hey, that looks great, but we decided this statue needs to be made out of wood, not stone.” If all that micro detail is sculpted into your geometry and you need to change it, that is not going to be a good day. Usually you would take that high poly sculpt into something like Substance Painter to create all that micro detail, then bake out the normal maps. Changing the material in Substance is much easier than trying to redo your sculpt.

A statue with 33 million triangles and no normal map running in real time.

Also not discussed: animated characters. If you took a model directly from Zbrush and asked an animator to rig and animate it, they’d probably say you are crazy. No animator, even in big budget films, is going to create animations on a 33 million triangle model. The Zbrush model probably isn’t going to have the clean edge loops needed for good deformations while animating. Maybe I’ve overlooked some recent changes to animation; if so, please let me know in the comments.

They also say the assets are using 8k textures. I can’t help but wonder how much video memory all of this uses. In the film space, this isn’t much of an issue. They could have a system with two Quadro RTX 8000s connected with NVLink and just keep throwing more and more assets at it. But in the game world, they have to consider someone running a GTX 960 with 2GB of VRAM. It will be really interesting to see how this scales.
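
To put rough numbers on that worry, here’s some back-of-the-envelope math for a single 8k texture. The demo’s actual texture formats, compression, and streaming behavior aren’t public, so this just assumes uncompressed 8-bit RGBA and standard BC7 block compression.

```python
# Rough texture-memory math for one 8k texture (back-of-the-envelope only;
# the demo's real formats, compression, and streaming aren't public).
width = height = 8192
bytes_per_pixel_rgba8 = 4                # uncompressed 8-bit RGBA
uncompressed = width * height * bytes_per_pixel_rgba8
with_mips = uncompressed * 4 // 3        # a full mip chain adds roughly 33%
bc7_compressed = with_mips // 4          # BC7 is 1 byte per texel, 4:1 vs RGBA8

print(f"uncompressed:          {uncompressed / 2**20:.0f} MB")    # ~256 MB
print(f"+ mip chain:           {with_mips / 2**20:.0f} MB")       # ~341 MB
print(f"BC7 block-compressed:  {bc7_compressed / 2**20:.0f} MB")  # ~85 MB
```

Even block-compressed, one 8k texture with mips is on the order of 85 MB, so a scene full of them on a 2GB card is only going to work with aggressive streaming and downscaling.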

Lumen

Ok, I’ll admit, lighting is something I’ve always geeked out about, so Lumen is the part of this demo I got the most stoked for. Multi-bounce global illumination is incredibly important to the look of a scene. Games have had various ways of faking GI for a while, but it usually required baking lightmaps, placing fill lights, or other such tricks. And game developers are really good at making things look right. The upside to real-time GI, in addition to the higher quality final image, is the time savings. Instead of baking the lights, wanting to tweak something, rebaking the lights, and so on, you just turn on the lights and everything works. You’ll be able to make adjustments on the fly.

On the right, Lumen is turned on; on the left, it's off.

The cave demo is pretty good, but the sequence later in one of the crypts is much more impressive. The way the specular highlights accurately track the drastic changes to the lighting is so cool. Previously you would have had to rely on something like a cubemap for the dark hallway, and then script a way to change the cubemap once the ceiling crumbled, or something like that. This new system should “just work.” (I’m highly suspicious whenever I hear “it just works” during a press conference.) Again, I want to see how this performs on a variety of hardware.
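
For contrast, here’s roughly what the old-school version I’m describing looks like: hand-authored reflection cubemaps swapped by a scripted event. The names are invented, and in Unreal this would actually be reflection captures driven by a Blueprint or level script rather than Python.

```python
# Sketch of the old approach: pre-baked cubemaps swapped when the level changes.
class Hallway:
    def __init__(self):
        # Two hand-authored cubemaps: one for the sealed, dark hallway and one
        # for the same space after the ceiling collapses and sunlight pours in.
        self.cubemaps = {
            "sealed": "cube_hallway_dark",
            "collapsed": "cube_hallway_sunlit",
        }
        self.active_cubemap = self.cubemaps["sealed"]

    def on_ceiling_crumbled(self):
        # Scripted event: swap the reflection source (usually with a short
        # blend) so specular highlights roughly match the new lighting.
        self.active_cubemap = self.cubemaps["collapsed"]

hallway = Hallway()
hallway.on_ceiling_crumbled()
print(hallway.active_cubemap)  # cube_hallway_sunlit
```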

Changing this lighting situation in real time is next to impossible with current engines.

This will also be a huge gift to filmmakers. Whether they use Unreal for their final render or just for previs, having better quality, physically based lights will speed up their workflow and result in a better end product. Unreal already had a fantastic real-time lighting system; this seems to be an improvement, and one that isn’t reliant on Nvidia’s RTX cards. The question then becomes: is this using dedicated AMD ray tracing hardware, or is Unreal doing it all in software?

Audio

Up next they talk about improvements to audio, specifically “convolution reverb.” Honestly, I’m going to need an audio engineer to tell me if what they showed off was cool. From what I gather, the engine is looking at the space and adjusting the echoes to match. If so, I wonder if it also takes the materials into account? E.g., a cave with bare stone walls would sound much different than one that is covered in moss.
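
For anyone who hasn’t run into the term, convolution reverb itself is straightforward: you take an impulse response of a space (measured or simulated) and convolve the dry sound with it. Here’s a tiny sketch with a made-up impulse response; the interesting part in UE5 would presumably be generating or picking that impulse response from the geometry, which is where the material question comes in.

```python
import numpy as np
from scipy.signal import fftconvolve

# Convolution reverb in a nutshell: convolve the dry signal with the impulse
# response of a space. The arrays here are placeholders; a real IR would be
# measured in a room or generated from the level geometry.
sample_rate = 48_000
dry = np.zeros(sample_rate)          # one second of audio
dry[:64] = 1.0                       # a short click as the "dry" source

# Toy impulse response: a handful of decaying echoes, like a hard-walled cave.
impulse_response = np.zeros(sample_rate)
for delay_s, gain in [(0.0, 1.0), (0.07, 0.5), (0.19, 0.3), (0.41, 0.15)]:
    impulse_response[int(delay_s * sample_rate)] = gain

wet = fftconvolve(dry, impulse_response)   # the reverberant result
print(wet.shape)                           # (95999,) = dry length + IR length - 1
```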

Niagara

The bats they show off when they first talk about Niagara, Unreal’s particle system, don’t look super impressive. I’m not sure how they’re any different from what developers would do today. The next time they talk about Niagara is much better: having insects react to a light that is moving around is really cool. Much of this is in the current version of Niagara, so there isn’t a lot to cover. The fluid simulation they show doesn’t look that great.
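
To be clear about what I mean by “reacting to light,” here’s a toy version of that behavior: each particle steers toward the light’s current position with a bit of random flutter. In Niagara this would be built as a per-particle force in the node graph, not in Python; this is just the concept.

```python
import random

def step_insects(insects, light_pos, attraction=2.0, jitter=0.5, dt=1 / 60):
    """Steer each particle toward the light, plus some random flutter (2D)."""
    for p in insects:
        dx, dy = light_pos[0] - p["x"], light_pos[1] - p["y"]
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-4)
        p["vx"] += (dx / dist) * attraction * dt + random.uniform(-jitter, jitter) * dt
        p["vy"] += (dy / dist) * attraction * dt + random.uniform(-jitter, jitter) * dt
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt

# 200 insects scattered around the origin, chasing a light that drifts right.
insects = [{"x": random.uniform(-5, 5), "y": random.uniform(-5, 5), "vx": 0.0, "vy": 0.0}
           for _ in range(200)]
for frame in range(120):            # two seconds at 60 fps
    light = (frame * 0.05, 0.0)     # the light moves each frame
    step_insects(insects, light)
```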

Animation

They go on to briefly show some improvements to their animation system. Predictive foot placement and motion warping should help animations feel more natural, and contextual animation events would make things feel more lifelike. Honestly, I feel like every year or so we hear about these exact same improvements, and while it keeps getting better, it often ends with the engine unable to commit to one spot, so the character’s foot keeps sliding around, or it picks a super awkward position. Contextual animation events sound cool, and look good in a demo, but what if in a real game, as the character reaches for the door, the player turns or backs up suddenly? In current games you’ll see a character put a hand on a door just like this, but usually control is taken away from the player and a scripted animation plays. So I’ll have to wait and see what developers actually come up with.
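
The core of the foot placement idea, as I understand it, is simple: before the foot lands, trace down from where the animation wants to put it and snap it to whatever ground is actually there, with full-body IK and blending layered on top. A toy sketch, with the ground function and thresholds invented for illustration:

```python
def ground_height(x, z):
    # Stand-in for a physics raycast against level geometry.
    return 0.15 if x > 1.0 else 0.0            # a 15 cm step past x = 1.0

def place_foot(planned_foot, max_reach=0.3):
    """Snap the animation's planned foot position onto the actual ground."""
    x, y, z = planned_foot                     # where the animation wants the foot
    ground_y = ground_height(x, z)
    if abs(ground_y - y) <= max_reach:
        return (x, ground_y, z)                # snap the foot onto the surface
    return planned_foot                        # too far off; leave it to other systems

print(place_foot((0.5, 0.0, 0.0)))   # flat ground: (0.5, 0.0, 0.0)
print(place_foot((1.2, 0.0, 0.0)))   # on the step: (1.2, 0.15, 0.0)
```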

Hopefully we'll begin to see more of these contextual animations.

Animation is an area I’d like to see game engines put a lot more emphasis on. With the major improvements over the past several years to physically based rendering, real-time lighting, and higher and higher texture limits, environments are looking amazing. All of that kinda breaks down if a character’s foot clips through a wall, a body ragdolls in an unnatural way, or a character rotates without moving their feet. Hopefully the improvements Epic is discussing will help animators achieve the look they want.

Conclusion

All in all, I was impressed with the demo. Real-time lighting is the future for games, and having it show up on consoles will certainly help push that forward. I’m also excited to see AMD embracing it. Since both the PS5 and Xbox Series X will feature the next generation of AMD’s graphics hardware, I can’t wait to see what they have in store for PCs. I still have a lot of questions, and we’ll need to see how developers actually begin to fold these features into their workflows. Sadly, it will be a year before we can begin using this version of Unreal.