Developing a New 3ds Max Benchmark

Welcome to the inaugural post of this ongoing series. I have a couple of goals with this blog. The first is to give our customers and fans a little insight into how our Labs team creates benchmarks. Puget Systems has always been very open about everything we do, and this is another excuse to pull back the curtain. The second is to give our customers and other industry professionals the opportunity to give feedback on these benchmarks. These benchmarks are key to how Puget Systems recommends hardware. Since we focus on how our customers use their computers, these benchmarks need to be as close to real-life workflows as we can make them. Synthetic benchmarks are good, but they don’t always tell you how that hardware will improve the work that you actually do. I’ll be detailing what I’m trying to do and reading through the comments to see if you have ways I can improve the benchmark.

Where to Begin

Just looking at the 3D content creation space, there are dozens and dozens of applications in use, so we had to pick a few to start with and see how far we can go. The majority of our customers in this space say they use 3ds Max and/or Maya for the bulk of their work. Since that is also where the bulk of my professional experience is, it only makes sense to start there. I’m bundling these two together because of how similar they are. Yes, I hear everyone saying their preferred app is better, but when we drill down to how they utilize the hardware, there isn’t a significant difference. The benchmarks I build for Max and Maya will go through the same or similar series of tasks. I'm starting with Max specifically because it's the program I know best.

3ds Max loading screen

Finding the Pain Points

The first thing we need to do is identify what parts of the software we want to test. What are the basic tasks that everyone goes through that may or may not be impacted by hardware? Loading, saving, and modeling are the easy answers. We’ve all had those scene files that get exceedingly large and take forever to open or save. Or Max tries to Autosave right as you get a good flow going and you just have to wait for it to finish.
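
To make this concrete, here's a rough sketch of how the load and save timings could work in MAXScript, using timeStamp() to measure elapsed milliseconds. The file paths are just placeholders for whatever heavy test scene ends up in the benchmark:

    -- Hypothetical test scene path; any large .max file will do
    sceneFile = @"C:\benchmark\test_scene.max"

    -- Time how long the scene takes to open
    startTime = timeStamp()              -- milliseconds since system start
    loadMaxFile sceneFile quiet:true     -- quiet:true suppresses dialog prompts
    format "Scene load: % ms\n" (timeStamp() - startTime)

    -- Time how long the same scene takes to save back out
    startTime = timeStamp()
    saveMaxFile @"C:\benchmark\test_scene_saved.max"
    format "Scene save: % ms\n" (timeStamp() - startTime)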

Basic modeling is going to be a little tougher. Because these tests need to be automated, how do I test for when the software doesn’t feel “snappy?” I could always create a bunch of high-poly objects, apply a series of modifiers such as subdivision or tessellation, and see how long the process takes, but does that time actually correlate with how responsive the software feels? And what actions would best simulate modeling?
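
One idea I'm playing with looks something like this: fill a scene with reasonably dense geometry, pile on a subdivision modifier, and time the whole process. The object counts and iteration settings here are arbitrary guesses that would need tuning:

    -- Sketch of a create-and-modify test; numbers are placeholders
    resetMaxFile #noPrompt                          -- start from an empty scene

    startTime = timeStamp()
    for i = 1 to 50 do
    (
        s = sphere radius:10 segs:64 pos:[i * 25, 0, 0]
        addModifier s (TurboSmooth iterations:2)    -- subdivide each sphere
        convertToPoly s                             -- collapse the stack
    )
    format "Create/modify: % ms\n" (timeStamp() - startTime)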

Along those lines, general viewport FPS would be useful. The trouble I found right off the bat is that the built-in FPS display is pretty unusable, as the number fluctuates wildly and rapidly. Just looking at what Max shows in its viewport, I'm not sure it's worth spending time trying to capture that number as a script output.

Over these 5 seconds, I am getting somewhere between 20 and 90 FPS… I think.

Another issue is that the viewport only refreshes when something changes. So I could put 10 million polygons on screen, but if nothing is happening, the FPS doesn’t update. A script that moves an object from point A to point B wouldn’t help either, because the move happens instantly; it's not the same as a user dragging an object. As a test, I made a whole bunch of boxes totaling 96,000,000 polys. Moving all of them via script took 0.097 seconds, yet trying to do the same manually in the viewport is very sluggish. I’m thinking a simple animation with Real-Time Playback disabled might be a solution: set up a scene, create a timeline of a few hundred frames, and time how long it takes to play back those frames. This would force the viewport to update every frame. Do some simple math and I could have a much more accurate FPS figure.
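
A minimal sketch of that timed-playback idea: step the time slider through every frame, force a viewport redraw each time, and divide frames by elapsed seconds at the end. It assumes an animated scene is already loaded, and the frame count is arbitrary:

    -- Timed playback: frames drawn in the viewport / elapsed seconds
    numFrames = 200
    animationRange = interval 0 numFrames

    startTime = timeStamp()
    for t = 0 to numFrames do
    (
        sliderTime = t      -- moving the time slider updates the scene
        redrawViews()       -- force the viewport to actually repaint
    )
    elapsedSec = (timeStamp() - startTime) / 1000.0
    format "Average viewport FPS: %\n" ((numFrames + 1) / elapsedSec)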

A couple of other areas I’ll cover are simulations and rendering. I won't be going too deep into rendering, as that will be covered on its own depending on which renderer you use, but I did want to touch on the default renderer for things like previews and texture baking. Even if you aren’t doing your final render in Arnold, you probably use it in some way. Arnold is a great CPU renderer and has recently added GPU rendering, though that is still in beta. Max also has the option of ART in the viewport, but since that is also a highly multithreaded renderer, the Arnold CPU results should give us a pretty good idea of how ART will perform.
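
Timing a render from script looks fairly straightforward, at least on paper. This sketch assumes whatever renderer I want to test (Arnold, in this case) is already assigned in Render Setup, since render() simply uses the current renderer:

    -- Time a single frame with the currently assigned renderer
    startTime = timeStamp()
    render outputSize:[1920, 1080] vfb:false   -- render off-screen, no frame buffer
    format "Render time: % seconds\n" ((timeStamp() - startTime) / 1000.0)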

Two fluid simulations: one is Particle Flow, which uses a single CPU thread; the other is Bifrost, which uses all of the CPU's threads.

Simulations are another pretty complex area. One thing I learned while researching is that different types of simulations utilize hardware differently. For instance, Particle Flow simulations are single-threaded, but fluid simulations using Bifrost are multi-threaded. It appears cloth is also single-threaded. So I am going to need to run a couple of different simulations and give each its own score.
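
One approach I may try, rather than learning each system's own scripting interface, is to step the time slider through the simulation range and time the whole pass, since particle and cloth systems evaluate as the current time changes. This is just a sketch, and a cached solver like Fluids may need its solve invoked directly instead:

    -- Step through the sim range so Max has to evaluate every frame
    fn timeSimPlayback firstFrame lastFrame =
    (
        startTime = timeStamp()
        for t = firstFrame to lastFrame do
            sliderTime = t    -- advancing time forces the sim to compute
        (timeStamp() - startTime) / 1000.0
    )

    -- Usage, assuming a scene containing a simulation is already loaded
    format "Simulation playback: % seconds\n" (timeSimPlayback 0 100)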

Setting Our Targets

This is how our to-do list is looking:

  • Loading a scene
  • Saving a scene
  • Creating and modifying objects
  • Viewport FPS
  • Particle and Cloth simulations
  • Fluid simulations
  • CPU rendering
  • GPU rendering

That should cover the majority of use cases. If I've overlooked an area you've had trouble with, please let me know. The "Creating and modifying objects" step is pretty vague for now, as I'll need to see how different tasks respond to scripts versus manual control. Because Arnold's GPU rendering is still in beta, I'm a little hesitant to include it, but I think it's well worth some investigation.


GPU rendering is a big deal, but being in beta makes me nervous about including it in a benchmark.

Next Week’s Plan

I’ve already set up some test scripts to get some practice with Max’s scripting. My next step is to see if I can use scripting to create the scenes and set up the simulations from scratch, or if I'll need pre-made scenes that I can load and have the script run against. Once I can script each of the bullet points above, I’ll begin working on combining all of it into one master script.
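
Here's a very early sketch of what that master script might look like: a small timing helper, plus an array of named test functions that the harness runs in order. The two tests here are trivial stand-ins for the real ones from the list above:

    -- Generic timing wrapper: runs a test function, returns elapsed ms
    fn timeIt testFn =
    (
        startTime = timeStamp()
        testFn()
        timeStamp() - startTime
    )

    -- Placeholder tests standing in for scene load, modeling, etc.
    fn testCreateBoxes = ( for i = 1 to 100 do box pos:[i * 30, 0, 0] )
    fn testRedraw = ( for i = 1 to 50 do redrawViews() )

    -- The master script would just walk this list and report each result
    tests = #(#("Create objects", testCreateBoxes), #("Viewport redraw", testRedraw))
    for t in tests do
        format "%: % ms\n" t[1] (timeIt t[2])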

Next week's post will dive into some of the specifics of these scripts and what I'm seeing with their performance compared to doing the same actions myself. I'm sure I'll hit a few roadblocks along the way and will have plenty of questions for the readers. As always, let me know if you have any suggestions.
