
Scripting in 3ds Max

Written on March 6, 2020 by Kelly Shipman

Last week, I worked out a rough outline of the areas in 3ds Max I wanted to cover in the benchmark. The goals I landed on are:

  • Loading a scene
  • Saving a scene
  • Create and modify objects
  • Viewport FPS
  • Particle and Cloth simulations
  • Fluid simulations
  • CPU rendering
  • GPU rendering

This week I dove into scripting and making test scenes to see how the program handles different scenarios and how to capture results from within MAXScript.

Recording the Results

The first task is making sure I can record the results so we can compile numbers from multiple systems later. Digging through the documentation, I found the FileStream class, which does exactly that.

score = openfile "$scenes\scores.txt" mode:"w"

That line opens a text document in the default Scenes directory and lets Max write data to it. According to the documentation, it should only open an existing file, but in testing the script, it also creates the file if it doesn't exist. I'll probably package a blank text file just to be sure everything works once we add this to our internal automation tools. The mode:"w" portion overwrites any existing text, so if we do multiple runs, we can be sure to have fresh data each time.
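It's also worth closing the stream at the end of the run so everything is flushed to disk before another tool reads the file. A minimal sketch of that pattern (the header line here is just an illustration, not part of the benchmark):

```maxscript
-- Open the results file, write a line, run the tests, then close the stream
score = openfile "$scenes\scores.txt" mode:"w"
format "3ds Max benchmark results\n" to:score
-- ... individual tests write their timings to the score stream here ...
close score -- flushes pending writes and releases the file handle
```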

Loading and Saving

Next, let’s knock out some of the easy tasks, such as Loading and Saving. This is incredibly simple in MAXScript.

startload = timeStamp()
loadmaxfile "$scenes\advanced_test.max"
endload = timeStamp()
format "Loading took % seconds\n" ((endload - startload) / 1000.0) to:score
sleep 5
startsave = timeStamp()
savemaxfile "$scenes\advanced_test_save.max"
endsave = timeStamp()
format "Saving took % seconds\n" ((endsave - startsave) / 1000.0) to:score
sleep 5

Thankfully, Max waits for the loading or saving to complete before moving to the next line, so no special wait commands or loops are needed. For this test I'm using a file that is roughly 2GB, which makes the test long enough that a random Windows process kicking on for a second won't skew the results. The timeStamp() command reports milliseconds, so some basic math converts the result to seconds before it is written to the text file. The sleep calls are just short pauses between tasks to give a slight buffer.

So far, this has been incredibly easy. A dozen lines of code and I have Max timing how long it takes to load and then save a file and reporting the times to a text document.

Rendering

This also turned out to be super easy. I didn't want to dive into any complex render settings, as we are just comparing how the exact same scene and settings perform on different hardware. The scene I made has multiple materials with Sub-Surface Scattering, transparency, reflections, caustics, etc., and is lit by an HDRI environment map.

CPUstart = timeStamp()
render()
CPUend = timeStamp()
format "CPU rendering took % seconds\n" ((CPUend - CPUstart) / 1000.0) to:score

This takes whatever render settings are saved in the scene and renders a frame. There is one setting I would like to control, and that is CPU vs GPU rendering. I would like to stress that Arnold's GPU rendering is still in beta. There is a good chance things will change, but since it's available, I wanted to get started on it.

That said, because it is so new, there isn't a lot of documentation yet. Through a bit of trial and error, I found that adding "render_device:0" will set it to render on the CPU, while "render_device:1" will render with the GPU. This is the only setting I'm changing between tests. There is a big difference in quality from changing just the render device, but because it is beta, I'm not going to worry about it too much. The goal here isn't to compare render times between CPU and GPU, but instead to say, "if you are rendering with the CPU, this is how different CPUs perform," or "if you are rendering with the GPU, this is how different GPUs perform."
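Following the same pattern as the CPU block above, the GPU pass would look something like this. Since the render_device flag came from trial and error against a beta renderer, treat this as a sketch rather than a guaranteed API:

```maxscript
-- GPU pass: render_device:1 switches Arnold to its (beta) GPU renderer
GPUstart = timeStamp()
render render_device:1
GPUend = timeStamp()
format "GPU rendering took % seconds\n" ((GPUend - GPUstart) / 1000.0) to:score
```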

[Image: comparison of CPU and GPU rendering with the same settings]

On the left are the results from the Arnold CPU render. On the right is the Arnold GPU renderer, which is still in beta. All other settings are the same.

Simulations

Now that the easy stuff is out of the way, it’s time to dive into the deep end. First up, simulations. As it turns out, different types of simulations use hardware differently AND they respond to scripting differently. For example, a cloth simulation looks like this:

startTime = timeStamp()
$plane001.modifiers[#cloth].simulate true
endTime = timeStamp()
format "Simulation took % seconds\n" ((endTime - startTime) / 1000.0) to:score

Like before, the script waits for the simulation to finish before moving to the next line. However, the same is not true for fluid simulations that use Bifrost: the script moves to the next line as soon as the simulation starts. After struggling with it for a few days, I turned to the Autodesk forums and ended up with this:

start_time = timestamp()
$Liquid001.solvers[1].runSolve()
while ($Liquid001.solvers[1].IsSolveRunning()) do
(
-- while Fluids is running, do nothing, so script waits for the end of Fluid's calculation before going any further
)
end_time = timestamp()

It ended up being easy enough; the script just needed a way to hold off on moving to the next line until the simulation completed. I still need to look into particle simulations, so hopefully one of these two solutions will work for that as well.

Viewport FPS

Ok, now things get really complicated. As I noted last week, the FPS counter within Max is pretty worthless. What I did was create a large scene and animate a camera along a path. That animation runs for 1000 frames. If we turn off "real time playback," Max will render out each frame before moving on. So then I just need to time how long it takes to play all 1000 frames and, from that, calculate an FPS.

This is another one that requires a callback to check whether the animation is still playing. This is what I came up with:

(
    global timeCheck

    startAnimTime = timeStamp()
    fn timeCheck =
    (
        if (currentTime == 1000) do
        (
            stopAnimation()
            endAnimTime = timeStamp()
            format "Playback ran at % FPS\n" (1000 / ((endAnimTime - startAnimTime) / 1000.0)) to:score
            unRegisterTimeCallback timeCheck
        )
    )

    playbackloop = false
    realtimePlayback = false
    playActiveOnly = true
    sliderTime = 0
    registerTimeCallback timeCheck
    playanimation()
)

This seems messy, but it works. I feel like there might be a more elegant solution out there.

What is next?

That leaves us with "Create and Modify Objects." I've been experimenting with various modifiers to see how resource intensive these actions are. I'm leaning toward Tessellate and MeshSmooth. This is an area where I'd love to get some feedback from users. The FPS results we already have, combined with this test, will give pretty decent insight into how "snappy" Max will feel during most work that doesn't involve the typical bottlenecks of rendering and simulations. I'll probably spend a significant amount of time on this specific test.
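As a starting point, a modifier test could follow the same timing pattern as the other tests. This is only a sketch: the object, modifier, and iteration count are placeholders, not final benchmark settings.

```maxscript
-- Hypothetical create-and-modify timing: build an object, apply MeshSmooth,
-- and force a redraw so the modifier is actually evaluated before stopping the clock
obj = teapot radius:50 segs:32
startMod = timeStamp()
addModifier obj (meshsmooth iterations:3)
completeRedraw()
endMod = timeStamp()
format "MeshSmooth took % seconds\n" ((endMod - startMod) / 1000.0) to:score
```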

Next week I’ll go over what I’ve discovered. As always, be sure to subscribe to be notified when the next post is available.

Tags: 3ds Max, Autodesk, benchmark, Community, Graphics, Hardware, Modeling, rendering, simulation, Testing
Jan Dorniak

For short duration tasks, or just improved precision, you might want to look into Windows' system timer - I recall something about the timer having 15ms resolution by default, unless some app switched it to high resolution mode. Although that was on Windows 7 IIRC.

Posted on 2020-03-07 10:24:29
Kelly Shipman

The timestamp() function in Max returns the number of milliseconds since 00:00 of the system time, so it's already really accurate.

Posted on 2020-03-09 16:58:32