The Search for Scripts Continues

The past couple of weeks had been pretty smooth. I started with the low-hanging fruit that I knew would be straightforward. It was also a time of settling into the new office and getting acquainted with Labs. That all changed this week.

Before we get into the benchmark progress, I wanted to talk about the biggest challenge I faced, which is working from home. We made the decision pretty early on to have those of us who are able to work from home do so, not necessarily out of concern for ourselves, but to protect the people in the company who can't work from home, specifically the Production department. If we suddenly didn't have people to build and ship computers, it would hurt quite a bit. A simple step to minimize the chances of a virus entering the building is to limit the number of people who enter it. You can read more about Puget Systems' response in Jon's recent blog post here: Puget Systems Response to Coronavirus

Luckily, before any of this happened, Puget Systems had built all of our tools to be usable through a web browser. As is typical these days, everything is done in the cloud, and by cloud I mean on the server that Jon built in the closet. I just needed to install 3ds Max on my home PC and it was just like I was working from the office. You are probably wondering how we will be able to test all this hardware from home. Well, again unrelated to the current health situation, Matt built up test beds that cover most platforms and CPUs and connected them to Raritan KVM-over-IP boxes in our server room. This was done to make life in Labs much easier. When we want to run a benchmark, we can deploy it to as many systems as we want. The KVMs even allow for BIOS-level access to the test beds if we need it. With a couple of clicks, we can have direct control over a system with any CPU we want, from anywhere we want to be. This just covers CPUs though. If I want to test GPUs, I'll need to go into the office and manually move GPUs around. We could in theory build more test beds with every GPU, but the cost of that starts to skyrocket pretty quickly.

So from a technical side, transitioning to working from home was pretty seamless. The human side, however, was much more jarring. I didn't realize how much I enjoyed the company of others until I didn't leave the house for two weeks straight. The first week was fine. It was an interesting change of pace. The second week hit me hard. Other than a couple of quick trips to the grocery store (which has been sold out of most things), I hadn't left my house. I live less than a mile from the nursing home and hospital that have become the epicenter of Seattle's outbreak. Normally, if I've been at home for a long time, I like to go out in public and walk around, but that doesn't seem like the smartest move at the moment. So I've had to get creative. Every couple of hours I take my dog outside to get some fresh air. He doesn't play much or like long walks, so I'll spend 10 minutes or so pulling weeds in the garden, doing some stretches, etc. Luckily I have a fairly distraction-free place to work at home, with the exception of the aforementioned dog that wants to be petted all the time. As time goes on, I've been getting more and more comfortable working from home.

He is not the most energetic, but he helps distract me when I need it.

So let's get back to the benchmark. The bulk of my time this week was spent trying to figure out how to simulate the modeling workflow in a way that can be measured. In talking with other industry professionals, one of the things I learned is that while I can make a script that replicates the process of modeling an object, what would actually happen is I would end up timing the speed of MAXScript. Many basic operations happen fast enough that the time the system spends moving from line to line within the script is equal to, or longer than, the operation itself. For example, if I were to extrude a polygon 100 times, how much of that time is the extrusions, and how much is the script itself?
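
One way to get a rough feel for that split (this is just a sketch of the idea, not part of the benchmark itself) is to time an empty loop against the same loop doing actual extrudes:

-- Rough check of script overhead vs. actual work (sketch only, not part of the benchmark)
testBall = GeoSphere radius:30 segs:8
convertTo testBall PolyMeshObject
emptyStart = timeStamp()
for i = 1 to 100 do ()  -- empty loop: pure MAXScript overhead
emptyEnd = timeStamp()
workStart = timeStamp()
for i = 1 to 100 do polyop.extrudeFaces testBall #{i} 5.0  -- same loop, but doing real extrudes
workEnd = timeStamp()
format "Empty loop: % ms, extrude loop: % ms\n" (emptyEnd - emptyStart) (workEnd - workStart)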

That means I need to find ways to exaggerate the operations and minimize the scripting. I spent a significant amount of time testing a lot of the available modifiers to see how they perform when applied numerous times in a row. This led me to a few actions. First, create an object and add a Tessellate modifier. Pretty straightforward. Second, create another object, one with a lot of polygons, and extrude each face. Third, create another object and chamfer all the edges. These are fairly common tasks in a modeling workflow, and each, when performed in significant quantity, utilizes the CPU pretty heavily. All three essentially create a significant amount of geometry, but CPU usage is slightly different between them. Hopefully these tests, combined with the Viewport FPS tests, will give a pretty solid idea of how navigating the viewport feels and how quickly actions perform, aka, how snappy Max feels.
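
To give an idea of what the first and third of those look like in script form, here is a rough sketch (the object sizes and parameter values are placeholders, not numbers I've settled on):

-- Test 1: create a dense object and add a Tessellate modifier (placeholder values)
tessBox = Box length:50 width:50 height:50 lengthsegs:20 widthsegs:20 heightsegs:20
addModifier tessBox (Tessellate())
-- Test 3: create another object and chamfer every edge (placeholder values)
chamBox = Box length:50 width:50 height:50
convertTo chamBox PolyMeshObject
polyop.chamferEdges chamBox #{1..(polyop.getNumEdges chamBox)} 0.5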

Here is an example of part of my actual script for the extrude test. I'm not 100% sure this is the best way of doing it, but it extrudes each face of the sphere by a random amount. The only reason I hesitate is that if I take the exact same sphere, select all the faces, and extrude them all by the same amount, it is quite a bit faster.

-- Create a geosphere and convert it to an Editable Poly
myball = GeoSphere radius:30 segs:16
myball.name = "Ball"
select myball
convertTo $ PolyMeshObject -- the selected object
minE = 5.0
maxE = 10.0
-- Extrude each of the original faces by a random amount between minE and maxE
numFaces = polyop.getNumFaces $  -- capture the face count first, since extruding adds new faces
for f = 1 to numFaces do polyop.extrudeFaces $ #{f} (random minE maxE)
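
For comparison, the single-call version I mentioned above, where every face gets the same extrusion height, boils down to one line (the 7.5 here is just an arbitrary height):

-- Extrude every face in a single call with the same height (noticeably faster than per-face extrudes)
polyop.extrudeFaces $ #{1..(polyop.getNumFaces $)} 7.5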

Another test I finalized is texture baking. This was a little odd, as I couldn't find a direct script command to start the bake. Apparently the whole Texture Baking interface is a macro, so I had to call the macro and then script a button click to start the render. One problem I had with successive runs is that Max would see the texture files generated by the previous run and ask if I wanted to overwrite them. I'm not going to be there to click Yes each time, and the prompt would throw off the timer, so I found a way to delete the generated textures when the script completes. It looks something like this:

startBake = timeStamp()
select $Plane001
-- The Render To Texture interface is a macroscript, so open the dialog and press its render button
macros.run "Render" "BakeDialog"
gTextureBakeDialog.bRender.pressed()
endBake = timeStamp()
format "Baking took % seconds\n" ((endBake - startBake) / 1000.0)
sleep 5
-- Delete the baked textures so successive runs don't trigger an overwrite prompt
DiffName = @"$scenes\Plane001DiffuseMap.tga"
deleteFile DiffName
HeightName = @"$scenes\Plane001HeightMap.tga"
deleteFile HeightName
NormName = @"$scenes\Plane001NormalsMap.tga"
deleteFile NormName
ShadowName = @"$scenes\Plane001ShadowsMap.tga"
deleteFile ShadowName
LightName = @"$scenes\Plane001LightingMap.tga"
deleteFile LightName

So that is where I’m at with the benchmark. The change to working from home has thrown me off a bit, but I’m still plugging away. I need to finish the "modeling" portion of the test.

One other major thing I want to investigate is breaking this script up into multiple scripts. I'm hoping I can make a master script that calls each individual script in turn (a rough sketch of what I mean is below). That would make adding or removing tests much easier. If you have any experience with that sort of thing, please let me know in the comments. I'd love to hear any ideas you may have.
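
For what it's worth, the rough idea I have in mind uses fileIn to run each test file and resets the scene in between (the file names here are just placeholders):

-- Hypothetical master script: run each test script in turn (file names are placeholders)
testScripts = #("tessellate_test.ms", "extrude_test.ms", "chamfer_test.ms", "bake_test.ms")
for testScript in testScripts do
(
    resetMaxFile #noPrompt  -- start each test from a clean scene
    fileIn (getDir #scene + "\\" + testScript)
)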
