
Read this article at https://www.pugetsystems.com/guides/1815
Kelly Shipman (Puget Labs Technician)

Unexpected Data

Written on June 26, 2020 by Kelly Shipman

Often when we approach a new test, we are able to make some safe assumptions. For example, when testing the Arnold CPU Renderer, we can assume that more cores will result in a faster render. When testing GPU rendering, it's safe to assume that a 2080 Ti will outperform a 2070. Even though we can make these educated guesses, we still perform the tests because we want to see exactly how much of a difference exists, and if the price difference justifies the purchase. However, sometimes those assumptions are proven wrong.

One recent example is the results from my CPU testing in 3ds Max. Part of the testing suite included a GPU rendering pass. I left that test in place even though I wasn't testing the GPU, because it didn't take much extra time, and we thought that, if anything, we might see a very slight difference between platforms. However, these are the results we found:

As you can see, there is more than a slight difference between the platforms. The Intel 10900K is around 30% faster than the AMD Ryzen 3950X. We fully expected a few percent difference between platforms in favor of either Intel or AMD, but nothing like what we are seeing.

Further confusing the situation are the results of the Intel i9-10900X. It is significantly faster than any other CPU. If you look at the results, you'll see the CPUs within each platform grouped together, with the exception of the 10900X. This appears to be some sort of error. I've rerun that test numerous times, deleted the files, and reinstalled, and I still get the same results. Even without this one odd result, the differences between platforms are fascinating and worth more research.

While it's true we spend a lot of time in Labs verifying the assumptions that we, and most people in the tech industry, have about a new piece of hardware, what we are really looking for is the unexpected.

Tags: 3ds Max, Autodesk, benchmark, Testing, Intel, AMD
Ravi Jagannadhan

Hi there, can you share your workflow (and scene if possible)?

Posted on 2020-06-30 23:51:21
Luca Pupulin

These are kind of weird results....
I can't find out a proper reason for that...

Posted on 2020-07-03 16:49:14
Kelly Shipman

Object manipulation in the viewport has been difficult to create tests for. For example, in a large scene, you may experience some lag while dragging an object around the scene. However, if I try to move that object via a script so that I can record the time, it moves from point A to point B instantly. So I've broken it down into a few tests:

1. Take a couple of high-poly objects and apply a Tessellate modifier.
2. Take a couple of high-poly objects and apply a MeshSmooth modifier.
3. Take an object and extrude each face a random amount.
4. Record viewport FPS on a scene with 50+ million triangles.

Hopefully that should give people a good idea of what would help their workflow, i.e. whether they are seeing a slowdown because the CPU is trying to calculate the modifier, or because the GPU is trying to keep up with the triangle count.
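The timing approach described here can be sketched as a small harness: since scripted object moves complete instantly, the useful thing to time is the heavy operation itself (such as a modifier apply). This is a minimal, generic Python sketch; the workload below is a hypothetical stand-in, and inside 3ds Max the callable would instead invoke the actual modifier via MAXScript or pymxs.

```python
import time

def timed(label, workload):
    """Run a workload once and report its wall-clock time in seconds.

    `workload` is any zero-argument callable. Here it is a stand-in;
    in a real 3ds Max test it would apply a Tessellate or MeshSmooth
    modifier to the scene objects.
    """
    start = time.perf_counter()       # high-resolution monotonic clock
    result = workload()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f} s")
    return result, elapsed

# Hypothetical stand-in workload: a CPU-bound loop in place of a modifier apply.
def fake_modifier_apply():
    return sum(i * i for i in range(100_000))

value, seconds = timed("tessellate-pass", fake_modifier_apply)
```

Each of the four tests above would plug its own workload into the same harness, so the per-test numbers are directly comparable.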

Posted on 2020-07-06 15:33:32
Luca Pupulin

Thank you so much for your answer Kelly!
I'm looking forward to seeing your next post.

Cheers

Posted on 2020-07-08 18:31:36