
SOLIDWORKS 2018 GPU Comparison: Monster (Sized) Model

Written on March 6, 2018 by William George

Introduction

A couple of weeks ago I published an article looking at GPU performance in SOLIDWORKS 2018, where we came to the conclusion that our testing was no longer sufficient to show any performance difference within each GPU family (Quadro, GeForce, Radeon Pro, etc.). I wasn't sure whether it was our methodology that was lacking, the complexity of our models, or some other factor like changes to SOLIDWORKS itself - but based on the comments people left, it sounded like the way we approached the issue was sound. To see if the limitation was in our available models, we reached out to folks involved with a SOLIDWORKS User Group event that we sponsored to ask if they could provide a bigger assembly - something truly monstrous - and they delivered. Armed with a model made up of an order of magnitude more parts and almost 30 times as many triangles as our biggest existing assembly, I ran our benchmarks once more.

As an aside, we already know that GeForce cards don't perform well in SOLIDWORKS - so I left those out this time around. We are just looking at the professional-grade video cards from NVIDIA and AMD, all of which are officially certified for use in SOLIDWORKS 2018.

Test Setup

My testbed system was built around an Intel Core i7 8700K, as that is the current-generation CPU that gives the best possible performance in SOLIDWORKS for general usage and modeling. More than enough RAM was included to avoid that being a bottleneck of any kind, and a super-fast M.2 SSD was used for the same reason. Since we saw effectively no difference between 1080p and 4K test results in our last comparison, and since we recently upgraded all the monitors on our lab test benches, this time the tests were only run at 4K.

To perform the actual benchmarking, I used the same basic testing we've used here at Puget for analyzing graphics performance in SOLIDWORKS in the past: a mix of AutoIt scripts and SOLIDWORKS macros to set the different display settings, load the relevant model, and record the average FPS while rotating the model. Since I already have data from the other models we've run in the past, I skipped repeating all that work and just ran the tests with the new assembly. It is a SOLIDWORKS representation of a Lego model, the Tower Bridge (set #10214), which honors the famous London landmark. It was provided to us by Daniel Herzberg, the organizer of the CAD Monkey Dinner we sponsored at SOLIDWORKS World 2018, and he has requested that the files themselves be kept private. It is massive: 4,372 parts and 40.9 million triangles. This is substantially larger than anything we've been able to test in the past, so it should help answer the questions we were left with at the end of our last article.
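
For those curious about the mechanics of the measurement, the heart of it is just timing a fixed number of small rotation steps and dividing. Below is a minimal Python sketch of that timing loop; the rotate_step callable is a stand-in for whatever actually rotates and redraws the model (in our real harness that is AutoIt driving SOLIDWORKS macros, which I can't reproduce here):

    import time

    def measure_average_fps(rotate_step, num_steps=360):
        """Time num_steps incremental rotations and return the average FPS.

        rotate_step is a placeholder callable: it should rotate the model
        by one small increment and force a redraw. In the real harness
        that work is done by AutoIt scripts and SOLIDWORKS macros.
        """
        start = time.perf_counter()
        for _ in range(num_steps):
            rotate_step()
        elapsed = time.perf_counter() - start
        return num_steps / elapsed

    if __name__ == "__main__":
        # Stand-in rotation step so the sketch runs on its own: pretend
        # each redraw takes ~16.7 ms, which should report roughly 60 FPS.
        dummy_rotate = lambda: time.sleep(1 / 60)
        print(f"Average FPS: {measure_average_fps(dummy_rotate):.1f}")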

Lego Tower Bridge by Daniel Herzberg

One note I would like to make: if you do not know how many triangles the models you work with have, the easiest method I know of to find out is simply to save the model as an .STL file. During the save process, a window pops up with information about the model, including the number of files, the file size, and the number of triangles.
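
If you would rather not go through the save dialog at all, the triangle count can also be read straight out of the resulting .STL file. Here is a small Python sketch that does so; the file name is just a placeholder, and the logic relies on the standard STL layout (a binary STL stores a little-endian uint32 triangle count right after its 80-byte header, with 50 bytes per triangle, while an ASCII STL lists one "facet normal" line per triangle):

    import os
    import struct

    def stl_triangle_count(path):
        """Return the number of triangles in a binary or ASCII STL file."""
        with open(path, "rb") as f:
            f.read(80)                        # binary STL: 80-byte header
            count_bytes = f.read(4)           # then a uint32 triangle count
        if len(count_bytes) == 4:
            declared = struct.unpack("<I", count_bytes)[0]
            # A binary STL is exactly 84 bytes plus 50 bytes per triangle,
            # so the file size tells us whether this is really binary.
            if os.path.getsize(path) == 84 + declared * 50:
                return declared
        # Otherwise assume ASCII and count one facet per triangle.
        with open(path, "r", errors="ignore") as f:
            return sum(line.lstrip().startswith("facet normal") for line in f)

    print(stl_triangle_count("tower_bridge.stl"))  # hypothetical file name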

Results Chart

With this new, complex assembly we can finally see a difference in performance between some of the Quadro and Radeon Pro cards:

SW 2018 GPU Comparison - Huge Assembly at 4K

It isn't huge, but there is a noticeable drop-off in performance with the entry-level Quadro P1000 on this model. Its Shaded with Edges performance matches the other Quadro cards, but the other three display modes all show a 13 to 38% drop in frame rate. The other Quadro cards, from the P2000 to the P6000, all perform basically the same: within a single FPS of each other, and so well within the margin of error. On the Radeon Pro side, all three cards we tested performed about the same in the "normal" modes, but there is a clear progression in the RealView tests, with each card being faster than the one before it. The jumps between each model are around 10% or so - not huge, but definitely not just testing error either.

Where it gets even more interesting is comparing the Radeon Pro cards to their Quadro competition. The NVIDIA Quadro cards are noticeably faster across the board, with the minor exception of the P1000 being slower in Shaded (normal) mode - but it bests two of the three Radeon Pro cards in the other modes, and even passes the top-end WX 9100 in both "with Edges" tests.

Conclusion

We now know that with sufficiently complex assemblies there is a performance difference in SOLIDWORKS 2018! Unfortunately it also looks like the current crop of AMD Radeon Pro cards doesn't quite keep up with the NVIDIA Quadro line, at least when dealing with such massive models. The good news, though, is that even with an assembly like the one we used for testing here you don't need to spend a ton: the Quadro P2000, which is no more expensive than a lot of the mainstream GeForce cards, was just as fast as the P4000 and P6000 that cost substantially more. As such, it is going to be our standard recommendation for SOLIDWORKS workstations.

As always, I look forward to reading your comments and hope that we can continue to improve our testing and analysis with your help.

Tags: SOLIDWORKS, GPU, Graphics, Video Card, NVIDIA, Quadro, AMD, Radeon Pro
Igioz

commissioning draw is critical in solidworks.
so i ask, only frequency and IPC can save all of us?

Posted on 2018-03-08 10:49:44

I'm sorry, but I am not familiar with the term "commissioning draw" - can you elaborate on what you mean? If you mean just general 3D modeling, that is pretty much entirely based on single threaded CPU speed - so clock frequency and instructions per clock, as you noted.

Posted on 2018-03-09 17:15:50
AC

This is a better review than the previous one :)

Posted on 2018-03-12 05:28:45

Thanks! :)

Posted on 2018-03-12 21:43:16
photodude

How does the NVIDIA GeForce GTX 1070 or GeForce GTX 1080 stack up vs these Quadro cards for Solidworks?
(oh I see the comparison is in the other article https://www.pugetsystems.co... )

Posted on 2018-03-18 23:46:51

In the previous article on this topic (GPU performance in Solidworks) I tested a GTX 1080 Ti, as it is the pinnacle of the GeForce line, and it performed quite poorly:

https://www.pugetsystems.co...

We've seen similar results in the past as well, with the GeForce 900-series:

https://www.pugetsystems.co...

In short, GeForce cards are not ideal for any heavy-duty work in Solidworks. Moreover, they are not certified for this application by Dassault, so if you use one and then need tech support they may not be able to help. As such, I would advise against it.

Thankfully, even the affordable Quadro cards - like the P2000 and P4000 - do very well in Solidworks :)

Posted on 2018-03-19 18:14:00
photodude

Ya, I'm kind of regretting getting the GTX 1070 when it came out rather than picking up a Quadro.
Solidworks, Revit, and LR are the areas where I'm challenged with performance issues.
PS, ID, and Ai seem reasonable with my current system.

Posted on 2018-03-20 23:59:39

Just FYI, Lightroom performance is probably not being heavily impacted by that video card. SolidWorks may well be, though, as shown in the benchmarks we've run. Revit should be okay in terms of performance on that card, even though it's not technically certified by Autodesk.

But all of those programs are more heavily impacted by the CPU anyway. What processor are you running?

Posted on 2018-03-21 16:23:51
photodude

My current rig is slightly older:
i7-4770 CPU @ 3.40GHz, 32GB DDR3 RAM
256GB OS SSD
512GB LR catalog SSD
3TB and 6TB internal 7200RPM image storage drives
GTX 1070 graphics

mostly working with
24.6MP raw files and a few small video files

From what I can tell, LR is mostly affected by disk operations, even with automatic XMP writing turned off:
30% CPU use with very high disk usage across all drives. I've been debating moving to a RAID setup, because it's been clear to me that the 7200RPM drives are not keeping up.

Posted on 2018-03-21 23:03:28

Matt Bach could comment better on Lightroom performance than I can... you might want to check out some of the articles he has written about it: https://www.pugetsystems.co...

Posted on 2018-03-21 23:05:38
AC

You ought to buy a Quadro P2000 or P4000 and sell the GTX 1070 while the mining craze is still on.

Posted on 2018-03-21 18:24:44
Issac Roberts

As it looks like the video cards finally have sufficient VRAM to handle most/all SolidWorks use cases, I assume the difference then comes down to subtle differences in core speed and FP32 TFLOPS. You have tested model articulation at 4K (2160p), but what about model articulation at 8K (4320p)? GPU RAM utilization and GPU core % utilization would be nice, so that users with older cards could open these models and see if an upgrade is worthwhile. Is the clock being maxed out? What is the cause of the 60Hz ceiling? I assume the CPU. What about drawing manipulation, such as dragging views around? The real gatekeeper I've always found to be wireframe: while not particularly useful for massive assemblies, it puts a significant strain on all cards. Essentially, I think you're hitting a 60Hz cap because of a bottleneck that is not in the GPU. What about Zebra Stripes or Ambient Occlusion on/off? Or maybe I just have buyer's remorse after buying a P4000 before seeing this article.

A comparison with more dated cards (K2200/K4200, M2000/M4000), or even the P600, in future articles might help the end user decide if an upgrade is worth it. Maybe normalize the chart for performance/$ for budget-oriented teams. It's clear that the P4000 and P2000 are at performance parity in model rotation, but when factoring in cost, the P2000 should stand out more than it does - both visually and in your conclusion. It's probably reasonable to assume that there are a lot of older-card users out there wondering when and/or if to upgrade.

Posted on 2018-05-07 15:50:19
Adam

60Hz is the typical refresh rate of the displays the GPU is outputting to, so that is likely why the frame rate is being capped. The fact that the test is hitting 60Hz is actually good, as it means the card still has capability left and could handle an even larger model with more triangles.

Posted on 2018-06-06 14:15:24

That is a possible explanation, but we've seen frame rates above 60Hz in other tests (with smaller models) - so I am not convinced that we are seeing SW capped by the monitor's refresh rate (though it is possible something changed in SW or a driver revision to cause that). The proximity of some of the Shaded (normal) scores to 60 may just be coincidence.

Posted on 2018-06-06 15:58:45
Brent Gaspard

Thank you for your "performance" insights on these systems as I am not a gamer. I am new to SolidWorks 2018 and will be relying quite heavily on Visualize for client interactions and marketing content.

Reading a great deal around the CPU and GPU battlegrounds, I just came across the video linked below. In light of remarks in the SW forums that "SW only tests and certifies Radeon PRO cards", I am curious about your feedback on the performance shown in this video of a Radeon Vega Frontier Edition vs a Titan Xp.

https://youtu.be/D5GcpYA7_wY

I was also curious about the AMD Threadripper 1950x with the Radeon Vega Frontier Edition.

Thank you guys!

Posted on 2018-11-11 00:35:20

Solidworks and Solidworks Visualize are a bit different in terms of requirements and compatibility.

Solidworks itself is only tested and certified by Dassault Systemes on "professional" video cards - NVIDIA Quadro and AMD Radeon Pro, usually.

Visualize, on the other hand, will use even mainstream GeForce cards from NVIDIA - which cost a lot less than the professional cards for similar levels of rendering performance. However, the current Visualize 2018 system requirements page indicates that GPU-based rendering only works with CUDA-capable cards, which means NVIDIA and not AMD. It even says that if non-NVIDIA cards are installed, Visualize will default to CPU-only rendering. I haven't had a chance to watch the whole video you linked to, but I am very skeptical (given the info at the link below) of AMD video cards working with Visualize 2018:

https://help.solidworks.com...

Now if running in CPU-only rendering mode, a processor with lots of cores like the Threadripper 1950X (or better yet, the newer 2990WX) would do very well! However, GPUs added in will be even better, and right now it seems like that requires NVIDIA cards in this application.

Posted on 2018-11-12 19:01:01
Brent Gaspard

NICE! Thank you SO much for your feedback William. You have helped me a GREAT deal with this decision.

Posted on 2018-11-12 19:09:37
GodFear17

Still, that is only a ~2GB model, and even the base card has 4GB of VRAM. I'd like to see these tests with something right at 3.5GB, or an 8GB or 12GB engineering model, which might better show how each card handles larger and larger models. That would be the use case per card. Yeah, each one is a step up in speed, but the main value of each bigger card is the larger VRAM allowing larger and larger models.

William, I really like how you show the limitations of a single-threaded app. Personally, I am kind of wondering how badly SolidWorks handles large engineering models. Perhaps no one really uses it for that kind of large workload?

What about simulations, like air / water / heat type simulations? Perhaps each card has a huge difference there in solidworks? Maybe you already did this and I missed it?

Anyway, I love the way you guys test every combo. It's useful - in this single-threaded case, it shows how a software bottleneck limits what hardware can do for you.

Posted on 2019-01-08 17:38:55

I would be happy to do testing with an even larger, more complex model - but getting hold of such things is extremely difficult. Models like that are usually the intellectual property of large organizations, which of course wouldn't want to share such things outside of their company, or the work of many months or years of an individual - who, again, is not likely to want to share their work freely. We've considered making a model that was just a bunch of smaller components randomly thrown together, but that wouldn't necessarily be a realistic or fair comparison if it wasn't put together in the proper way.

If you know of a group or individual who has real-world models in those size ranges, and would be willing to share them for the purposes of benchmarks like this, please feel free to reach out to me directly. My email address is listed on our About Us page :)

Posted on 2019-01-08 17:44:52
coynatha

Looks entirely CPU bound. Really wonder what it looks like if you did the test with dual 4K monitors, or an 8K monitor. I feel inadequate with just a K620, but it looks like getting anything better for SolidWorks is a waste of money.

The bar charts certainly can't be used to justify an upgrade to the powers that be.

Posted on 2019-01-11 21:22:51

I am going to be starting in on SW 2019 testing very soon, and we'll see how it goes. I do agree that a lot of SW is CPU limited, and even just having a second 4K monitor plugged in (if not doing anything demanding on it) probably won't alter GPU requirements. 8K could, certainly, but we don't have any monitors like that around... nor do I expect them to become the norm for quite a while.

Posted on 2019-01-14 20:07:48
Valentin Leung

Hello,
I am currently speccing two new workstations - though, living abroad, I can't order from Puget.
Thank you for your work and report.

I was wondering, though: do different settings in the Performance tab or the Image Quality section of the document properties have an impact on viewport performance? You say you have a macro to set up the display settings, etc.
If there is no real benefit to going high end, I might just end up with a P620... and throw the difference in money at an Ultrawide 34".
:)

Edit: I went to look at the SPECapc for Solidworks 2017 results https://bit.ly/2sYnzhE - Lenovo graciously provided results for one box with 4 different NVIDIA GPUs, with and without FSAA. I don't know what the index number represents, so I plotted everything in terms of % of index increase, with the P620 being 100%.

I see two things:
Without FSAA, all figures are pretty much identical, from 100 to 115%; past the P2000 there is no point in reaching for a higher graphics card. The only real difference is seen when AO is enabled, where P620 - P1000 - P2000 - P4000 land at roughly 100% - 115% - 140% - 150%.
With FSAA, it's very similar, but we see more of a trend.

So basically, without FSAA without AO:
P1000 is average 106% of a P620
P2000 is average 108% of a P620
P4000 is average 108% of a P620

Without FSAA, with AO (only taking the two AO results):
P1000 is average 115% of a P620
P2000 is average 142% of a P620
P4000 is average 150% of a P620

Then with FSAA without AO:
P1000 is average 110% of a P620
P2000 is average 119% of a P620
P4000 is average 117% of a P620

With FSAA, with AO (only taking the two AO results):
P1000 is average 118% of a P620
P2000 is average 159% of a P620
P4000 is average 172% of a P620

So, the only benefit of going higher end is AO, at the complexity of the tested models (up to 4.75M).
Price-wise, P620 - P1000 - P2000 - P4000 comes to 100% - 191% - 245% - 428%.
I am tempted to stay with a P620 then, maybe a P1000...
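
To put a rough number on performance per dollar, here is a quick Python sketch using my normalized figures from above (the no-FSAA, no-AO averages, so treat the output as rough):

    # Normalized scores and prices, both with the P620 as the 100% baseline.
    # These are my own rough averages from above, not official numbers.
    scores = {"P620": 100, "P1000": 106, "P2000": 108, "P4000": 108}
    prices = {"P620": 100, "P1000": 191, "P2000": 245, "P4000": 428}

    for card, score in scores.items():
        # Higher is better: index points of performance per point of price.
        print(f"{card}: {score / prices[card]:.2f} performance per price point")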

On the other hand, SPECviewperf does show an incremental difference, but it runs on Solidworks 2013, so I am not sure whether that matters; the published results can be looked at the same way. I would be tempted to justify a P1000 just for that... but it's a stretch. I mean, I don't like to spend $$$ on useless things, but 91% more $$$ for a measly 5% increase... :S

In the end, I'll be very curious about your results on SW 2019. If the trend stays the same, I'll just push for the i7-9700K.

Thanks again for your hard work, and for sharing all of this.

Posted on 2019-01-31 08:07:45

There are definitely graphics quality settings that can affect performance, but we haven't thoroughly explored every option SolidWorks provides in that regard. The settings we change in our script are between shading with edges on vs off, normal view vs RealView, and ambient occlusion off vs on.

However, things are dramatically changing in SolidWorks 2019. We're testing that now, and there's a new setting which utilizes the video card to a much greater degree. I don't have all the results yet, but we should be publishing an article in the next few days with a lot more information about what you can expect in this new version with that setting enabled. If you don't plan to move to 2019, then the older information on our website and that data you got from Lenovo would be what you should base your decision on.

Posted on 2019-01-31 15:54:31
Valentin Leung

Thanks for your feedback, CNY here in this part of the world, so I can still wait for one or two weeks :)
Thanks for your input.

Posted on 2019-02-01 14:15:08

Here is our updated article, looking at SW 2019 graphics performance: https://www.pugetsystems.co...

Posted on 2019-02-06 22:56:30