SOLIDWORKS 2018 GPU Comparison: What Is the Meaning of This?
Written on February 21, 2018 by William George
To go along with our recent SOLIDWORKS 2018 CPU comparison, I had planned on publishing an article looking at professional-grade Quadro and Radeon Pro graphics cards and how they stack up in SW 2018. The plan was to test the Quadro P2000-P6000 and Radeon Pro WX 5100-9100 cards, but partway through, something strange emerged from the results. I stopped testing those cards and instead looked at the performance of a lowly Quadro P1000 and a more mainstream GeForce GTX 1080 Ti. What follows is a quick look at the strange results from these tests, along with a question for readers: is this really reflective of SOLIDWORKS graphics performance, or is there a better way we can test video cards in this application?
In order to include a GeForce card in this test, I had to use a workaround to enable RealView - which is normally disabled unless you have a workstation-class graphics card. This is absolutely something I don't recommend doing if you are using SOLIDWORKS professionally, and the performance of GeForce cards in this situation is not something worth pursuing anyway (as you will soon see).
For my testbed system, I used the following hardware:
| Component | Model |
|---|---|
| Motherboard | Gigabyte Z370 AORUS 5 |
| CPU | Intel Core i7 8700K 3.7GHz (4.6GHz Turbo) Six Core |
| RAM | 4x Crucial DDR4-2666 16GB (64GB total) |
| Video Cards | NVIDIA Quadro P1000 4GB (not originally planned) |
| | NVIDIA GeForce GTX 1080 Ti 11GB (added late as a point of comparison) |
| Hard Drive | Samsung 960 Pro 512GB M.2 PCI-E x4 NVMe SSD |
| OS | Windows 10 Pro 64-bit |
| PSU | Antec 1000W HPC Platinum |
| Software | SOLIDWORKS 2018 SP 1.0 |
This platform is built around an Intel Core i7 8700K, as that is the current-gen CPU that gives the best possible performance in SOLIDWORKS for general usage and modeling. More than enough RAM was included, to avoid that being a bottleneck of any kind, and a super-fast M.2 SSD was used for the same reason. As I mentioned in the introduction, my original plan was to test the mid-range and high-end "professional" / workstation-class graphics cards from both NVIDIA and AMD: the Quadro and Radeon Pro lines, respectively. The specific models included are shown in the chart above, with some notes regarding cards that didn't end up in the final data set - and some that were added to try and make sure the video card's performance did actually have some impact. But I am getting ahead of myself...
To perform the actual benchmarking, I used the same basic approach we've used here at Puget for analyzing graphics performance in SOLIDWORKS in the past, just updated slightly for the 2018 release: a mix of AutoIt scripts and SOLIDWORKS macros to set the different quality settings, load the relevant model, and record the average FPS while rotating the model. In the past we had experimented with different LOD settings but found the difference to be marginal, so to keep things simple I will only be reporting the results with LOD off (which usually results in a small drop in FPS). To record the FPS, a macro uses a timer to rotate the model 45 degrees to the left and right for a set number of frames. From the number of frames and the total time it took to render them, our software determines the average FPS (frames per second). One key factor is that every model starts with the view set to front-top, so that any reflections and shadows stay in view while the model is being rotated.
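The arithmetic behind that measurement is simple: render a fixed number of frames, time the whole run, and divide. Here is a minimal Python sketch of the idea; the `render_frame` callback and frame count are illustrative stand-ins, not part of the actual AutoIt/macro tooling described above.

```python
import time

def measure_average_fps(render_frame, frame_count=200):
    """Render a fixed number of frames and derive the average FPS
    from the total wall-clock time, as in the macro described above.

    render_frame is a hypothetical callback that rotates the model
    one step and redraws it; in our real testing this work is done
    by a SOLIDWORKS macro driven by AutoIt.
    """
    start = time.perf_counter()
    for i in range(frame_count):
        render_frame(i)  # e.g. rotate the model one increment and redraw
    elapsed = time.perf_counter() - start
    return frame_count / elapsed  # average frames per second
```

Averaging over a fixed frame count (rather than a fixed duration) keeps the amount of rotation identical between runs, so every card renders exactly the same sequence of views.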
For test samples, we have utilized models available from GrabCad.com that provide a range of complexities based on the total number of parts and number of triangles. Only results from the most complex of these will be included in this particular article, but the tests were run on all three and the overall results follow the same patterns. The models in our testing are the following:
One note I would like to make: if you do not know how many triangles the models you work with contain, the easiest way I know of to find out is simply to save the model as an .STL file. During the save process, a window pops up with information about the model, including the number of files, the file size, and the number of triangles.
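If you already have an STL on disk, you can also read the triangle count directly from the file. A binary STL begins with an 80-byte header followed by a little-endian 32-bit triangle count. The sketch below assumes a binary (not ASCII) STL; the file path is hypothetical.

```python
import struct

def stl_triangle_count(path):
    """Return the triangle count stored in a binary STL file.

    Binary STL layout: 80-byte header, then a uint32 triangle count,
    then 50 bytes per triangle. ASCII STL files (which SOLIDWORKS can
    also export) do not have this header and are not handled here.
    """
    with open(path, "rb") as f:
        f.seek(80)  # skip the 80-byte header
        (count,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return count
```

This is handy for checking downloaded models (for example, from GrabCAD) against your own assemblies without opening them in SOLIDWORKS first.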
If I had found the sort of performance differences between cards that I expected, based on our past testing, this is where I would have split up the results from each model run and 1080P versus 4K resolution. But as you will see in just a moment, the results were not at all as one might anticipate. Here is how each card stacked up when rotating the Audi R8 car model in SOLIDWORKS 2018:
I know there is a lot on that chart, but hopefully the overarching trend is easy to spot. Within each graphics card family - Quadro, GeForce, and Radeon Pro - the performance of all cards at all resolutions is effectively the same! A Quadro P1000 pushing SOLIDWORKS on a 4K display gives the same results as a P6000 (a card more than ten times the price) hooked to just a 1080P screen. Likewise, the Radeon Pro WX 5100 at 4K and the WX 9100 at 1080P match as well. Technically, the lower-end cards in each of those pairings actually performed slightly better than their big brothers, but the differences are so slim that they fall within the margin of error.
When comparing between GPU families, however, there is a difference. As with the previous generation of NVIDIA cards, the GeForce 1080 Ti - even though it is one of the fastest graphics cards on the planet for most applications - performs substantially worse than any of the Quadro or Radeon Pro models. The Radeon Pro cards also aren't quite as fast as the Quadro models, though the difference is small (less than 10%) when using the Shaded with Edges view mode.
But how is it that a Quadro P6000 and a P1000 give effectively identical results like this? Have we simply reached the point where even a low-end workstation graphics card is sufficient for smooth operation in SOLIDWORKS? Or is our 434 part / 1.4 million triangle car model simply not complex enough to show the difference between cards anymore? Perhaps our methodology for testing frame rates is not properly measuring real-world performance?
This is where I am hoping you, dear readers, can help. If you have any personal experience with upgrading or changing from one of the video cards tested to another - and actually noticed a difference in performance - please let me know in the comments! Or if you have input regarding better ways to test the impact that graphics cards have within SOLIDWORKS, or can provide a more complex model for us to test with, I'd love to hear about that as well!
As it stands, I cannot in good conscience make a blanket statement about GPU performance in SOLIDWORKS 2018. A couple things do seem pretty certain, though:
- Avoid GeForce cards, as they perform substantially worse in SOLIDWORKS than professional-grade cards (not to mention they are often in short supply these days, thanks to cryptomining).
- If you aren't working with models more complex than around 500 parts / 1.5 million triangles, then anything from the Quadro P1000 or Radeon Pro WX 5100 on up should do just fine.
I look forward to reading your comments and hope that we can improve this aspect of our testing in the future.
Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.