Read this article at https://www.pugetsystems.com/guides/1112

SOLIDWORKS 2018 GPU Comparison: What Is the Meaning of This?

Written on February 21, 2018 by William George

Introduction

To go along with our recent SOLIDWORKS 2018 CPU comparison, I had planned on publishing an article looking at professional-grade Quadro and Radeon Pro graphics cards and how they stack up in SW 2018. The plan was to test the Quadro P2000-P6000 and Radeon Pro WX 5100-9100 cards, but partway through, something strange emerged from the results. I stopped testing those cards and instead looked at the performance of a lowly Quadro P1000 and a more mainstream GeForce GTX 1080 Ti. What follows is a quick look at the strange results from these tests, along with a question for readers: is this really reflective of SOLIDWORKS graphics performance, or is there a better way we can test video cards in this application?

In order to include a GeForce card in this test, I had to use a workaround to enable RealView - which is normally disabled unless you have a workstation-class graphics card. This is absolutely something I don't recommend doing if you are using SOLIDWORKS professionally, and the performance of GeForce cards in this situation is not something worth pursuing anyway (as you will soon see).

Test Setup

For my testbed system, I used the following hardware:

Testing Hardware
Motherboard: Gigabyte Z370 AORUS 5
CPU: Intel Core i7 8700K 3.7GHz (4.7GHz Turbo) Six Core
RAM: 4x Crucial DDR4-2666 16GB (64GB total)
GPU:

NVIDIA Quadro P1000 4GB (not originally planned)
NVIDIA Quadro P2000 5GB
NVIDIA Quadro P4000 8GB (planned, not actually tested)
NVIDIA Quadro P5000 16GB (planned, not actually tested)
NVIDIA Quadro P6000 24GB

NVIDIA GeForce GTX 1080 Ti 11GB (added late as a point of comparison)

AMD Radeon Pro WX 5100 8GB
AMD Radeon Pro WX 7100 8GB (skipped 4K testing)
AMD Radeon Pro WX 9100 16GB (skipped 4K testing)

Hard Drive: Samsung 960 Pro 512GB M.2 PCI-E x4 NVMe SSD
OS: Windows 10 Pro 64-bit
PSU: Antec 1000W HPC Platinum
Software: SOLIDWORKS 2018 SP 1.0

This platform is built around an Intel Core i7 8700K, as that is the current-gen CPU that gives the best possible performance in SOLIDWORKS for general usage and modeling. More than enough RAM was included to avoid that being a bottleneck of any kind, and a super-fast M.2 SSD was used for the same reason. As I mentioned in the introduction, my original plan was to test the mid-range and high-end "professional" / workstation-class graphics cards from both NVIDIA and AMD: the Quadro and Radeon Pro lines, respectively. The specific models are shown in the list above, with notes on cards that didn't end up in the final data set - and some that were added to try to make sure the video card's performance actually had some impact. But I am getting ahead of myself...

To perform the actual benchmarking, I used the same basic testing we've used here at Puget for analyzing graphics performance in SOLIDWORKS in the past, updated slightly for the 2018 release: a mix of AutoIt scripts and SOLIDWORKS macros that set the different quality settings, load the relevant model, and record the average FPS while rotating the model. In the past we experimented with different LOD settings but found the difference to be marginal, so to keep things simple I will only be reporting results with LOD off (which usually results in a small drop in FPS). To record the FPS, a macro rotates the model 45 degrees to the left and right for a set number of frames while a timer runs. From the number of frames and the total time it took to render them, our software determines the average FPS (frames per second). One key detail is that every model starts with the view set to front-top, so that any reflections and shadows stay in view while the model is being rotated.
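
For anyone curious about the arithmetic behind those FPS numbers, the calculation itself is trivial: divide the number of frames rendered by the wall-clock time it took to render them. Below is a minimal Python sketch of that logic; the render_frame callable is a hypothetical stand-in for whatever draws one frame of the rotation (in our case, the SOLIDWORKS macro), not part of any real SOLIDWORKS API.

    import time

    def measure_average_fps(render_frame, frame_count=200):
        """Render a fixed number of frames and return the average FPS.

        render_frame is a hypothetical callable that draws one frame of the
        rotation (for example, nudging the model a small increment left or right).
        """
        start = time.perf_counter()
        for _ in range(frame_count):
            render_frame()
        elapsed = time.perf_counter() - start   # total seconds for all frames
        return frame_count / elapsed            # frames per second

    # Example with a dummy "frame" that just sleeps ~16 ms (roughly 60 FPS):
    if __name__ == "__main__":
        print(measure_average_fps(lambda: time.sleep(0.016), frame_count=100))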

For test samples, we have utilized models available from GrabCad.com that provide a range of complexities based on the total number of parts and number of triangles. Only results from the most complex of these will be included in this particular article, but the tests were run on all three and the overall results follow the same patterns. The models in our testing are the following:

Steam Engine w/ Horizontal Beam
by Ridwan Septyawan
80 parts - 0.26 million triangles

Spalker
by Andy Downs
364 parts - 0.5 million triangles

Audi R8
by ma73us
434 parts - 1.4 million triangles


One note I would like to make: if you do not know how many triangles the models you work with have, the easiest method I know of to find out is to simply save the model as an .STL file. During the save process, a window pops up with information about the model, including the number of files, the file size, and the number of triangles.
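
As a side note for anyone who already has STL exports on disk: a binary STL file stores its triangle count as a 32-bit integer immediately after an 80-byte header, so the count can also be read without opening SOLIDWORKS at all. A quick Python sketch (this only applies to binary STL files, not ASCII ones; the file name is just an example):

    import struct

    def stl_triangle_count(path):
        """Return the triangle count stored in a binary STL file.

        Binary STL layout: 80-byte header, then a little-endian uint32
        triangle count, then 50 bytes per triangle facet.
        """
        with open(path, "rb") as f:
            f.seek(80)                        # skip the fixed-size header
            (count,) = struct.unpack("<I", f.read(4))
        return count

    # Example usage (hypothetical file name):
    # print(stl_triangle_count("audi_r8.stl"))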

Results Chart

If I had found the sort of performance differences between cards that I expected, based on our past testing, this is where I would have split up the results from each model run and 1080P versus 4K resolution. But as you will see in just a moment, the results were not at all as one might anticipate. Here is how each card stacked up when rotating the Audi R8 car model in SOLIDWORKS 2018:

I know there is a lot on that chart, but hopefully the overarching trend is easy to spot. Within each graphics card family - Quadro, GeForce, and Radeon Pro - the performance of all cards at all resolutions is effectively the same! A Quadro P1000 driving SOLIDWORKS on a 4K display gives the same results as a P6000 (a card more than ten times the price) hooked to just a 1080P screen. Likewise, the Radeon Pro WX 5100 at 4K and the WX 9100 at 1080P are the same. Technically, the lower-end card in each of those pairings actually performed slightly better than its big brother, but the differences are so slim that they are within the margin of error.

Now, when comparing between GPU families, there is a difference. As with the previous generation of NVIDIA cards, the GeForce GTX 1080 Ti - even though it is one of the fastest graphics cards on the planet for most applications - performs substantially worse than any of the Quadro or Radeon Pro models. The Radeon Pro cards also aren't quite as fast as the Quadros, though the difference is small (less than 10%) when using the Shaded with Edges view mode.

But how is it that a Quadro P6000 and P1000 give exactly the same results like this? Have we simply reached the point where even a low-end workstation graphics card is sufficient for smooth operation in SOLIDWORKS? Or is our 434 part / 1.4 million triangle car model simply not complex enough to show the difference between cards anymore? Perhaps our methodology for testing frame rates is not properly measuring real-world performance?

Conclusion

This is where I am hoping you, dear readers, can help. If you have any personal experience with upgrading or changing from one of the video cards tested to another - and actually noticed a difference in performance - please let me know in the comments! Or if you have input regarding better ways to test the impact that graphics cards have within SOLIDWORKS, or can provide a more complex model for us to test with, I'd love to hear about that as well!

As it stands, I cannot in good conscience make a blanket statement about GPU performance in SOLIDWORKS 2018. A couple things do seem pretty certain, though:

  • Avoid GeForce cards, as they perform substantially worse in SOLIDWORKS than professional-grade cards (not to mention they are often in short supply these days, thanks to cryptomining).
  • If you aren't working with models more complex than around 500 parts / 1.5 million triangles, then anything from the Quadro P1000 or Radeon Pro WX 5100 on up should do just fine.

I look forward to reading your comments and hope that we can improve this aspect of our testing in the future.

Tags: SOLIDWORKS, GPU, Graphics, Video, Card, NVIDIA, Quadro, GeForce, AMD, Radeon, Pro
James Allerton

What about render times for specific scenes? It would also be good to get an understanding of their performance in Visualize, as I imagine a great number of Solidworks users will now be rendering in that package. Thanks, James

Posted on 2018-02-21 18:20:30

Render times within Solidworks itself - just the base application, using PhotoView 360 - are not impacted substantially by the video card. There was at most about a 4% variation, which should be within the margin of error (26-27 seconds for the pre-pass, and then 158-162 seconds for the main render).

Now Visualize as I recall is GPU accelerated, so that should indeed be impacted by the graphics card model and quantity... and indeed, that is what we found in our testing last year:

https://www.pugetsystems.co...

https://www.pugetsystems.co...

https://www.pugetsystems.co...

To me, though, that is separate from testing Solidworks performance. What the testing in the article above was aimed at is measuring viewport performance, while modeling, to see if a more powerful GPU will help there or not. In the past we had seen some differences, especially between lower-end and mid-grade cards, but this time around it seems to depend only on which GPU family a given card is in. I feel like this must mean we are missing something - not enough complexity on our models, or a less-than-ideal method of measuring, or something... hence the appeal to readers at the end of the article :)

Posted on 2018-02-21 18:35:30
James Allerton

Thanks for replying so quickly.

It is interesting that there is such little difference in the viewport rendering given the price difference in cards!

I find myself in a frustrating position that I create relatively uncomplicated (low part count) assemblies in Solidworks, and use the bundled Visualise to create slick looking visuals for our clients. I understand from speaking with Solidworks that a card with the highest CUDA count would give me the best rendering performance in Visualise. Now bang for buck, and looking at your tests from last year that would lead me to the GeForce cards but they aren't supported by Solidworks!

Posted on 2018-02-21 19:01:28

Yeah, that is one of the unfortunate effects of only "professional" or "workstation" class cards being certified for use with Solidworks. If all you were doing is Visualize, then yeah - a GeForce would be great... but for modeling in SW, that would really hamper performance (and support if you ever needed it as well).

I would think that something around the Quadro P4000 level might give you the best compromise - it should be similar to the GeForce GTX 1070 in terms of rendering (maybe a little slower) but much better than a GeForce for modeling, without being too expensive (less than $1k). The next jump up, the P5000, is something like twice the price... but not twice as fast.

Posted on 2018-02-21 19:09:39
James Allerton

Thanks William! That's kind of the conclusion I've come to!

Posted on 2018-02-21 19:11:31
Streetguru

Vega FE may be useful to test, but I guess AMD is kind of phasing that card out. Wonder if HBCC to extend your VRAM is at all useful as well.

Posted on 2018-02-21 19:33:14

In what we are currently able to test, VRAM was not an issue / factor in SW viewport performance.

Posted on 2018-02-26 16:52:05
Paul Gaster

Nothing wrong with your testing, it's fine.

The GeForce line has crippled drivers for OpenGL CAD programs like SolidWorks and Siemens NX. Certain features are not enabled like they are with the Quadro cards.
Only the Quadro cards have hardware-accelerated, antialiased points & lines for example. This is why the GeForce cards don't do too bad in just shaded mode. But turn on the edges and the GeForce cards are not accelerating those lines like a Quadro will. This kills performance on typical GTX cards.

One previous article here showed a Quadro K620 curb stomping a Titan X with edges displayed. This driver/feature issue is the reason.
Most people working in SolidWorks want to see the edges, so a Quadro is pretty much mandatory for real work.

To also prove this is a driver issue, research what Nvidia did with the Titan Pascal cards on driver version 385.12. This driver release was a direct response to the AMD Radeon Vega Frontier Edition. This driver enabled most, if not all, the Quadro features for these GTX Titan Pascal cards. Performance increases in some Specviewperf 12 tests were amazing. Siemens NX went from 9 fps to 75 fps. SolidWorks went from 50 or 60 fps to over 100 fps.
The card was still, well, the card. No hardware changed. Only the drivers changed.
https://wccftech.com/nvidia...

AMD also has similar differences between the Radeon gaming cards and Radeon Pro WX cards, but the gap is smaller in most cases. You still should get a pro card for OpenGL CAD programs though.

Autodesk products are DirectX, so pro and gaming cards work about the same. Actually top gaming cards have more power and may be better.

As to why the P1000 and up Quadro cards all pretty much perform the same for model rotations, well, I think it's because there isn't that much geometry and texture data on the screen. The CPU will be the limiting factor. Games have much more detail and things move much faster. CAD just isn't as demanding on the GPU in most cases.

Posted on 2018-02-24 00:46:09

Thank you for your comments, Paul! I guess my only remaining question is: are there users of SW with models complex enough that the various Quadro and Radeon Pro cards will start to differentiate themselves when simply manipulating things in the viewport? I want to make sure that we are pointing folks with even extremely complex models to the right solutions...

Posted on 2018-02-26 16:49:36
AC

It's a first that the TITAN X Pascal & TITAN Xp, let alone the TITAN V, have professional drivers and improvements. It was definitely a response to the Vega FE.

See quasarzone korea's testing https://quasarzone.co.kr/bb...
The specviewperf viewsets are typically 20 million vertices for Solidworks. Specapc for Solidworks is a more accurate benchmark since it uses the program though.

Posted on 2018-03-01 06:22:59

We've looked into using Specapc as a benchmark for some applications, but unfortunately it is mostly outdated - and quite expensive for system builders. The SW variant, for example, is still using SW 2015! That is now three versions out of date, and while the results might still be applicable to newer versions, there is no way to know without actually testing the modern versions... and then what point would there be in paying for a benchmark that only works on an old version? That is the same reason we have not used Specapc for Maya or 3ds Max, and why we may soon develop our own testing for those programs as well.

Posted on 2018-03-01 17:29:54
Nick

The software is not very optimized. It can probably only use a small part of the CPU resources, so it's CPU limited. How is the CPU usage? 17-20% perhaps? In this case, GPU cannot do much. The better driver of the pro card makes a difference, but that's it.

Resolution shouldn't make a difference, if the CPU is only pushing coordinates. All GPUs can render fast enough for all resolutions if the load is low.

I just tested an old machine vision app of mine on my intel 8100/IGPU and my new 1950X/GTX1080 workstation. The practical way was to optimize some parts for 4 cores and it uses GDI drawing (not important in this case). It's practically identical frame rate on both machines, as I expected. We do have an EPYC workstation, but I won't bother. Half frame rate of the intel 8100.

Posted on 2018-02-24 17:26:36
Nick

I just checked the CPU benchmark. 7980xe and 7820x at same performance. 8700 faster than 7820x. There is probably a single/dual thread bottleneck in the rotation benchmark. In this case, a Pentium G4560 should be 5-10% slower than a 1950x.

Posted on 2018-02-24 17:33:10
Gavin Prinsloo

Hi William,
thanks for the excellent article - I was about to purchase a mid-tier Quadro, might change that now. For me, there seem to be several usage scenarios / realms within the Solidworks / CAD environment: 1. Basic CAD (basic shape geometries and assemblies), 2. Advanced CAD (complex surfaces, larger assemblies), 3. Very Large CAD (massive assemblies - think entire processing plants - I have seen this done in different packages with special pipe and cable routing add-ins), 4. Rendering (Photoview 360, Composer, etc.), 5. Simulation, 6. Flow Simulation, 7. Other add-ins

Most people will typically have 1 or 2 of these that they use regularly, and maybe another 1 or 2 they use on the odd occasion. The demands on the PC for these are quite different though. I imagine that scenarios 1 through 3 have similar demands, just increasing in scale. 4 may require more in the GPU / RAM department. 5 only seems to use GPU for representing the results and deformed body, so is similar to 1-3.

6, I believe, can use the GPU for the calculation itself, so in this case I imagine the higher-end Quadros will be great. Of course, it depends on the scale of the project. The company I used to work for used Flow simulation for modelling heating cycles inside large industrial enclosures. Some of the simulations would easily take a week on a 6 core machine with 32GB of RAM. Flow Sim also allows you to pass the simulation off to another machine (as long as it has Solidworks & Flow Sim licenses). So, you can have a Simulation-only machine with high spec gear sitting on the network as a common resource, while everyone has a CAD (1-3) optimised workstation.

I know that it's not the typical use case, but it would be good to see some tests on this to get clarity on what is required/best for each use case.

Posted on 2018-02-27 21:21:29

Hi Gavin -

We actually have test data on rendering (PhotoView 360), Motion Study, Stress Simulation, and Flow Simulation in Solidworks 2018. That info was included in the CPU comparisons we did earlier this year, which are linked to at the end of the article above, but I didn't include the results in this GPU comparison because the GPU didn't make a difference in any of those areas.

I just pulled up results for the P1000, P6000, WX 5100, and GTX 1080 Ti to get a quick sample, and the results vary by only a few percent. The biggest difference is in the motion study test we did, which saw a variance of about 9% from the lowest to highest results, but it was also the shortest test - so even a fraction of a second difference looks bigger when taking it as a percent. That test took from 20.5 to 22.2 seconds across the various video cards (with the same CPU - an i7 8700K). It does look like the AMD Radeon Pro WX 5100 might be slightly slower for some reason, as it always came back in the 22 to 22.2 second range, but it is hard to say for sure.

On the other tests, these were the variance ranges for the results:

Render Pre-pass (PhotoView 360): 26.4 to 26.8 seconds
Render Time: 157.8 to 161.3 seconds
Stress Simulation: 111.6 to 115.0 seconds
Airflow Simulation: 101 to 103 seconds
Thermal Simulation: 151 to 156 seconds

As you can see, none of those appears to be impacted in any significant way by the video card. Because of that, I left these results out of the article above - and instead focused on the area I was hoping to see some difference, which is general modeling.
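
For anyone who wants to sanity-check those ranges: the spread quoted above is just the slowest result minus the fastest, taken as a percentage of the fastest. A quick Python sketch, using the render pre-pass numbers above as an example:

    def percent_spread(times):
        """Slowest-to-fastest spread as a percentage of the fastest time."""
        fastest, slowest = min(times), max(times)
        return (slowest - fastest) / fastest * 100.0

    # Render pre-pass times (seconds) across the cards sampled above:
    print(round(percent_spread([26.4, 26.8]), 1))   # ~1.5 percent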

We've recently gotten our hands on a more complex model, with over 4,000 parts and ~35 million triangles. When I have a chance, I am going to add it to our benchmarking utility and see if we find any difference between GPUs with something that complex. Hopefully I can put out an update, one way or the other, in the coming weeks.

Posted on 2018-02-27 22:30:07
Gavin Prinsloo

Hi William,

thanks for the reply. My apologies - I was so sure (when I used it about 6 years ago) that there was an option to use the GPU for Flow Sim. Looking at the instructor's handbook for 2015, as well as a few other articles online, no such luck.

An article from 2013 stated that they were "looking into the possibility". Five years later, and despite some amazing increases in GPU power and efficiency, we're still CPU bound.

I suppose we have to wait for companies like Onshape to take a chunk of market share to force SW to make some changes.

Thanks again for the awesome articles, and thanks to Puget Systems for making this info readily available.

Posted on 2018-02-28 23:23:29
Doug Shepherd

I had an interesting experience when my work computer was upgraded recently. I'm an EE but I do often use SolidWorks -- 3D rendering is a hot topic in PCBs these days. I was a bit disappointed when my new computer didn't seem much improved over the old dyno mule. It's a Dell something something with a Quadro P2000 video card. I was poking around online and found that NVIDIA makes several counters available to the Windows performance monitor, and found how to watch them. (See https://www.youtube.com/wat... ).

What I found was that rotating a generated rendering in a 3D PDF only used a small amount of the GPU's horsepower, no more than 20%. In Altium, the PCB creating software we use here, the meter would often spike to near 100% when doing 3D operations. This isn't terribly surprising to me -- Adobe probably doesn't do nearly as much work to utilize the processing power that's available in modern display adapters, whereas Altium probably focuses on that pretty hard --performance is very important to sales for them.

What was VERY surprising to me was that SolidWorks used little, if any, of the GPU's horsepower, and in fact the GPU's usage rate would sometimes go DOWN when I was rendering or rotating a part in SW (it seems to idle in the 10% range for whatever reason). I'm not sure what to make of that, but it does kind of square with my experience...3D PDFs in Adobe Acrobat will often freeze as I'm rotating them for multiple seconds before rendering, SolidWorks is smoother but still far from what I'd call speedy, but 3D operations in Altium are positively zippy. I can rotate and/or regen in Altium very quickly and very smoothly, without ever seeing a single hiccup or momentary freeze. It's an order of magnitude better performance.

Again, I'm not entirely sure what to make of all this, but I'm forced to conclude that SolidWorks may not do as good a job of utilizing the GPU's power for 3D operations as Altium does. Maybe it's a multi-core thing, maybe it's an approximation thing (SolidWorks is, after all, first and foremost a mechanical drawing package and requires unwavering accuracy above all else), but whatever it is, I'm much more impressed with Altium's 3D performance than with SolidWorks's.

Posted on 2018-06-06 19:16:03
Vincent

I've opened identical models in Solidworks in both SLDASM and STP formats and noticed that the SLDASM takes a substantial amount of time to load relative to its STP counterpart. I believe a lot of this corresponds to the metadata generated for features, mates, and constraints. I'm by no means an expert regarding Solidworks and its effects on hardware, but thought this might be a factor in your analysis.

Thanks for your detailed analysis on components. You guys are definitely helping me out in choosing a new PC!

Posted on 2018-10-30 03:47:19
Andrew Morgan

I built a machine to primarily run Solidworks and Agisoft Photoscan. As a result, I went with a pair of P2000 cards for CUDA cores and found some serious problems in Solidworks. Removing one card caused all those problems to go away IMMEDIATELY. I'm looking to upgrade to a single P4000 as it should be roughly as fast as the pair of P2000s in Photoscan while hopefully not causing issues in Solidworks.

Posted on 2018-11-04 18:40:56

Switching to a single P4000 should definitely help. Solidworks is not really built to run on multi-GPU systems. I've also heard that SW 2019 will have more GPU acceleration built in, which will likely benefit from the faster GPU you are moving to.

Unfortunately, we don't have any recent test data showing how well the Quadro cards perform in Photoscan. If you weren't also using Solidworks, I would advise a high-end GeForce card for Photoscan... but with SW in the mix, a Quadro is probably the right choice. I hope that ends up working well for you! :)

Posted on 2018-11-05 17:49:56
Andrew Morgan

I think your most recent article is still pretty accurate where the GPU mostly helps with the dense cloud step. However, during a recent run, I noticed that the software switches to CPU only processing or single thread only processing when it nears completion of a step and if your CPU doesn't have screaming fast single thread performance, this can actually cause a step to take a LONG time to complete.

I honestly think Photoscan is like the Olympics for computers as each step seems to stress a different aspect of the machine.

I was surprised to learn that Solidworks had issues with multiple GPUs even if all the monitors were plugged into a single card or each monitor into its own card.

Posted on 2018-11-05 18:00:46

Photoscan, at least in recent versions, definitely does the best on a high clock speed CPU with a moderate amount of cores... at least when used with a single video card. The highest clock speed CPUs usually can't handle more than two GPUs, though, including the Core i7 9700K and i9 9900K which topped the performance charts in our last round of testing.

Posted on 2018-11-05 18:10:18
GodFear17

This benchmark is very bad. It's not testing anything of real value. Right off the bat, when your graph is saying everything is doing the same FPS in 1080p and 4K, you know something is up. When a P1000 with 600 cores is doing better than or the same as something with 3500 cores, you know your results are skewed somehow. This benchmark should just be debunked, as it's not a real test of any kind.

Posted on 2019-01-07 17:55:36

I believe that is exactly the point of this article. We couldn't find a way to show a performance advantage of a higher-end GPU in SOLIDWORKS, so William put up the data we did have in order to get feedback from SOLIDWORKS users on when a higher-end GPU would actually be faster. I believe the main issue we have hit is that SW is largely a single threaded application, and even with the fastest CPUs available, we are hitting a CPU bottleneck on all the tests and models we have tried.

Posted on 2019-01-07 18:20:08
GodFear17

Totally agree, but instead of "CPU bottleneck", I think "software bottleneck" better explains to the end user what's actually happening. My worry is someone "not so much in the know" could look at this and say, oh ok, well there is no reason to buy anything better than a P1000 for anything - when most other software packages absolutely would take advantage of the hardware, mainly the VRAM of the cards.

I think a larger model that is taxing the vram would show better differences perhaps. But yea solidworks is way behind the curve in the multi-threaded area.

Posted on 2019-01-08 17:32:14

I don't know if it is really behind the curve; it is just the nature of parametric software to be single threaded. Since every point in a model depends on the location of the point before it, you can't really spread that work out across multiple cores. Things like simulations and ray-traced rendering can be multi-threaded (and they are in SOLIDWORKS), but actually generating the model and displaying it isn't going to be multi-threaded until some new form of modeling is developed.

The article William linked to included a test with a very large model and he was able to see a performance benefit with a P2000, but nothing beyond that.

Posted on 2019-01-08 17:37:55
GodFear17

true, I'm not sure direct would be the solution either.

But parametric can be multithreaded there is a good write up on "Parametric Model in Speculative Multithreading"
https://www.mdpi.com/2073-8...

It can be done. There is almost always a way to multithread things. The question is, right now would there be a real benefit to doing it? Does having higher than 50-60 fps when rotating a model *need* it? I dunno, I of course would say yes. I'd like to see what some kind of 20 GB file does in Solidworks using the 24 GB card - can Solidworks handle that? Or does no one ever do this in Solidworks? I have no clue.

Yeah, the 2nd article was a 2 GB file. The P1000 is a 4 GB card, etc. So it's not really going over that 8 GB of VRAM in testing to see what happens.

Posted on 2019-01-08 17:56:36

After this article, we were able to get hold of a larger, more complex model - which did show a little more performance difference, particularly in regards to the lower-end cards like the P1000:

https://www.pugetsystems.co...

However, if you have any suggestions for better ways to test interactive performance in Solidworks (as opposed to things like rendering and simulations, which we already do separately) I would be happy to hear them! We are always looking for ways to improve our testing methodology :)

Posted on 2019-01-07 23:13:13
GodFear17

I think that's the problem: until Solidworks does a better job of multi-threading their software, I don't believe this particular type of benchmark would change much.
But I do think some kind of model that actually uses the VRAM of the higher end cards would help show off the power of the larger cards.

Not sure if my comment was taken the wrong way or not; it was not a dig at Puget Systems, but at the way Solidworks is kind of behind the times.

Posted on 2019-01-08 17:29:16

I am no expert on this, and I just want to get the best out of the rig for fast, accurate rendering. I have both a GeForce GTX 1080 and a P2000. When reading on this, it's usually one versus the other - GeForce or Quadro, which one is best. I had the GeForce already in the setup, and I now use Solidworks Visualise, which is why I got the P2000, but I can see it is not being utilised at all. The PC favours the GeForce and the P2000 is just doing nothing.

How do I utilise both graphics cards? It would be ideal to have a manager that can designate a purpose to each card, like only use the P2000 for rendering and the GTX for general graphics for my screen and every other application.

The setup has the GeForce in slot 1 and the P2000 in slot 2. Would flipping them do anything at all?

Posted on 2019-01-28 12:27:28

Mixing a GeForce and Quadro card isn't a very common configuration, and many applications may not know how to best handle that. In general, the video card your monitor is connected to is going to be the one responsible for displaying graphics - so for Solidworks (for example) you would usually want a Quadro to be doing this, since GeForce cards are not certified to work properly with that application. Programs which use the video card(s) for compute purposes, like rendering, usually have other ways of specifying which card(s) should be used for that purpose. I am not sure about how Visualize handles that off the top of my head, but I would look in the settings and see if there are any options relating to which GPUs to use. For those sorts of calculations, though, the GTX 1080 is going to be far faster than a P2000.

Posted on 2019-01-28 19:46:26