
Read this article at https://www.pugetsystems.com/guides/710

Agisoft PhotoScan GPU Acceleration

Written on September 11, 2015 by Matt Bach

Introduction

When designing a computer there are literally thousands of different hardware components to choose from and each one will have an impact on the overall performance of your system in some shape or form. Depending on the software you will be using, however, some components will simply be more important than others. In the case of Agisoft PhotoScan, the two components that most directly affect performance are the CPU and the video card. In this article we want to look at how the performance and number of video cards in a system affects the time it takes to generate a 3D model in PhotoScan. If you are interested in how the CPU affects PhotoScan performance, we recommend reading our related article Agisoft PhotoScan Multi Core Performance.

While PhotoScan can do a number of different tasks, for this article we are going to focus on one of the four basic steps PhotoScan goes through to convert a series of photographs into a 3D model:

  1. Align Photos
  2. Build Dense Cloud
  3. Build Mesh
  4. Build Texture

Of these four steps, only the "Build Dense Cloud" step is able to utilize the video card. However, that one step takes longer than all the others combined, which makes choosing the correct GPU critical to ensuring that the hardware in your system is properly balanced. In terms of the time it takes to generate a 3D model, building the dense cloud accounts for approximately 50% of the total time if you use medium settings, 80% if you use high settings, and 96% (!) if you use ultra high (or highest) settings.
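
To put rough numbers on why that matters, here is a quick back-of-the-envelope sketch (in Python, with a hypothetical 2x GPU speedup factor; only the dense cloud percentages come from the article) of how much a faster GPU setup shortens the whole project when only the dense cloud step benefits:

```python
# Back-of-the-envelope illustration of why the dense cloud fraction matters:
# if only that step is GPU-accelerated, the overall project speedup follows
# the usual Amdahl-style formula. The 2x GPU speedup used here is a
# hypothetical example, not a measured result.

def overall_speedup(dense_cloud_fraction, gpu_speedup):
    """Total project speedup when only the dense cloud step gets faster."""
    return 1.0 / ((1.0 - dense_cloud_fraction) + dense_cloud_fraction / gpu_speedup)

# Dense cloud fractions from the article: ~50% (medium), ~80% (high), ~96% (ultra high).
for quality, fraction in [("medium", 0.50), ("high", 0.80), ("ultra high", 0.96)]:
    speedup = overall_speedup(fraction, gpu_speedup=2.0)
    print(f"{quality}: ~{speedup:.2f}x faster overall with a 2x faster GPU setup")
```

At ultra high settings, where the dense cloud dominates, nearly all of a GPU upgrade shows up in the total project time; at medium settings only about half of it does.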

In this article we are going to look in detail at a number of different models and quantities of video cards at medium, high, and ultra high. If you would rather simply view our conclusions, feel free to jump ahead to the conclusion section.

Test Setup

For our test system, we used the following hardware:

Testing Hardware
Motherboard: Asus Z10PE-D8 WS
CPU: 2x Intel Xeon E5-2687W V3 3.1GHz Ten Core
RAM: 8x Kingston DDR4-2133 8GB ECC Reg.
Hard Drive: Samsung 850 Pro 512GB SATA 6Gb/s SSD
OS: Windows 8.1 Pro 64-bit
PSU: EVGA SuperNOVA 1600W P2 Power Supply
Software: Agisoft PhotoScan 1.1.6 build 2038 (64-bit)

We learned in our article "Agisoft PhotoScan Multi Core Performance" that PhotoScan's ability to use multiple CPU cores improves as you increase the number of GPUs. At the same time, Agisoft recommends disabling one core per GPU (which is done through Preferences -> OpenCL) for best performance. Because of these two factors we wanted to use a pair of Xeon E5 V3 ten core CPUs (for twenty physical cores in total) in order to ensure that the CPU would not bottleneck the video cards to any significant degree.

Originally, we were only planning on testing single and dual GPU configurations but as you will soon see we quickly found the need to bump our testing all the way up to four video cards. The video cards we used in our testing were the NVIDIA GeForce GTX 960, GTX 970, GTX 980, and GTX Titan X, in configurations of one to four cards each.

You will notice that we are only testing NVIDIA GeForce video cards and not NVIDIA Quadro or AMD Radeon/FirePro cards. PhotoScan should work fine with almost any modern video card since it uses OpenCL, but since it does not require double precision performance or ECC memory, the NVIDIA Quadro and AMD FirePro video cards would simply be a waste of money: you would be paying for features that PhotoScan cannot use. AMD Radeon cards should work great and would probably offer a bit better performance per dollar than the NVIDIA GeForce cards, but as we've discussed in the past we feel that in most situations AMD Radeon cards have historically been too unreliable for a professional environment. Especially if you will be running a job that takes a very long time, you really want video card(s) that are as reliable as possible.

For our test data we used the Monument sample data that Agisoft has made available on their website. We found that this set of images was a good mix of being hard enough on the hardware to really test GPU performance while also being relatively quick to finish (since we had to complete a large number of runs to gather the data we needed). To ensure that our results will be accurate for larger data sets, we also did some spot testing using other projects, including a larger data set provided to us by one of our customers. What we found is that while the resolution of the images doesn't influence the time it takes to align the photos, it does change the amount of time it takes to complete the other three steps at close to a 1:1 ratio. In addition, the number of images in the data set causes a linear increase in the amount of time it takes to complete all four steps (including aligning the photos). Since PhotoScan only uses the GPU for the "Build Dense Cloud" step, this means that a data set with either twice the number of images or twice the MP count will take approximately twice as long to complete that step. Further, a data set with both twice the number of images and twice the MP count will have a roughly 4x longer build time.

What that means is that we have found the performance of PhotoScan to be roughly linear so the CPU and GPU that is best for a large number of high res images will also be best for a smaller number of lower res images. The amount of actual build time will of course be different - and you will need more RAM for larger data sets - but a configuration that gives a 50% performance increase for a small data set should give a roughly 50% performance increase for a larger data set.
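
As a rough illustration of that scaling, the sketch below estimates dense cloud build time by scaling a known run linearly with image count and resolution. The baseline numbers are made-up placeholders, not figures from our testing:

```python
# Rough illustration of the scaling described above: dense cloud build time
# grows roughly linearly with both the number of images and the megapixel
# count per image. The baseline numbers are hypothetical placeholders,
# not measured values from our testing.

def estimate_dense_cloud_minutes(num_images, megapixels,
                                 baseline_minutes=30.0,
                                 baseline_images=50,
                                 baseline_megapixels=12.0):
    """Scale a known baseline run linearly by image count and image resolution."""
    return baseline_minutes * (num_images / baseline_images) * (megapixels / baseline_megapixels)

print(estimate_dense_cloud_minutes(50, 12.0))    # 30.0  (the assumed baseline)
print(estimate_dense_cloud_minutes(100, 12.0))   # 60.0  (2x the images -> ~2x the time)
print(estimate_dense_cloud_minutes(100, 24.0))   # 120.0 (2x images and 2x MP -> ~4x the time)
```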

Medium Quality Setting

Starting with the quality setting for "Build Dense Cloud" on medium, we already see some very interesting results. With a single video card, there is a nice decrease in build time of about 10% each time the GPU is upgraded to a faster model. For the dual, triple, and quad GPU configurations we saw an even better improvement (about 15-20%) between the GTX 960 and the GTX 970, but almost nothing when upgrading to either a GTX 980 or GTX Titan X.

The really useful information here is the amount of improvement we saw each time we increased the number of video cards. Depending on the model of video card, going from one GPU to two decreased the build time between 25-35%. Going from two to three was an additional 20-25% decrease in build time and going from three to four was another 10-15% decrease in build time on top of that. 

What this means is that (assuming you are not CPU limited) it is much better to have more GTX 960 or GTX 970 video cards than fewer high-end video cards if you will be using the medium quality setting. However, there are some additional factors, which we will discuss in the conclusion section, that depending on your budget may prevent you from using three or four mid-range video cards instead of one or two high-end cards.

High Quality Setting

Upping the quality setting to high gives us fairly similar results to the medium settings except that we see a benefit of using faster GPUs even at the higher video card counts.

On average, upgrading from a GTX 960 to a GTX 970 gives a great 20-25% decrease in build times, while upgrading to a GTX 980 or GTX Titan X results in a further 5-10% decrease in the time it takes to build the dense cloud. Again, however, this is somewhat overshadowed by how much of a performance benefit there is to having multiple video cards.

Going from one GPU to two results in about a 35-40% decrease in build time depending on the GPU, which is 5-10% better than what we saw on the medium settings. Going from two to three video cards results in another 20-30% decrease in build time, while going from three to four video cards results in a further 10-20% decrease.

Ultra High Quality Setting

While using the ultra high quality setting significantly increases the time it takes to build the dense cloud, the relative performance difference between the model and number of video cards is actually very similar to what we saw with the high quality setting.

The only oddity is that with only one or two video cards the GTX Titan X isn't as much of a performance improvement over the GTX 980 as it was on either the medium or high quality setting. It is certainly still faster than the GTX 980, but only by about 3-4%. Once you get up to triple or quad GPU configurations, however, it is around 7-13% faster than the GTX 980.

Overall, we saw about a 20-25% decrease in build time going from the GTX 960 to the GTX 970 depending on the number of video cards and anywhere from a 5-10% further decrease in build time upgrading to the GTX 980 or GTX Titan X.

Upgrading the number of video cards from one to two results in a 35-40% decrease in build time, while going from two to three is about a 20-25% further decrease in build time. Increasing the GPU count to four is a bit less effective, but still results in a 15-20% decrease in build time compared to the triple GPU configurations.

Conclusion

At first glance, it looks like choosing the right video card for PhotoScan follows the classic "more is better" approach. If you can buy four GTX 960 video cards for the same price as a GTX Titan X, and the four GTX 960s result in a build time that is twice as fast (or even more at high and ultra high settings), why wouldn't you do that? Yes, the power draw of four GTX 960s is twice that of a single GTX Titan X, but if they can cut the build time in half or more, that actually makes the four GTX 960s both cheaper and more power efficient than a single GTX Titan X.

The problem really comes about when you factor in the rest of the system - most notably the CPU. If you read our Agisoft PhotoScan Multi Core Performance article, there are a couple of very interesting CPU considerations when it comes to using multiple video cards in PhotoScan:

  1. As you increase the number of GPUs, the multi core efficiency of the "Build Dense Cloud" step increases by about 5% per GPU
  2. For every physical GPU in the system, Agisoft recommends you disable one CPU core (done through Preferences -> OpenCL)

Since you need one core reserved per GPU, and the multi core efficiency increases as you add more GPUs to the system, adding video cards also means you need a CPU with a higher core count. Since CPUs with higher core counts are more expensive, you have to balance purchasing multiple cheaper video cards against a more expensive high core count CPU to ensure that you are properly allocating your budget. Unfortunately, this makes it very difficult to recommend one GPU configuration over another.
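
As a simple illustration of that trade-off, the snippet below applies Agisoft's one-core-per-GPU rule of thumb to show how many cores are left for PhotoScan's own processing as you add video cards. The 20-core count matches our test system; everything else is just arithmetic:

```python
# Following Agisoft's rule of thumb quoted above: reserve one physical CPU
# core per GPU so the video cards can be fed data without starving.
# These numbers are purely illustrative.

def cores_left_for_photoscan(physical_cores, num_gpus):
    """Cores left for PhotoScan's CPU work after reserving one core per GPU."""
    if num_gpus >= physical_cores:
        raise ValueError("not enough CPU cores to reserve one per GPU")
    return physical_cores - num_gpus

# Example: a 20 physical core system like the dual Xeon test setup above.
for gpus in (1, 2, 3, 4):
    print(f"{gpus} GPU(s): {cores_left_for_photoscan(20, gpus)} cores left for PhotoScan")
```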

We came up with two ways to help you make sure that you have the correct combination of CPU and GPU to give you the absolute best performance in PhotoScan for your budget. The first is a Google doc where we used both the results from this testing and our CPU article to come up with an estimate of approximately how long it should take different combinations of CPU and video cards to build a theoretical 3D model. If you want to play around with that, it is available at:

You will need to make a copy of the sheet (through File -> Make a Copy) before you can edit anything, but this gives you the freedom to input different CPU options (although the ones we have in there should be among the best choices) as well as what your budget is. Based on that, it will highlight the CPU/GPU combinations that fall within your budget so that you can look through them to find the one with the lowest build time.
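
If you would rather script that kind of comparison than work in the spreadsheet, here is a minimal sketch of the same idea. Every price and relative performance number in it is a hypothetical placeholder; substitute figures from the Google doc or from your own testing:

```python
# Minimal sketch of the spreadsheet's approach: enumerate CPU + GPU
# combinations, keep the ones that fit the budget, and rank them by an
# estimated build time. Every price and relative performance factor below
# is a hypothetical placeholder; plug in figures from the Google doc or
# from your own testing.

from itertools import product

cpus = {  # name: (price_usd, relative_cpu_speed)
    "8-core Xeon":     (1000, 1.0),
    "2x 10-core Xeon": (3000, 1.6),
}
gpu_configs = {  # name: (price_usd, relative_gpu_speed)
    "2x GTX 970":    (700, 1.0),
    "4x GTX 960":    (800, 1.2),
    "2x GTX 980 Ti": (1300, 1.4),
}

def estimated_build_minutes(cpu_speed, gpu_speed, base_minutes=100.0):
    """Crude model: faster CPU and GPU configurations both shorten the build."""
    return base_minutes / (cpu_speed * gpu_speed)

def combos_within_budget(budget_usd):
    results = []
    for (cpu, (cpu_price, cpu_speed)), (gpu, (gpu_price, gpu_speed)) in product(
            cpus.items(), gpu_configs.items()):
        if cpu_price + gpu_price <= budget_usd:
            results.append((estimated_build_minutes(cpu_speed, gpu_speed), cpu, gpu))
    return sorted(results)

for minutes, cpu, gpu in combos_within_budget(3000):
    print(f"{cpu} + {gpu}: ~{minutes:.0f} min estimated build time")
```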

If you are configuring a system for PhotoScan, we recommend also reading our other articles regarding the hardware requirements for PhotoScan:

If you don't want to wade through all those different results, we also came up with three different recommended systems based on whether you have the budget for a dual, triple, or quad GPU system. These systems have a few CPU and video card options but they were designed so that you can't possibly make a bad decision. If you choose a more expensive CPU or GPU option, you will see a decrease in the time it takes to build a 3D model in PhotoScan:

Recommended Systems for Agisoft PhotoScan

Tags: Agisoft, PhotoScan, GPU, Video Card

I wanted to add a quick note for folks reading this who might wonder why the top-end GPU we tested was the GeForce GTX Titan X, yet we use the GTX 980 Ti in our recommended systems. That is because the 980 Ti is almost as fast as the Titan X, but about $400 less expensive. The biggest difference is that the Titan X has 12GB of memory vs 6GB on the 980 Ti, but in our testing the video card memory had no impact (only a small amount of it was being used). As such, the 980 Ti gives the best performance / value combination on the high end, and the 970 is a solid cost-savings option for lower priced systems.

Posted on 2015-09-11 21:52:44
stephen long

Hi William, I'm looking at doing large environment landscapes using 3000-ish pictures. According to the documents, Agisoft recommends 320GB of memory. I'm wondering: is that memory helped out by video card memory, or does video card memory have little overall effect on processing? So: 2x Intel Xeon E5-2637 V3 chips, 4x 980 Tis and 320GB (10x Samsung 32GB Registered DDR4-2133 memory), resting in an Asus Z10PE-D16 WS board. Overkill, or just right for the job?

Posted on 2015-10-15 01:47:43

You can't fit four GTX 980 Ti cards on that motherboard, but you can fit 3 - which is still a lot of power / performance. The CPU sockets are also closer together, limiting the cooling options somewhat.

I'm also not sure that the E5-2637 is the best value for this application. And you can't do 10 sticks of RAM - each CPU needs either 4 or 8 memory modules; anything else will throw off the quad-channel memory setup.

Posted on 2015-10-15 06:10:44
stephen long

Sorry, yes, 3 video cards, typo there :) As for the memory though, when using a large number of photos Agisoft recommends in this document http://www.agisoft.com/pdf/... that 5000 pictures would require 320GB of RAM, probably because with less the software would dump to the HDD and back into memory too often, killing performance, right? Have you done any larger projects, William? Any idea on run time?

Posted on 2015-10-16 02:49:53
stephen long

Or, option 2: buying a Dell R910 with 4x X7560 processors and 8 sticks of the 32GB? Would the 32 cores be used effectively, or not fully used by the application?

Posted on 2015-10-16 03:32:54

We did not have any image sets nearly that large to work with, so I have no idea how long they will take - I'm sorry!

I would note that if you are willing to step back to High instead of Ultra High, you could make do with 80GB (so probably 128GB in an actual system). That should still give great results, without requiring as expensive or complex of a system. To have 320GB, you would need a system equipped with 512GB... which is a lot of money, and more than you would need for just about any other task / application.

Posted on 2015-10-16 05:44:33
stephen long

Thanks William, I really appreciate the time you took to answer my questions. May your business grow because of your kindness.

Posted on 2015-10-16 13:05:31
Josh

FYI, the Asus Z10PE-D8 WS is able to support 4x SLI cards, as well as using 2 of the new Intel Xeon e5-2696 v4 (22 core/44 thread) at once.

Posted on 2017-05-17 22:30:15
Milos Lukac

Hi Matt, can I get in contact with you? You can reach me at muzeumhb@gmail.com for discussion.

Posted on 2015-09-17 07:41:09
Eyal Saiet

This is a great article. Something I think is worth adding: Agisoft files (after processing is done) tend to be several GB to tens of GB. Software such as Quick Terrain Modeller depends on huge GPU memory to work with these massive files. Therefore I think there is significant value in having one of the four graphics cards carry the largest memory you can afford.

Posted on 2015-10-07 00:18:05

The two cards we are currently listing on our recommended systems are the GTX 970 4GB and the GTX 980 Ti 6GB. Is 6GB of VRAM enough, or do you think we should add the GTX Titan X 12GB to that lineup? It doesn't give much performance over the GTX 980 Ti, but we could add it with a note about the additional VRAM being needed for certain post-processing software.

Posted on 2015-10-07 00:55:02
Dave Martin

I have found these pages a most useful resource, but I now have a multi-GPU / motherboard query.

Currently looking to assemble a system with a pair of GeForce GTX-980 GPUs to go in a dual-socket Xeon system with probably just one processor initially.

Initially I looked at a Dell 7810/7910 workstation, but was concerned that the position of the two slots suitable for the GPU cards meant that the fan on one card would be only a couple of millimetres away from the PCB of the next GPU, so there was negligible air circulation space between the two GPUs. I was also a little concerned as to whether the two suitable slots are tied one per processor - so if only one processor is installed initially, can it use both those GPUs?

Alternatively, looking at building a system using the Asus "Z10PE-D8 WS" motherboard (or the D16 version) - they have four suitable Gen-3 16x link slots - but it appears these slots may be in two pairs, two per processor, and the slots for each processor are adjacent so I couldn't have a single processor / two GPU cards with a big air gap between them.

Do you have experience with long runs with two GTX 980 cards right up alongside each other? Is any vendor's implementation of the GTX 980 more reliable in such situations?

Thanks in advance / Dave (Isle of Man)

Posted on 2015-10-11 16:14:23

Video cards can be run right next to each other, though when they can be spaced out more it is ideal from a cooling standpoint. When they have to be right next to each other, you need to make sure they have airflow directly on them. If building your own system, getting a case with a large side fan blowing onto the expansion card area is ideal. The cards themselves also need to use heatsink / fan setups that exhaust most of the heat out the back of the computer.

Hopefully that info helps!

Posted on 2015-10-12 04:31:01
Dave Martin

Thanks William. Do you actually have anyone who is successfully running a pair of GTX 980-Ti right next to each other for prolonged periods (many hours, even days) without the cards throttling-back due to temperature issues - and if so, would it be possible to share which make / variant of card is being used?

Many thanks / Dave

Posted on 2015-10-21 13:29:34

Yes - if you use the reference cooler design (single fan toward the front of the card, pushing air across the length of the card and out the back) those can be used next to each other with reasonable results. In addition, though, you want airflow coming directly in toward the cards in such a configuration. The best solution for that is usually a side fan on the case, directly over the video cards. With a sufficiently powerful fan there, we've done as many as four GTX 980 Ti style cards (Titans as well, with the same cooler design) in a single system.

Posted on 2015-10-21 15:36:38
Dave Martin

Just to close this query:

In the end we discounted the Dell Precision 7910 for our specific situation. Our config was a 7910 dual socket but with only one Xeon installed initially. In the 7910, as with most other dual-socket motherboards, if you only have one processor the two GPUs would be adjacent; with dual processors you could have free slots between them.

Having spoken to someone who has had heat problems with two adjacent GPUs in a Dell workstation, we went with a workstation with specific GPU cooling which actually has sufficient cooling for passive GPUs.

The new workstation we're using is based on a SuperMicro 7048GR-TR 'super GPU workstation', with one E5-2667v3 Xeon and 128GB RAM (4 x Samsung 32GB). The two GPUs are EVGA GTX 980 Ti SC (EVGA part no. 06G-P4-4992-KR) with no overclocking yet. These GPUs have linear airflow, and although we bought the workstation additional GPU cooling kit (p/n MCP-320-74701-0N-KIT), the workstation's standard cooling is so good we haven't had to install the extra cooling kit yet.

Dave

Posted on 2015-12-10 16:58:04
Terrence Elliott

This series of articles has been most useful to read, and I wish it had been written prior to us purchasing our PC back in 2013.

We process large aerial image datasets of around 2000-3000 images taken from our UAV.

Our current system spec is:
---------------------------------------------------------
Intel Core i7-3970X X79 Professional Workstation PC

NZXT Switch 810 SE Matte Black Full Tower Case

Corsair Professional PLATINUM AX1200i 80+ Fully Modular PSU

Intel Core i7-3970X (3.5GHz 15M Cache 12x Cores) Extreme CPU

Corsair H100i with Corsair Ultra Quiet Performance Fans (Push / Pull Config)

MSI Xpower II LGA 2011 Motherboard

64GB (8x 8GB) DDR3 1600MHz High Performance Quad Channel RAM

OCZ VECTOR 256GB (r: 550MB/s, w: 530MB/s) SSD

2TB Western Digital 7200RPM SATA3 6.0Gb/s 64MB Cache HDD

3x MSI Gaming Edition TFIV GTX 770 2GB 256bit OC Cards In SLI
---------------------------------------------------------

We are quite keen to upgrade a few components but budget is always an issue.
If we were to theoretically have around $750 for upgrades - which component upgrade would give us the best bang for buck based on our current configuration?

I'm thinking that a RAM upgrade as well as adding a 2nd CPU may be in order?

Posted on 2015-12-07 11:12:13

Honestly, there isn't much you can do for $750 to get higher performance out of your system. You are already using the maximum amount of RAM that CPU is capable of handling, and upgrading the CPU to something that can handle more (which would require a new motherboard and all new RAM in addition to the CPU itself) would cost much more than $750. Adding a second CPU would be a similar problem since you would need a new motherboard that can handle dual CPUs, compatible RAM, and likely a new power supply and chassis as well. Basically, if you want to upgrade the CPU or RAM you are looking at an all new system - although you could re-use the hard drives and video cards - rather than just an upgrade.

Really, the only thing I can think of that might get you more performance for your budget would be to upgrade one of the video cards to an NVIDIA GTX 980 Ti. PhotoScan doesn't require matching video cards, so you should be able to get better performance in the "build dense cloud" step by upgrading one of the cards. I should mention though that I haven't actually tested mixing different model cards, so I'm not sure if there are any complications that can come up by doing so. If you do that, make sure you buy the GPU from somewhere that has a good return policy since there is a chance you may run into problems.

Posted on 2015-12-07 19:56:34
Terrence Elliott

Hi Matt

Thanks - I appreciate the feedback. I must admit this sort of hardware is not my speciality so was not aware we had maxed out on the motherboard/CPU/RAM - good to know. Looks like it will be a whole new system at some stage then.

Regards

Posted on 2015-12-08 07:00:36
Carlo Rindi

Hi William,

I currently have a 4930K on an Asus X79 Deluxe, 64GB DDR3 2133, an Intel 750 800GB as the hard drive and a very old video card that I would like to replace. As my need is to realise the core of the model in PhotoScan and then work in Maya, Cinema 4D and other 3D programs (ZBrush), which video card would you recommend? I was leaning towards a 980 Ti, but saw that Quadro/FirePro are supposed to be better in the viewports and rendering. With my current setup the model is very hard to navigate in the viewports as the video card stutters.

Thanks

Posted on 2016-03-23 05:47:02

It has been a while since the last tests we did with Maya, but at least as of the 2014 version it was one of the few programs where consumer-grade cards (especially the GeForce line) did very poorly. Even the GTX Titan was outperformed by entry-level Quadro cards:

https://www.pugetsystems.co...

Based on that, if your budget allows it I would go for the Quadro M5000 or maybe M4000. The latter should be close in price to the 980 Ti you were considering, though not as fast for other applications. You might also consider a FirePro card, as they do particularly well in Maya... but I'm not sure how well they do with Photoscan, as they weren't included in our testing of that software.

Posted on 2016-03-23 16:54:04
Carlo Rindi

Many thanks for the kind reply William, I have a colleague who has that card (m4000) and will look into it. I will see if performance-wise it could be worth taking one of those, or just sticking to a 980ti / Titan and how much difference there is in the viewport for the use we have to make of it. Many thanks again and happy Easter!

Posted on 2016-03-26 01:20:03
voojoo

How about NVIDIA Tesla cards?

Posted on 2016-04-05 01:16:07

We didn't test those, but since they are currently older generation (based on Kepler instead of Maxwell) and PhotoScan doesn't seem to use double precision... I would expect they'd perform worse than the GeForce 900-series cards. They also cost a *lot* more. The one advantage that some of them have is more video memory, so there might be edge cases where that would make a difference - but I wouldn't want to bet thousands of dollars on that.

Posted on 2016-04-05 05:26:47
Ryan Huberdeau

I have a 5930K with two GTX 970s. It's my understanding that I have 6+6 cores and I'm supposed to disable 4 out of the 12 cores. Is this correct? The reason I ask is I'm confused as to the reasoning behind disabling CPU cores. What is the purpose?

Posted on 2016-05-16 20:04:06

You should disable one physical core (or two threads, since you have hyperthreading) per GPU, so you should disable 4 of the 12 threads you see in Photoscan. The reason behind this is that Photoscan will utilize both the CPU and GPU at the same time, but if the CPU is 100% loaded it will have a hard time getting the data to the GPU that the video card needs to do its own processing. By disabling some cores, Photoscan will not use them, which leaves them free to get the video card everything it needs.

The one core per GPU is mostly a rule of thumb, so you may be able to get away with just disabling one core for both of your video cards. It really depends on the speed of your CPU, the speed of your video cards, and the exact data you are working with. In your case, I think you very well might be able to just disable one core since your CPU is relatively high frequency and your GPUs are mid-range cards. Really, the only way to know is to do a couple of quick tests on your own data. Do it once with two cores disabled, then again with only one core disabled. Whichever is faster, go with that configuration.

Posted on 2016-05-16 20:54:56
John Waite

Hi, really enjoyed reading the article and would appreciate advice as I'm just starting to use the software. I plan on using an existing server (Fujitsu TX300 S7 with two E5-2670 Xeons, 96GB RAM and twin 300GB drives). To experiment I would want to start with 2 video cards (thinking 1060) but more if the need develops. Do you think the server is suitable, and could I mix 1060s with 1070s or 1080s if the project is a success and additional funding support is forthcoming?

Posted on 2016-11-21 15:33:53
Dale Wendorff

How does your software work with a GTX Titan Z? I have a Lenovo D30 tower that will not utilize SLI unless I am running Quadro video cards, and my Quadro 6000 is extremely unstable with your software. The Titan Z is the most powerful single card setup I could find.

Am I correct in assuming the Titan Z is merely two Titan X chips on a single card?

Posted on 2017-01-24 22:15:17

Hi Dale! PhotoScan is not our software, we just did testing on it to see how it performs with different hardware configurations. We build and sell computer systems based on that testing as well. I cannot answer for the PhotoScan developers - you would need to contact Agisoft, the maker of that program - but I can give you a little info that may help.

The Titan Z is indeed a dual GPU card, but it is not based on the Titan X chips. The Titan X is newer. The Titan Z was more like two of the original Titan cards put together... or maybe the Titan Black, I cannot remember now. It is definitely old, though, and I would generally recommend newer cards (unless you have old models lying around).

Additionally, video cards do *NOT* need to be in SLI to be used by PhotoScan. SLI is primarily a gaming technology, and software that uses video cards for other compute functions (like this program) do not use it. In fact, it can cause such programs to work improperly. You should be able to simply run multiple video cards without needing to connect them via SLI bridges or anything. The number and type of cards that will fit in your computer are determined by the motherboard, chassis, and power supply. I am not familiar with the Lenovo system you mentioned, though, so I cannot provide further info in that regard.

Posted on 2017-01-24 22:27:27

Edit: William beat me to a reply by a few minutes, but I'm going to leave mine here since we both said very slightly different things.

The GTX Titan Z is actually an older card: it is two slightly downclocked GTX Titan Black GPUs, not the newer GTX Titan X or Titan X (Pascal). Since it is two GPUs on one board, Photoscan should see it as two individual cards, but things often get weird with the Titan Z since dual GPU on one PCB is not done very often and some software has problems utilizing both cards. I would suggest getting in contact with Agisoft directly to see if they know whether it will work or not.

If you need a single GPU and can't do multiple cards, however, the new Titan X using the Pascal architecture (not the GTX Titan X we tested in this article) is actually going to be the most powerful single GPU available. The old GTX Titan Z might have more CUDA cores, but the Titan X runs at almost exactly twice the frequency alongside all the power/performance improvements that come with using a newer architecture. Hard to say how the performance compares without direct benchmarks, but just as an offhand guess I would imagine the Titan X is probably around 15-20% faster than the GTX Titan Z in Photoscan.

Posted on 2017-01-24 22:31:56
Prasanna Sreenivas

Hi. I am looking to buy a machine which supports Agisoft PhotoScan software. Kindly advise if an iMac machine with an AMD Radeon M390 would support this? If not, kindly advise me on the workstation required for this software. I am from India and would require your help on buying a suitable machine.

Posted on 2017-02-08 09:25:18
Jeff Stubbers

We do not offer Apple systems, so unfortunately we cannot provide recommendations there. However, the systems listed at: https://tinyurl.com/jnovwnv would include the hardware we would recommend for Agisoft PhotoScan.

Posted on 2017-02-08 13:43:20
Josh

Hello, just wanted to add something, in case anybody is still reading these.

Just discovered that if you use the Linux client with CUDA enabled, you can cut the time it takes to create a dense cloud in half.

Just downloaded the monument test file that this article used, and finished the entire dense cloud process in 67 seconds using 4x 1080ti graphics cards on Linux, whereas when I tried it on Windows with my same exact setup, it took 5 minutes.

I have tested a 1000-image generation as well; it took 1900 seconds on Linux, stock, and 4200 seconds on Windows while heavily overclocked.

Posted on 2017-05-17 22:22:25
Tohidul Islam

Hi Josh, that's really interesting. I am about to start a VR project where we're recreating a whole village using photogrammetry, and I'm just looking through these articles for the best hardware and software setup. Which Linux distro do you use and what is your hardware setup? We will most likely be using well over a thousand images. Thanks in advance.

Posted on 2017-07-29 16:49:05
Josh

I've got the machines that we're currently using set up on Ubuntu GNOME 16.04, with the CUDA 8.0 toolkit installed from NVIDIA's website (the local .deb version). The Agisoft Linux install is pretty straightforward: just extract the download and run the .sh. After install, be sure to go into the preferences and enable all of the cards, and make sure the tickbox that states "use cpu when performing gpu processing" is NOT ticked - that is a huge performance boost.

Posted on 2017-07-30 19:40:56
Tohidul Islam

OK, I'll try that. Also, on the CPU front, do you recommend going for dual Xeon processors or the new AMD Threadrippers? I heard a lot of people complain about AMD GPUs. In your experience is it better to go for something like a GTX 1080 instead?

Posted on 2017-08-01 13:16:56
Josh

The GTX 1080 Ti is the best card around for computing that I've found as far as price/performance. It's got an 11GHz transfer speed, which puts it at the top of the list at the moment. For CPU, I've noticed that Intel has quite a bit less latency than the AMD processors. Intel + Nvidia.

Unless you wanted to try the Threadripper AMD processors - they seem like they'd do well based on their specs - paired with some Vega GPUs.

Posted on 2017-08-04 03:55:37
Brian Stokes

Love this article. I am wondering how accurate it still is, given that PhotoScan has probably evolved since then. Is it still true that two not-as-fast GTX cards > one beefy GTX card in the app, provided you have enough cores (say, a 7820X)? (It's all getting weird with the miners making GPU costs go up.)

Posted on 2017-08-17 19:14:29