Agisoft PhotoScan 1.4.1 - Testing Introduction
Written on April 20, 2018 by William George
PhotoScan is a photogrammetry program: an application that takes a set of images and combines them to create a 3D model or map. It has been a couple years (and several version updates) since we last tested PhotoScan, so we are revisiting it to see what has changed and how it performs on modern computer hardware.
This is the first in a series of articles we will be publishing, based on PhotoScan Professional version 1.4.1. Version 1.4.2 came out in the middle of our testing, but according to our contacts at Agisoft: "There shouldn't be any considerable changes in the processing performance" - so we are sticking with 1.4.1 throughout this series.
To start off, we are just going to talk about what aspects of computer hardware impact PhotoScan performance, and what sort of settings we are going to use in our tests going forward. As such, this will be a relatively short article - with more discussion and explanation than actual data and charts to look at.
First off, then, let's discuss what hardware in a computer has the biggest impact on PhotoScan. That is pretty straightforward: the CPU and GPUs (video cards) do the bulk of the computation here. Each step in PhotoScan can utilize one or both of those, depending on how Agisoft has coded it and what preferences the user has selected within PhotoScan. Breaking it down further, here are the major processing steps as listed under the Workflow menu in PhotoScan:
- Align Photos
- Build Dense Cloud
- Build Mesh
- Build Texture
- Build Tiled Model
- Build DEM
- Build Orthomosaic
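For readers who script their runs rather than clicking through the GUI, the same workflow can be driven from PhotoScan Professional's built-in Python console. The sketch below is ours, not Agisoft's, and only runs inside PhotoScan itself (the `PhotoScan` module is not a standalone package); the photo path is a placeholder, and argument names should be checked against the Python API reference for your exact version:

```python
import PhotoScan  # available only inside PhotoScan Professional

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["/path/to/photos/img_001.jpg"])  # placeholder path

# Align Photos ("High" accuracy, the GUI default)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Build Dense Cloud ("High" quality; version 1.4 splits this
# into separate depth map and dense cloud calls)
chunk.buildDepthMaps(quality=PhotoScan.HighQuality)
chunk.buildDenseCloud()

# Build Mesh and Build Texture
chunk.buildModel(face_count=PhotoScan.HighFaceCount)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
```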
In that workflow, only the first two items - Align Photos and Build Dense Cloud - include steps which can utilize the GPU(s). Agisoft claims in the manual for PhotoScan 1.4 that "photoconsistent mesh refinement" also uses the GPU, but based on our testing that does not seem to be part of any of the core workflow steps. Instead, it is available as the Refine Mesh option under the Tools menu as well as in batch processing. Both aligning photos and building the dense point cloud do show huge performance differences with and without a GPU, though, as well as differences from one GPU model to another and with multiple GPUs working together.
With that said, the Align Photos step is relatively short: often only 20 to 60 seconds with our test image sets. The Build Dense Cloud step, on the other hand, is one of the longer steps - and with lower-end hardware, or with quality settings at maximum, it can easily be the longest single step... and indeed, in many cases can take longer than all the other workflow steps put together! That step defaults to the best quality setting - "Ultra High" - but we found that sticking with it makes testing multiple runs of multiple image sets on multiple hardware configurations take far too long. On the flip side, reducing quality down to "Medium" or "Low" makes the step finish quickly but results in very rough-looking models.
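To put those quality levels in perspective: according to Agisoft's manual, each step down from "Ultra High" processes the images downscaled by a factor of 4 in total pixel count (2x per side). A quick back-of-the-envelope helper (ours, not part of PhotoScan) shows how fast the workload shrinks - the 20 MP source size is just an example:

```python
# Approximate megapixels processed per image at each dense cloud
# quality level, assuming 4x pixel-count downscaling per step
# down from "Ultra High" (per Agisoft's manual).
QUALITY_LEVELS = ["Ultra High", "High", "Medium", "Low", "Lowest"]

def effective_megapixels(source_mp, level):
    """Megapixels actually processed at the given quality level."""
    return source_mp / 4 ** QUALITY_LEVELS.index(level)

for level in QUALITY_LEVELS:
    print(f"{level}: {effective_megapixels(20.0, level):.2f} MP")
# "High" processes only a quarter of the pixels of "Ultra High",
# which is why dense cloud times drop so sharply.
```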
As such, we have settled on "High" quality settings across the board for our testing... at least whenever such settings are an option on those workflow steps. We perform Align Photos at "High" quality, which happens to be the default anyway; we build the dense point cloud at "High" quality; we build a "High" face count mesh; etc. That has resulted in reasonable lengths for each test run, while also having the tests take long enough that we can plainly see the performance difference between various hardware combinations.
Additionally, we will only be showing performance charts for steps that are appropriate to each article. GPU related tests, then, will only show Align Photos and/or Build Dense Cloud times - while CPU comparisons may include steps that are GPU accelerated since the CPU still plays a role in them. The image sets we use will also vary, depending on the focus of the testing in each article.
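Our per-step timing approach can be sketched in plain Python: wrap each workflow step in a wall-clock timer, repeat the full run several times, and average the results. This is a hypothetical harness of our own, not Agisoft code; the lambda "steps" below are stand-ins for real PhotoScan operations:

```python
import time
from collections import defaultdict

def benchmark(steps, runs=3):
    """Run each (name, callable) step `runs` times and return the
    average wall-clock seconds per step."""
    totals = defaultdict(float)
    for _ in range(runs):
        for name, step in steps:
            start = time.perf_counter()
            step()
            totals[name] += time.perf_counter() - start
    return {name: total / runs for name, total in totals.items()}

# Stand-in steps; a real run would invoke PhotoScan's workflow here.
averages = benchmark([
    ("Align Photos", lambda: time.sleep(0.01)),
    ("Build Dense Cloud", lambda: time.sleep(0.02)),
], runs=2)
```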
When each article in this series is published, I will update this list with a link to it. For now, though, this shows the topics we plan to cover (subject to change, of course, as our testing proceeds):