

Read this article at https://www.pugetsystems.com/guides/1457

Metashape Benchmark

Written on May 3, 2019 by William George

Metashape Logo (all rights to this image belong to Agisoft)

Here at Puget Systems, we have put together a benchmark utility for Agisoft Metashape which measures system performance by running two small projects - one model and one map - and tracking the time taken to process each step. This benchmark is freely available to download, below, though running it requires a valid installation of Metashape on a 64-bit (x64) Windows operating system. We built and tested it on Metashape Professional, which is available for a 30-day trial if you do not already own a license for it.

How to Use the Metashape Benchmark

Using this benchmark is quite simple, as it functions through the Python scripting support built into Metashape Professional.

  1. Download the Metashape Benchmark file by clicking on the blue button above
  2. Once downloaded, unzip "Puget Systems Metashape Benchmark.zip" to a location where you have write permission
  3. Run your installation of Metashape
  4. Open the "Puget Systems Benchmark.psx" project file within Metashape
  5. Go to the Tools drop down menu and click on Run Script
  6. Click on the folder icon and navigate to the location where you unzipped the benchmark in step 2
  7. Select the "Puget Systems Benchmark.py" file, click on Open, and then click on OK
  8. At the intro screen, read the description and then click on Start to begin
  9. Avoid using the mouse or keyboard during the benchmark execution - changing focus at the wrong time may interfere with it
  10. When the benchmark is complete, a summary screen will be displayed with the time (in seconds) for processing each step
  11. That information will also be saved to a results file in the same folder, along with some basic system specs, for future reference
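
At its core, the benchmark script just times each processing step and appends the results to a text file. A minimal sketch of that pattern in plain Python (the `run_step` and `write_results` helpers and the placeholder steps are illustrative; the real script invokes Metashape API calls in place of the lambdas):

```python
import time

def run_step(name, func, results):
    """Time a single processing step and record the elapsed seconds."""
    start = time.perf_counter()
    func()  # in the real benchmark this would be a Metashape API call
    elapsed = round(time.perf_counter() - start, 1)
    results.append((name, elapsed))
    return elapsed

def write_results(path, results):
    """Append step timings and a total to a plain-text results file."""
    total = round(sum(t for _, t in results), 1)
    with open(path, "a") as f:
        for name, elapsed in results:
            f.write(f"{name}: {elapsed}\n")
        f.write(f"Total Processing Time: {total}\n")

# Placeholder steps standing in for actual processing:
results = []
run_step("Align Photos", lambda: time.sleep(0.01), results)
run_step("Build Texture", lambda: time.sleep(0.01), results)
```

This matches the results-file format shown in the comments below (step name, colon, seconds).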

You can compare the results from running this benchmark on your system to those shown in future Metashape performance articles we publish. Past PhotoScan articles used different image sets, so those results will not be comparable to this benchmarking tool.

Please also share your results and system specs in the comments below!

Update Log

5-3-2019: Initial public release
5-8-2019: Update to benchmark, breaking it into a few smaller steps and adding model calculation via Depth Maps. Total processing times are directly comparable to previous release, but some of the step times are not.
5-15-2019: Update to benchmark, enabling support for Metashape Professional 1.5.2 (1.5.0 and 1.5.1 should still work as well)

Tags: Metashape, PhotoScan, photogrammetry, Benchmark, Agisoft, CPU, GPU, Performance

Are you running a bench for the Depth Maps meshing method or only meshing via Dense Cloud (not GPU accelerated)?

Edit: After looking at the .py script, it seems you are not benchmarking the new Depth Maps based mesh reconstruction method. Why not? It is literally the biggest feature addition of this year's release.


Posted on 2019-05-05 10:09:06

The dataset is unfortunately too "simple" to be effective as a benchmark, IMO. Only 36 20MP photos isn't going to sweat any modern CPU/GPU, especially when running the reconstruction at Medium settings (which is what the benchmark is set up to do).
Anyway, here are my results for the Rock, running the "benchmark" manually on Metashape 1.5.1 Standard and using the Depth Maps based meshing method.

Win10 Pro/Intel i7 6700K/32GB Ram/Samsung 970 EVO/Radeon VII:

Alignment: 19.289 sec
Build Mesh: 92.424 sec (Depth Maps reconstruction took 17.096 seconds)
Texturing: 64.24 sec


Posted on 2019-05-05 10:40:13

Our benchmarks are indeed using fairly small data sets, which was intentional in order to keep the zipped file sizes reasonable. We have bigger image sets, but I didn't want to ask folks to download 20GB+ files to benchmark, plus those also take much longer to process. Internally we will be running both the smaller image sets included in these public benchmarks as well as larger ones, and my *hope* is that we will find performance to be similar in both cases (systems that are faster with the small image sets also being faster with the large ones, etc).

As for building the mesh, most of my experience is with PhotoScan and other photogrammetry applications where the only method of creating a mesh is by using the dense point cloud. I hadn't dabbled with using depth maps instead, but I did look into that a little over the weekend after I first read your comments. Maybe I will see if I can add that capability to the benchmark, such that it tests both methods. I would also like to try and create a different benchmark utility which would work with Metashape Standard, avoiding the use of scripting, but I don't have access to a license of that version so I can't account for any other potential differences compared to the Professional demo.

Posted on 2019-05-06 16:15:17

I just tried both the normal (dense cloud) and depth map approach to building the mesh on one of my test systems here, and got the following results:

Core i9 9900K + RTX 2080 Ti + 64GB (starting from the same project with Alignment already completed)

Build Dense Cloud: 29.6 seconds
Build Mesh from Dense Cloud: 53.2 seconds
Face Count: 137k

Build Depth Maps: 14.0 seconds
Build Mesh from Depth Maps: 72.2 seconds (86.2 seconds minus the 14.0 seconds taken by the Depth Maps portion)
Face Count: 23k

So at least on this system, with this image set, the Depth Maps method was actually slightly slower (86.2 seconds vs 82.8 seconds) and resulted in a far less complex model (about 1/6th the polygon count compared to the mesh derived from the Dense Cloud method).
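
For reference, the bookkeeping behind that comparison — summing the Dense Cloud route, and subtracting the shared depth-map time from the combined Depth Maps figure — can be spelled out (numbers taken from the results above):

```python
# Dense Cloud route: build the dense cloud, then mesh from it (seconds)
dense_cloud = 29.6          # Build Dense Cloud
mesh_from_dense = 53.2      # Build Mesh from Dense Cloud
dense_total = round(dense_cloud + mesh_from_dense, 1)   # 82.8

# Depth Maps route: the combined 86.2 s includes computing the depth
# maps, so the mesh step alone is the difference
depth_maps = 14.0           # Build Depth Maps
depth_total = 86.2          # combined time for the Depth Maps route
mesh_from_depth = round(depth_total - depth_maps, 1)    # 72.2

print(f"Dense Cloud route: {dense_total}s, Depth Maps route: {depth_total}s")
```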

Now one thing that I did note is that it looked like the Depth Maps method was "cleaner" - in that it avoided some of the extra points which the Dense Cloud picked up that shouldn't have been there and had to be manually removed. So maybe that is an advantage it has? I'll do a little more testing, but this seems sort of a mixed bag at the moment.

Posted on 2019-05-06 17:01:53

If I had to guess, the Depth Maps method was slower in your test because your photo set wasn't especially complex and because of the Medium settings used. Once you crank up the number of photos and also the settings (High and Ultra High) you will see things reverse, because the GPU will finally be put to work.
Also note that the Depth Maps meshing method is out-of-core, which means you are no longer going to hit a RAM limit when processing a large set of photos. For example, I can process projects using this method which I was not able to do using the Dense Cloud method, because I only have 32GB of RAM at home.

Also note that the face count presets are just that: presets. In PhotoScan/Metashape the mesh is always generated at the highest possible detail and then decimated to the preset you selected or the custom face count you chose. If you set the face count to 0, the mesh won't be decimated and you will get the full mesh that the app has processed.

Regarding the complexity of the sample data: it is highly probable that a more complex dataset (or higher settings) is faster on one GPU architecture than another, and that the same GPU architecture is slower when the dataset or settings are smaller... that's the "fun" part of photogrammetry, unfortunately.
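
If that description is accurate, the face count setting acts as a post-generation cap rather than a generation parameter. A rough sketch of those semantics (the helper name is hypothetical, not part of the Metashape API):

```python
def decimation_target(full_face_count, requested):
    """Face count after decimation, per the behavior described above:
    0 means 'keep the full mesh'; any other value caps the count."""
    if requested == 0:
        return full_face_count
    return min(requested, full_face_count)
```

So a preset that maps to, say, 137k faces would simply cap whatever the full reconstruction produced, while 0 leaves it untouched.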

Posted on 2019-05-06 17:32:38

I remember your comment a couple months ago about issues with 1.5.2, so I've been only using 1.5.1 in my development of the benchmarks and the testing that I am starting to do here. Even though that might not impact the dense cloud method, I wanted to err on the safe side. Thank you for that tip :)

I wasn't aware that Metashape worked as you describe in regards to the face count, though. Again, thank you for the insight!

Given all of this, I'm curious what you would advise. For the public benchmark, we wanted something that would be small enough (in terms of download size) and fast enough (in terms of processing time across a wide range of system hardware) that it wouldn't create a barrier to lots of people being able to try it out... while still retaining some level of value in terms of hardware comparison accuracy. Then with our larger data sets, we want to be able to verify whether the results from the smaller tests are applicable as image counts grow - and in the cases where they may not be, to provide a second set of data for readers to use in selecting system hardware. At the same time, even those tests cannot take more than a few hours to process or we simply cannot get through enough testing in a timely manner to make effective content.

Posted on 2019-05-06 17:47:59

I think those benchmark datasets are a good idea (I didn't know that you planned on also running larger ones for the articles, which is great). The more projects I have under my belt, the more I realize that every dataset behaves differently, which makes benchmarking extremely difficult. Your best bet would be something around 100x 24MP photos as your larger bench dataset at High settings, so it doesn't take hours to complete but still gives you a good idea of overall processing time. Also add a Depth Maps mesh generation bench for Metashape to be complete, as this option is super useful for systems with less than 64GB of RAM (but still not as fast as Reality Capture, obviously) and results in faster and more detailed reconstruction as long as the source images are of top quality (well... not in 1.5.2...).

The official description in the manual: "Depth maps setting allows to use all the information from the input images more effectively and is less resource demanding compared to the dense cloud based reconstruction. The option is recommended to be used for Arbitrary surface type reconstruction, unless the workflow used assumes dense cloud editing prior to the mesh reconstruction."


Posted on 2019-05-06 18:03:26

Hmm, interesting - so I can at least avoid worrying about the depth map approach when doing map projects (since they use height field rather than arbitrary mode). That is good to know, and makes a lot of sense.

EDIT: Wait, what am I saying? The workflow for maps used Build Tiled Model instead of Build Mesh, and that too offers Depth Maps as an option. Argh, I'm getting myself all confused now. Too many options across too many steps :/ lol

I'm still leaning toward testing both modes, if possible, since that could provide some really interesting data points. Maybe with a certain level of GPU vs CPU one method or the other would be faster, for example? I'll have to do a bit more digging, but as always I appreciate your input :)

Posted on 2019-05-06 19:37:06

William, feel free to DM me on Twitter so we can exchange emails (it would be easier than communicating through this comment section).

Posted on 2019-05-06 22:19:55

I tried to send you a DM on Twitter, but it said I was not allowed to send a DM to your account (@MobileTechWorld I assumed). If you'd prefer to email, my address isn't particularly a secret: william {at} pugetsystems {dot} com

Posted on 2019-05-06 23:36:27

Mail sent.


Posted on 2019-05-07 15:23:40
David Fletcher

Hi, thank you for doing this. I've found the Metashape benchmarking info you've posted previously really useful in my own PC set up.

Running the new benchmark I get the following error after about 5 minutes:

'reuse_depth' is an invalid keyword argument for this function

I'm using the professional edition.

Here is the Benchmark Results.txt file, as far as it got:

Agisoft Metashape Professional Version: 1.5.2
Benchmark Started at 22:08 on May 10, 2019
CPU: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
Number of GPUs Found: 2
GPU Model(s): GeForce GTX 1080, Intel(R) HD Graphics 630
Project: Rock Model
Align Photos: 31.9
Build Depth Maps: 19.4
Build Dense Cloud: 32.8
Build Mesh from Dense Cloud: 85.2
Decimate Mesh: 18.5
Build Texture: 91.0
Total Processing Time: 278.8
Project: Rock Model using Depth Maps
Align Photos: 31.9
Build Depth Maps: 19.4

Many thanks

Posted on 2019-05-10 21:26:56
David Fletcher

I'm using metashape version

Posted on 2019-05-10 21:28:05

Hmm, that is interesting - I built and tested this using Metashape 1.5.1 (unsure of the full version off the top of my head)... I wonder if something changed in 1.5.2? I've been avoiding that release based on feedback from folks in the comments on other articles here, indicating that a certain part of the workflow was no longer functioning properly in 1.5.2, but maybe I need to give it a shot and see if I can replicate the issue you are seeing.

Just FYI, the step it looks like it is failing on is what I call "Build Mesh from Depth Maps", which runs this command:

chunk.buildModel(surface=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation, face_count=0, source=Metashape.DepthMapsData, quality=Metashape.MediumQuality, keep_depth=True, reuse_depth=True)

The purpose of "reuse_depth=True" there is to have it use the previously-calculated depth maps, which are saved in the project file that is opened prior to this command, so that tracking the time of calculating the mesh can be separated from calculating the depth maps. If you want to explore this further, you could check to see what running a similar command directly from the console might achieve.

Posted on 2019-05-10 22:01:10

Actually, I don't even have to test it - I found the following line in the updated Metashape Python API 1.5.2 PDF:

• Removed quality and reuse_depth arguments from Chunk.buildModel() method

(source: https://www.agisoft.com/pdf...

Hmm, I'll have to think about how I can approach this now. Thank you again for notifying me! I'll let you know when I've updated the benchmark, so you can run the full test.

Posted on 2019-05-10 22:40:39
David Fletcher

Thank you kindly. Look forward to trying it. :)

Posted on 2019-05-11 13:30:06

Okay, I've split up the code path for that single step into one for 1.5.0 & 1.5.1 and another for all other versions - intending it to cover 1.5.2 and future updates, though it is always possible that Agisoft will change more of the commands in the next version(s). The link above should be updated, but if you'd rather not re-download the whole package I can email you the Python script by itself instead. Just drop me an email at william [at] pugetsystems [dot] com and I'd be happy to send that over.
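
A version check along those lines might look like the following pure-Python sketch (the helper name is hypothetical; in the real script the version string would come from the Metashape API, and the quality value here is a placeholder string standing in for the module's enum):

```python
def build_model_kwargs(version):
    """Pick buildModel() keyword arguments by Metashape version: 1.5.0 and
    1.5.1 accept quality= and reuse_depth=, which the 1.5.2 Python API
    removed (per the API changelog quoted above)."""
    kwargs = {"face_count": 0}
    if version.startswith(("1.5.0", "1.5.1")):
        kwargs["quality"] = "MediumQuality"  # placeholder for the API enum
        kwargs["reuse_depth"] = True
    return kwargs
```

Keying off a prefix match means 1.5.2 and any later version fall through to the reduced argument set by default, which is the safer assumption if Agisoft keeps the new signature.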

Posted on 2019-05-15 20:37:47
David Fletcher

Hi, thank you for the update.

I have now benchmarked 2 computers:

Computer 1 -> Alienware 17 R4
CPU: core i7-7820HK
RAM: 32gb
GPU: 1 x gtx 1080

Agisoft Metashape Professional Version: 1.5.2
Benchmark Started at 19:08 on May 16, 2019
CPU: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
Number of GPUs Found: 2
GPU Model(s): GeForce GTX 1080, Intel(R) HD Graphics 630
Project: Rock Model
Align Photos: 32.7
Build Depth Maps: 20.1
Build Dense Cloud: 33.5
Build Mesh from Dense Cloud: 96.1
Decimate Mesh: 17.7
Build Texture: 89.8
Total Processing Time: 289.9
Project: Rock Model using Depth Maps
Align Photos: 32.7
Build Depth Maps: 20.1
Build Mesh from Depth Maps: 63.6
Decimate Mesh: 13.2
Build Texture: 91.1
Total Processing Time: 220.7
Project: School Map
Align Photos: 49.1
Build Depth Maps: 106.8
Build Dense Cloud: 88.6
Build Tiled Model: 818.3
Build DEM: 11.1
Build Orthomosaic: 88.7
Total Processing Time: 1162.6
Benchmark Completed at 19:35


Computer 2 -> Custom built bench-mounted PC
CPU: i9-9900X
RAM: 128gb
GPU: 2 x gtx 1080ti

Agisoft Metashape Professional Version: 1.5.2
Benchmark Started at 19:49 on May 16, 2019
CPU: Intel64 Family 6 Model 85 Stepping 4, GenuineIntel
Number of GPUs Found: 2
GPU Model(s): GeForce GTX 1080 Ti, GeForce GTX 1080 Ti
Project: Rock Model
Align Photos: 16.0
Build Depth Maps: 9.7
Build Dense Cloud: 16.1
Build Mesh from Dense Cloud: 57.5
Decimate Mesh: 16.3
Build Texture: 44.9
Total Processing Time: 160.5
Project: Rock Model using Depth Maps
Align Photos: 16.0
Build Depth Maps: 9.7
Build Mesh from Depth Maps: 49.7
Decimate Mesh: 12.2
Build Texture: 47.4
Total Processing Time: 135.0
Project: School Map
Align Photos: 22.9
Build Depth Maps: 48.2
Build Dense Cloud: 44.9
Build Tiled Model: 527.7
Build DEM: 6.9
Build Orthomosaic: 53.8
Total Processing Time: 704.4
Benchmark Completed at 20:06

Posted on 2019-05-16 19:12:25

Awesome, I'm glad it works properly on 1.5.2 now :)

Posted on 2019-05-17 17:44:27
Atanas Dinchev

Agisoft Metashape Professional Version: 1.5.2
Benchmark Started at 17:52 on May 25, 2019
CPU: Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
Number of GPUs Found: 4
GPU Model(s): GeForce GTX 1080 Ti, GeForce GTX 1080 Ti, GeForce GTX 1080 Ti, GeForce GTX 1080 Ti
Project: Rock Model
Align Photos: 18.2
Build Depth Maps: 10.7
Build Dense Cloud: 23.5
Build Mesh from Dense Cloud: 80.8
Decimate Mesh: 21.2
Build Texture: 58.1
Total Processing Time: 212.5
Project: Rock Model using Depth Maps
Align Photos: 18.2
Build Depth Maps: 10.7
Build Mesh from Depth Maps: 107.4
Decimate Mesh: 15.9
Build Texture: 60.3
Total Processing Time: 212.5
Project: School Map
Align Photos: 23.9
Build Depth Maps: 42.5
Build Dense Cloud: 80.8
Build Tiled Model: 730.3
Build DEM: 8.0
Build Orthomosaic: 60.5
Total Processing Time: 946.0
Benchmark Completed at 18:15

Posted on 2019-05-25 15:19:01

7. "Select the 'Puget Systems Benchmark.py' file, click on Open, and then click on OK"

There is an "Arguments" box below the field where the .py script file is selected. What is this arguments nonsense? I only need this program for the benchmark, nothing more.

Posted on 2019-07-11 12:09:32

That is in case you want to add additional commands that are passed to the script. In this case, for the purposes of the benchmark I made, just leave it empty :)

Posted on 2019-07-11 20:02:52