Reality Capture Benchmark Testing Methodologies

Introduction

It has been a while since we have benchmarked any photogrammetry applications. However, we’ve always wanted to return to photogrammetry, since work in these applications can often take hours or days, and users are always looking for ways to speed up the process. In addition, over the past two years, numerous changes in both the software packages and the industry as a whole have changed what type of hardware works best. This means that we’ve needed to start over on our benchmark development, and we have chosen to start with Reality Capture, which we included in our recent AMD Ryzen 9000 Content Creation Review.

Reality Capture Testing Methodologies

Several big developments drew us to Reality Capture over others like Pix4D or Metashape. First, Epic acquired it and incorporated it into the Unreal Launcher. Many of our customers rely on Unreal Engine, so having a convenient and compatible photogrammetry option makes Reality Capture an easy choice. Second, with the most recent update, Reality Capture now has a free option. Previously, users either had to buy a perpetual license or use the PPI (pay-per-input) pricing model. Now, users can try Reality Capture for free to see how well it fits their workflows. Businesses have the option to buy a seat license for $1,250 or use the Unreal Subscription for $1,850, which includes Reality Capture, Unreal Engine, and Twinmotion.

Benchmark Tasks

When it comes to benchmarking Reality Capture, there are several tasks that are time-consuming and tedious. The amount of time these tasks take varies drastically depending on the size and detail level of the project. Some users will be able to align their images and generate a 3D mesh in a couple of minutes, while others will need several hours. To that end, we need to cover multiple sub-tasks on projects of different sizes. 

For this first round of benchmark development, these are the sub-tasks we are tracking:

Image Alignment

This is the first step everyone takes and the key part of photogrammetry. Aligning images is where the application analyzes all the photos for matching features, determines where the cameras were in 3D space, and generates a point cloud. This is the basis for all of the steps to come.

Reconstruct in Preview Mode

This is where the application takes the point cloud from the previous step and generates a 3D mesh. Preview Mode is a CPU-based, lightly threaded setting that gives users a rough approximation of the geometry that was captured. Users can then edit which parts of the point cloud are actually included in the final model. Going through this step will significantly improve the speed of the following steps.

Reconstruct in Normal/High Detail

This is the step where the final geometry is created, and it is the most time-consuming. During this phase, processing cycles between periods of using only a few cores, using as many cores as are available, using the GPU, and writing cache files to disk.

Unwrap and Texture Model

The last step we are tracking is generating the texture for the final model. This is actually a combination of sub-tasks: unwrapping the model and generating UVs, then generating a texture based on all of the photos.

We can also track loading and saving, importing a Laser Scan Point Cloud, and several other tasks, although some of these do not contribute much time to the overall process. They may prove useful in storage drive testing, but for now, we aren’t tracking them or factoring them into the overall score. 

Benchmark Projects

Now that we have identified the specific tasks we want to track, we can use various projects to provide data for a range of workflows. Thankfully, Reality Capture has an excellent selection of sample datasets and CLI samples on their website. These are the three we are using for now:

Habitat 67 Sample

A screenshot showing the finished Habitat 67 project

The Habitat 67 Sample is a highly detailed scan of a large apartment complex incorporating both ground and aerial photography and a laser scan point cloud. This project allows us to evaluate the software’s ability to handle complex, multi-source data. Included in this project are 458 images at a resolution of 4243×2828, plus 72 Laser Scan Point Cloud files, for a total of 24.4GB of data to analyze. We use two scripts for this test. The first imports and aligns the images and laser scans, then saves and exits. The second loads the previously aligned project, reconstructs in Preview, Normal, and High detail levels, then generates the textures.

The exact code we use to test this project is:

:: run RealityCapture
%RealityCaptureExe% -set "appIncSubdirs=true" ^
        -addFolder %Images% ^
        -importLaserScanFolder %Laser% ^
        -align ^
        -calculatePreviewModel ^
        -save %Project% ^
        -quit

:: run RealityCapture
%RealityCaptureExe% -load %projectpath% ^
        -calculatePreviewModel ^
        -calculateNormalModel ^
        -calculateHighModel ^
        -calculateTexture %TextureParams% ^
        -save %Project% ^
        -quit

Object Reconstruction

A screenshot showing the Object Reconstruction finished project

The Object Reconstruction CLI sample from Epic involves a smaller, more focused project: reconstructing a woodworker’s hand plane. This project simulates a typical workflow for 3D scanning objects, with image groups for each side of the object and location markers used for alignment. For this one, we aren’t tracking the image alignment step, as it only takes a few seconds. The dataset is 177 images at a resolution of 2736×1824, plus masks, totaling 253MB.

The script used is as follows:

:: Run RealityCapture.
%RealityCaptureExe% -newScene ^
	-addFolder "%~dp0side1" ^
	-setProjectCoordinateSystem Local:1 ^
	-detectMarkers "%~dp0detectSettings.xml" ^
	-importGroundControlPoints "%~dp0markerPositions.csv" "%~dp0gcpSettings.xml" ^
	-align ^
	-setReconstructionRegion "%~dp0object.rcbox" ^
	-calculatePreviewModel ^
	-selectLargeTrianglesRel 30 ^
	-removeSelectedTriangles ^
	-setReconstructionRegion "%~dp0forCut.rcbox" ^
	-selectTrianglesInsideReconReg ^
	-removeSelectedTriangles ^
	-selectLargestModelComponent ^
	-invertTrianglesSelection ^
	-removeSelectedTriangles ^
	-importLicense "%~dp0plane.rclicense" ^
	-exportDepthAndMask "%~dp0maskSettings.xml" ^
	-newScene ^
	-addFolder "%~dp0side2" ^
	-setProjectCoordinateSystem Local:1 ^
	-detectMarkers "%~dp0detectSettings.xml" ^
	-importGroundControlPoints "%~dp0markerPositions.csv" "%~dp0gcpSettings.xml" ^
	-align ^
	-setReconstructionRegion "%~dp0object.rcbox" ^
	-calculatePreviewModel ^
	-selectLargeTrianglesRel 30 ^
	-removeSelectedTriangles ^
	-setReconstructionRegion "%~dp0forCut.rcbox" ^
	-selectTrianglesInsideReconReg ^
	-removeSelectedTriangles ^
	-selectLargestModelComponent ^
	-invertTrianglesSelection ^
	-removeSelectedTriangles ^
	-importLicense "%~dp0plane.rclicense" ^
	-exportDepthAndMask "%~dp0maskSettings.xml" ^
	-newScene ^
	-addFolder "%~dp0side1" ^
	-addFolder "%~dp0side2" ^
	-importLicense "%~dp0plane.rclicense" ^
	-align ^
	-mergeComponents ^
	-setReconstructionRegionAuto ^
	-calculateNormalModel ^
	-selectLargestModelComponent ^
	-invertTrianglesSelection ^
	-removeSelectedTriangles ^
	-calculateTexture ^
	-simplify "%~dp0simplifySettings.xml" ^
	-save "%~dp0plane.rcproj" ^
	-exportSelectedModel "%~dp0model\woodplane.obj" "%~dp0exportSettings.xml" ^
	-quit

Lumberyard Aerial Orthographic Project

A screenshot showing the finished Lumberyard orthographic projection

This project involves processing overhead aerial photographs of a lumberyard to create an orthographic view of the space. This scenario tests Reality Capture’s efficiency in handling large-scale, top-down photography. The detail level isn’t as high as the Habitat 67 Sample, as the goal is a more general survey of the land rather than fully recreating the space in 3D. The dataset includes 482 images at a resolution of 2736×1824, totaling 647MB of data. The script we use for this project is:

:: Run RealityCapture.
%RealityCaptureExe% -newScene ^
        -setProjectCoordinateSystem epsg:4258 ^
        -setOutputCoordinateSystem epsg:4258 ^
        -addFolder %Images% ^
        -selectAllImages ^
        -editInputSelection "inpPose=0" ^
        -importControlPointsMeasurements %ControlPoints% %ControlPointsParams% ^
        -importGroundControlPoints %GroundControl% %GroundControlParams% ^
        -align ^
        -setReconstructionRegionAuto ^
        -calculateNormalModel ^
        -calculateTexture ^
        -calculateOrthoProjection %OrthoParams% ^
        -save %Project% ^
        -exportOrthoProjection %OrthoPhoto% %OrthoExportParams% ^
        -quit

Gathering and Logging Results

Reality Capture outputs a log file with times for all of the steps in the respective scripts. We then collect these times and pull out the results relevant to the hardware being tested. For example, steps such as importing, loading, and saving are impacted most by storage drive choice, while aligning images and reconstructing the geometry are impacted by what CPU is being used.
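
For illustration, here is a minimal Python sketch of how per-step times could be collected from a run and filtered down to the steps we score. The log line format, step names, and file name in this example are placeholder assumptions for the sake of the sketch, not Reality Capture’s actual log layout:

import re
from pathlib import Path

# Placeholder log format: one "<step name>: <seconds> s" entry per line.
# Reality Capture's real log layout differs, so adjust the pattern and the
# step names below against an actual log file.
STEP_PATTERN = re.compile(r"^(?P<step>[^:]+):\s*(?P<seconds>\d+(?:\.\d+)?)\s*s$")

# Steps that feed the score; import, load, and save times are kept separately
# for storage testing.
SCORED_STEPS = {
    "Align Images",
    "Reconstruct Preview",
    "Reconstruct Normal",
    "Reconstruct High",
    "Texture Model",
}

def parse_log(log_path: Path) -> dict:
    """Collect per-step times (in seconds) from a single benchmark log."""
    times = {}
    for line in log_path.read_text(encoding="utf-8", errors="ignore").splitlines():
        match = STEP_PATTERN.match(line.strip())
        if not match:
            continue
        step = match.group("step").strip()
        if step in SCORED_STEPS:
            # Sum repeated entries so multi-pass steps report a single total.
            times[step] = times.get(step, 0.0) + float(match.group("seconds"))
    return times

if __name__ == "__main__":
    results = parse_log(Path("Habitat67_run1.log"))  # hypothetical file name
    for step, seconds in sorted(results.items()):
        print(f"{step}: {seconds:.1f} s")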

Scoring

To provide a comprehensive performance assessment, we calculate a Geomean (geometric mean) of the times recorded for each key step across all projects. Additionally, an overall score is calculated to summarize the system’s performance across the entire Reality Capture benchmark suite. This allows us to have a score for quick reference to see how much of an improvement new processors may provide, as well as more detailed results for those who want to dig into the specifics of their workflows. 
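
As a concrete illustration of the math, the short Python sketch below computes a geometric mean over per-project times and turns it into a relative score. The sample numbers and the reference-based scaling are assumptions for the example, not the exact formula used in our internal benchmark:

from math import prod

def geomean(values):
    """Geometric mean of positive times, in seconds. Lower is better."""
    return prod(values) ** (1.0 / len(values))

# Made-up per-project times (seconds) for one sub-task, e.g. image alignment,
# across the Habitat 67, Object Reconstruction, and Lumberyard projects.
alignment_times = [312.4, 18.6, 128.9]
print(f"Alignment geomean: {geomean(alignment_times):.1f} s")

# One common way to express an overall "higher is better" score is to scale the
# geomean against a reference system's result; this scaling is illustrative only.
reference_geomean = 90.0  # hypothetical reference result, in seconds
print(f"Relative score: {reference_geomean / geomean(alignment_times) * 100:.0f}")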

Is this benchmark available for download?

At the moment, our full Reality Capture benchmark is internal only, and not available to the public. From the information in this article, you should be able to reproduce it fairly easily, but a polished benchmark is still in the works. We do not yet have an ETA, but keep an eye out! Until then, we do plan to test Reality Capture regularly in our CPU and GPU articles in order to help users select the right hardware to help speed up their workflow.

That said, this is only the first iteration of this benchmark. We will continue to refine it and add new tasks as we hear from Reality Capture users. If there is something you think we should look into, please let us know in the comments below, and we’ll add it to our list of things to investigate.
