Puget Bench for Lightroom Classic

Puget Bench for Lightroom Classic runs on top of your installed copy of Adobe Lightroom Classic, providing benchmark data directly from the application. Our benchmarks are designed in partnership with hardware and software industry leaders, end users, and photography influencers to ensure that they are representative of real-world workflows.


Key Features

Realistic Testing

Interfaces with Adobe Lightroom Classic and benchmarks real-world workflows.

Comprehensive

Provides a detailed analysis of results across multiple tests outlined below.

Public Database

Compare your scores to thousands of other user-submitted results.

Hardware Requirements

Windows

  • Adobe Lightroom Classic version 15
  • Intel, AMD, or ARM CPU meeting the System Requirements for Lightroom Classic
  • 16GB of memory
  • Discrete GPU with 8GB of VRAM (Extended preset)
  • >150GB of free storage
  • Windows 10/11
  • Lightroom Classic/OS language must be set to English

macOS

  • Adobe Lightroom Classic version 15
  • 16GB of memory
  • >150GB of free storage
  • macOS 13
  • Lightroom Classic/OS language must be set to English

Test Breakdown

Our Lightroom Classic benchmark evaluates performance using photo sets from four popular cameras, along with dedicated tests for AI-based features. Scores are generated for each camera (plus AI) based on the time it takes to complete each test, and combined into a single Overall Score.

The tests are divided into two presets:

Standard

  • Cameras Tested
    • Canon EOS R5 Mark II
    • Panasonic LUMIX S1RII
  • AI Tests
    • None

Extended

  • Cameras Tested
    • Canon EOS R5 Mark II
    • Panasonic LUMIX S1RII
    • Sony Alpha 1 II
    • Nikon Z8
  • AI Tests
    • Masking – Select Sky, Select Subject
    • Remove – Reflection Removal
    • Details – Denoise, Enhanced Details, Super Resolution

Note: The Extended preset should be run only on systems that meet (or ideally, exceed) the “Recommended” Lightroom Classic system requirements. If you encounter a test timeout error, you can adjust the default timeout in the benchmark settings (accessed via the gear icon in the bottom-left corner of the Creators app). The default is 300 seconds (5 minutes), and depending on your system specs, it may need to be increased to 900+ seconds.

Camera Tests

The cameras and images used in the benchmark (depending on the preset) are:

Camera                           | Image Format | Resolution  | Megapixels
Canon EOS R5 Mark II             | .CR3         | 8192 x 5464 | 45 MP
Panasonic LUMIX S1RII (DC-S1RM2) | .RW2         | 8144 x 5424 | 44 MP
Sony Alpha 1 II (ILCE-1M2)       | .ARW         | 8460 x 5760 | 50 MP
Nikon Z8                         | .NEF         | 8526 x 5504 | 47 MP

For each camera, the following tests are performed:

Test Name                 | Settings
Import 250 Photos         | Add from location; Minimal previews; Do not generate Smart Previews
Create 100 Smart Previews | Default settings
Export 100 JPG            | Format: JPEG; JPEG Quality: 60; Limit Size: Disabled
Export 100 DNG            | Format: DNG; JPEG Preview: Medium

AI Tests

To test the performance of AI in Lightroom Classic, we focus on tasks that are processed on the local computer rather than in the cloud. In general, features that create new image information (object/people/dust removal) are processed in the cloud, so our benchmark instead covers tasks that run on the local system: automatic masking (select sky/subject) and enhancement of an existing image (denoise, upscaling, etc.).

The AI tests are divided into six individual tasks, utilizing three photo sets. Each test is run on five photos; the first result is discarded to account for AI model load time, and the final result is the average completion time for the remaining four photos.
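
As a rough illustration of that averaging (not the benchmark's actual code), here is a minimal Python sketch with hypothetical Denoise timings:

def ai_test_result(times_seconds):
    # Five runs per test: the first is a warmup (AI model load) and is discarded;
    # the result is the average completion time of the remaining four runs.
    timed_runs = times_seconds[1:]
    return sum(timed_runs) / len(timed_runs)

# Hypothetical timings in seconds; the first run includes model load time
print(ai_test_result([18.4, 9.1, 9.3, 8.9, 9.2]))  # 9.125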

The three image sets used are:

Image Set Name | Image Format           | Megapixels | Description
AI Select      | .DNG                   | 44-50 MP   | Selection of images from all cameras (converted to DNG) with a defined sky and subject
AI Reflection  | .DNG                   | 44-50 MP   | Selection of images from all cameras (converted to DNG) with a glass/window reflection
AI Details     | .CR3, .RW2, .ARW, .NEF | 44-50 MP   | Selection of images in their native RAW format

Using these image sets, we test the following:

Test Name          | Photo Set     | Settings
Select Sky         | AI Select     | Masking – Select Sky
Select Subject     | AI Select     | Masking – Select Subject
Reflection Removal | AI Reflection | Remove – Reflection Removal
Denoise            | AI Details    | Detail – Denoise
Enhanced Details   | AI Details    | Detail – RAW Details
Super Resolution   | AI Details    | Detail – Super Resolution

How Does the Scoring Work?

All of the scores in our Lightroom Classic benchmark are calculated using geometric means rather than averages or performance relative to a reference result. This helps to normalize the scores so that larger or smaller results are not unfairly weighted. It also allows the benchmark to be more flexible and better able to handle large performance shifts due to either application optimizations or the launch of more powerful hardware.

For the score calculations, we start by dividing the tests by camera (as well as the AI tests). Since our scores use a “Higher is Better” method, each result is converted from a completion time in seconds (lower is better) to a performance rate (higher is better) by dividing 1 by the result (for example, 1/1.17 = 0.855).

A score is generated for each group by calculating the geometric mean of all test performance rates and applying a scoring coefficient to roughly align the scores across all our benchmarks.

Canon Score = geomean((1/Canon_Test_1), (1/Canon_Test_2), ...) * 2000
Panasonic Score = geomean((1/Panasonic_Test_1), (1/Panasonic_Test_2), ...) * 2000
Sony Score = geomean((1/Sony_Test_1), (1/Sony_Test_2), ...) * 2000
Nikon Score = geomean((1/Nikon_Test_1), (1/Nikon_Test_2), ...) * 2000
AI Score = geomean((1/AI_Test_1), (1/AI_Test_2), ...) * 450
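
As a rough illustration of the group score calculation above, here is a minimal Python sketch (not the benchmark's actual code); the test times are hypothetical, and the coefficient follows the formulas above (2000 for the camera groups, 450 for the AI group):

import math

def group_score(times_seconds, coefficient):
    # Convert each completion time (lower is better) to a performance rate
    # (higher is better), take the geometric mean of the rates, and apply
    # the group's scoring coefficient.
    rates = [1.0 / t for t in times_seconds]
    geomean = math.prod(rates) ** (1.0 / len(rates))
    return geomean * coefficient

# Hypothetical Canon test times in seconds (Import, Smart Previews, Export JPG, Export DNG)
canon_score = group_score([85.0, 42.0, 110.0, 95.0], 2000)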

Overall Score

These major scores are then combined into the Overall Score using a geometric mean and multiplied by 100 to differentiate the Overall Score from the Major Scores. Currently, we do not weigh any of the major scores more than any of the others, so each contributes equally to the Overall Score.

Overall Score = geomean(Canon Score, Panasonic Score, Sony Score, Nikon Score, AI Score) * 100
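
A matching Python sketch for the Overall Score, again with hypothetical major scores:

import math

def overall_score(major_scores):
    # Unweighted geometric mean of the five major scores, multiplied by 100.
    geomean = math.prod(major_scores) ** (1.0 / len(major_scores))
    return geomean * 100

# Hypothetical major scores (Canon, Panasonic, Sony, Nikon, AI)
print(overall_score([85.0, 92.0, 78.0, 88.0, 60.0]))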

This method results in an Overall Score with a typical run-to-run variance of about 1-2%, and Major Scores with a variance of about 5%, assuming no thermal or other performance throttling on the system.

Benchmark Groups

Results in the Puget Bench public benchmark database are grouped according to the benchmark and application version in order to maintain a balance of performance consistency and sample size. For example, if we are forced to change our benchmark to add, remove, or change a test, we do not want to group the results with those from a different benchmark version. More commonly, however, there is a change in the host application (Lightroom Classic, Photoshop, etc.) that affects performance. This could be an update that improves GPU usage or changes how effects are processed, which can throw off comparisons.

At the same time, we try not to split results into too many small groups. Having more results in each group makes the data more useful—it smooths out odd scores and gives you a better idea of how different hardware stacks up.

Currently, for Lightroom Classic, we have the following score groups:

Puget Bench for Lightroom Classic 1.0.0

  • Initial “1.0” release

Puget Bench for Lightroom Classic 0.9-0.96 Beta

  • None of the changes between the supported versions of Lightroom Classic significantly affected system performance for the tasks our benchmark examines.

Benchmark Update Log

This log covers updates to the benchmark itself. Changes to the desktop application are recorded on the Puget Bench for Creators page.

Version 1.0.0

  • Requires Lightroom Classic v15
  • Integrated into the Puget Bench for Creators application (minimum version 1.4.0).
  • Supports Windows and macOS.
  • Updated test image sets using RAW images from the following cameras:
    • Canon EOS R5 Mark II
    • Panasonic S1RII
    • Nikon Z8
    • Sony Alpha 1 II
  • Score Adjustments:
    • All scores are now divided into groups based on the camera/RAW format.
    • In addition, the Extended preset includes a new “AI” score, which is a compilation of the AI tests.
  • Test Adjustments:
    • Adjusted Import/Export/Smart Preview tests.
      • Changed the exact count of images used for each test to balance benchmark run time and accuracy.
      • Import – 250 photos
      • Smart Previews – 100 Photos
      • Export to JPG – 100 Photos
      • Export to DNG – 100 Photos
    • Added tests for AI Denoise, Enhanced Details, Super Resolution, and Reflection Removal.
      • Results are integrated into the “AI Score”.
      • These tests are each run on four images (one from each camera), plus a “warmup” test that is not recorded.
      • The final result is the average time to process the task on the four images.
    • Added tests for AI Select Subject and Select Sky.
      • Results are integrated into the “AI Score”.
      • These tests are each run on four .DNG images, plus a “warmup” test that is not recorded.
      • Uses DNG format images to bypass the camera debayering process.
      • The final result is the average time to process the task on the four images.
  • Camera RAW cache files are now cleared before each benchmark loop. This will work even if the user has a custom cache location set in the application preferences.
  • To account for temporary caching, all test photos are duplicated to a temporary location with a unique filename before running the tests. This ensures that Lightroom Classic treats them as a unique set of images, even if the benchmark is run multiple times.

Version 0.96 BETA

  • Added plugin and CLI support for Lightroom Classic 14.0

Version 0.95 BETA

  • Added plugin and CLI support for Lightroom Classic 13.0

Version 0.94 BETA

  • Added plugin and CLI support for Lightroom Classic 12.0
  • Updated benchmark upload/view URLs to match web hosting changes
  • Required application information for the CLI moved to an .ini file that resides alongside the CLI in the plugin folder. This information was previously baked into the CLI itself, but by having it in an editable file, end users can add support for things like beta and pre-release versions of Lightroom Classic. The .ini requires the following entries (a hypothetical example is sketched after this list):
    • Main section title: The major version of Lightroom Classic. This is used by the CLI for the “/app_version” argument
    • prefLoc: Location of the application preferences folder
    • prefFile: Specific preferences file that needs to be adjusted for automation
    • appLoc: Path to the Lightroom Classic application
    • appEXE: Name of the Lightroom Classic .exe when it has been launched
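
As a purely hypothetical illustration of that .ini layout (only the entry names come from the list above; the section name and values are placeholders, not the ones shipped with the plugin), read here with Python's configparser:

import configparser

# Placeholder example of the described .ini layout; everything in angle brackets is hypothetical.
example_ini = """
[12.0]
prefLoc = <path to the Lightroom Classic preferences folder>
prefFile = <preferences file adjusted for automation>
appLoc = <path to the Lightroom Classic application>
appEXE = <name of the Lightroom Classic .exe once launched>
"""

config = configparser.ConfigParser()
config.read_string(example_ini)
# The CLI would look up the section matching its "/app_version" argument
print(dict(config["12.0"]))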

Version 0.93 BETA

  • Added CLI support for Lightroom Classic version 11.x
  • Created dedicated Benchmark catalogs for LrC version 9, 10, and 11.

Version 0.92 BETA

  • Added GPU driver and motherboard BIOS to the system specs for Windows systems
  • Misc bug fixes

Version 0.91 BETA

  • Added checks for Photomerge HDR/Panorama settings. If they are not at their defaults, the benchmark notifies the user to adjust the settings or reset the preferences to default.
  • The “catalog:triggerImportFromPathWithPreviousSettings()” API hook does not properly use the last settings from the catalog if it does not match what is stored in preferences (common when changing catalogs). To fix this issue, the first import is done using the old method of opening the import dialog and using an automation script to import the image set. The other imports are still done with “catalog:triggerImportFromPathWithPreviousSettings()”, however, since it is more consistent when it works properly.

Version 0.9 BETA (Major Update)

  • Results are now uploaded to our online database. This is required for the free version, but opt-in if you have a commercial license.
  • Removed the result screen at the end of the benchmark run now that the full results can be viewed on our benchmark listing page.
  • Added licensed configuration options to a new GUI that loads when the benchmark is run.
  • License validation moved from the CLI utility to the plugin itself.
  • Added tooltips for the various settings that can now be configured.
  • Status logs and configuration settings moved to “~\Documents\PugetBench\Lightroom Classic\TIMESTAMP” since we cannot always log directly to the plugin folder.
  • Dropped Develop Module Brush Lag tests. This is something we really want, but the current methodology we are using is too inconsistent to be reliable. We will continue to work on this type of test and hopefully be able to add a similar test back in the future.
  • Importing is now done via the “catalog:triggerImportFromPathWithPreviousSettings()” API hook. This should be much more reliable than the window watch/button click method we were using previously.
  • Scoring has been adjusted based on the test changes. Due to this, the Overall and Active scores will not be consistent with previous versions.
  • General bug fixes and stability improvements.

Version .85 BETA

  • Added functional progress bar to DNG and export image deletion during cleanup (this can sometimes take longer than you would expect)
  • Additional logging for plugin and catalog path (useful for troubleshooting)

Version .8 BETA

  • Renamed to “PugetBench for Lightroom Classic”
  • Improved timing accuracy for the Active tests
  • There is now a “Benchmark Results” screen at the end of the benchmark that displays useful information, including the benchmark version, scores, results for each individual test, and system information such as CPU, RAM, OS, GPU, and Lightroom Classic version
  • The benchmark now also makes a PNG of the “Benchmark Results” screen for easy sharing
  • Removed .csv log file support in the free edition (log files will be a feature in the commercial use version)

Version 0.2 BETA

  • First public release.

Legacy Benchmark Versions

Modern versions of our Lightroom Classic benchmark are available via the Puget Bench for Creators application. However, older versions of the benchmark are available via an LrC plugin. Note that these versions are no longer maintained and may not have support for recent versions of Lightroom Classic.