Read this article at https://www.pugetsystems.com/guides/1893

Adobe Lightroom Classic - NVIDIA GeForce RTX 3080 & 3090 Performance

Written on September 24, 2020 by Matt Bach

TL;DR: NVIDIA GeForce RTX 3080 & 3090 performance in Lightroom Classic

Adobe has been steadily adding GPU support into Lightroom Classic over the last few versions, but for the tasks we currently test, there is little advantage to using a powerful GPU like the new GeForce RTX 3080 10GB or 3090 24GB. In fact, there is almost no appreciable difference between the fastest GPU we tested and having no GPU at all.


On September 1st, NVIDIA launched the new GeForce RTX 30 Series, touting major advancements in performance and efficiency. While gaming is almost always a major focus during these launches, professional applications like Lightroom Classic are becoming more and more important for NVIDIA's GeForce line of cards. Due to the significant improvements Adobe has made around GPU acceleration in Lightroom Classic, this is the first time we will be doing GPU-focused testing for the application. Because of this, we are very interested to see what kind of performance delta there is between the various cards we are testing.

Lightroom Classic GPU Performance Benchmark - NVIDIA GeForce RTX 3080 10GB & RTX 3090 24GB

If you want to see the full specs for the new GeForce RTX 3070, 3080, and 3090 cards, we recommend checking out NVIDIA's page for the new 30 Series cards. But at a glance, here are the specs we consider most important:

Card VRAM CUDA Cores Boost Clock Power MSRP
RTX 2070S 8GB 2,560 1.77 GHz 215W $499
RTX 3070 8GB 5,888 1.70 GHz 220W $499
RTX 2080 Ti 11GB 4,352 1.55 GHz 250W $1,199
RTX 3080 10GB 8,704 1.71 GHz 320W $699
Titan RTX 24GB 4,608 1.77 GHz 280W $2,499
RTX 3090 24GB 10,496 1.73 GHz 350W $1,499

While specs rarely line up with real-world performance, it is a great sign that NVIDIA has doubled the number of CUDA cores compared to the comparable RTX 20 series cards with only a small drop in the boost clock. At the same time, the RTX 3080 and 3090 are also $500-1000 less expensive than the previous generation depending on which models you are comparing them to.
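The generational comparison above is easy to verify from the spec table. The sketch below (our own illustrative code, not part of any benchmark) computes the CUDA-core ratio and MSRP difference for each 20-series/30-series pairing in the table:

```python
# Quick check of the spec-table comparison: CUDA-core ratio and MSRP
# difference between each RTX 30 Series card and its 20 Series
# counterpart. Numbers are taken from the table in this article.
cards = {
    # name: (cuda_cores, msrp_usd)
    "RTX 2070 SUPER": (2560, 499),
    "RTX 3070":       (5888, 499),
    "RTX 2080 Ti":    (4352, 1199),
    "RTX 3080":       (8704, 699),
    "Titan RTX":      (4608, 2499),
    "RTX 3090":       (10496, 1499),
}

pairs = [("RTX 2070 SUPER", "RTX 3070"),
         ("RTX 2080 Ti", "RTX 3080"),
         ("Titan RTX", "RTX 3090")]

for old, new in pairs:
    core_ratio = cards[new][0] / cards[old][0]      # ~2x for every pairing
    price_delta = cards[old][1] - cards[new][1]     # $0 to $1000 cheaper
    print(f"{new}: {core_ratio:.2f}x cores, ${price_delta} less")
```

Running this shows roughly 2.0-2.3x the CUDA cores per pairing, with the RTX 3080 and 3090 coming in $500 and $1000 under their predecessors' MSRPs.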

With the launch of the RTX 3090, we can update our previous Adobe Lightroom Classic - NVIDIA GeForce RTX 3080 Performance article with results for the 3090, but since the RTX 3070 is not launching until sometime in October, we cannot include it at this time. However, we are very interested in how the RTX 3070 will perform, and when we are able to test that card, we will post another follow-up article with the results.

Lightroom Classic Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Test Setup

Listed below are the specifications of the system we will be using for our testing:

*All the latest drivers, OS updates, BIOS, and firmware applied as of September 7th, 2020

To test each GPU, we will be using the fastest platform currently available for the "Active Tasks" in Lightroom Classic - most notably the Intel Core i9 10900K. Overall, we recommend AMD Ryzen or Threadripper CPUs for Lightroom Classic due to their significantly higher performance for "Passive Tasks" like generating previews and exporting, but since the GPU is not significantly used for those tasks, we decided to use the 10900K to minimize the impact of the processor on the tasks that do use the GPU. Even so, be aware that there typically isn't much variation in performance between different video cards in Lightroom Classic.

We will also include results for the integrated graphics built into the Intel Core i9 10900K and with GPU acceleration disabled to see how much the recently added GPU acceleration features improve performance.

For the testing itself, we will be using our PugetBench for Lightroom Classic benchmark. This tests a range of effects and tasks in Lightroom Classic, including importing, exporting, and tests simulating culling. If you wish to run our benchmark yourself, you can download the benchmark and compare your results to thousands of user-submitted results in our PugetBench database.

Raw Benchmark Results

While we are going to go through our analysis of the testing in the next section, we always like to provide the raw results for those that want to dig into the details. If there is a specific task that tends to be a bottleneck in your workflow, examining the raw results is going to be much more applicable than our more general analysis.

NVIDIA GeForce RTX 3080 10GB & 3090 24GB Lightroom Classic GPU Performance Benchmark

Lightroom Classic Performance Analysis

Since this is the first time we are specifically testing GPU performance in Lightroom Classic, we do not yet have a specific "GPU Score" built into our benchmark. In fact, there are several tasks that we hope to include in the future (such as slider and brush lag) that should be an even better indicator of GPU performance than the tasks we currently test.

However, we should be able to see at least some indication of relative GPU performance with our current tests.

Overall, we didn't see much of a difference between the various GPUs we tested, or even the tests using Intel integrated graphics or with GPU acceleration disabled entirely. NVIDIA is definitely a hair faster than AMD (which, oddly, was slower than having no GPU acceleration at all), but the performance between each NVIDIA GPU is close enough to be within the margin of error. In fact, Lightroom Classic tends to have a larger margin of error than our other benchmarks, and anything within ~5% we would consider to be effectively the same.
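The ~5% rule of thumb can be written down explicitly. This is a minimal sketch of how such a comparison works; the function name and threshold are ours for illustration, not part of PugetBench:

```python
# Sketch of the "within margin of error" rule of thumb: two benchmark
# scores are treated as effectively the same when their relative
# difference is under ~5%. Illustrative only, not PugetBench code.
def effectively_same(score_a: float, score_b: float, tolerance: float = 0.05) -> bool:
    """True if the relative difference between two scores is under tolerance."""
    return abs(score_a - score_b) / max(score_a, score_b) < tolerance

print(effectively_same(1000, 1030))  # ~2.9% apart -> True
print(effectively_same(1000, 1100))  # ~9.1% apart -> False
```

By this standard, nearly every GPU result in this round of testing lands in the "effectively the same" bucket.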

We could go into our results in more detail, but what we are taking from this is that for what we are testing, the GPU has almost no impact. As we mentioned earlier in this post, we do hope to include a number of other tests that should be a better indicator of GPU performance, but this simply reinforces that your GPU is a very low priority relative to your CPU, RAM, and storage.

How well does the NVIDIA GeForce RTX 3080 & 3090 perform in Lightroom Classic?

Adobe has been steadily adding GPU support into Lightroom Classic over the last few versions, but the different cards we tested all performed roughly the same. NVIDIA has a small lead over AMD, but the fact that having GPU acceleration disabled was also faster than the AMD cards tells us that GPU acceleration in Lightroom Classic is still very early in its development.

As we mentioned earlier in the article, there are a number of tasks that we currently do not test (such as slider/brush lag) that should leverage the GPU a bit more, and we hope to add them to our benchmark in the future. We are unfortunately limited to what is possible via the Lightroom Classic API, but with luck, we will be able to improve our GPU-specific testing over time. However, without significant changes to Lightroom Classic itself, we don't expect there to be any reason to invest in a high-end GPU any time in the near future.

As always, keep in mind that these results are strictly for Lightroom Classic. If you have performance concerns for other applications in your workflow, we highly recommend checking out our Hardware Articles (you can filter by "Video Card") for the latest information on how a range of applications perform with the new RTX 3080 and 3090 GPUs, as well as with different CPUs and other hardware.

Tags: NVIDIA, NVIDIA vs AMD, AMD, Vega 64, Radeon RX 5700 XT, RTX 2060 SUPER, RTX 2070 SUPER, RTX 2080 SUPER, RTX 2080 Ti, Titan RTX, RTX 3080, Lightroom Classic, RTX 3090

Interesting comment: "there is almost no appreciable difference between the fastest GPU we tested and having no GPU at all".

I assume tests were done with 1080p resolution, but if run at 4k, I assume there is a difference depending on CPU and inbuilt graphics device? Is there a "break" point with CPU/GPU where there is no difference between 1080 and 4k?

Posted on 2020-09-26 08:57:45

All our testing is actually done at 4K. So barring someone with an 8K display, this is pretty much the ideal situation for testing GPUs in Lightroom Classic. Well, except for the other tasks we noted in the article that we don't have a testing method for yet that might be able to show a difference.

Posted on 2020-09-28 16:37:37

Matt Bach How large was the Lightroom catalog, though? Have you also tried editing large images?

I have a GFX 100 MP, and loading up and editing is using 96%+ of VRAM on my RTX 2080, which makes editing really slow (when this happens).

Posted on 2020-10-09 23:08:23

I love your Puget tests - let me share my experience. I have been standardized on 4K displays for several years now. A couple of years ago, my workstation's gen 7 motherboard died and I started an experiment to replace it with a 9th-gen notebook with a discrete GPU and 4GB of dedicated video RAM. 16GB RAM, Thunderbolt 3, 4K touch screen - the works, and about 2K in currency. I never got it to work well with 14-bit RAW shots from my 24MP (*) camera using two 4K displays. Then I migrated to a 45MP camera and Lr could only be started after a cold reboot, and the system would crash after some time.
Not able to open Ps as well, or a (resource hungry) web browser.
A year ago, I decided to build a new workstation based on a 10700K, threw in 64GB of the lowest-latency RAM and re-used my 1080ti with 11GB of fast vRAM. The workstation now has TB3 and I have a RAID 0 array of 4 SSDs directly on PCIe (not on the South Bridge) used for temporary files, cache and so on. Starting Lr with focus on a folder of 45MP shots, with one of these in Develop, easily jumps RAM used to over 32GB and vRAM to over 6GB. Taking a smart object to Ps and back, opening web browsers (Firefox) and starting MS-Office apps never gives an issue.
When I take shots from Lr into Ps, vRAM used easily grows to over 8GB - Ps has its own GPU acceleration.

Puget tests show a performance advantage of the old 1080ti still over most of its younger siblings, and even an RTX 3090 cannot do better. This may be an NVIDIA driver issue, or inherent to the code from the Mudbricks. We have to be aware that the RTX cards add "ray tracing" instructions to the GPU, and ray tracing is important in virtual worlds where light sources have their light reflected by surfaces with patina towards a virtual camera (our monitor/display). Prima facie, this seems completely meaningless to what happens in Lr and Ps. And I do not expect a programmer with the Mudbricks to come up with an approach that uses ray tracing instructions to speed up Lr and Ps processing. Or, the RTX series adds gaming relevance, but may actually have lost a bit of power elsewhere (in the Lr and Ps niches of the GPU).

There are a couple finicky details in the whole thing. The Mudbricks only support one GPU - the one with the primary (from OS point of view) display on it and only GPU-optimize content associated with that primary display. I was led to believe that preview generation is based on horizontal display resolution and in the case of two 4K displays, this actually might be 7680 pixels. Knowing "programmers" all too well, I would not be surprised if in one place optimization is based on the single display indeed (as the Mudbricks write) but elsewhere a simple call to the OS for horizontal resolution drives the 7680 parameter value into the algorithms.

The fact that having more than one display is not abstracted away into one "logical display" by the operating system or the driver layer is, to me, highly debatable. In the early 1990s we could hook up 6 displays to a Mac and have our After Dark screensaver fish swim across all of them, no problem. My thoughts go back to MS-DOS, pronounced aMeSs-DOS and being an acronym of Maybe Some-Day an Operating System.
In a way, it could be easy to have two GPU cards, each driving one display and each being GPU accelerated - provided proper standards. Well, look in detail at display scaling in Windows and "historical" choices look visionless and hysterical today.

My perception, based on live experience with different computers, is that Lr and Ps are more memory constrained than anything else. But that may only become apparent when you feed heavy enough content into your apps. So, with relatively light content, up to a point you'll see clock speed, memory bandwidth and a couple of associated architecture qualities predict performance. With heavy content, memory becomes much more important - because if it is not there, your computer will spend most of its clock cycles on swapping (thrashing, paging). Note that at the CPU level, thrashing can happen too, and the amount of CPU cache may become a bottleneck depending on how you load your system.

Or so I think.

(*) Note that MP are an "area unit", but for predicting human perception of detail resolution we need a linear unit. This is why, ages ago, the resolution unit of line pairs per millimeter (**) got invented. If we compare side by side - linearly - then 96MP is 2x the detail resolution of 24MP (ceteris paribus, assuming all other attributes are good enough to reveal the best quality).
(**) A really good lp/mm value in the 1970s/80s for film and lenses (135 format, aka 35mm film, shot at 24mm x 36mm, aka full frame) was 100 lp/mm. There are 25.4 mm in an inch, so this converts to 25.4 x 100 = 2,540 lp/inch, or 5,080 lines/inch - each lp being a clear black line paired with a clear white line. If we assume perfect alignment of the black and white lines in the test card with the photosites in the digital sensor, then this boils down to a minimum of a 7,144 * 4,763 = 34MP sensor.
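The footnote arithmetic above can be sketched in a few lines of Python. The function name and the one-pixel-per-line assumption are ours for illustration; applying it to 36 x 24 mm exactly gives 7,200 x 4,800, about 34.6MP, in the same ballpark as the ~34MP figure above (small differences come down to rounding):

```python
# Sketch of the footnote arithmetic: converting a film-era resolution
# figure (in line pairs per millimeter) into an equivalent digital
# sensor pixel count, plus the linear-vs-area resolution comparison.
# Assumes one pixel per line (two pixels per line pair) and perfect
# alignment; real-world demosaicing reduces effective resolution.
import math

def required_pixels(lp_per_mm: float, width_mm: float, height_mm: float):
    """Minimum sensor pixels to resolve lp_per_mm on a width x height frame."""
    lines_per_mm = 2 * lp_per_mm  # each line pair = one black + one white line
    return (round(width_mm * lines_per_mm), round(height_mm * lines_per_mm))

# 100 lp/mm on a 36 x 24 mm full-frame negative:
w, h = required_pixels(100, 36, 24)   # -> (7200, 4800)
megapixels = w * h / 1e6              # -> ~34.6 MP

# Linear detail resolution scales with the square root of the pixel count,
# so 96MP gives 2x the linear detail of 24MP:
linear_gain = math.sqrt(96 / 24)      # -> 2.0
```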

Posted on 2021-07-10 08:01:35