In the latest version of Lightroom Classic CC (8.2), Adobe has added a new feature called "Enhanced Details". Adobe has an excellent blog post that goes over the fine details, but in a nutshell, it uses machine learning to improve the quality of the demosaicing (debayering) process for RAW photographs (the process of turning raw sensor data into a usable RGB image). How much of an improvement you see depends on a number of factors, including the camera's sensor format and the content of the photograph itself. If you notice image artifacts or false colors when viewing your RAW images in Lightroom Classic, this feature may be something you want to experiment with.
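To make the demosaicing step concrete, here is a minimal sketch of the traditional (non-ML) approach that Enhanced Details aims to improve on: bilinear interpolation over an RGGB Bayer mosaic. The function names and the tiny list-of-lists image format are our own illustration, not anything from Adobe's implementation.

```python
def bayer_color(y, x):
    """Color of the sensor sample at (y, x) in an RGGB Bayer pattern:
    even row/even col = red, odd row/odd col = blue, otherwise green."""
    if y % 2 == 0 and x % 2 == 0:
        return "R"
    if y % 2 == 1 and x % 2 == 1:
        return "B"
    return "G"


def demosaic_bilinear(mosaic):
    """Fill in the two missing colors at each pixel by averaging the
    samples of that color in the surrounding 3x3 neighborhood.
    `mosaic` is a list of rows of raw sensor values."""
    h, w = len(mosaic), len(mosaic[0])

    def avg(y, x, color):
        vals = [mosaic[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))
                if bayer_color(j, i) == color]
        return sum(vals) / len(vals)

    # Each output pixel is an (R, G, B) tuple interpolated from neighbors.
    return [[(avg(y, x, "R"), avg(y, x, "G"), avg(y, x, "B"))
             for x in range(w)] for y in range(h)]
```

Simple neighborhood averaging like this is exactly where artifacts and false colors come from near fine detail, which is the weakness a learned demosaicer can address.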
If you are interested in giving Enhanced Details a try, be aware that there are a number of requirements you need to meet:
- You need Lightroom Classic CC version 8.2 or newer
- You must be running Windows 10 version 1809 (or newer) or macOS 10.13 (or newer), since these are the first versions to include the WinML (Windows) and CoreML (Mac) frameworks
- You can only "Enhance Details" on RAW images
While the benefit of this feature is going to depend on your camera sensor and the exact image, what is really interesting to us is the fact that it uses the WinML and CoreML frameworks. These are pretty new and essentially allow developers to leverage machine learning in their software through a pre-existing API that is built into the OS. As an analogy, think of it as being able to drive to the supermarket on public roads instead of having to build the road first.
In this article, we don't want to get into when this feature should be used, or even how well it works. Adobe's blog post and a number of other articles already cover those questions in much more detail than we could provide. What we do want to look at, however, is whether there is a way to improve the performance of this feature since it can take a long time to process on some machines. Since WinML (and CoreML) can leverage the power of your GPU to improve performance, we decided to test a range of GPUs to see if a more expensive GPU would be able to speed up this effect.
Test Setup & Methodology
In order to see what kind of performance you can expect with different GPUs, we used the following testing platform:
| Component | Hardware / Software |
| --- | --- |
| CPU | Intel Core i9 9900K |
| CPU Cooler | Noctua NH-U12S |
| Motherboard | Gigabyte Z390 Designare |
| RAM | 4x DDR4-2666 16GB (64GB total) |
| Video Card | NVIDIA Titan RTX 24GB<br>NVIDIA GeForce RTX 2080 Ti 11GB<br>NVIDIA GeForce RTX 2080 8GB<br>NVIDIA GeForce RTX 2070 8GB<br>NVIDIA GeForce RTX 2060 6GB<br>NVIDIA GeForce GTX 1080 Ti 11GB<br>NVIDIA GeForce GTX 1060 6GB<br>NVIDIA GeForce GTX 980 Ti 6GB<br>AMD Radeon Vega 64 8GB<br>Intel UHD 630 1024MB (i9 9900K onboard) |
| Hard Drive | Samsung 960 Pro 1TB |
| Software | Windows 10 Pro 64-bit (version 1809)<br>Lightroom Classic CC 2019 (Ver 8.2) |
To benchmark these GPUs using the "Enhanced Details" feature, we used three images:
- 16MP .RAF – Fuji X-Pro1
- 22MP .CR2 – Canon EOS 5D Mark III
- 45MP .NEF – Nikon D850
If you want to run the same tests on your system, you can download the 16MP Fuji file at the end of Adobe's Enhanced Details blog post. The Canon and Nikon images are a part of our normal Lightroom Classic benchmark process and can be downloaded here.
To see how each GPU performed, we created four copies of our test media and applied Enhance Details to all four at once. Not only does this bypass the preview window (which could influence how long it takes to fully apply the effect), but it also gives us a more accurate result: we apply the effect four times, then divide the total time by four to get an average.
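The batch-and-divide timing approach above can be sketched in a few lines of Python. The `apply_enhance` callable here is purely a hypothetical stand-in (Enhance Details is not scriptable this way); the point is only the averaging logic.

```python
import time


def average_enhance_time(apply_enhance, image, copies=4):
    """Time one batch of `copies` identical images and return the
    per-image average, mirroring the methodology described above.
    `apply_enhance` is a hypothetical stand-in for Lightroom's
    Enhance Details step."""
    batch = [image] * copies          # four identical copies, as in our test
    start = time.perf_counter()
    for img in batch:
        apply_enhance(img)
    total = time.perf_counter() - start
    return total / copies             # per-image average
```

Averaging over several copies smooths out run-to-run noise (caches, background tasks) that a single timed run would be vulnerable to.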
Enhanced Details GPU Benchmark Results
The results are very interesting, but they can largely be summed up with two main points:
First of all, if you expect to use this feature with any regularity, you really should have a discrete GPU. Even the relatively decent onboard graphics on the i9 9900K were almost 20x slower on average than the modest GTX 1060. In fact, it was so slow that we had to add a break in many of our charts (which we hate doing), since including it at the proper scale would make the other results look tiny in comparison. This means that most laptops are really going to struggle with this feature, often taking multiple minutes to complete what a desktop could do in a matter of seconds.
The second thing to point out is that the GPU model has a significant impact on performance. We are used to seeing relatively small differences between GPUs in most Adobe applications, since the CPU tends to be the primary bottleneck, but here a faster GPU translates to a solid increase in performance. What is really interesting is that the architecture of the GPU appears to make as big a difference as the raw performance of the card. For example, the lowest-end RTX card from NVIDIA (the RTX 2060) easily beat the top model from the previous generation (the GTX 1080 Ti).
This variation based on GPU architecture is especially apparent with the Radeon Vega 64. This card actually did extremely well, averaging right between the RTX 2080 and RTX 2080 Ti. What made it a bit odd wasn't the overall performance, but how it handled the different RAW formats. With the Canon image, it beat even the Titan RTX, making it the fastest GPU we tested for that file. With the Nikon image, it did a little worse, but still took third place. However, with the Fuji image (which was actually where we saw the biggest image quality improvement), it ended up being a hair slower than every RTX-series card.
Does the GPU affect performance with Enhanced Details?
If you use Enhanced Details often enough that you want to speed up the process, getting a newer and faster GPU will certainly do so. If you already have a modern GPU, the difference may only be a few seconds, however, so don't expect to be able to apply this to thousands of images at once without taking a coffee break.
Beyond the raw performance gain to be had with higher-end GPUs, what really interests us is what this may mean for the future. To our knowledge, this is one of the first times Adobe (or any major software developer) has leveraged WinML/CoreML in photo or video editing software in this way. The question is really: is this a one-off feature or will AI and machine learning be used more and more in the future?
If Lightroom Classic, Photoshop, Premiere Pro, etc. all start to use machine learning for complex tasks like this, we could start to see a significant benefit to using higher-end GPUs in photo/video editing workstations. Right now, it is often very important to have a decent GPU, but for most users there isn't a big reason to invest in an expensive video card.
At the moment, AI is in a bit of a wild west situation where developers are experimenting with it without knowing exactly what will work well and what won't. It certainly isn't going away, but whether we will see more AI-powered features like "Enhanced Details" or not is something not even the developers are likely to know. There is no way to know for sure, but AI is definitely one area we are keeping a close eye on.
Looking for a Content Creation Workstation?
Puget Systems offers a range of workstations designed specifically for video and image editing applications including Premiere Pro, After Effects, Photoshop, DaVinci Resolve, and more.