Lightroom Classic CC 2019: Enhanced Details GPU Performance

Written on March 4, 2019 by Matt Bach

Introduction

In the latest version of Lightroom Classic CC (8.2), Adobe has added a new feature called "Enhanced Details". Adobe has an excellent blog post that goes over the fine details, but in a nutshell, it uses machine learning to improve the quality of the debayering process for RAW photographs (the process of turning raw sensor data into a usable image). The amount of improvement depends on a number of factors, including the camera sensor format and what is going on in the photograph itself. If you notice image artifacts or false colors when viewing your RAW images in Lightroom Classic, this feature may be something you will want to experiment with.
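To make the "debayering" step more concrete, here is a deliberately naive sketch of the classic bilinear interpolation a basic RAW converter might perform on an RGGB Bayer mosaic. This is purely illustrative (it is not Adobe's algorithm, and not what Enhance Details does); the machine-learning model replaces exactly this kind of simple averaging with something far more sophisticated:

    # Toy bilinear demosaic of an RGGB Bayer mosaic -- purely illustrative,
    # NOT the algorithm Lightroom or Enhance Details actually uses.
    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        """raw: 2D array of sensor values laid out in an RGGB pattern."""
        raw = raw.astype(np.float64)
        h, w = raw.shape
        # Masks marking which pixels carry red, green, or blue samples.
        r = np.zeros((h, w), dtype=bool)
        g = np.zeros((h, w), dtype=bool)
        b = np.zeros((h, w), dtype=bool)
        r[0::2, 0::2] = True
        g[0::2, 1::2] = True
        g[1::2, 0::2] = True
        b[1::2, 1::2] = True
        # Interpolation kernels: diagonal + axial neighbors for R/B, cross for G.
        k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
        k_g  = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
        out = np.zeros((h, w, 3))
        for i, (mask, kernel) in enumerate([(r, k_rb), (g, k_g), (b, k_rb)]):
            sparse = np.where(mask, raw, 0.0)  # keep only this channel's samples
            weight = convolve(mask.astype(float), kernel, mode="mirror")
            out[..., i] = convolve(sparse, kernel, mode="mirror") / np.maximum(weight, 1e-6)
        return out

Where this kind of simple averaging falls apart (fine repeating detail, hard edges) is exactly where the artifacts and false colors mentioned above tend to show up.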

Lightroom Classic CC Enhance Details on RAW images

If you are interested in giving Enhanced Details a try, be aware that there are a number of requirements you need to meet:

  1. You need Lightroom Classic CC version 8.2 or newer
  2. You must be on Windows 10 ver. 1809 or newer, or macOS 10.13 or newer. These are the first OS versions to support WinML (Windows) and CoreML (Mac), respectively
  3. You can only apply Enhance Details to RAW images

While the benefit of this feature is going to depend on your camera sensor and the exact image, what is really interesting to us is the fact that it is using the WinML and CoreML frameworks. These are pretty new and essentially allow developers to leverage machine learning in their software with a pre-existing API that is built into the OS. As an analogy, think of it as being able to drive to the supermarket on public roads instead of having to build the road first. 
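For readers curious what "leveraging a pre-existing API" looks like in practice, here is a minimal, hypothetical sketch in Python. It is not how Lightroom calls WinML internally (Adobe hasn't published that detail), but it illustrates the same idea using onnxruntime's DirectML backend: you hand a pre-trained model to a runtime that already knows how to run the work on whatever GPU is present, rather than writing GPU code yourself. "model.onnx" is just a placeholder for any pre-trained network.

    # Hypothetical sketch: run a pre-trained ONNX model on the GPU via DirectML.
    # Requires the onnxruntime-directml package; "model.onnx" is a placeholder.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # GPU first, CPU fallback
    )

    # Build a dummy input matching the model's first input (dynamic dims -> 1),
    # then let the runtime handle the GPU scheduling -- the "public road" in the analogy.
    inp = session.get_inputs()[0]
    dummy = np.zeros([d if isinstance(d, int) else 1 for d in inp.shape], dtype=np.float32)
    outputs = session.run(None, {inp.name: dummy})
    print(outputs[0].shape)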

In this article, we don't want to get into when this feature should be used, or even how well it works. Adobe's blog post and a number of other articles already cover those questions in much more detail than we could provide. What we do want to look at, however, is whether there is a way to improve the performance of this feature since it can take a long time to process on some machines. Since WinML (and CoreML) can leverage the power of your GPU to improve performance, we decided to test a range of GPUs to see if a more expensive GPU would be able to speed up this effect.

Test Setup & Methodology

In order to see what kind of performance you can expect with different GPUs, we used the following testing platform:

To benchmark these GPUs using the "Enhanced Details" feature, we used three images:

  • 16MP .RAF - Fuji X-Pro1
  • 22MP .CR2 - Canon EOS 5D Mark III
  • 45MP .NEF - Nikon D850

If you want to run the same tests on your system, you can download the 16MP Fuji file at the end of Adobe's Enhanced Details blog post. The Canon and Nikon images are part of our normal Lightroom Classic benchmark process and can be downloaded here.

To see how each GPU performed, we created four copies of our test media and applied Enhance Details to all four at once. Not only does this remove the preview window (which could possibly influence the time it takes to fully apply the effect), but it also lets us get a more accurate result by applying the effect four times, then dividing by four to get the average.
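For reference, the per-image number we report is simply the total wall-clock time for the batch divided by the number of applications. Here is a minimal sketch of that arithmetic, with a hypothetical enhance() callable standing in for applying Enhance Details to one image (Lightroom is not actually driven this way; this just shows the averaging):

    # Minimal timing sketch; enhance() is a hypothetical stand-in for
    # applying Enhance Details to a single image.
    import time

    def average_seconds_per_image(enhance, images, copies=4):
        start = time.perf_counter()
        for img in images:
            for _ in range(copies):
                enhance(img)
        elapsed = time.perf_counter() - start
        return elapsed / (len(images) * copies)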

Enhanced Details GPU Benchmark Results

The results are very interesting, but they can largely be summed up with two main points:

First of all, if you expect to use this feature at all regularly, you really should have a discrete GPU. Even the relatively decent onboard graphics on the i9 9900K was almost 20x slower on average than the modest GTX 1060. In fact, it was so slow that we had to add a break in many of our charts (which we hate doing) since including it at the proper scale would make the other results look tiny in comparison. This means that most laptops are really going to struggle with this feature, often taking multiple minutes to complete what a desktop could do in a matter of seconds.

The second thing to point out is that the GPU model makes a significant impact on performance. We are used to seeing relatively small differences between GPUs in most Adobe applications since the CPU tends to be the primary bottleneck, but here a faster GPU translates to a solid increase in performance. In addition, what is really interesting is that the architecture of the GPU appears to make as big of a difference as the raw performance of the card. For example, the lowest-end RTX card from NVIDIA (RTX 2060) easily beat the top model from the previous generation (GTX 1080 Ti).

This variation based on GPU architecture is especially apparent with the Radeon Vega 64. This card actually did extremely well, averaging right between the RTX 2080 and RTX 2080 Ti. What made it a bit odd wasn't the overall performance, but how it did with the different RAW formats. With the Canon image, it beat even the Titan RTX, making it the fastest GPU we tested for that file. With the Nikon image, it did a little worse, but still took third place. However, with the Fuji image (which was actually where we saw the biggest image quality improvement), it ended up being a hair slower than every RTX-series card.

Does the GPU affect performance with Enhanced Details?

If you use Enhanced Details often enough that you want to speed up the process, getting a newer and faster GPU will certainly do so. If you already have a modern GPU, however, the difference may only be a few seconds, so don't expect to be able to apply this to thousands of images at once without taking a coffee break.

Beyond the raw performance gain to be had with higher-end GPUs, what really interests us is what this may mean for the future. To our knowledge, this is one of the first times Adobe (or any major software developer) has leveraged WinML/CoreML in photo or video editing software in this way. The question is really: is this a one-off feature or will AI and machine learning be used more and more in the future?

8:1 zoom of the 16MP Fuji image with (right) and without (left) Enhance Details in Lr Classic.
Also available: 22MP Canon and 45MP Nikon

If Lightroom Classic, Photoshop, Premiere Pro, etc. all start to use machine learning for complex tasks like this, we could start to see a significant benefit to using higher-end GPUs in photo/video editing workstations. Right now, it is often very important to have a decent GPU, but for most users there isn't a big reason to invest in an expensive video card.

At the moment, AI is in a bit of a wild west situation where developers are experimenting with it without knowing exactly what will work well and what won't. It certainly isn't going away, but whether we will see more AI-powered features like "Enhanced Details" is something not even the developers are likely to know yet. Either way, AI is definitely one area we are keeping a close eye on.

Looking for a Content Creation Workstation?

Puget Systems offers a range of workstations designed specifically for video and image editing applications including Premiere Pro, After Effects, Photoshop, DaVinci Resolve, and more.

Tags: Lightroom Classic, Enhance Details, Benchmark, Titan RTX, 2080 Ti, 2080, 2070, 2060, 1080 Ti, 1060, 980 Ti, Vega 64
Hwgeek

Thanks for the review, looks like Radeon VII will crush this test :-).

Posted on 2019-03-05 10:17:02
Mark Harris

lol amd guys

Posted on 2019-03-30 04:09:38
TwoMetreBill

Too bad you didn't read the article before entering your troll comment.

Posted on 2019-08-11 00:09:07
Myga

Nice one. Would be great if you guys could add Sony RAW files to the mix, as Sony cameras have gained massively in popularity.

Posted on 2019-03-05 17:10:47

A lot of the time, the difficult part is simply getting our hands on a set of images that we can use (and have permission to re-distribute). If you have a Sony camera and are willing to take some shots for us, we certainly can include them in our testing!

What we need for our normal testing is just 6x shots at a range of exposures for HDR and 6x panorama shots. Most of the testing is actually done on copies of the 6x panorama shots so that we don't have to actually store and make people download hundreds of images if they want to replicate our testing. If that is something you are interested in working with us on, toss me an email at mattbach@pugetsystems.com !

Posted on 2019-03-05 17:48:34
Ondrej

i7 9700K
Intel HD630
32GB RAM

Fuji GFX50R (50Mpx)....... 30s
Sony A7R3 (42Mpx)......... 25s
Canon EOS 5D4 (30Mpx)... 20s
Fuji X-T3 (26Mpx).......... 20s
Fuji X-H1 (24Mpx).......... 15s
Sony A7III (24Mpx).......... 15s
Nikon D7500 (20Mpx)...... 15s
Oly E-M10III (16Mpx)....... 10s

(for one image)

EDIT:

D850 NEF file from Puget Systems - it shows an estimated 30s and actually took 33s for one file.
Four 45Mpx NEF pictures (copied) without the ED window: 2min 13s = 133 seconds. Why do you have 374s with a slightly better CPU?

Posted on 2019-03-05 18:01:35
Ryan DeYoung

I've been using this feature a little bit in Lightroom Classic but I'm running into a glitch. It produces an image that looks a bit cleaner but with a cube that looks like it's not even from the same image. It will disappear if I move the image around in Lightroom but returns eventually and even exports with it visible. The only way to get rid of it is to export the image to Photoshop and then save the JPG from there :( I've been wanting to reach out to Adobe about this issue but I can't figure out where to submit bugs to.

Posted on 2019-03-05 20:08:25

LR Classic bug report forum is here: https://feedback.photoshop....

I know there are plenty of reports already on there about image artifacts or even completely black images after using this feature, so definitely search to see if anyone has reported the same issue before submitting a new bug.

Posted on 2019-03-05 20:13:14

I know this isn't a photographer forum, but from a computing standpoint, I have seen little to no benefit from this new Lightroom feature with my Nikon D850. I shoot a lot of timelapses and gigapans, often focus stacked, and with thousands of images the time it takes to convert them all (i7-7700K and 1080 Ti) is just not worth it. Even your sample images here with your D850 show no noticeable difference. I suspect it's just the sheer number of megapixels that makes it take a very long time while netting very little gain in overall sharpness.

Posted on 2019-03-10 12:09:44

From what I've seen, it makes the biggest difference with Fuji cameras (or any other cameras using an X-Trans sensor that I'm not aware of). On Nikon images, it rarely helps from what I've seen, and when it does you have to zoom way, way in to see any difference.

I don't think this is something you just apply to every image you have to be honest. I see it more as just another tool you can use to help clean up an image that is mostly great, but just needs a little touch up in some areas where the normal RAW debayering process struggles.

Posted on 2019-03-11 16:44:57

Agreed!

Posted on 2019-03-11 18:22:37
Nuno Fernandes

I agree with you, but I think this shows that maybe Adobe will use the GPU a lot more in other tasks. The "meh" news is that this filter is pointless; I don't see myself using it. On the other hand, if Adobe uses this for other tasks it could speed up the workflow a lot.
I can imagine how much faster it could be to merge panoramas, use stack modes, or build HDRs using the GPU.

Posted on 2019-11-18 01:35:01

I agree about the GPU in general, absolutely! I was referring to "Enhanced Details" for rendering raw files differently. I just don't think it makes a difference worth waiting for unless you have a Fuji X-Trans sensor.

Posted on 2019-11-18 05:20:41
Tobi

Matt Bach
Thanks PugetSystems for all the great reviews.
It makes our life much easier! :-)

NVidia launched a new "Creator ready" driver for GTX and RTX cards.

http://us.download.nvidia.c...

The new driver should boost Premiere and Photoshop by up to 9% on an RTX 2080.
And even Lightroom should get a speed boost from the RTX cores.

Do you plan to benchmark the new driver against the older ones?

Posted on 2019-03-21 07:19:18

Working on that now :) . I don't know how we would test Lightroom, however, since beyond this Enhanced Details feature, we've never been able to measure any difference between different GPUs.

Posted on 2019-03-21 16:11:55
Tobi

Yes, I was also really surprised to read it would speed up Lightroom too.
I'm keen to see if it's really true. Just installed the new driver.
But it's also good if just PS and Premiere get boosted.

Posted on 2019-03-21 18:18:32

So far, I'm not seeing any gain with the 419.67 Creator Ready driver in particular. The thing is, the R418 branch includes drivers going back to 418.xx, so I'm pretty sure what they are talking about is just general driver optimization improvements (just presented in a misleading way). In fact, everything I've read indicates that the Creator Ready drivers are, if anything, more of a slow-ring approach. So the Game Ready drivers will probably always be the first to get any performance improvements, and they will only make their way to the Creator Ready drivers once they have been thoroughly tested in creative apps.

To be honest, I like this approach a lot. Generally you only see minor performance improvements with individual driver releases, so NVIDIA taking a few weeks (or whatever) to make sure a release isn't buggy or causing issues is well worth that temporary, small performance sacrifice.

Posted on 2019-03-21 18:30:11
Richard Paul

Great articles, Matt, as I am in the process of building a new system for Photoshop and Lightroom. As for your question on whether AI-powered features will catch on, I have started looking at incorporating Skylum's Luminar and Aurora HDR into my process. Although young compared to Adobe, they have several different AI-powered features. From an AI perspective, something to keep an eye on.

Posted on 2019-04-17 15:33:19
Colton

I did post this on another website, funnily enough the same format of website too. But I will also mention it here because you folks are computer wizards!

Perhaps you can shed some light on my situation; I've read and watched an immense amount of information and ultimately I am stumped and confused. I am tossing up between 3 potential builds.
https://ca.pcpartpicker.com... - TR2950
https://ca.pcpartpicker.com... - Ryzen 2700x
Or a build I haven't put together that is Intel i9 9900k
Possibly upping 32 GB RAM into 64 GB too.

I use Adobe Lightroom, After Effects (hi-res JPG timelapse creation with minor effects like simple stabilization, pans, etc. - very basic effects), Photoshop, Illustrator, and InDesign. I am a photographer [46 MP camera] and timelapse content creator [4K and 8K videos from stills]. I don't use Premiere, mostly because I've read not the best things about it. The reason for my confusion is this: in the last few years Adobe has removed multi-frame rendering, which utilized more cores/threads than the current version of AE does. AE does still use a handful of cores/threads, but adding a single effect can drop it from all cores/threads down to one core. It seems the entire Adobe suite tends to favor Intel as opposed to AMD. I do also play current high-end games, so that's a consideration as well.

I initially aimed at the TR 2950X in the sense that it would be future proofing plus immense performance for anything I could throw at it. However, my concern is that AE [primarily AE/AME, but also other programs] won't make use of the vast number of cores/threads. Say, only using 50% of what it's capable of, which in essence is wasting its potential. If I were to go Intel I might get faster performance. From what I can gather, Intel offers fewer cores/threads at a higher GHz, while the TR 2950 and Ryzen 2700X offer more cores/threads at a lower GHz. I've also read that the TR 2950X can actually decrease FPS in games?

I've tossed around the idea of migrating away from AE for rendering/stabilizing and going to DaVinci Resolve [uses the GPU as far as I can tell], and trading out Lightroom for Capture One [not sure what it uses]. I'd prefer to stick with the Adobe suite as it's something I am completely fluent with, but I'm not entirely opposed to switching. If I am to go the TR 2950X route, am I wasting the potential of the build by using Adobe programs? What about the 2700X? I read here on Puget Systems (in an article I couldn't find a date for) that with the Intel i9 9900K you can yield as much as 20-40% better efficiency in AE. Do Adobe programs look like they're going to aim toward more cores/threads for rendering in the future or continue favoring Intel?

I was told at this level I need to decide what my priority is, gaming first, editing second or the opposite. These linked builds are with editing first and gaming second. Frankly I think they're equal for which comes first in terms of building. Maybe even gaming first. My fear is if I go the route of editing first, the gaming side will suffer and suffer even more so down the line in the future. Then I put gaming first and am worried that it will cripple rendering and editing 4-8k content too greatly. I tend to build a $3-4k computer every 7-8 years, and it's due time!

Although! Would your opinion change a little if I was to say that I also use Lightroom [daily use], Photoshop [daily use], InDesign [occasionally] and Illustrator [occasionally]? They offer the option to engage GPU acceleration to put more stress on the card and less on the rest of the computer. After Effects does not, nor does Adobe Media Encoder. It might seem silly to ask, but you would likely suggest the Intel because the majority of my programs are Adobe; however, if all but two of the programs can make use of a GPU (a good one in this case), would it sway your opinion? So GPU use for the majority of programs, and CPU for AE/AME? Or is the CPU going to outperform the GPU [even a nice one] in any scenario?

I'm still reading into GPU acceleration myself, but because my GPU is rather weak on this current computer I have no experience with it. I wouldn't bank on it 100% for effectiveness; it could be hit or miss. I know how thorough you guys are - after all, what you do is build incredible units - so I very much respect your help and input on this decision.

Thank you immensely for taking the time to read this wall of text and offer any guidance!

Posted on 2019-04-27 21:54:57
라이트룸

Thank you for the good news.
Going from a Quadro P4000 to an RTX 8000, what's the difference in speed?

Posted on 2019-05-02 16:21:04
Malcolm Clint

Will the new 1660 be OK compared to the 1060?

Posted on 2019-07-14 11:47:32
TwoMetreBill

Thanks for using an i9 as the primary performance bottleneck is moving the image back and forth between RAM and the CPU. Because of the 3 layer cache, it takes hundreds of data movements to make a single adjustment. With 4 memory lanes on XEON and i9 processors we get far superior performance compared to the 2 lane i7. I'm hoping in a year or two to be able to afford one of the fast 6 or 8 lane XEONs.

As high end image processing systems tend to use NVIDIA Quadro cards, it would be nice to see them included in this benchmark, starting with the K1200 (old but still in wide use) up to at least the P4000.

Posted on 2019-08-11 00:18:39

Do you mean memory channels? Quite a few i7 CPUs support quad channel RAM, and several i9 CPUs are only dual channel. Some Xeons support up to 12 channels, but it will cost ya! Most are 6 channels.

Posted on 2019-08-12 00:26:51