Impatience got the best of me, so I didn’t wait for Adobe’s new Super Resolution feature to reach Lightroom (it’s said to be coming soon). Instead, I tried it in Photoshop’s Camera Raw. Let’s cut to the chase – in certain circumstances, the results are nothing less than staggering.
The following images tell the tale. The one on the right is the Super Resolution image with four times the number of pixels as the original.
Note that this example is extremely blown up to 200% for the comparison. At normal viewing levels, the differences aren’t nearly as impressive (more on this later).
Fuji shooters know that certain features such as leafy vegetation haven’t done so well with Adobe’s demosaicing algorithm. Fuji’s X-Trans sensor uses a non-standard photosite array that, while resolving some issues, hasn’t had the greatest results with non-specialized RAW converters (read: Adobe’s, for one).
The easiest way to run it currently is by opening your RAW image (or JPG, but why?) in Photoshop (set to open files in Adobe Camera Raw mode). It’s hidden under the three dots, under “Enhance Image.”
The massive file produced is autosaved to the original directory; you need to import it into Lightroom. I’ve had intermittent app crashes. So far, the best results seem to come when I close Lightroom and open the target file after Photoshop is already loaded, though I’ve seen no clear reporting of this at the Adobe site. Your mileage may vary.
The below image represents a 200% blow-up.
A picture does say a thousand words, doesn’t it? This was shot on my Fuji X-E2, which has 16 megapixels. It might forestall my need to upgrade in the never-ending chase for more pixels. I don’t know whether images from cameras using traditional Bayer sensors will see as marked an improvement.
Tony Northrup, in a YouTube video titled “Photoshop Super Resolution: 4X megapixels (actually tested-surprising!)”, reports that the enhancement offers little improvement for non-Fuji images. Tony is wrong by being right only in a limited sense: Wrong About Super Resolution.
How does Adobe do this magic? You’ve probably been hearing a lot more about artificial intelligence (AI) recently. From Adobe’s website: “The idea is to train a computer using a large set of example photos. Specifically, we used millions of pairs of low-resolution and high-resolution image patches so that the computer can figure out how to upsize low-resolution images.”
Prior to AI, achieving higher resolution was done by blowing an image up to double its dimensions and then using a mathematical algorithm (bicubic interpolation) which essentially smooths the image by giving each pixel a bit of information from its neighboring pixels. (Imagine each pixel as the center of a tic-tac-toe board, “borrowing” a little bit of information from each of its eight neighbors.)
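To make the tic-tac-toe analogy concrete, here’s a toy sketch (pure Python, grayscale values 0–255) of that pre-AI approach: double the image, then give each pixel a bit of information from its eight neighbors. Real bicubic interpolation uses weighted cubic kernels rather than this simple average, but the key point is the same – existing information is redistributed and smoothed; no new detail is ever invented.

```python
def upscale_2x(img):
    """Double width and height by repeating each pixel."""
    return [[row[x // 2] for x in range(2 * len(row))]
            for row in img for _ in range(2)]

def smooth(img):
    """Average each pixel with its 3x3 neighborhood -- the
    'tic-tac-toe board' of up to eight neighbors."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbors = [img[j][i]
                         for j in range(max(0, y - 1), min(h, y + 2))
                         for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(neighbors) // len(neighbors)
    return out

tiny = [[0, 255],
        [255, 0]]
big = smooth(upscale_2x(tiny))  # 4x4 result: same detail, just smoothed
```

Notice that the 4×4 output contains nothing the 2×2 input didn’t already have – which is exactly the limitation AI-based upscaling tries to overcome.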
With AI, something very different is happening; new information is added based on what the software thinks (from massive trained experience) should be there!
It should be understood that by creating pixels out of whole cloth, so to speak, AI can create problems of its own. The information supplied might not be right. Artifacts can be introduced.
Below: the same image at 100%. Notice how at this resolution, differences are minimal. Pay close attention to the bricks directly under the glass portion of the light, the bare branches to the right of the light, and the bare branch that parallels the light. Both detail and color are improved, but only marginally.
What’s the takeaway here? If you’ve captured a scene full-frame and it is displayed at a normal size on, say, the internet, or as a 4×5 print – the difference will be visible, but very marginal. But say you’re blowing up the image to an 8×11 or much larger print – then the difference can be very visible.
Let’s take a different example: you’ve taken a picture but discover in post-processing that you want to crop heavily. Or perhaps you would have rather used a telephoto lens, but didn’t have one with you. Blowing up your image would normally have shown extreme degradation.
Stephen Bay has done a super comparison of Super Resolution to Gigapixel AI, a product from Topaz. Both products do essentially the same thing, with similar results. I might slightly prefer the Gigapixel treatment; I like the denoising it adds, don’t find it as fake as Stephen does, and am not as bothered by the artifacts.
But these are quibbles; both products create magic. It should be noted that both create new image files that are much larger than the original RAWs. My Fuji shots are around 33 MB in size, and Super Resolution adds a new file about eight times larger! In other words, this is a process best reserved for truly deserving shots.
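A back-of-the-envelope sketch of what that disk cost adds up to (the numbers below are from my own files, not any Adobe formula – yours may differ):

```python
RAW_MB = 33          # a typical RAW from my 16 MP Fuji X-E2
MULTIPLIER = 8       # the enhanced file runs roughly 8x larger for me

def extra_disk_mb(num_shots):
    """Rough extra disk space consumed by enhancing num_shots images."""
    return num_shots * RAW_MB * MULTIPLIER

print(extra_disk_mb(10))  # ten enhanced shots: about 2.6 GB of new files
```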
The Topaz product, according to Bay, takes several minutes to process an image. The damage for Adobe’s isn’t nearly as great; it took under a minute and a half for my X-E2 RAW on a mediocre computer.
Imagine these two treatments represented two different lenses. Would you want to take one back?