NVIDIA Details New DLSS Technique in Control, Explains How DLSS Can Further Improve in the Future

NVIDIA DLSS, the Deep Learning Super-Sampling technology powered by the Tensor Cores of GeForce RTX GPUs, has improved considerably over time. It didn't have the greatest start when it first landed, mostly due to the blurring introduced by the initial implementation.

That has improved since then, and the latest incarnation of DLSS, featured in Remedy's Control, is certainly the best one yet, as Keith pointed out in his analysis.


On Friday, NVIDIA published a blog post revealing what went on behind the scenes to craft the DLSS implementation featured in Control.

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine, an artifact we’d normally classify as a “bug,” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.

This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model’s performance before bringing it to a shipping game.

Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.
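NVIDIA doesn't disclose the internals of either the AI research model or the shipping image processing algorithm, but the quoted passage describes the broader family of temporal super-sampling techniques: sub-pixel jitter and other frame-to-frame "artifacts," normally a source of shimmering, carry extra information that can be accumulated into a sharper image than any single low-resolution frame contains. The toy NumPy sketch below is purely illustrative of that principle (it is not NVIDIA's code, and names like `render_low_res` and `accumulate` are made up); it renders a static scene at quarter pixel count with a different jitter each frame, then scatters the samples back into a full-resolution grid.

```python
# Toy illustration of temporal super-sampling, the general family of
# techniques DLSS builds on. NOT NVIDIA's algorithm -- just a sketch of how
# sub-pixel jitter between frames can recover detail a single low-res frame
# cannot hold.
import numpy as np

def render_low_res(scene_hi, jitter_x, jitter_y, scale=2):
    """Simulate one jittered low-resolution render of a high-res scene.

    `scene_hi` is an (H, W) array; the "renderer" samples every `scale`-th
    pixel, offset by an integer sub-pixel jitter in the range 0..scale-1.
    """
    return scene_hi[jitter_y::scale, jitter_x::scale]

def accumulate(frames, jitters, scale=2):
    """Scatter each jittered low-res frame back into a high-res grid.

    With enough distinct jitters and a static scene, every high-res pixel
    gets covered, recovering detail lost in any single low-res frame.
    """
    h, w = frames[0].shape
    recon = np.zeros((h * scale, w * scale), dtype=np.float64)
    weight = np.zeros_like(recon)
    for frame, (jx, jy) in zip(frames, jitters):
        recon[jy::scale, jx::scale] += frame
        weight[jy::scale, jx::scale] += 1.0
    return recon / np.maximum(weight, 1.0)

# Toy "scene": a fine checkerboard that aliases badly at half resolution.
scene = np.indices((64, 64)).sum(axis=0) % 2.0

# Four frames, each rendered at a quarter of the pixel count with a different jitter.
jitters = [(0, 0), (1, 0), (0, 1), (1, 1)]
frames = [render_low_res(scene, jx, jy) for jx, jy in jitters]

recon = accumulate(frames, jitters)
print("max reconstruction error:", np.abs(recon - scene).max())  # ~0 for a static scene
```

In a real game the scene moves, so the accumulated history has to be reprojected with motion vectors and rejected when it goes stale, which is where hand-tuned heuristics tend to struggle. For context on the performance claim, a 75% frame-rate uplift would take a scene running at 40 fps natively to roughly 70 fps with DLSS enabled.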

That said, NVIDIA did note that even this improved version of DLSS isn't on the level of native resolution yet.

While the image processing algorithm is a good solution for Control, the approximation falls short in handling certain types of motion. Let’s look at an example of native 1080p vs. 1080p DLSS in Control. Notice how the flames on the right are not as well defined as in native resolution.

Clearly, there’s opportunity for further advancement.

Going forward, NVIDIA's goal for DLSS will be to optimize the AI research model so that it can run at playable frame rates.

Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.

With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.

The new DLSS techniques available in Control are our best yet. We’re also continuing to invest heavily in AI super resolution to deliver the next level of image quality.

Our next step is optimizing our AI research model to run at higher FPS. Turing’s 110 Tensor teraflops are ready and waiting for this next round of innovation. When it arrives, we’ll deploy the latest enhancements to gamers via our Game Ready Drivers.

Between DLSS and NAS (NVIDIA Adaptive Shading), the house of GeForce is clearly busy trying to boost average performance in games, likely so that the computational cost of ray tracing becomes increasingly bearable for the hardware. We'll keep track of their progress and report back here on Wccftech.


