Facebook now has its own DLSS

Anonymous
Not applicable
https://wccftech.com/neural-supersampling-is-a-hardware-agnostic-dlss-alternative-by-facebook/

Looks like even Facebook now has its own DLSS. Another step toward working with all hardware, and one step closer to letting everyone get into VR 😄

A new paper published by Facebook researchers just ahead of SIGGRAPH 2020 introduces neural supersampling, a machine learning-based upsampling approach not too dissimilar from NVIDIA's Deep Learning Super Sampling. However, neural supersampling does not require any proprietary hardware or software to run, and its results are quite impressive, as you can see in the example images, with researchers comparing them to the quality we've come to expect from DLSS.



hoppingbunny123
Rising Star
I watched a video on YouTube about supersampling, and here is my understanding of going from lower to higher resolution:

https://youtu.be/_DPRt3AcUEY

First you take a 2x2 grid of pixels at the lower resolution and expand it into a larger 2D grid, which creates unknown pixels in between; this is upscaling. (There's a small code sketch of this idea below the picture.)

[image: zt0jspbi5yij.jpg]
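To make the "unknown pixels in between" part concrete, here's a tiny Python sketch I put together myself (my own toy illustration, nothing from the paper or the video): it takes a 2x2 grid and bilinearly blows it up, so every new pixel is just a weighted blend of its known neighbours.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D grid by blending the 4 nearest known pixels for each new pixel."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    out = np.zeros((new_h, new_w))
    for y in range(new_h):
        for x in range(new_w):
            # Map the new pixel back into the coordinates of the small grid.
            src_y = y * (h - 1) / (new_h - 1)
            src_x = x * (w - 1) / (new_w - 1)
            y0, x0 = int(np.floor(src_y)), int(np.floor(src_x))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = src_y - y0, src_x - x0
            # The "unknown" pixel is just a weighted mix of its 4 known neighbours.
            out[y, x] = (img[y0, x0] * (1 - fy) * (1 - fx) +
                         img[y0, x1] * (1 - fy) * fx +
                         img[y1, x0] * fy * (1 - fx) +
                         img[y1, x1] * fy * fx)
    return out

small = np.array([[0.0, 1.0],
                  [1.0, 0.0]])      # a 2x2 grid of known pixels
big = bilinear_upscale(small, 2)    # 4x4 grid: the in-between values are only estimates
print(np.round(big, 2))
```

The filled-in values are only guesses from blending; the whole point of the neural approach is to make smarter guesses than this.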

Imagine you have a straight line and you can make it curve like stringy spaghetti. You take the line in the smaller grid as the straight line and turn it into a loopy string in the larger grid, because the loopy bit that gets created is facing some specific direction.

[image: tdtkaysto1ag.jpg]

The red line is the artificially created loopy string, and the green line is the direction the loopy string is facing.
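The "direction the loopy string is facing" sounds to me like edge direction, which you can estimate from pixel gradients. Here's a small NumPy sketch (again my own rough illustration, not from the paper or the video) that estimates which way the brightness changes fastest at each pixel using Sobel-style filters.

```python
import numpy as np

# Sobel-style filters: how fast brightness changes left/right and up/down.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def edge_direction(img):
    """Return gradient magnitude and angle (degrees) for each interior pixel."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * sobel_x)
            gy[y, x] = np.sum(patch * sobel_y)
    magnitude = np.hypot(gx, gy)              # how strong the edge is
    angle = np.degrees(np.arctan2(gy, gx))    # which way it faces, like the green line
    return magnitude, angle

# A simple vertical edge: dark on the left, bright on the right.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag, ang = edge_direction(img)
print(ang[2, 2])   # ~0 degrees: the gradient points horizontally, across the edge
```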

That's the upscaling part. Once you've upscaled, you know the line (the loopy bit created during upscaling); now you have to apply anti-aliasing to draw that line cleanly in the pixels. Here you take the 16x supersampled DLSS reference and find the best match.

Now DLSS takes these two steps and makes a pre-made recipe for the best results, like a shopping list. NVIDIA's DLSS takes the actual game and creates a unique, game-specific recipe, but the Oculus one uses a generic recipe: first the curvy line made with upscaling, then the straight line made with anti-aliasing.

How to do it, I imagine, is to take the revolution of the circle to create stronger loopiness, which generates the direction the green line points: more revolutions, more green, more red.

Then take the same approach with anti-aliasing: more passes means more straight lines instead of loopiness.

If you can make it both loopy and then straight, it's probably part of the image. So you take a generic recipe of loopy-then-straight, bake it into a neural network that produces a loopy-then-straight picture, and now you have your own DLSS.
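Here's roughly how I picture that in code, a totally hypothetical PyTorch sketch on my part, not the actual network from the Facebook paper or from NVIDIA's DLSS: a naive upscale makes the "loopy" guess, a few convolution layers learn the "straight" cleanup, and the "recipe" is just the trained weights. Train it on one game's frames and you get a game-specific recipe; train it on lots of different content and you get the generic kind.

```python
# Hypothetical sketch only: not the network from the Facebook paper or NVIDIA DLSS.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpsampler(nn.Module):
    """Toy 2x upsampler: naive upscale ("loopy"), then a learned cleanup ("straight")."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=2, mode="bilinear", align_corners=False)
        return up + self.refine(up)   # learned correction on top of the naive guess

model = TinyUpsampler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# "Writing the recipe": push the output toward a high-quality supersampled reference.
low_res = torch.rand(1, 3, 128, 128)     # stand-in for a rendered low-res frame
reference = torch.rand(1, 3, 256, 256)   # stand-in for the supersampled target frame
for step in range(10):
    loss = F.l1_loss(model(low_res), reference)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Using the recipe later is just one forward pass per frame.
with torch.no_grad():
    upscaled = model(low_res)            # shape (1, 3, 256, 256)
```

As far as I understand, the real systems also feed in extra inputs like motion vectors, depth and previous frames, which this toy version skips entirely.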

Anonymous
Not applicable
While I am sure NV DLSS is better, the fact that DLSS-style upsampling can now be performed on any hardware is the real takeaway. They just need to "force" the use of it and boom. I do wonder whether the way they are making it work can run on mobile though. I know code is code, but there are still underlying hardware differences that can make or break whether code works in some cases.

If they did make it work for mobile, you could see a 20-30% boost of essentially free performance even with a generic-recipe version. I guess that's what makes me happy to see this :3 It's a very interesting read and video on how they can make it work without the hardware requirements that NVIDIA was pushing on game devs (the training network).
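For a rough sense of where that "free performance" could come from, here's some back-of-the-envelope pixel math (my own numbers, nothing from the paper): rendering at a lower resolution means shading far fewer pixels, and the upsampler only has to give back part of that time.

```python
# Back-of-the-envelope pixel math (my own example numbers, not from the paper).
target = (2560, 1440)          # resolution you want to display
render = (1280, 720)           # resolution you actually render at

target_pixels = target[0] * target[1]
render_pixels = render[0] * render[1]

print(target_pixels / render_pixels)   # 4.0: you shade 4x fewer pixels
# The net win is smaller than 4x because the upsampling network costs time too,
# and not all frame time scales with pixel count, which is roughly where a
# "20-30% free performance" kind of figure could come from.
```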