Lens distortion model for DirectX raytracing based rendering?

mattnewport
Protege
I'm interested in experimenting with VR rendering using DirectX raytracing, where the Rift lens distortion could be corrected directly in the ray generation shader rather than rendering an undistorted image and relying on the Oculus runtime to warp the texture in a postprocess. Looking at the current SDK I don't see any way to have a layer with no distortion correction applied by the runtime, or any way to get the lens distortion model. I remember that in the distant past both of these existed in some form (there were undistorted layers and a way to get a distortion mesh), but I can't find them in the current SDK docs (and I may be mis-remembering what was provided before).

Are there any plans to provide SDK access to allow for this kind of pre-distorted ray traced rendering, now that real-time raytracing APIs and hardware support are on the horizon?
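Concretely, something like this is what I have in mind for the ray generation shader - just a sketch, with an assumed radial polynomial model, since the real profile is exactly what the SDK doesn't expose (all the coefficients and names below are placeholders):

// Hypothetical DXR ray generation sketch. The distortion model here is
// an assumed radial polynomial; k1/k2, the lens center, and the FOV
// scale are placeholders, not the real Rift profile.

RaytracingAccelerationStructure gScene  : register(t0);
RWTexture2D<float4>             gOutput : register(u0);

cbuffer LensParams : register(b0)
{
    float2 gLensCenter;  // distortion center in [-1,1] coords, per eye
    float2 gTanHalfFov;  // tangents of the half field of view
    float  gK1;          // radial distortion coefficients (assumed)
    float  gK2;
};

struct RayPayload
{
    float4 color;
};

[shader("raygeneration")]
void RayGen()
{
    uint2 pix = DispatchRaysIndex().xy;
    uint2 dim = DispatchRaysDimensions().xy;

    // Pixel -> [-1,1] coordinates relative to the lens center.
    float2 ndc = (float2(pix) + 0.5) / float2(dim) * 2.0 - 1.0;
    float2 p   = ndc - gLensCenter;

    // Pre-distort: scale the sample by the radial polynomial so the
    // lens maps it back to the right place. (The exact direction and
    // inverse depend on how the profile is defined.)
    float  r2    = dot(p, p);
    float  scale = 1.0 + gK1 * r2 + gK2 * r2 * r2;
    float2 d     = p * scale * gTanHalfFov;

    RayDesc ray;
    ray.Origin    = float3(0, 0, 0);            // eye space; apply head pose in practice
    ray.Direction = normalize(float3(d, -1.0)); // -Z forward by assumption
    ray.TMin      = 0.001;
    ray.TMax      = 1e6;

    RayPayload payload = { float4(0, 0, 0, 0) };
    TraceRay(gScene, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);
    gOutput[pix] = payload.color;
}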

owenwp
Expert Protege
I would be more interested in the ability to provide a custom post-timewarp shader. That way it would be possible to do things like temporal antialiasing or some other reconstruction filter in the native screen space of the Rift, lowering the cost and increasing the quality. This would be a nice place to put something like the new deep learning antialiasing.


For ray generation, I don't imagine it would be worth it for the image quality gains, because you would give up the latency reduction of timewarp by doing the distorted projection at the beginning of the frame rather than at the end. I would rather ray trace a noisy image the old-fashioned way, then correct the noise and reconstruct a sharp image in post-distortion screen space.
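As a sketch of the kind of pass I mean (the resource names and blend factor are made up, and a real version would reproject the history with motion vectors and clamp it to the neighborhood):

// Sketch of temporal accumulation in post-distortion screen space:
// blend this frame's (possibly noisy) result into a history buffer
// after the warp. gCurrent/gHistory and alpha are assumptions.
Texture2D<float4>   gCurrent : register(t0);
RWTexture2D<float4> gHistory : register(u0);

[numthreads(8, 8, 1)]
void AccumulateCS(uint2 pix : SV_DispatchThreadID)
{
    const float alpha = 0.1;  // exponential blend weight, tuned by eye
    float4 cur  = gCurrent[pix];
    float4 hist = gHistory[pix];
    // A real TAA pass would reproject the history with motion vectors
    // and clamp it to the current neighborhood; elided here for brevity.
    gHistory[pix] = lerp(hist, cur, alpha);
}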

mattnewport
Protege
It's a good point about timewarp, but ideally I think you'd want both. I don't want to waste precious rays around the periphery, where a lot of resolution is wasted when rendering undistorted (although things like lens-matched shading help there, there's still a lot of waste) and where some pixels are never even visible. I'd also like to spend extra rays improving image quality towards the center of the view and let denoising work with less data around the edges.
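As a sketch of what that ray budget might look like (the radii and sample counts are invented; a real version would derive them from the actual lens profile):

// Sketch of spending the ray budget where the lens actually resolves
// detail: more samples per pixel near the view center, fewer (or none)
// in the compressed periphery. All falloff constants are made up.
uint SamplesForPixel(uint2 pix, uint2 dim, float2 lensCenter)
{
    float2 ndc = (float2(pix) + 0.5) / float2(dim) * 2.0 - 1.0;
    float  r   = length(ndc - lensCenter);

    if (r > 1.1)  return 0;  // never visible through the lens: skip
    if (r > 0.7)  return 1;  // periphery: let the denoiser cope
    if (r > 0.35) return 2;
    return 4;                // central region: highest quality
}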

The ideal setup would probably be to take lens distortion into account during ray generation, along with a just-in-time update of head tracking information right before the ray generation shader runs (rather than when the work is scheduled on the CPU), together with support for timewarping from a pre-distorted image to apply the latest head tracking data right before display. And yeah, if you could customize the timewarp shader to also do denoising, that might be more efficient than running them separately, but I'd settle for just having the HLSL code / lens model for the ray generation shader and a non-customizable timewarp that can work with pre-distorted buffers.

volgaksoy
Expert Protege
Hi Matt,

We currently do not expose the distortion profile to VR apps. Tracing primary rays won't be a good fit for current raytracing hardware (even Turing), but it makes sense to experiment. We are evaluating the ability to expose distortion profiles in a way that is easy for the application to use, and that also lets the application tell the Oculus compositor which profile it used. The assumption is that the app wouldn't necessarily have to use the same distortion profile suggested by the SDK.

While it's not the same thing, I would actually recommend looking into Variable Rate Shading (which Turing supports) as a potential avenue for saving GPU cycles while still using rasterization.
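As a rough illustration of the idea (per-primitive VRS via shader model 6.4 on Tier 2 hardware; the thresholds and rates below are arbitrary placeholders, not a recommendation):

// Hypothetical per-primitive VRS sketch: coarsen the shading rate
// toward the lens periphery, where distortion compresses detail anyway.
cbuffer Camera : register(b0)
{
    float4x4 gViewProj;
};

struct VSOut
{
    float4 pos  : SV_Position;
    uint   rate : SV_ShadingRate;  // per-primitive rate, SM 6.4 / VRS Tier 2
};

VSOut VSMain(float3 posOS : POSITION)
{
    VSOut o;
    o.pos = mul(gViewProj, float4(posOS, 1.0));

    // Radial distance of the vertex from the view center in clip space.
    float r = length(o.pos.xy / o.pos.w);

    // D3D12_SHADING_RATE values: 0x0 = 1x1, 0x5 = 2x2, 0xA = 4x4.
    o.rate = (r > 0.8) ? 0xA : (r > 0.5) ? 0x5 : 0x0;
    return o;
}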

thewhiteambit
Adventurer
I did it like this: give the Oculus API only a red-green* UV-mapped texture, then grab the result from the mirror viewport while the head is steady. Voila, you have the distortion values for your raytracer.

Still, the timewarp is hard to work around when it's not desired and will give you unwanted jitter. Many parts of the OVR API are really poorly designed, but I gave up explaining that to Oculus.

*) You should also set the blue channel to 1.0 to have a reference in the outer areas, where they are gradually shaded. Since chromatic aberration correction distorts red and green differently, you can repeat the process with rotated colors to also capture the per-channel shift introduced by the aberration correction. But I hope you get the idea. I spent too much time creating workarounds for a mediocre API...
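Generating the calibration texture is trivial - something like this pixel shader sketch (the names are arbitrary):

// Sketch of the calibration texture from the trick above: write each
// texel's UV into red/green and 1.0 into blue, submit it as the eye
// layer, then read the distorted result back from the mirror texture.
float4 CalibrationPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // After the compositor warps this layer, the red/green values seen
    // at each mirror pixel tell you which source UV the lens distortion
    // (and chromatic aberration, per channel) sampled from. Blue = 1
    // marks the valid area against the black surround.
    return float4(uv, 1.0, 1.0);
}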

thewhiteambit
Adventurer

volgaksoy said:

Hi Matt,

We currently do not expose the distortion profile to VR apps. Tracing primary rays won't be a good fit for current raytracing hardware (even Turing).


Well, it can be a good fit, and I even did this in 2013 with the DK1 and DK2 - you seem to have a very limited idea of what raytracing is. You can raytrace simple geometric surfaces at 1000 fps in a simple shader, even on an 8800 GTX. Raytracing does not have to be slow; if your scene is simple enough it can be much faster than a scanline approach!

Still, you would need the distortion for primary rays. It would be wasteful to render a linear scanline image to a texture first and then apply the distortion from that framebuffer. The scanline detour samples from a framebuffer texture and ends up blurring through interpolation. To reduce the blur you would then need a much bigger texture, only to throw away most of the sample information.
With raytracing you can, for example, sample a quad's texture directly, without first drawing triangles to a texture and then picking some of the samples from that scanline framebuffer. Every ray is used and fits the simulated scene exactly.

This is much faster with raytracing than a detour via scanline; that's why internal quad layers are calculated like this in the OVR API! Why don't you want to give these capabilities to developers, instead of deciding what's best for them?
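For illustration, the direct quad sampling looks roughly like this (a sketch with assumed quad basis vectors, not the actual OVR API internals):

// Sketch of the quad-layer idea: intersect the (pre-distorted) ray with
// a textured quad analytically and sample the texture at the hit point,
// skipping any intermediate framebuffer.
Texture2D    gQuadTex : register(t1);
SamplerState gLinear  : register(s0);

float4 TraceQuad(float3 origin, float3 dir,
                 float3 quadOrigin, float3 quadU, float3 quadV)
{
    float3 n     = normalize(cross(quadU, quadV));
    float  denom = dot(dir, n);
    if (abs(denom) < 1e-6)
        return float4(0, 0, 0, 0);          // ray parallel to the quad

    float t = dot(quadOrigin - origin, n) / denom;
    if (t < 0)
        return float4(0, 0, 0, 0);          // quad is behind the ray

    float3 hit = origin + t * dir - quadOrigin;
    float2 uv  = float2(dot(hit, quadU) / dot(quadU, quadU),
                        dot(hit, quadV) / dot(quadV, quadV));
    if (any(uv < 0) || any(uv > 1))
        return float4(0, 0, 0, 0);          // outside the quad

    // One texture fetch per ray, exactly where the distorted ray lands.
    return gQuadTex.SampleLevel(gLinear, uv, 0);
}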

mattnewport
Protege
@volgaksoy thanks for the info - for some reason I missed the notification for your reply and only just saw it. I'm still waiting on my RTX GPU, but yeah, this is really about being able to experiment with approaches that might make more sense with future generations of hardware, not necessarily for anything shipping in the near term. Variable Rate Shading does look like an interesting option, but I do think that longer term it will make sense for VR to do all the optical compensation as part of ray generation, and it would be great to start experimenting with potential futures now!

As @thewhiteambit says, I think there are interesting applications for tracing primary rays even now in specialized use cases. We're probably not looking at general-purpose engines moving straight to tracing primary rays, but there are interesting possibilities for use cases like data visualization, fractal fly-throughs, CAD data, etc., where tracing primary rays may make sense in the nearer term, and again it would be cool to be able to experiment now.