
Java rendering example using mesh based distortion

jherico Posts: 1,419
Nexus 6
I've just pushed a new Java example for rendering to the Rift to my public example repository here.

There are three individual subprojects, each defined using a Maven pom.xml file.

The first project is the one in /resources. This holds all my non-code resources, like the shader definitions, images, and meshes. It's a little oddly defined because it's intended to share the non-code resources with my C++ projects. You can import the project into Eclipse, but you may need to manually set up the classpath so that the resources are found properly (Eclipse doesn't seem to do it correctly, even though the Maven jar file is properly built).

The second project is in /java/Glamour. This is a set of OpenGL wrappers to make GL object and shader management suck less, similar to a set of template classes I use in C++. Shaders, textures, framebuffers, vertex buffers, and vertex arrays are all encapsulated.

The last project is in /java/Rifty. This is Rift-specific stuff, unsurprisingly. There is a small rendering test program in src/test/java/org/saintandreas/RiftDemo.java that draws a simple spinning cube.

The example is very raw. The projection isn't quite right yet, because I haven't replicated all the logic from Util_Render_Stereo.cpp. Instead I'm relying on a lot of hard-coded constants, but it should still convey the rendering mechanism you need to use in LWJGL to do the distortion.
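
To make that mechanism concrete, here is a rough sketch of the per-eye flow, assuming a GL context is already current. This is not code from the repository: the class, its fields, and the two abstract helpers are placeholders; only the raw LWJGL calls are real.

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL30.*;

    // Placeholder skeleton: drawScene / drawDistortion stand in for whatever
    // your application and distortion wrappers actually do.
    abstract class StereoRenderer {
        final int[] sceneFbo = new int[2];      // offscreen target per eye
        final int[] sceneTexture = new int[2];  // its color attachment per eye
        int fboWidth, fboHeight;                // offscreen render size
        int screenWidth, screenHeight;          // Rift display size

        abstract void drawScene(int eye);       // normal scene render for one eye
        abstract void drawDistortion(int eye);  // warp pass (quad + shader, or a mesh)

        void renderFrame() {
            for (int eye = 0; eye < 2; ++eye) {
                // 1. Render this eye's view into its offscreen framebuffer.
                glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo[eye]);
                glViewport(0, 0, fboWidth, fboHeight);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                drawScene(eye);

                // 2. Switch back to the window, restrict the viewport to this
                //    eye's half of the display, and draw the distortion pass
                //    textured with the scene we just rendered.
                glBindFramebuffer(GL_FRAMEBUFFER, 0);
                glViewport(eye * (screenWidth / 2), 0, screenWidth / 2, screenHeight);
                glBindTexture(GL_TEXTURE_2D, sceneTexture[eye]);
                drawDistortion(eye);
            }
        }
    }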

As for the distortion, I'm using a new (to me, anyway) approach of using a mesh rather than a shader to do the heavy lifting. The Oculus SDK examples have you take the scene that has been rendered to a framebuffer, render it to a quad covering the whole display (or half of the display, depending on whether you're distorting each eye individually), and use a hefty shader to do the distortion.

Mesh based distortion instead pre-computes the distorted locations of a set of points on a rectangular mesh. You can then render this mesh with the scene texture painted on it with conventional texture coordinates and a simple shader.
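
A minimal sketch of that precomputation is below. The warp polynomial and its coefficients are illustrative placeholders (not the values from the SDK or from Rifty), and the lens-center offset is left out entirely.

    import java.util.ArrayList;
    import java.util.List;

    // Builds interleaved (x, y, u, v) vertices for one eye's distortion mesh.
    class DistortionMeshBuilder {
        // Illustrative radial coefficients only -- tune for the real device.
        static final float K0 = 1.0f, K1 = 0.22f, K2 = 0.24f;

        // Radially scales a point around the (here, centered) lens axis.
        static float[] warp(float x, float y) {
            float rSq = x * x + y * y;
            float scale = K0 + K1 * rSq + K2 * rSq * rSq;
            return new float[] { x * scale, y * scale };
        }

        // gridSize x gridSize cells spanning the [-1, 1] viewport for one eye.
        static List<Float> build(int gridSize) {
            List<Float> verts = new ArrayList<Float>();
            for (int iy = 0; iy <= gridSize; ++iy) {
                for (int ix = 0; ix <= gridSize; ++ix) {
                    float u = (float) ix / gridSize;          // conventional texcoords,
                    float v = (float) iy / gridSize;          // left untouched
                    float[] p = warp(u * 2 - 1, v * 2 - 1);   // only the position is warped
                    verts.add(p[0]); verts.add(p[1]);
                    verts.add(u);    verts.add(v);
                }
            }
            return verts;  // upload to a VBO and index it as a grid of triangles
        }
    }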

Here is the mesh drawn as a wireframe with the texture coordinates represented as red and green:

[Image: MeshDistortion.png]

There are some drawbacks to this approach. For one, the distortion isn't as accurate from pixel to pixel. This drawback would also apply to a rendering approach that used a lookup texture for distortion, if the lookup texture was of lower resolution than the framebuffer being distorted.

Another drawback is that it currently has no mechanism for chroma correction. There are two paths forward for that correction that I can think of. The first would be to generate three meshes, one per color channel, and enable writes to only that channel as you render each mesh in turn. The other option is to fall back to the shader implementation to manipulate the texture coordinates. However, since this would require you to transform the texture coordinates into Rift space and then back into texture space, you'd end up with a shader almost as complex as the existing ones, just minus the barrel warp.
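
For what it's worth, the first option could look roughly like this. It's only a sketch: drawMesh and the per-channel mesh handles are hypothetical, and each of the three meshes would be built with that channel's distortion coefficients.

    import static org.lwjgl.opengl.GL11.glColorMask;

    // Draw three per-channel meshes, masking writes to one channel each.
    abstract class ChromaMeshPass {
        int redMesh, greenMesh, blueMesh;   // placeholder mesh handles
        abstract void drawMesh(int mesh);   // submit one pre-warped mesh

        void draw() {
            glColorMask(true, false, false, false);   // red writes only
            drawMesh(redMesh);
            glColorMask(false, true, false, false);   // green writes only
            drawMesh(greenMesh);
            glColorMask(false, false, true, false);   // blue writes only
            drawMesh(blueMesh);
            glColorMask(true, true, true, true);      // restore full writes
        }
    }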

For the time being I'm ignoring the chroma distortion because I'm pathologically lazy and because my glasses already introduce so much chroma distortion into my world that it's basically a non-issue for me. However, I'm having surgery tomorrow :shock: to correct my eyes and hopefully reduce or eliminate the need for glasses with such heavy prism. After that I might find more interest in working on chroma based solutions.

Many thanks to Joe Ludwig from Valve who turned me on to the mesh based approach to distortion.
Brad Davis - Developer for High Fidelity
Co-author of Oculus Rift in Action

Comments

  • cybereality Posts: 26,156 Oculus Staff
    Interesting technique.

    Is this more cost effective than a distortion pixel shader?
    AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i
    Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV
  • owenwp Posts: 681 Oculus Start Member
    It is on mobile and maybe intel chipsets, because they can only stay on the fast path if you pass texture coords right from the vertex interpolators to the sampler, without doing any math. This is a pretty common optimization for mobile post processing (the most common example is computing all of your blur samples in the VS, instead of adding offsets in the PS, for a big savings), because the smaller chips have simpler pipelines that are easier to stall.

    On a discrete chipset on PC it won't really matter. The parallel nature of pixel shader execution makes it pretty efficient at this sort of thing, and it puts less pressure on the fixed-function stages, which cannot load balance with other execution units. So you should measure, but when the performance gain isn't a clear win, pick the method that is more precise. Lookup textures are usually the fastest, though.
  • jherico Posts: 1,419
    Nexus 6
    owenwp wrote:
    It is on mobile and maybe intel chipsets, because they can only stay on the fast path if you pass texture coords right from the vertex interpolators to the sampler, without doing any math.

    This is the guidance I got from Joe @ Valve: that this approach would improve performance on Android devices in particular, hence the Java implementation. I haven't tested it yet, though.
    owenwp wrote:
    Lookup textures are usually the fastest though.

    My experience has shown otherwise, at least on my hardware. Direct computation is fastest on my GeForce 650 Ti. Of course, I'm offloading a significant amount of the computation that's done in the fragment shader in the SDK examples and placing it in the vertex shader. All of the pre-coefficient computation can be done there, since it's simple translation/scaling operations.

    It's not clear to me that a texture lookup using a texture of the same resolution as a given mesh won't be just as imprecise as the mesh version, if not more so. The lookup texture has to include the 'out of bounds' areas, because its storage still has to be rectangular. The mesh approach, on the other hand, uses every last vertex as part of its rendering of the output, so the resulting distance in pixels between interpolated points on the mesh will be smaller.
    Brad Davis - Developer for High Fidelity
    Co-author of Oculus Rift in Action
