I've just pushed a new Java example for rendering to the Rift to my public example repository here.
There are three individual subprojects, each defined using a Maven pom.xml file.
The first project is in /resources. This holds all my non-code resources, like the shader definitions, images, and meshes. It's a little oddly defined because it's intended to share the non-code resources with my C++ projects. You can import the project into Eclipse, but you may need to manually set up the classpath so that the resources are found properly (Eclipse doesn't seem to do it correctly, even though the Maven jar file is properly built).
The second project is in /java/Glamour. This is a set of OpenGL wrappers to make GL object and shader management suck less, similar to a set of template classes I use in C++. Shaders, textures, framebuffers, vertex buffers, and vertex arrays are all encapsulated.
The last project is in /java/Rifty. This is Rift-specific stuff, unsurprisingly. There is a small rendering test program in src/test/java/org/saintandreas/RiftDemo.java that draws a simple spinning cube.
The example is very raw. The projection isn't quite right yet, because I haven't replicated all the logic from Util_Render_Stereo.cpp. Instead I'm using a lot of hard-coded constants, but it should convey the idea of the rendering mechanism you need to use in LWJGL to do the distortion.
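To make that mechanism concrete, here's a minimal sketch of the two-pass, per-eye structure. This is not the code from the repo; the field and method names are just illustrative, and the GL calls are LWJGL's static bindings:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

/** Per-eye two-pass rendering: scene into an FBO, then distortion mesh to screen. */
class StereoRenderer {
    // All of these would be created during GL init; the names are illustrative.
    int[] sceneFbo = new int[2];      // one offscreen framebuffer per eye
    int[] sceneTexture = new int[2];  // color attachment of each framebuffer
    int fboWidth, fboHeight;          // offscreen render target size
    int displayWidth, displayHeight;  // Rift display size (e.g. 1280x800)

    void renderFrame() {
        for (int eye = 0; eye < 2; ++eye) {
            // Pass 1: render the scene for this eye into the offscreen framebuffer.
            glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo[eye]);
            glViewport(0, 0, fboWidth, fboHeight);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene(eye);

            // Pass 2: draw the pre-distorted mesh into this eye's half of the
            // display, sampling the scene texture with a trivial shader.
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glViewport(eye * displayWidth / 2, 0, displayWidth / 2, displayHeight);
            glBindTexture(GL_TEXTURE_2D, sceneTexture[eye]);
            drawDistortionMesh(eye);
        }
    }

    void drawScene(int eye) { /* per-eye projection + scene geometry */ }
    void drawDistortionMesh(int eye) { /* textured mesh, simple shader */ }
}
```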
As for the distortion, I'm using a new (to me, anyway) approach: a mesh, rather than a shader, does the heavy lifting. The Oculus SDK examples have you take the scene that has been rendered to a framebuffer and render it to a quad that takes up the whole display (or half of the display, depending on whether you're distorting each eye individually), using a hefty shader to do the distortion.
Mesh-based distortion instead pre-computes the distorted locations of a set of points on a rectangular mesh. You can then render this mesh with the scene texture painted onto it using conventional texture coordinates and a simple shader.
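Here's a sketch of what that pre-computation might look like (again, not the repo's actual code). It assumes the SDK's barrel distortion polynomial with its four coefficients from HMDInfo; the values below are the commonly cited DK1 defaults, and the Newton inversion is just one way to map a regular texture grid back to distorted positions:

```java
/**
 * Builds a distortion mesh, assuming the SDK's polynomial
 *   scale(rSq) = K0 + K1*rSq + K2*rSq^2 + K3*rSq^3
 * where rSq is the squared distance from the lens center. The mesh needs the
 * inverse of that mapping, found here with a few Newton iterations.
 */
class DistortionMeshBuilder {
    // Commonly cited DK1 defaults; the real values come from HMDInfo.
    static final float[] K = { 1.0f, 0.22f, 0.24f, 0.0f };

    static float scale(float rSq) {
        return K[0] + rSq * (K[1] + rSq * (K[2] + rSq * K[3]));
    }

    // Solve r * scale(r^2) = bigR for r, then shrink (tx, ty) by r / bigR.
    static void undistort(float tx, float ty, float[] out) {
        float bigR = (float) Math.sqrt(tx * tx + ty * ty);
        float r = bigR;
        if (bigR > 1e-6f) {
            for (int i = 0; i < 6; ++i) {
                float rSq = r * r;
                float s = scale(rSq);
                float ds = K[1] + rSq * (2 * K[2] + 3 * rSq * K[3]);
                r -= (r * s - bigR) / (s + 2 * rSq * ds); // Newton step
            }
        }
        float ratio = (bigR > 1e-6f) ? r / bigR : 1;
        out[0] = tx * ratio;
        out[1] = ty * ratio;
    }

    /** Interleaved (posX, posY, texU, texV) vertices for an n x n grid. */
    static float[] build(int n) {
        float[] verts = new float[n * n * 4];
        float[] p = new float[2];
        int i = 0;
        for (int y = 0; y < n; ++y) {
            for (int x = 0; x < n; ++x) {
                float u = (float) x / (n - 1);      // conventional texcoords
                float v = (float) y / (n - 1);
                undistort(u * 2 - 1, v * 2 - 1, p); // only the position moves
                verts[i++] = p[0];
                verts[i++] = p[1];
                verts[i++] = u;
                verts[i++] = v;
            }
        }
        return verts;
    }
}
```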
Here is the mesh drawn as a wireframe with the texture coordinates represented as red and green:
There are some drawbacks to this approach. For one, the distortion isn't as accurate from pixel to pixel, since positions between mesh vertices are only linearly interpolated. This drawback would also apply to a rendering approach that used a lookup texture for the distortion, if the lookup texture were of lower resolution than the framebuffer being distorted.
Another drawback is that it currently has no mechanism for chroma correction. I can think of two paths forward for that. The first would be to generate three meshes, one per color channel, and enable writes to only that channel as you render each mesh in turn. The other is to turn once again to the shader to manipulate the texture coordinates. However, since that would require you to transform the texture coordinates into Rift space and then back into texture space, you'd end up with a shader almost as complex as the existing ones, just minus the barrel warp.
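As a rough sketch of the first option (not working code from the repo; the mesh handles and draw method are hypothetical), glColorMask makes the per-channel rendering straightforward:

```java
import static org.lwjgl.opengl.GL11.glColorMask;

/** Sketch of per-channel chroma correction; the names here are hypothetical. */
class ChromaCorrection {
    static final int RED_MESH = 0, GREEN_MESH = 1, BLUE_MESH = 2;

    void drawChromaCorrected(int eye) {
        glColorMask(true, false, false, true);  // write red (and alpha) only
        drawDistortionMesh(eye, RED_MESH);
        glColorMask(false, true, false, true);  // green only
        drawDistortionMesh(eye, GREEN_MESH);
        glColorMask(false, false, true, true);  // blue only
        drawDistortionMesh(eye, BLUE_MESH);
        glColorMask(true, true, true, true);    // restore normal color writes
    }

    void drawDistortionMesh(int eye, int whichMesh) {
        // draw the mesh built with that channel's distortion coefficients
    }
}
```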
For the time being I'm ignoring the chroma distortion because I'm pathologically lazy and because my glasses already introduce so much chroma distortion into my world that it's basically a non-issue for me. However, I'm having surgery tomorrow :shock: to correct my eyes and hopefully reduce or eliminate the need for glasses with such heavy prism. After that I might find more interest in working on chroma-based solutions.
Many thanks to Joe Ludwig from Valve, who turned me on to the mesh-based approach to distortion.