SDK 0.3.1 feedback (OpenGL)

sth
Level 2
I just wanted to give some feedback on Oculus SDK 0.3.1 in combination with OpenGL.
Despite a few problems (see below), I think the API is a step in the right direction since it's more straightforward and requires less code.

A few days ago, I started porting my codebase over from 0.2.5 to 0.3.1 and wrote down everything I noticed in the process – so forgive me if this post is a bit of a random collection of stuff.


Getting SDK rendering to work
So far, I have not gotten SDK rendering to work. Opening the HMD and reading the sensor orientation works (with exceptions, see below). Configuring the SDK renderer doesn't return an error but calling ovrHmd_EndFrame() results in a black screen being rendered. This could very well be a problem on my side, but since there's no OpenGL sample code available, I can't really check.

Here's my setup:

1. Create a new OpenGL framebuffer object as my render target

  • Color format: GL_RGB / GL_UNSIGNED_BYTE

  • Depth format: GL_DEPTH24_STENCIL8 / GL_UNSIGNED_INT

  • Size is 1928x1432 for DK1 (calculation based on the SDK sample code)

2. Configure SDK rendering

  • I set the texture ID for both eyes to the target FBO's color buffer texture ID, and set TextureSize and RenderViewport for each eye based on the SDK sample code

  • I'm using wglGetCurrentContext() for the WglContext, wglGetCurrentDC() for the GdiDC and the HWND returned by SDL2 (info.win.window) for the Window parameter

In my rendering loop, I do the following:

3. Call ovrHmd_BeginFrame()
4. For each eye: Call ovrHmd_BeginEyeRender(), render my scene and call ovrHmd_EndEyeRender()
5. Bind FBO 0, so stuff gets drawn on the screen and not on the target FBO
6. Call ovrHmd_EndFrame()

The result is a black screen. I can see in gDEBugger that the GL calls are being sent and that the FBO's color buffer texture gets bound correctly (it contains two side-by-side views of the scene, as it should).

Did I miss something?
[edit]: Vertex order was the problem (see below) – thanks to jherico!


Miscellaneous Observations
Apart from the rendering problem, here are some of the things I noted:


  • The documentation is still a bit rough (e.g. the members of Sizei and the parameters for ovrHmd_EndFrame() have changed compared to the docs). It usually helps to look at the OculusRoomTiny example next to the documentation.

  • ovrHmd_EndFrame() sets glClearDepth to 0 without resetting it afterwards (hooray for gDEBugger). I know saving and restoring OpenGL state can be problematic, but the SDK rendering functions should at least reset any state they touch back to OpenGL defaults afterwards. Otherwise, things get unpredictable and very hard to debug.

  • So far, I haven't been able to get a usable projection matrix from ovrMatrix4f_Projection()
    [edit]: Fixed – I forgot to transpose the matrix.



Questions
Last but not least, some quick questions:


  • Does SDK rendering require a depth buffer to be present? Previously, I only had a depth buffer on my target FBO, but not the screen itself. I changed that now, while debugging the SDK rendering problem, but I wonder if it's needed.
    Works without one, so let's restate the question: Should there be a depth buffer present (maybe for future use)?

  • The new SDK docs do not mention latency tester support. Is that part not finished yet or is latency tester support automatically enabled via SDK rendering?
    Latency tester automatically works when SDK rendering is enabled.
4 REPLIES

jherico
Level 5
Take a look at the changes I've made in the Oculus SDK here. It's possible you're encountering the same issue I was, which is that the current rendering mechanism isn't compatible with OpenGL 3.x core profiles. Core profiles require a VAO to be active for drawing commands to work, and the SDK doesn't use VAOs, so if you're using a core profile, nothing will render.

If you're not using core profiles, there's at least one other thing that can cause rendering to fail: the SDK uses clockwise winding for the vertices in the distortion mesh, so if you have face culling enabled and your front faces are set to GL_CCW, then again, nothing will render.

I suggest you add the following immediately before the end frame call


glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);


If that doesn't fix the issue, then I suggest you take a look at my changes in the above link and consider applying them, or alternatively see if you can work with a compatibility profile in OpenGL.

sth
Level 2
"jherico" wrote:
If you're not using core profiles, there's at least one other thing that can cause rendering to fail: the SDK uses clockwise winding for the vertices in the distortion mesh, so if you have face culling enabled and your front faces are set to GL_CCW, then again, nothing will render.

You are a hero! 😄 That was exactly the problem. I totally didn't think of this.
This is a thing Oculus needs to fix, as the default order in OpenGL is counter-clockwise (GL_CCW). At the very least, the SDK renderer should disable (and reenable) culling automatically.

Also thanks for your work on core profile compatibility. I have a core profile render path in my engine but I did all my testing in legacy mode so far.

[edit]: I also found out what my projection matrix problem was: I forgot to transpose the matrix I got from the SDK.

jherico
Level 5
"sth" wrote:
You are a hero! 😄


Yes. Yes I am.


"sth" wrote:
This is a thing Oculus needs to fix, as the default order in OpenGL is counter-clockwise (GL_CCW). At the very least, the SDK renderer should disable (and reenable) culling automatically.


More specifically, they should properly set the winding, or disable face culling, and document that these state settings should be restored by the application to the desired values. The basic problem with changing state in a library is that it's not simple to fiddle with a state and then put it back. In order to do so, you'd need to make glGet* calls to query what the state currently is, so that you can set it back to that value when you're done – but glGet* functionality should generally be avoided, since it can stall the OpenGL pipeline while it synchronizes the client and server threads in order to return the state you're asking for. One of the fundamental things to understand about modern OpenGL is that the application is responsible for knowing what the OpenGL state should be at any given moment, so that it can simply apply the state it wants from operation to operation, without triggering any synchronization issues.

The SDK docs should say "We change the GL state in these ways: X, Y, Z. You should restore the default using these calls: A(), B(), C(), or reset the state to the values your application needs if it doesn't work with the defaults."

The basic problem is that they have a rendering abstraction (released with their examples in the CommonSrc directory) on which this code is all clearly based. But unfortunately they continue to use the abstraction along with the code that migrated into the CAPI for distortion. This means it's not immediately obvious to them where the new distortion mechanism is still relying on state set by the outer renderer. Hence, they do change glClearDepth and don't change it back, because the renderer they use reinitializes it to a good value at the start of every frame. Similarly they don't set the winding order because the renderer already puts GL into the state they need.

To test this kind of thing you really need to build up at least one or two different applications using different rendering code, or even something just using raw OpenGL to make it clear where the SDK is failing to cross the chasm.

sth
Level 2
Yeah, pipeline stalls are bad – that's why I suggested setting the OpenGL state to the default values instead; that way, you at least know what you're gonna get. I totally agree that the documentation needs to be updated regarding these problems, and I'm pretty sure they'll do so for the final release (after all, 0.3.1 is only a preview right now).

Funnily enough, ovrHmd_EndFrame() is one of the very few cases I can think of where a pipeline stall might not be that much of a problem after all. That's because at this point the SDK forces the GPU to finish the frame anyway – ovrHmd_EndFrame() calls glFlush() and glFinish() for latency reasons.

[edit]: Just found out that SDK rendering automatically includes latency tester support, which means I can delete even more code from my project. 8-)