Can I just blit a texture into the Oculus backbuffer?

ksleet
Honored Guest
I'm integrating Oculus into my own OpenGL engine, and trying to figure out if this technique will work. Essentially, I'm rendering the left eye and right eye views into my own backbuffer, and then directly blitting the whole thing into the Oculus backbuffer like this (where "backbuffer" is my input texture):
// Get the render target for this frame
int currentIndex = 0;
ovr_GetTextureSwapChainCurrentIndex(mOVRSession, mOVRTextureSwapChain, &currentIndex);
GLuint currentRenderTargetTextureId;
ovr_GetTextureSwapChainBufferGL(mOVRSession, mOVRTextureSwapChain, currentIndex, &currentRenderTargetTextureId);

// Attach current texture to framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFramebufferId);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, currentRenderTargetTextureId, 0);

// Clear target
glClearColor(0.5f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Copy input texture to framebuffer
GLuint inputFramebufferId = backbuffer->GetFrameBufferHandle();
glBindFramebuffer(GL_READ_FRAMEBUFFER, inputFramebufferId);
glBlitFramebuffer(
    0, 0, mBackbufferSize.w, mBackbufferSize.h,  // source rect (my backbuffer)
    0, 0, mBackbufferSize.w, mBackbufferSize.h,  // destination rect (swap chain texture)
    GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Submit texture to HMD
ovr_CommitTextureSwapChain(mOVRSession, mOVRTextureSwapChain);
ovrLayerHeader* layers = &mMainLayer.Header;
ovr_SubmitFrame(mOVRSession, mCurrentFrameIndex, nullptr, &layers, 1);
When I look in the HMD, I'm finding that half of my backbuffer is present as a screen floating in space ninety degrees to the left in a black void. Which is kind of weird! So, two questions:

1. Is this possible to do at all, or is my only option getting the backbuffer from OVR and rendering my scene directly into it?
2. If this is possible, what could lead to this situation? Layer setup, perhaps? My layers are an exact copy of the eyeFov example from the SDK docs.

galopin
Heroic Explorer
It is of course possible, and your problem is easy to explain 🙂 When you prepare a frame to commit, you fill a layer structure; my guess is that the source image bounds and the tracking data in it are wrong, which leads the timewarp to be wildly off.

ksleet
Honored Guest
Well, as far as tracking data goes, in order to narrow down the problem I've actually turned off the camera motion in my own scene! I'm never asking for the tracking data at all.

I should add that the "screen" floating in space behaves correctly with regard to my head orientation when wearing the HMD. I can look around, look towards it, look away, and everything updates smoothly and accurately. That's why I'm wondering if it's something to do with layers.

ksleet
Honored Guest
As an added note, if I adjust the RenderPose of the layer I can rotate the "screen"'s position, so that clearly has something to do with it. Here's how I'm setting up the layer:

// Query the per-eye render descriptions for the default FOVs
ovrEyeRenderDesc eyeRenderDesc[2];
eyeRenderDesc[0] = ovr_GetRenderDesc(mOVRSession, ovrEye_Left, hmdDesc.DefaultEyeFov[0]);
eyeRenderDesc[1] = ovr_GetRenderDesc(mOVRSession, ovrEye_Right, hmdDesc.DefaultEyeFov[1]);

// Both eyes share one swap chain texture, split side by side:
// left eye in the left half, right eye in the right half
mMainLayer.Header.Type = ovrLayerType_EyeFov;
mMainLayer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft;
mMainLayer.ColorTexture[0] = mOVRTextureSwapChain;
mMainLayer.ColorTexture[1] = NULL;
mMainLayer.Fov[0] = eyeRenderDesc[0].Fov;
mMainLayer.Fov[1] = eyeRenderDesc[1].Fov;
mMainLayer.Viewport[0].Pos.x = 0;
mMainLayer.Viewport[0].Pos.y = 0;
mMainLayer.Viewport[0].Size.w = mBackbufferSize.w / 2;
mMainLayer.Viewport[0].Size.h = mBackbufferSize.h;
mMainLayer.Viewport[1].Pos.x = mBackbufferSize.w / 2;
mMainLayer.Viewport[1].Pos.y = 0;
mMainLayer.Viewport[1].Size.w = mBackbufferSize.w / 2;
mMainLayer.Viewport[1].Size.h = mBackbufferSize.h;

I don't touch the structure again after this point.

galopin
Heroic Explorer
That is what I said. The layer has to contain a valid orientation and position (from the tracking), because the Rift evaluates the tracking again just before applying the distortion and timewarp. If you do not set anything, this is the result: it will try to apply a crazy correction because it believes your image was rendered from a completely different point of view.

ksleet
Honored Guest
I shouldn't have doubted you 😄 I updated the layer with the current pose data and it's working as expected now. Thanks very much!
// Compute per-eye poses from the current head pose and store them
// in the layer's RenderPose before submitting
const ovrTrackingState& ts = GetTrackingState(); // my internal function which gets this
ovr_CalcEyePoses(ts.HeadPose.ThePose, mHmdToEyeOffsets, mMainLayer.RenderPose);

galopin
Heroic Explorer
Ideally, that is not how you do it. Timewarp is a system that helps a lot in reducing the feeling of latency and in fighting dizziness. When you are about to render a new frame, you interrogate the tracking sensors and evaluate a position and orientation, not for the current time, but for a predicted future time that should be as close as possible to when your image will actually be displayed. ovr_GetPredictedDisplayTime is what gives you that future time.

Now, because you have neither the current position nor a perfect prediction, the Rift software samples the sensors once more, as close as possible to the moment the picture is displayed, to account for the deviation between the prediction and reality.

If you interrogate the tracking just before you call submit, you lose one of the most important features for fighting latency and sickness. (The asynchronous part of the system is just the cherry on top of the cake.)
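
For reference, the full recommended ordering looks roughly like this, as a sketch assembled from the same SDK 1.x calls used earlier in the thread (RenderSceneToBackbuffer and BlitToSwapChain are hypothetical stand-ins for the engine-specific rendering and blit steps shown above):

// Predict when this frame will actually reach the display
double displayTime = ovr_GetPredictedDisplayTime(mOVRSession, mCurrentFrameIndex);

// Sample the tracking state for that predicted time, *before* rendering.
// The ovrTrue latency marker lets the SDK measure motion-to-photon latency.
ovrTrackingState ts = ovr_GetTrackingState(mOVRSession, displayTime, ovrTrue);
mMainLayer.SensorSampleTime = ovr_GetTimeInSeconds(); // record when tracking was sampled

// Record the per-eye poses in the layer so timewarp knows which head pose
// the image was rendered from, and render the scene using those same poses
ovr_CalcEyePoses(ts.HeadPose.ThePose, mHmdToEyeOffsets, mMainLayer.RenderPose);
RenderSceneToBackbuffer(mMainLayer.RenderPose); // hypothetical engine call
BlitToSwapChain();                              // hypothetical engine call

// Commit and submit; timewarp then corrects only for the small deviation
// between the predicted pose and the pose at actual display time
ovr_CommitTextureSwapChain(mOVRSession, mOVRTextureSwapChain);
ovrLayerHeader* layers = &mMainLayer.Header;
ovr_SubmitFrame(mOVRSession, mCurrentFrameIndex, nullptr, &layers, 1);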