Our game is currently on the 1.0.3 SDK using the DrawEyeView path.
I’ve looked into the RENDER_MULTIVIEW approach, and while I understand its purpose, I find it quite limiting, and adopting it would require a fair amount of refactoring. Right now our priority is implementing the Gear VR controller, so I’d rather not rework the rendering pipeline around this.
One possible solution would be to render my two eye images offscreen and then pass them in as an “ovr surface”. All I’d submit is two quads with two textures, one per eye. Is that a viable solution? If so, is there sample code somewhere that already does this?
Also, I see this RENDERMODE_STEREO, but I don’t see it used in any of the samples, so I’m not sure whether it would be a valid alternative.
I see that in VrCubeWorld_NativeActivity.c ...I should have said this more clearly: we’re trying to replace the DrawEyeView path, so we’re using VrAppInterface, which means we’d have to use ovrSurfaceDef to render anything to the final display.
But I’m assuming the official answer at this point is to go “full native”, or to pass draw calls as in VrCubeWorld_Framework.cpp.
I am facing a similar situation, and I like the proposed idea of drawing offscreen and then passing those textures in as an ovr surface, but I haven’t quite figured out how to pull it off. Did this ever get sorted out, and is there any sample that might serve as a guide?
I ended up creating one of those OVR surfaces and using it for a blit (a full-screen quad) of the left- and right-eye textures produced by the engine’s renderer. The shader samples one texture or the other depending on the “view ID”.
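For what it’s worth, the fragment shader for that blit could look something like the sketch below. This is not code from the SDK samples, just an illustration of the idea: the app binds both offscreen eye textures and tells the shader which eye is being rendered. The uniform and varying names (`uLeftEyeTex`, `uRightEyeTex`, `uEyeIndex`, `vTexCoord`) are hypothetical; with multiview enabled you would use the built-in `gl_ViewID_OVR` from the `GL_OVR_multiview2` extension instead of an app-supplied eye index.

```glsl
#version 300 es
precision mediump float;

// Offscreen render targets produced by the engine, one per eye.
// (Names are placeholders, not from the SDK samples.)
uniform sampler2D uLeftEyeTex;
uniform sampler2D uRightEyeTex;

// Set by the app per eye: 0 = left, 1 = right.
// With RENDER_MULTIVIEW you would branch on gl_ViewID_OVR instead.
uniform int uEyeIndex;

in vec2 vTexCoord;
out vec4 outColor;

void main()
{
    // Pick the eye texture for the current view and blit it
    // onto the full-screen quad submitted as the ovrSurfaceDef.
    if (uEyeIndex == 0)
        outColor = texture(uLeftEyeTex, vTexCoord);
    else
        outColor = texture(uRightEyeTex, vTexCoord);
}
```

The branch is uniform across the whole draw, so it costs essentially nothing; alternatively you could bind a single texture per eye pass and skip the branch entirely.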