Possible to Layer Stereo and Mono Camera Rigs?

Level 3
Working on porting Z0NE to the GearVR, I noticed that when I edit the OVRCameraController script to use monoscopic rendering, I can make more complex levels while still hitting 60 FPS. I was also surprised that I didn't really miss the stereoscopic effect except in the cockpit. So I'm wondering if it might be possible to render the near field stereoscopically, i.e. a 0.1 to 6 meter clip range for just the cockpit and the aiming reticle, and then render the level monoscopically. Can this be done with the current Unity integration?
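For anyone following along, the basic near/far split described above can be sketched with two ordinary Unity cameras layered by depth. This is a minimal, hypothetical sketch (the camera names and the attachment to a rig are assumptions, not part of the OVR SDK):

```csharp
using UnityEngine;

// Sketch: a mono "far" camera layered under a "near" camera.
// The far camera draws the level once; the near camera draws
// the cockpit and reticle on top, clearing only the depth buffer.
public class LayeredRigSketch : MonoBehaviour
{
    public Camera farCamera;   // distant scenery, rendered mono
    public Camera nearCamera;  // cockpit + reticle, 0.1–6 m

    void Start()
    {
        farCamera.depth = 0;                           // draws first
        farCamera.clearFlags = CameraClearFlags.Skybox;
        farCamera.nearClipPlane = 6f;
        farCamera.farClipPlane = 10000f;

        nearCamera.depth = 1;                          // draws on top
        nearCamera.clearFlags = CameraClearFlags.Depth; // keep far image
        nearCamera.nearClipPlane = 0.1f;
        nearCamera.farClipPlane = 6f;
    }
}
```

As the replies below explain, this naive layering works in the editor and on the desktop SDK, but the Gear VR integration complicates it.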

Level 15
Yes, you can totally do this (it's being done on Titans of Space).

Here are instructions for the PC SDK; mobile may be similar:


Level 3
Oh nice. Thanks for such a quick reply! Yes, I remember reading a blog post Steve wrote where he described experimenting with it. I think I can take it from here.

Level 3
One thing.. I cannot find the 'Multi Camera' sample anywhere in the OVR SDK. What am I missing?

Level 3
Never mind. I found it. I was looking in the Gear integration which does not have it.

Level 3
So I went through and did my best to follow the example scene with the changes listed in that post, but it's not quite working properly yet. I made a video showing what I have now. I would sure love to get this working.

Level 5
Looking sweet, maybe Drash can help.

Level 7
Hi, this thread just came to my attention, and I'll try to shed a bit of light.

First of all, I heard your comment at the end of your video that the behavior is different on Gear VR. This is because stacking multiple OVRCameraControllers may have worked in the older desktop PC SDK, but when you're actually building for Gear VR, a sizable amount of the compiled code is different from what you're able to preview with on the desktop. If you look inside the OVRCameraController script, there's a ton of code wrapped in #if UNITY_ANDROID && !UNITY_EDITOR ... #else ... #endif directives. Inside some of the Android-specific code you'll see that camera clear flags are being forced, and that the code in general expects there to be just one OVRCameraController, since timewarp and lens correction happen at the same time, asynchronously, inside the Oculus plugin.

The Oculus plugin just needs two (separate) eye textures, so the key is just to make sure you've rendered the background imagery onto those two textures before the nearby scenery is drawn normally by a single OVRCameraController.

For example, what I did in Titans of Space for Gear VR is to have a standard "far" camera render the distant scenery once onto a render texture which happens to be the left eye texture for that frame (this changes every frame), and then draw a "fullscreen" quad onto the right eye texture using the same render texture. If you're able to force GLES3 without crashing or other glitches, you can try using the Multiple Render Target feature to just render the distant scenery to both eye textures at once, which would likely be a cleaner solution and possibly improve performance further.

That said, it wasn't all that straightforward, and you'll definitely need to script your own solution for this (mine's far too integrated with my project, since I was rushing to get that out the door for launch): grabbing the current frame's eye textures, managing your far camera, rendering the imagery onto the second eye, etc.
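The render-once-then-copy approach described above could be sketched roughly as follows. This is a heavily hedged illustration: obtaining the current frame's eye textures is specific to the Oculus mobile integration, so GetEyeTexture() below is a hypothetical stand-in, not a real API, and the whole class is an assumption about how one might structure it:

```csharp
using UnityEngine;

// Sketch of the Titans of Space technique: render the distant scenery
// once into one eye's texture, then copy it to the other eye before the
// OVRCameraController draws the near scene normally.
public class FarSceneCompositor : MonoBehaviour
{
    public Camera farCamera; // mono camera for distant scenery only

    void OnPreRender()
    {
        // Hypothetical accessors — in practice you must fetch the
        // current frame's eye textures from the Oculus integration.
        RenderTexture leftEye = GetEyeTexture(0);
        RenderTexture rightEye = GetEyeTexture(1);

        // Draw the far scene once, directly into the left eye texture.
        farCamera.targetTexture = leftEye;
        farCamera.Render();

        // Copy the result into the right eye. This stands in for the
        // "fullscreen quad" pass described above.
        Graphics.Blit(leftEye, rightEye);
    }

    RenderTexture GetEyeTexture(int eye)
    {
        // Placeholder only: SDK-specific lookup goes here.
        throw new System.NotImplementedException();
    }
}
```

The MRT variant mentioned above (rendering to both eye textures at once under GLES3) would replace the Blit with a single multi-target pass, trading portability for one fewer copy.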

Good luck. I can definitely see this helping with performance in Z0ne.

Level 3
Thanks so much Steve! This is like a treasure hunt for me and you just gave me a big clue.

I've suspected that the OVRCameraController is not designed to have more than one instance in a scene, and I think the 'jittery' behavior I see on the Gear has something to do with that.

I've been working on a native C++ plugin for another project, and there is a place where you need to issue a command like GL.IssuePluginEvent( eventID ) to get the plugin to 'do its thing'. So unless you write specific code in your native plugin to account for multiple instances running at the same time, they will all respond to the IssuePluginEvent calls in your Unity .cs scripts, causing all sorts of trouble.
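One way around the collision described above (illustrative only — the bit-packing scheme is an assumption, not part of any SDK) is to encode an instance id into the event id, so the native UnityRenderEvent handler can dispatch to the right plugin instance:

```csharp
using UnityEngine;

// Sketch: namespace plugin render events per instance so multiple
// native plugin instances don't all react to the same event id.
public class PluginEventDispatch : MonoBehaviour
{
    const int kInstanceShift = 16;    // high bits: plugin instance
    public int pluginInstanceId = 1;  // assigned uniquely per instance

    public void TriggerPluginEvent(int eventId)
    {
        // Native side would unpack:
        //   instance = id >> 16;  event = id & 0xFFFF;
        GL.IssuePluginEvent((pluginInstanceId << kInstanceShift) | eventId);
    }
}
```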

What you are telling me is that what I want to do is not actually supported directly in the Gear SDK, although it is supported in the desktop SDK; i.e. the MultiCamera example which ships with the desktop Unity SDK will just not work on the Gear, even after being fixed according to the earlier reply in this thread. You have come up with a really nice hack, and I'm going to try to replicate it or improve upon it. Great work.

Level 3
So I tried to implement my own version of your basic approach to layered rendering, Steve, and it works really well in the editor! For some reason, though, my render textures show up as black quads on the Gear.

Here is my rig. If you or anyone else has any idea what I might do to get it working on the Gear, that'd be awesome. - GearVR: Combination Stereo/Mono Camera Rig in Unity [ YouTube ]