Separate Loading Screen

CrazyNorman
Protege
I'd like to draw to the HMD from two different threads, each having its own ID3D11Device. I believe I actually saw a post from Cyber suggesting calling SubmitFrame from a background thread to draw a loading screen while content is loading in the primary thread.

Problem is... I can't. If I try to set up two different swap chains, one for each device, on the same session, "ovr_CreateTextureSwapChainDX" fails with ovrError_InvalidParameter, pointing out that my d3dDevice pointer was different.

On the other hand, if I try to create a second session using "ovr_Create" the second call will fail with -1006 (ovrError_ServiceError).
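
For reference, here's roughly what the failing setup looks like (assuming LibOVR 1.x signatures; device and session creation elided, and the swap-chain dimensions are just for illustration):

    #include <OVR_CAPI_D3D.h>

    // 'session' was created with ovr_Create; deviceA/deviceB are the two
    // ID3D11Device pointers (one per thread). Error handling trimmed.
    ovrTextureSwapChainDesc desc = {};
    desc.Type        = ovrTexture_2D;
    desc.Format      = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.ArraySize   = 1;
    desc.Width       = 1344;   // illustrative size
    desc.Height      = 1600;
    desc.MipLevels   = 1;
    desc.SampleCount = 1;

    ovrTextureSwapChain chainA = nullptr, chainB = nullptr;

    // First chain, on deviceA: succeeds.
    ovr_CreateTextureSwapChainDX(session, deviceA, &desc, &chainA);

    // Second chain, on deviceB: fails with ovrError_InvalidParameter,
    // "Provided 'd3dPtr' doesn't match previous calls."
    ovr_CreateTextureSwapChainDX(session, deviceB, &desc, &chainB);

    // And a second session fails outright with ovrError_ServiceError (-1006).
    ovrSession session2 = nullptr;
    ovrGraphicsLuid luid2;
    ovr_Create(&session2, &luid2);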

Is there a way to submit frames from multiple ID3D11Devices in the same application (I'll only be submitting from one at a time)? This would be extremely helpful for me, and also useful for those who'd like interactive loading screens.

I know the capability can't be too far off either, given the way Oculus Home and the compositor already mediate between multiple applications without issue.

EDIT: Error from service log is:
21:49:09.954 {!ERROR!} [OAF ERROR]
os\file.cpp(1064) : Failed to open process. (1971039)

My guess? The Oculus runtime is opening the process to share memory, and attempts to call OpenProcess for each ovr_Create. Perhaps it tries to open the process for exclusive access and doesn't share the handle it already has open? Total shot in the dark though.

EDIT2: Error from debug output when trying to call ovr_CreateTextureSwapChainDX twice with two different devices:
Code: -1005 -- ovrError_InvalidParameter
Description: Provided "d3dPtr" doesn't match previous calls.
OVRTime: 27443.307980

SiggiG
Protege
What engine are you using? The UE4 integration now comes with a sample VR loading screen.
CCP Games, EVE: Valkyrie developer | @SiggiGG

CrazyNorman
Protege
I'm using a custom engine based on DirectX 11. Thanks for the tip, I'll take a look at UE4 and see how they do it.

Another possibility is having the loading screen as a separate process. Unfortunately my loading screen shares quite a bit of data with the main game so this wouldn't be ideal.

SiggiG
Protege
The loading screen is actually just using the Oculus compositor layer feature.
CCP Games, EVE: Valkyrie developer | @SiggiGG

volgaksoy
Expert Protege
There are a couple of ways of doing this without creating multiple D3D devices, although we are looking into eventually allowing for multiple ovrSessions and similarly multiple D3D devices.

You can offload your loading onto a separate thread (assuming nothing in the loading thread requires the same immediate context) and keep calling ovr_SubmitFrame with the loading screen on the original thread. Alternatively, if you properly protect the immediate context, you could render the loading screen from the second thread using that same immediate context while the loading work runs on the original rendering thread.
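
A rough sketch of the first approach (LoadAssets, RenderLoadingScreen, and BuildLoadingScreenLayer are hypothetical stand-ins for your own engine code, and an existing 'session' is assumed; LibOVR 1.x):

    #include <atomic>
    #include <thread>

    std::atomic<bool> loading{true};

    // The loading work runs on a worker thread and must not touch the
    // immediate context (deferred contexts and plain CPU/file I/O are fine).
    std::thread loader([&] {
        LoadAssets();        // hypothetical engine call
        loading = false;
    });

    // Meanwhile the original render thread keeps the compositor fed.
    long long frameIndex = 0;
    while (loading) {
        RenderLoadingScreen();                            // hypothetical: draws into and commits the eye swap chain
        ovrLayerEyeFov layer = BuildLoadingScreenLayer(); // hypothetical: fills in fov, viewports, poses
        ovrLayerHeader* layers[] = { &layer.Header };
        ovr_SubmitFrame(session, frameIndex++, nullptr, layers, 1);
    }
    loader.join();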

We're keeping notes on all these use cases to further improve the SDK over time, but sadly they won't be implemented in the SDK for a while. Still, keep them coming.

volgaksoy
Expert Protege
The way UE4 currently deals with this is by submitting a quad layer with a certain texture assigned to it, then ceasing to call ovr_SubmitFrame until loading is done. When the app stops calling ovr_SubmitFrame, the compositor keeps re-warping the last submitted layers, so in this case the quad will keep smoothly head-tracking in VR. That said, it won't prevent the hourglass from showing up, and you wouldn't be able to have any animations or loading bars if you do it this way.
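
In LibOVR terms, the quad submission looks roughly like this (loadingChain is assumed to be a texture swap chain you've already rendered to and committed with ovr_CommitTextureSwapChain; the sizes and pose are just for illustration):

    // One-off quad layer submission; after this, simply stop calling
    // ovr_SubmitFrame and the compositor keeps re-warping this layer.
    ovrLayerQuad quad = {};
    quad.Header.Type  = ovrLayerType_Quad;
    quad.Header.Flags = 0;                      // world-locked; use ovrLayerFlag_HeadLocked to pin it to the head
    quad.ColorTexture = loadingChain;
    quad.Viewport.Pos  = { 0, 0 };
    quad.Viewport.Size = { 1024, 512 };         // matches the swap-chain texture
    quad.QuadPoseCenter.Orientation = { 0, 0, 0, 1 };        // identity
    quad.QuadPoseCenter.Position    = { 0.0f, 0.0f, -2.0f }; // 2 m in front of the viewer
    quad.QuadSize = { 2.0f, 1.0f };             // size in meters

    ovrLayerHeader* layers[] = { &quad.Header };
    ovr_SubmitFrame(session, 0, nullptr, layers, 1);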

CrazyNorman
Protege
"volgaksoy" wrote:
There are a couple of ways of doing this without creating multiple D3D devices, although we are looking into eventually allowing for multiple ovrSessions and similarly multiple D3D devices.

You can offload your loading onto a separate thread (assuming nothing in the loading thread requires the same immediate context) and keep calling ovr_SubmitFrame with the loading screen on the original thread. Alternatively, if you properly protect the immediate context, you could render the loading screen from the second thread using that same immediate context while the loading work runs on the original rendering thread.

We're keeping notes on all these use cases to further improve the SDK over time, but sadly they won't be implemented in the SDK for a while. Still, keep them coming.


Thank you for the thorough response. My product, https://flyinside-fsx.com, actually interfaces with third-party software, so I can't protect the immediate context or draw to it while things are loading. The best option I've come up with is to have two separate processes: one for the game, one for the menu. That way each has its own session and its own device. The IPC is a pain but it works 😉

The only problem I'm having is displaying the proper session on the Rift and automatically switching between the two. What I've found works is the following (sketched in code after the list):

  • In the game: destroy the swap texture set, then destroy the session.

  • Switch focus to the mirror window of the menu process.

  • Once the user chooses to re-enter the game:

      • Re-create the session in the game process and create a new swap texture set.

      • Pass focus back to the game window.
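
Condensed, the game-process side of that handoff looks something like this (the Focus* helpers are placeholders for my window-focus IPC, and the session/swap-chain globals are assumed; LibOVR 1.x):

    // Globals assumed: session, eyeChain, device, chainDesc.

    // Leaving the game: release the HMD so the menu process's session takes over.
    void ReleaseHmd() {
        ovr_DestroyTextureSwapChain(session, eyeChain);
        ovr_Destroy(session);
        session = nullptr;
        FocusMenuMirrorWindow();   // placeholder: hand window focus to the menu process
    }

    // Re-entering the game: rebuild the session and swap chain, then take focus back.
    bool ReacquireHmd() {
        ovrGraphicsLuid luid;
        if (OVR_FAILURE(ovr_Create(&session, &luid)))
            return false;
        if (OVR_FAILURE(ovr_CreateTextureSwapChainDX(session, device, &chainDesc, &eyeChain)))
            return false;
        FocusGameWindow();         // placeholder: hand window focus back to the game
        return true;
    }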


My biggest problem is that sometimes the game shows 2D dialogs, and I show these 2D dialogs from my menu. This means that the game will sometimes have focus even when it is not submitting frames, while my menu should continue to be displayed on the HMD. As long as I've destroyed the game's ovrSession before the game window is given focus, the menu keeps the HMD, and everything works as it should. It just feels a bit fragile.

Is there any issue with destroying and recreating a session multiple times over a process's lifetime? Do you see any potential pitfalls in what I'm doing versus future SDK changes? I'd feel more comfortable with an explicit "give up session" API, or some way to pin a process's session as the active one, but I also understand that you must be super busy prepping for launch, hence my workarounds.

jherico
Adventurer
For our OpenGL application, we create a context & thread that is dedicated to nothing but presenting to the display plugin (of which the Oculus plugin is one) and keep all rendering work on a different thread. The primary thread passes a texture and the poses used to render each frame to the presentation thread for display. With such a system it would be trivial to implement a loading screen to display before any frames had been received from the main rendering thread.
Brad Davis - Developer for High Fidelity Co-author of Oculus Rift in Action

CrazyNorman
Protege
"jherico" wrote:
For our OpenGL application, we create a context & thread that is dedicated to nothing but presenting to the display plugin (of which the Oculus plugin is one) and keep all rendering work on a different thread. The primary thread passes a texture and the poses used to render each frame to the presentation thread for display. With such a system it would be trivial to implement a loading screen to display before any frames had been received from the main rendering thread.


This is what I was originally doing. I had my display thread and my simulator thread, with a surface-sharing queue between them. Unfortunately, I found that the surface sharing took a pretty heavy toll in terms of both latency and frame rate.

jherico
Adventurer
"CrazyNorman" wrote:

This is what I was originally doing. I had my display thread and my simulator thread, with a surface-sharing queue between them. Unfortunately, I found that the surface sharing took a pretty heavy toll in terms of both latency and frame rate.


By surface do you mean something like a 'render target'?

Concurrently reading from and writing to a texture or framebuffer (the GL equivalent to a render target) in OpenGL produces undefined behavior. I'm going to assume the same is true for Direct3D. If you're not getting lots of corruption on the surface, then I assume that you're doing some D3D equivalent of glFinish() to flush the pipeline every time you move between threads, or that D3D is doing something equivalent in the background. This is almost certainly creating a bunch of GPU/CPU sync points that will kill performance.

The mechanism we use is not to share a surface, but to have a producer/consumer model.

The rendering thread can ask the active display plugin for the desired render target size, whether the target should be stereo, what the head pose is, etc. It creates a framebuffer and a texture to act as the color target. Once a frame is rendered, the texture is detached from the framebuffer and sent into an escrow object shared between the two threads. The escrow object creates a GL fence so it can determine when the GPU has finished writing to the texture.

On the presentation side, for each frame the thread asks the escrow whether a new texture is available; if one is, it grabs it and releases the texture it was previously using back to the escrow object (which again creates a GL fence so it can tell when the GPU has finished reading from it). Once a texture is no longer being read from, it goes into a recycler function that makes it available to the rendering thread again.

Back on the rendering thread, since we detach the color-attachment texture from the framebuffer each frame, we need a fresh one every frame. The thread first checks the escrow object for recycled textures; if there are none, it creates a brand-new one.

Ultimately you end up with about 2-5 textures going back and forth between the threads, with one of them always in use as a write target on the rendering thread and one as a read target on the presentation thread.
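
Compressed a lot, the escrow's handoff machinery looks something like the following (assumes both threads have shared GL contexts current and function pointers loaded via your loader of choice; the real version has more bookkeeping):

    #include <deque>
    #include <mutex>
    // GL types/functions (GLuint, GLsync, glFenceSync, ...) come from your loader (glad, GLEW, ...).

    // A texture plus the fence guarding the GPU's last use of it.
    struct Escrowed {
        GLuint texture;
        GLsync fence;
    };

    class TextureEscrow {
        std::mutex mutex;
        std::deque<Escrowed> ready;     // finished frames awaiting the present thread
        std::deque<Escrowed> recycled;  // textures returned for reuse by the render thread

        void push(std::deque<Escrowed>& q, GLuint tex) {
            // Fence so the receiving thread can tell when the GPU is done with 'tex'.
            q.push_back({ tex, glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0) });
        }
        // Pop the front texture if its fence has signaled; otherwise return 0.
        GLuint pop(std::deque<Escrowed>& q) {
            if (q.empty()) return 0;
            GLint status = 0;
            glGetSynciv(q.front().fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
            if (status != GL_SIGNALED) return 0;
            glDeleteSync(q.front().fence);
            GLuint tex = q.front().texture;
            q.pop_front();
            return tex;
        }
    public:
        void   submit(GLuint tex)  { std::lock_guard<std::mutex> l(mutex); push(ready, tex); }    // render thread
        GLuint acquire()           { std::lock_guard<std::mutex> l(mutex); return pop(ready); }   // present thread
        void   release(GLuint tex) { std::lock_guard<std::mutex> l(mutex); push(recycled, tex); } // present thread
        GLuint recycle()           { std::lock_guard<std::mutex> l(mutex); return pop(recycled); }// render thread
    };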

This gave us the advantage of having the equivalent of asynchronous timewarp before it was available in the runtime. However, we still keep it around because it prevents operations that block on v-sync from causing a bunch of idle time on the main thread.
Brad Davis - Developer for High Fidelity Co-author of Oculus Rift in Action