03-04-2016 02:51 PM
"volgaksoy" wrote:
There are a couple of ways of doing this without creating multiple D3D devices, although we are looking into eventually allowing for multiple ovrSessions and similarly multiple D3D devices.
You can offload your loading into a separate thread (assuming nothing in the loading thread touches the immediate context) and keep calling ovr_SubmitFrame with the loading screen on the original thread. Alternatively, if you properly synchronize access to the immediate context, you can share it between the loading thread and the original rendering thread.
We're keeping notes on all these use cases to improve the SDK over time, but sadly they won't make it into the SDK for a while. Still, keep them coming.
03-05-2016 06:27 AM
"jherico" wrote:
For our OpenGL application, we create a context & thread that is dedicated to nothing but presenting to the display plugin (of which the Oculus plugin is one) and keep all rendering work on a different thread. The primary thread passes a texture and the poses used to render each frame to the presentation thread for display. With such a system it would be trivial to implement a loading screen to display before any frames had been received from the main rendering thread.
03-05-2016 06:21 PM
"CrazyNorman" wrote:
This is what I was originally doing: I had my display thread and my simulator thread, with a surface-sharing queue between them. Unfortunately, I found that the surface sharing took a pretty heavy toll in both latency and frame rate.