Why get the head orientation for each eye?

Afuerg
Level 2
The example code in the documentation on page 31 is reading and applying the head position for each eye separately.

This leads to the two eyes rendering slightly different views if my program is running slowly enough. So why not read the orientation once and just apply it to both eye cameras?

Sure, programs for the Rift should run at a constant frame rate anyway, but there will always be a bit of lag at times.
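
A minimal sketch of the two approaches being compared, assuming hypothetical helper names (queryHeadPose, renderEye) rather than the actual SDK calls:

```cpp
// Minimal sketch (not the actual Oculus SDK API): queryHeadPose() and
// renderEye() are hypothetical placeholders.
#include <cstdio>

struct Pose { float yaw = 0.f, pitch = 0.f, roll = 0.f; };

// Hypothetical tracker query: returns the head pose predicted for 'time'.
Pose queryHeadPose(double time) { return Pose{ static_cast<float>(time), 0.f, 0.f }; }

void renderEye(int eye, const Pose& pose)
{
    std::printf("eye %d rendered with yaw %.3f\n", eye, pose.yaw);
}

// Per-eye query, as in the documentation example: each eye may see a
// slightly different pose if the two queries end up far apart in time.
void renderFramePerEyePose(double frameTime)
{
    for (int eye = 0; eye < 2; ++eye)
        renderEye(eye, queryHeadPose(frameTime + eye * 0.001 /* hypothetical gap */));
}

// Single query, as the original post suggests: both eyes share one pose,
// so the stereo pair always has consistent parallax.
void renderFrameSharedPose(double frameTime)
{
    const Pose pose = queryHeadPose(frameTime);
    for (int eye = 0; eye < 2; ++eye)
        renderEye(eye, pose);
}

int main()
{
    renderFramePerEyePose(0.0);
    renderFrameSharedPose(0.0);
}
```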
8 REPLIES

Vrally
Level 4
A bit related to this thread:
viewtopic.php?f=20&t=12052

kaetemi
Level 2
"Afuerg" wrote:
This leads to both eyes rendering something differently if my program is running slow enough.
Yeah, that could lead to some terrible parallax inconsistency in the stereo render. I just read the main orientation. The sideways screen refresh is annoying enough.

brantlew
Level 5
There's a subtlety here. Timewarp works to precisely predict the position of each eye at the time of scanout. Because the screen scans out left to right, there is a small but significant difference between the predicted positions of the left and right eye. If you only use a single position then this prediction latency will show up. If you have a real use case for "slow" render times (e.g. debugging), then you might conditionally compile that. Otherwise, I would caution you to use the separate eye orientations and let timewarp do its thing. 😉
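
A rough sketch of the effect being described, with made-up numbers and helper names (predictPose, scanoutDuration are not SDK calls) purely for illustration:

```cpp
// Sketch: with a left-to-right scan, each eye's half of the panel lights up
// at a different time, so each eye gets its own predicted pose.
#include <cstdio>

struct Pose { float yaw; };

// Hypothetical predictor: extrapolates the head pose to 'time'.
Pose predictPose(double time)
{
    const float yawRateDegPerSec = 90.f;            // assumed head turn speed
    return Pose{ yawRateDegPerSec * static_cast<float>(time) };
}

int main()
{
    const double frameStart      = 0.0;
    const double scanoutDuration = 1.0 / 75.0;      // assumed ~13.3 ms refresh

    // Left half of the panel scans out earlier than the right half.
    const double leftEyeTime  = frameStart + 0.25 * scanoutDuration;
    const double rightEyeTime = frameStart + 0.75 * scanoutDuration;

    const Pose left  = predictPose(leftEyeTime);
    const Pose right = predictPose(rightEyeTime);

    // Small, but at typical head speeds not negligible.
    std::printf("left yaw %.3f deg, right yaw %.3f deg, delta %.3f deg\n",
                left.yaw, right.yaw, right.yaw - left.yaw);
}
```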

kaetemi
Level 2
"brantlew" wrote:
Timewarp works to precisely predict the position of each eye at the time of scanout. Because the screen scans out left to right, there is a small but significant difference between the predicted positions of the left and right eye.
Which would work until you drop to 30fps. Please no sideways scan on CV.

jherico
Level 5
"kaetemi" wrote:
"brantlew" wrote:
Timewarp works to precisely predict the position of each eye at the time of scanout. Because the screen scans out left to right, there is a small but significant difference between the predicted positions of the left and right eye.
Which would work until you drop to 30fps. Please no sideways scan on CV.


If you've dropped to less than half of the refresh rate, then you should render one eye per frame and rely on timewarp to position the most recently rendered image for the other eye. It's not ideal but it's better than nothing. It doesn't matter how the eyes are refreshed.
Brad Davis - Developer for High Fidelity, co-author of Oculus Rift in Action
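
A rough sketch of the alternating scheme described above; renderEyeToTexture and timewarpReproject are hypothetical stand-ins, not real SDK calls, and the frame rates are assumed:

```cpp
// Render only one eye per frame; the stale eye is re-warped to the newest pose.
#include <cstdio>

struct Texture { int eye = 0; double renderTime = 0.0; };

// Pretend render: produces a texture tagged with the time it was rendered for.
Texture renderEyeToTexture(int eye, double simTime) { return Texture{ eye, simTime }; }

// Pretend timewarp: reorients an already rendered texture to the display time.
void timewarpReproject(const Texture& tex, double displayTime)
{
    std::printf("  eye %d (rendered at %.3f) warped to %.3f\n",
                tex.eye, tex.renderTime, displayTime);
}

int main()
{
    // Prime both eyes once so there is always something to warp.
    Texture lastRender[2] = { renderEyeToTexture(0, 0.0), renderEyeToTexture(1, 0.0) };

    for (int frame = 1; frame <= 4; ++frame)
    {
        const double displayTime = frame * (1.0 / 75.0);    // assumed 75 Hz display
        const int eyeToRender = frame & 1;                   // one eye per frame

        lastRender[eyeToRender] = renderEyeToTexture(eyeToRender, displayTime);

        // Both eyes are still presented every refresh.
        std::printf("frame %d:\n", frame);
        for (int eye = 0; eye < 2; ++eye)
            timewarpReproject(lastRender[eye], displayTime);
    }
}
```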

kaetemi
Level 2
"jherico" wrote:
If you've dropped to less than half of the refresh rate, then you should render one eye per frame and rely on timewarp to position the most recently rendered image for the other eye. It's not ideal but it's better than nothing.

That sounds kind of terrible. I find inconsistent parallax a bit more annoying than a little framerate drop.
Timewarp can't magically handle positional movement. The IPD would appear to jump around like crazy.

jherico
Level 5
"kaetemi" wrote:
"jherico" wrote:
If you've dropped to less than half of the refresh rate, then you should render one eye per frame and rely on timewarp to position the most recently rendered image for the other eye. It's not ideal but it's better than nothing.

That sounds kind of terrible. I find inconsistent parallax a bit more annoying than a little framerate drop.
Timewarp can't magically handle positional movement. The IPD would appear to jump around like crazy.


Moving objects might suffer, but I doubt it would be that bad. It wouldn't be hard to forcibly simulate the effect. Alternatively, if you're not hitting your framerate targets you can dynamically lower the texture size of the offscreen rendering buffer to reduce the number of pixels you need to render at the cost of some image quality.
Brad Davis - Developer for High Fidelity, co-author of Oculus Rift in Action
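
A sketch of the dynamic render-target scaling idea; the frame budget, scale limits, and resizeRenderTarget helper are all assumptions, not part of any SDK:

```cpp
// Shrink the offscreen buffer when frames run over budget, grow it back
// when there is headroom.
#include <algorithm>
#include <cstdio>

struct RenderTarget { int width = 1920, height = 1080; };

void resizeRenderTarget(RenderTarget& rt, float scale)
{
    rt.width  = static_cast<int>(1920 * scale);
    rt.height = static_cast<int>(1080 * scale);
}

int main()
{
    const double frameBudgetMs = 1000.0 / 75.0;     // assumed 75 Hz target
    float scale = 1.0f;
    RenderTarget rt;

    // Fake per-frame render times just to drive the example.
    const double measuredFrameMs[] = { 12.0, 16.0, 18.0, 13.0, 11.0 };

    for (double frameMs : measuredFrameMs)
    {
        if (frameMs > frameBudgetMs)
            scale *= 0.9f;                          // over budget: shrink the buffer
        else if (frameMs < 0.8 * frameBudgetMs)
            scale *= 1.05f;                         // comfortably under: grow back
        scale = std::clamp(scale, 0.5f, 1.0f);

        resizeRenderTarget(rt, scale);
        std::printf("%.1f ms -> scale %.2f (%dx%d)\n", frameMs, scale, rt.width, rt.height);
    }
}
```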

kaetemi
Level 2
"jherico" wrote:
Moving objects might suffer, but I doubt it would be that bad. It wouldn't be hard to forcibly simulate the effect. Alternatively, if you're not hitting your framerate targets you can dynamically lower the texture size of the offscreen rendering buffer to reduce the number of pixels you need to render at the cost of some image quality.

Unfortunately the engine I'm implementing this in is a bit old and now completely CPU bound, and I can't do much about that in the near term. GPU usage does not even reach 20% when at the maximum framerate.

Oh well, I suppose I could settle for the timewarp for now to tween the frames. It is indeed better than flashing a black frame in between. Although in any case I'd do it with a render pair that was rendered with the same frame time, rather than alternating, in order to keep animated models behaving in a sane manner.

Still not convinced about rendering the source image with two differently timed camera positions, though; that does seem like it would screw with the parallax. Two differently timed timewarps to handle the scanout do seem fair, as they only rotate the view, so I can't complain about that.
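
A sketch of that compromise, assuming hypothetical names (predictPose) and timings: both eyes are rendered from one shared simulation-time pose, and only the timewarp stage uses a per-eye predicted pose.

```cpp
// Shared render pose keeps parallax consistent; per-eye warp pose handles
// the left-to-right scanout. Illustrative only, not SDK calls.
#include <cstdio>

struct Pose { float yaw; };

Pose predictPose(double time) { return Pose{ 90.f * static_cast<float>(time) }; }

int main()
{
    const double simTime = 0.0;                     // one shared camera time
    const Pose renderPose = predictPose(simTime);   // used for BOTH eye renders

    const double scanout = 1.0 / 75.0;              // assumed 75 Hz refresh
    for (int eye = 0; eye < 2; ++eye)
    {
        // Only the rotational warp differs per eye at scanout time.
        const Pose warpPose = predictPose(simTime + (eye ? 0.75 : 0.25) * scanout);
        std::printf("eye %d: render yaw %.3f, warp yaw %.3f\n",
                    eye, renderPose.yaw, warpPose.yaw);
    }
}
```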

In any case, occasional vsync issues seem unavoidable in practice, and a side-to-side refresh is much worse when that happens than a top-to-bottom refresh. I really don't think a side-to-side refresh panel is suitable for this type of hardware.