
Independent Camera Pose Control

rkkonrad
Explorer
Hi there! I have a rather nuanced question and I hope there is an easy answer! I was wondering if there is a way to independently control the left and right eye camera poses? And if so, from where? I have the newest Oculus SDK and Unity Plugin (1.31 and 1.30.0, respectively) and have been picking at OVRCameraRig.cs, but whenever I make any modifications to the anchor points the cameras don't seem to update. Can pose updates to the cameras be done in UpdateAnchors(), or are the anchors only intended to have things attached to them? I've also tried updating in LateUpdate(), like the following; it updates the rotation in the Inspector but has no effect on the camera itself.

private void LateUpdate() {
    OVRHaptics.Process();
    // Note: Find() every frame is slow; cache this reference in Start() in real code.
    var leftEyeAnchor = GameObject.Find("LeftEyeAnchor");
    leftEyeAnchor.transform.localRotation = Quaternion.Euler(0, 90, 0);
}

I know this must be a rather odd question, because why would anyone want to do something so weird!? But I'm looking into a specific depth cue and need control over these cameras independently. I just need to add small independent rotations to the left and right cameras after they have been transformed into head space (i.e. after the tracker has performed its transform). Is this possible? I've read somewhere that Unity performs the local rotation and translation transforms of the left and right eyes relative to the tracking space.
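One reason anchor edits in LateUpdate() have no visible effect is that Unity's XR pipeline computes the per-eye view matrices internally, after user scripts run. Unity does expose an override for exactly this, Camera.SetStereoViewMatrix, which replaces the view matrix for a given eye. A minimal sketch (the offset fields and class name are illustrative placeholders, not part of the Oculus plugin):

```csharp
using UnityEngine;

// Sketch: post-multiply a small per-eye rotation onto the view matrices
// Unity computed from tracking + IPD. Attach to the camera GameObject
// (e.g. CenterEyeAnchor).
public class PerEyeRotation : MonoBehaviour
{
    public Vector3 leftEyeEulerOffset;   // extra rotation for the left eye
    public Vector3 rightEyeEulerOffset;  // extra rotation for the right eye

    private Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void OnPreCull()
    {
        // GetStereoViewMatrix returns the matrix Unity derived from the
        // tracked head pose plus the eye-specific IPD shift.
        ApplyOffset(Camera.StereoscopicEye.Left, leftEyeEulerOffset);
        ApplyOffset(Camera.StereoscopicEye.Right, rightEyeEulerOffset);
    }

    void ApplyOffset(Camera.StereoscopicEye eye, Vector3 euler)
    {
        Matrix4x4 view = cam.GetStereoViewMatrix(eye);
        Matrix4x4 extra = Matrix4x4.Rotate(Quaternion.Euler(euler));
        // Pre-multiplying applies the extra rotation in eye space,
        // i.e. after all tracking transforms.
        cam.SetStereoViewMatrix(eye, extra * view);
    }

    void OnDisable()
    {
        // Hand the stereo matrices back to Unity.
        cam.ResetStereoViewMatrices();
    }
}
```

Because the override happens in OnPreCull, it lands after tracking has been applied for the frame, which matches the "after head-space transform" requirement in the question.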

rkkonrad
Explorer
Hi @imperativity! Thanks for the response. I wasn't aware of that sample framework, but when I went through it in detail I couldn't quite find anything that helps with my problem. Essentially what I'm trying to do is apply a transform to the left and right cameras once their positions and poses have been completely set (even after the IPD transform). This is how I understand things to work currently: Unity takes in the tracker information (from UnityEngine.XR.InputTracking) and applies that transform to each camera, along with the eye-specific IPD shift, to get the left and right eye views. What I need to do is apply a transform after all of this has already been done. Is this possible, or does Unity do all of this under the hood?
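For the anchor-based route, OVRCameraRig does expose an UpdatedAnchors event that fires after the rig has written the tracked poses (including the IPD offset) into the eye anchors, so a callback there runs at the right point in the frame. A sketch, assuming the eye anchors actually drive rendering (e.g. with usePerEyeCameras enabled on the rig; the offset fields below are illustrative placeholders):

```csharp
using UnityEngine;

// Sketch: attach to the same GameObject as OVRCameraRig and apply a small
// per-eye rotation each time the rig finishes updating its anchors.
public class AnchorPostRotation : MonoBehaviour
{
    public Vector3 leftEyeEulerOffset;
    public Vector3 rightEyeEulerOffset;

    void Start()
    {
        var rig = GetComponent<OVRCameraRig>();
        rig.UpdatedAnchors += OnUpdatedAnchors;
    }

    private void OnUpdatedAnchors(OVRCameraRig rig)
    {
        // Post-multiply so the offset is applied in each eye's own frame,
        // after tracking and the IPD shift have been applied.
        rig.leftEyeAnchor.localRotation *= Quaternion.Euler(leftEyeEulerOffset);
        rig.rightEyeAnchor.localRotation *= Quaternion.Euler(rightEyeEulerOffset);
    }
}
```

With the default rig configuration only the center-eye camera renders and the stereo views come from Unity's internal matrices, so the anchor rotations alone will not change what is displayed; that is why the LateUpdate() attempt in the question updated the Inspector but not the image.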



R0dluvan
Explorer
Did you ever figure this out? I want to do the same thing.

