Welcome to the Oculus Developer Forums!


How do I do Positional Tracking?

Best Answer

  • andyborrell-ahub (Accepted Answer)

    Whether you are using the IR tracking camera or the neck model, you should almost always use the eye positions returned by ovrHmd_GetEyePoses(). They already include the inter-pupillary distance (IPD) and the base-of-neck offset. The positions are reported relative to the initial center eye position, so you can simply scale them to match the scale of your world.

    // At start-up: configure rendering and cache the per-eye view offsets.
    ovrEyeRenderDesc desc[ovrEye_Count];
    ovrHmd_ConfigureRendering(m_hmd, &apiConfig, distortionCaps, fov, desc);

    ovrVector3f offsets[ovrEye_Count] = {
        desc[ovrEye_Left].HmdToEyeViewOffset,
        desc[ovrEye_Right].HmdToEyeViewOffset
    };

    ...

    // Before rendering each frame: fetch the predicted eye poses.
    ovrTrackingState state;
    ovrPosef poses[ovrEye_Count];
    ovrHmd_GetEyePoses(m_hmd, frameIndex, offsets, poses, &state);

    // Build a view matrix for each eye, applying the world scale.
    for (int i = 0; i < ovrEye_Count; ++i)
    {
        cameraViewMatrix[i] = MatrixInverse(
            RigPoseMatrix * MatrixFromTRS(&poses[i].Position, &poses[i].Orientation, worldScale));
    }
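    Note that MatrixInverse and MatrixFromTRS are not LibOVR functions; they stand in for whatever math library your engine uses. As a rough, hypothetical sketch of what a MatrixFromTRS helper could compute (assuming, as the answer suggests, that the tracked translation is scaled into world units):

    ```cpp
    #include <cassert>

    // Illustrative stand-ins for the math types; use your engine's own types instead.
    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; };   // unit quaternion

    // Row-major 4x4 matrix.
    struct Mat4 { float m[4][4]; };

    // Hypothetical TRS helper: rotation from the quaternion, translation scaled by
    // worldScale so tracked head motion matches the scale of the virtual world.
    Mat4 MatrixFromTRS(const Vec3& t, const Quat& q, float worldScale)
    {
        Mat4 r = {};
        // Standard unit-quaternion-to-rotation-matrix expansion.
        r.m[0][0] = 1 - 2 * (q.y * q.y + q.z * q.z);
        r.m[0][1] = 2 * (q.x * q.y - q.w * q.z);
        r.m[0][2] = 2 * (q.x * q.z + q.w * q.y);
        r.m[1][0] = 2 * (q.x * q.y + q.w * q.z);
        r.m[1][1] = 1 - 2 * (q.x * q.x + q.z * q.z);
        r.m[1][2] = 2 * (q.y * q.z - q.w * q.x);
        r.m[2][0] = 2 * (q.x * q.z - q.w * q.y);
        r.m[2][1] = 2 * (q.y * q.z + q.w * q.x);
        r.m[2][2] = 1 - 2 * (q.x * q.x + q.y * q.y);
        // Translation column, scaled to world units.
        r.m[0][3] = t.x * worldScale;
        r.m[1][3] = t.y * worldScale;
        r.m[2][3] = t.z * worldScale;
        r.m[3][3] = 1;
        return r;
    }
    ```

    The rig pose is multiplied in before inverting, so the tracked pose is expressed in the space of your player rig rather than in tracking space.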

    If you are showing monoscopic content or 360 videos/photos, you can ignore positional tracking by setting the eye positions to (0, 0, 0) relative to the initial center eye position.
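    To see why zeroed offsets give monoscopic output: each eye position is the head position plus the head orientation applied to that eye's offset, so a (0, 0, 0) offset collapses both eyes onto the center eye. A self-contained sketch of that relationship (the types and helpers here are illustrative stand-ins, not the LibOVR API):

    ```cpp
    #include <cassert>

    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; };   // unit quaternion

    // Rotate v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)), u = (q.x, q.y, q.z).
    Vec3 Rotate(const Quat& q, const Vec3& v)
    {
        Vec3 u  = { q.x, q.y, q.z };
        Vec3 c1 = { u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x };
        Vec3 c2 = { u.y * c1.z - u.z * c1.y, u.z * c1.x - u.x * c1.z, u.x * c1.y - u.y * c1.x };
        return { v.x + 2 * (q.w * c1.x + c2.x),
                 v.y + 2 * (q.w * c1.y + c2.y),
                 v.z + 2 * (q.w * c1.z + c2.z) };
    }

    // Eye position = head position + (head orientation applied to the per-eye offset).
    Vec3 EyePosition(const Vec3& headPos, const Quat& headOrient, const Vec3& eyeOffset)
    {
        Vec3 r = Rotate(headOrient, eyeOffset);
        return { headPos.x + r.x, headPos.y + r.y, headPos.z + r.z };
    }
    ```

    With eyeOffset = (0, 0, 0) both eyes land exactly at the head position regardless of orientation, which is what you want for 360 photos and other monoscopic content.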

    If you are using Unity, all of this is handled automatically by the OVRPlayerController and OVRCameraRig prefabs.
