
Rendering a webcam that stands "still" - Solved

Oero
Honored Guest
Hey!

I've been trying out my Oculus Rift for the last few months and want to start developing something myself. I want to begin by creating an environment with a webcam feed that stands still in the scene and doesn't follow my head movement.

I have based my work on Brad Davis' example code from "Oculus Rift in Action", specifically the example project called HighResWebcamDemo. Does anyone know what I need to add to the code to make the camera feed stand still in the 3D environment instead of following my head movement?

He creates the MatrixStack as follows:
MatrixStack & mv = Stacks::modelview();

mv.withPush([&]{
    mv.identity();

    glm::quat eyePose = ovr::toGlm(getEyePose().Orientation);
    glm::quat webcamPose = ovr::toGlm(captureData.pose.Orientation);
    glm::mat4 webcamDelta = glm::mat4_cast(glm::inverse(eyePose) * webcamPose);

    mv.preMultiply(webcamDelta);
    mv.translate(glm::vec3(0, 0, -IMAGE_DISTANCE));

    texture->Bind(oglplus::Texture::Target::_2D);
    oria::renderGeometry(videoGeometry, videoRenderProgram);
    oglplus::DefaultTexture().Bind(oglplus::Texture::Target::_2D);
});
Any help would be appreciated!

4 REPLIES

jherico
Adventurer
Most of this is wrapper code.  The main point is that you render into the scene while compensating for the difference in pose between the time of capture and the time of render.  That happens in the line where I multiply the inverse eye pose by the webcam pose.

Oero
Honored Guest

jherico said:

Most of this is wrapper code.  The main point is that you render into the scene while compensating for the difference in pose between the time of capture and the time of render.  That happens in the line where I multiply the inverse eye pose by the webcam pose.


OK, but then what should I change to have the "screen" showing the webcam feed stand still in the 3D room? I want the screen fixed at one point in the room, like the ColorCube in Example_5_4_RiftSensors. I compared the shaders passed to the loadProgram function in the two examples, RiftSensors and HighResWebcamDemo, but I don't know what to look for.
I want the camera feed to stand still in the room instead of always being in front of the view.

Oero
Honored Guest
I have now tried something else that sort of works as I want it to.

I set the ovrTrackingState once, from the first state I gather, and reuse it every frame. This costs some FPS, but the 2D texture with the camera feed stands still in 3D space.

Is there a better way to do it?

Oero
Honored Guest
I solved my problem for now. In captureLoop(), I changed:

CaptureData captured;
float captureTime = ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking = ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.HeadPose.ThePose;

to:

CaptureData captured;
float captureTime = ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking = ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.LeveledCameraPose;

Time to implement 4-6 cameras to display a full circle!