I need help with matrix transformations

IGameArt
Protege
Hey guys, I've been trying to get this to work for over an hour now. I need a GLSL vertex shader that simulates the camera sliding along its relative x-axis, so I can create separate images for the left and right eye.

Here's where I'm at
//
// Simple passthrough vertex shader
//
attribute vec3 in_Position;     // (x,y,z)
attribute vec4 in_Colour;       // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
//attribute vec3 in_Normal;     // (x,y,z) unused in this shader.

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

void main()
{
    vec4 object_space_pos = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
    vec4 new_position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    new_position.x -= 0.005 * new_position.z;
    gl_Position = new_position;

    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}


As you can see, I'm subtracting from new_position.x to move the vertices on screen to the left. The problem is that I need to move them more the closer they are to the viewer, which is why I multiplied by new_position.z. However, that isn't working: all vertices move left evenly, creating a flat 2D image that just appears farther away. Not really what I want.

Has anybody else used a shader for camera offsets before? Can anybody tell me what I'm doing wrong?
3 REPLIES

renderingpipeli
Honored Guest
Not sure if I understand what you're doing, but for 3D on the Rift you need two translations: first, a translation to the left/right in camera space for the eye separation (6.5 cm on average, basically your IPD), and second, a translation applied to your projection, because the lens center is not the screen center. Doing only the first will result in images so far apart that you can't focus on them in the Rift, while doing only the second will result in flat, 2D images (=> IPD = 0).

(And please don't use magic constants, as the next Rift might be built differently.)
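To see why the eye-separation offset has to happen in camera space, here is a small numeric sketch (mine, not from this thread; the view matrix, point, and IPD value are made up for illustration). With column vectors, the per-eye view matrix is T_eye * V. Applying the translation on the other side, V * T_eye, shifts the world along the world x-axis, which the camera may not even be facing:

```python
# Sketch: eye separation must be applied in camera space (T * V),
# not world space (V * T). Plain 4x4 math, column-vector convention.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate_x(tx):
    m = [[float(i == j) for j in range(4)] for i in range(4)]
    m[0][3] = tx
    return m

half_ipd = 0.065 / 2   # half of an average 6.5 cm eye separation

# A view matrix that happens to look down the world x-axis
# (90-degree rotation about y):
view = [[ 0.0, 0.0, 1.0, 0.0],
        [ 0.0, 1.0, 0.0, 0.0],
        [-1.0, 0.0, 0.0, 0.0],
        [ 0.0, 0.0, 0.0, 1.0]]

p = [1.0, 0.0, -2.0, 1.0]  # some world-space point

# T * V: offset lands in camera-space x, as needed for stereo
cam_space = mat_apply(mat_mul(translate_x(half_ipd), view), p)

# V * T: offset is rotated away by the view matrix; camera x unchanged
world_space = mat_apply(mat_mul(view, translate_x(half_ipd)), p)
```

With T * V the shift shows up in the camera's x (what stereo rendering needs); with V * T it leaks into whatever world axis the translation pointed along.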

IGameArt
Protege
Okay, that makes sense. I've been having a hell of a time getting my images to converge, because I've only been doing the first part. Here is my updated shader, which implements a left-eye transformation matrix (it handles only the first translation):

// GLSL's mat4 constructor is column-major, so the x translation
// goes in the last column, not the first row:
mat4 leftMat = mat4(1.0, 0.0, 0.0, 0.0,
                    0.0, 1.0, 0.0, 0.0,
                    0.0, 0.0, 1.0, 0.0,
                    1.0, 0.0, 0.0, 1.0); // column 3 = (tx, 0, 0, 1)

void main()
{
    vec4 object_space_pos = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
    // Apply the eye offset in camera space, i.e. after the view
    // transform, then project: P * T_eye * V * W
    mat4 view_Off = leftMat * gm_Matrices[MATRIX_VIEW];
    gl_Position = gm_Matrices[MATRIX_PROJECTION] * view_Off
                * gm_Matrices[MATRIX_WORLD] * object_space_pos;

    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}


In my leftMat matrix, the view offset is temporarily set to 1 for testing purposes; it will be variable in the future. This actually works to form a 3D image, but it doesn't converge. What is the second transformation that needs to occur to get my images to converge? One thing to keep in mind is that I'm not working in C++, so some of my variables are a little different; as you can see, my matrices are MATRIX_VIEW, MATRIX_WORLD, and MATRIX_PROJECTION.

jherico
Adventurer
The code that OVR uses to calculate the projection offset is here.


float viewCenter = HMD.HScreenSize * 0.25f;
float eyeProjectionShift = viewCenter - HMD.LensSeparationDistance*0.5f;
ProjectionCenterOffset = 4.0f * eyeProjectionShift / HMD.HScreenSize;


The HMD object contains information returned by the Rift about its physical characteristics. Of course, you can boil this down to


ProjectionCenterOffset = 1.0f - 2.0f * HMD.LensSeparationDistance / HMD.HScreenSize;


The value is the left or right shift of the projection matrix, depending on which eye you're rendering.
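The three-line version and the simplified form are the same arithmetic: expanding 4 * (H/4 - L/2) / H term by term gives 1 - 2*L/H. A quick numeric check (the screen and lens measurements here are made-up, roughly DK1-like values, not taken from the SDK):

```python
# Check the projection-offset arithmetic with made-up, DK1-like numbers.
h_screen_size = 0.14976    # horizontal screen size in metres (assumed)
lens_separation = 0.0635   # lens separation distance in metres (assumed)

# The three-line version:
view_center = h_screen_size * 0.25
eye_projection_shift = view_center - lens_separation * 0.5
projection_center_offset = 4.0 * eye_projection_shift / h_screen_size

# Expanding 4 * (H/4 - L/2) / H gives 1 - 2*L/H:
simplified = 1.0 - 2.0 * lens_separation / h_screen_size
```

Both expressions produce the same offset, a small positive fraction of the half-screen width, negated for one of the two eyes.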