Converting Coordinate Systems and Shader help.

KennyRules Posts: 3
edited May 2013 in Support
Hello! I'm part of a student team trying to integrate the Oculus Rift into our DX11 rendering engine.
Unfortunately, we use a left-handed coordinate system and multiply our matrices in the opposite order from what the samples do.

I'm also a bit confused by the wording in the documentation about the exact order of operations for render textures and viewport setup. To keep things simple, I'm working on getting just the left eye to render correctly. I'm using the provided utility classes, and they seem to be returning sensible "dummy" data, but they use an RH coordinate system while our entire engine uses LH. At this point it would be easier to switch the Oculus code to LH than to rewrite everything on our side. I should also mention that I can't seem to edit the stereo utility's .cpp file (Util_Render_Stereo.cpp) and make the switch there; it might have to do with how it was included in the project, but I'm not sure.

So, first, the shader. I pulled it out of the .cpp file and have the following vertex shader:
float4x4 gOcView;       // maps the distortion quad's [0,1] coordinates to NDC
float4x4 gTexTransform; // maps the quad's UVs into this eye's region of the render texture

VertexOut OculusVS(VertexIn vin)
{
	VertexOut vout;
	vout.PosH = mul(float4(vin.PosL, 1.0f), gOcView);
	vout.PosW = mul(float4(vin.PosL, 1.0f), gOcView); // not used by the distortion pixel shader below
	vout.Tex = mul(float4(vin.Tex, 0, 1), gTexTransform).xy;
	return vout;
}
and pixel:
Texture2D gDiffuseMap; // the per-eye scene render texture sampled by the distortion pass

// Distortion constants, set per eye from the SDK's DistortionConfig:
float2 LensCenter;
float2 ScreenCenter;
float2 Scale;
float2 ScaleIn;
float4 HmdWarpParam;
float4 ChromAbParam;

float4 OculusPS(VertexOut pin) : SV_Target
{
	float2 theta = (pin.Tex - LensCenter) * ScaleIn; // Scales to [-1, 1]
	float rSq = theta.x * theta.x + theta.y * theta.y;
	float2 theta1 = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq + HmdWarpParam.z * rSq * rSq + HmdWarpParam.w * rSq * rSq * rSq);

	// Detect whether the blue texture coordinates are out of range,
	// since these will be scaled out the furthest.
	float2 thetaBlue = theta1 * (ChromAbParam.z + ChromAbParam.w * rSq);
	float2 tcBlue = LensCenter + Scale * thetaBlue;
	if (any(clamp(tcBlue, ScreenCenter - float2(0.25, 0.5), ScreenCenter + float2(0.25, 0.5)) - tcBlue))
		return 0;

	// Now do blue texture lookup.
	float blue = gDiffuseMap.Sample(Linear, tcBlue).b;

	// Do green lookup (no scaling).
	float2 tcGreen = LensCenter + Scale * theta1;
	float green = gDiffuseMap.Sample(Linear, tcGreen).g;
	// Do red scale and lookup.
	float2 thetaRed = theta1 * (ChromAbParam.x + ChromAbParam.y * rSq);
	float2 tcRed = LensCenter + Scale * thetaRed;
	float red = gDiffuseMap.Sample(Linear, tcRed).r;

	return float4(red, green, blue, 1);
}
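For reference, this is roughly how the sample computes those six constants from the eye's DistortionConfig (paraphrased from the OculusRoomTiny post-process code, so treat it as a sketch rather than verbatim; 'distortion', 'VP', 'rtWidth', and 'rtHeight' stand for the eye's DistortionConfig, its viewport, and the render-target size):

// Eye viewport as a fraction of the render target, plus its aspect ratio.
float w  = float(VP.w) / float(rtWidth);
float h  = float(VP.h) / float(rtHeight);
float x  = float(VP.x) / float(rtWidth);
float y  = float(VP.y) / float(rtHeight);
float as = float(VP.w) / float(VP.h);
float scaleFactor = 1.0f / distortion.Scale;

// The six shader constants (XMFLOAT2/XMFLOAT4 just stand in for however you
// actually push constants to the pixel shader).
XMFLOAT2 lensCenter(x + (w + distortion.XCenterOffset * 0.5f) * 0.5f, y + h * 0.5f);
XMFLOAT2 screenCenter(x + w * 0.5f, y + h * 0.5f);
XMFLOAT2 scale((w * 0.5f) * scaleFactor, (h * 0.5f) * scaleFactor * as);
XMFLOAT2 scaleIn(2.0f / w, (2.0f / h) / as);
XMFLOAT4 hmdWarpParam(distortion.K[0], distortion.K[1], distortion.K[2], distortion.K[3]);
XMFLOAT4 chromAbParam(distortion.ChromaticAberration[0], distortion.ChromaticAberration[1],
                      distortion.ChromaticAberration[2], distortion.ChromaticAberration[3]);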

Regarding the Linear sampler, I'm not sure what the 'register' line in the sample is supposed to do; after some digging I set it up like this:
SamplerState Linear
{
	Filter = MIN_MAG_MIP_LINEAR;
	AddressU = BORDER;
	AddressV = BORDER;
	AddressW = BORDER;
};
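From what I can tell, register(s0) in the sample just pins the sampler to slot 0. If you skip the effects framework, a rough D3D11-side equivalent of the sampler above would be something like this ('device' and 'context' being whatever your engine already holds):

// Sketch: create a linear sampler with border addressing and bind it to slot 0,
// which is what register(s0) refers to in the sample shader.
D3D11_SAMPLER_DESC sd = {};
sd.Filter         = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU       = D3D11_TEXTURE_ADDRESS_BORDER;
sd.AddressV       = D3D11_TEXTURE_ADDRESS_BORDER;
sd.AddressW       = D3D11_TEXTURE_ADDRESS_BORDER;   // BorderColor stays {0,0,0,0}: black outside the lens area
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD         = D3D11_FLOAT32_MAX;
ID3D11SamplerState* linearSampler = nullptr;
device->CreateSamplerState(&sd, &linearSampler);
context->PSSetSamplers(0, 1, &linearSampler);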

Now, in the rendering code, is this the correct pseudocode? (A rough sketch of steps 2-4 follows the list.)

1. Set the six distortion variables for the shader from the eye parameters (this part should be working; see the sketch after the pixel shader above).
2. Set the view matrix like so:
   view = (XMMatrixMultiply(centerCamera.view, leftEye.viewAdjust));
3. Set the projection matrix to the left eye's projection matrix, which apparently is wrong because of the coordinate system. Ultimately the transformation in our system is World * View * Proj, so this becomes World * (view * viewAdjust) * Proj, yes?
4. Take the left viewport, scale it by the distortion scale, and set the viewport to this new one.
5. Render the entire scene, using the scaled left viewport, to a texture that is the same size as the final viewport.
6. Take this texture with the scene rendered from the left eye and render it to a texture that is half the width of the final viewport.
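Here is a rough sketch of what I mean by steps 2-4, ignoring the RH/LH projection problem for now, and assuming 'leftEye' is the OVR::Util::Render::StereoEyeParams for StereoEye_Left, 'stereoConfig' is the StereoConfig, and 'centerCamera'/'context' come from our engine:

// Sketch only: bring the OVR matrices into our row-major XMMath convention and
// apply the distortion-scaled viewport before drawing the scene for this eye.
XMMATRIX ToXM(const OVR::Matrix4f& m)
{
    // OVR stores row-major data written for column-vector math, so transpose it
    // for a v * M (row-vector) pipeline like ours.
    return XMMatrixTranspose(XMMATRIX(&m.M[0][0]));
}

// Step 2: per-eye view = center camera view, then the eye's interpupillary offset.
XMMATRIX view = XMMatrixMultiply(centerCamera.view, ToXM(leftEye.ViewAdjust));

// Step 3: per-eye projection (it already contains the horizontal center offset).
XMMATRIX proj = ToXM(leftEye.Projection);

// Step 4: scale the eye viewport by the distortion scale before drawing the scene.
float distScale = stereoConfig.GetDistortionScale();
D3D11_VIEWPORT vp = {};
vp.TopLeftX = leftEye.VP.x * distScale;
vp.TopLeftY = leftEye.VP.y * distScale;
vp.Width    = leftEye.VP.w * distScale;
vp.Height   = leftEye.VP.h * distScale;
vp.MaxDepth = 1.0f;
context->RSSetViewports(1, &vp);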

The 'ocView' matrix:
Matrix4f ocView(2.0f, 0.0f, 0.0f, -1.0f,
                0.0f, 2.0f, 0.0f, -1.0f,
                0.0f, 0.0f, 0.0f,  0.0f,
                0.0f, 0.0f, 0.0f,  1.0f);

The 'texTransform' matrix:
Matrix4f texmLeft(w,    0.0f, 0.0f, x,
                  0.0f, h,    0.0f, y,
                  0.0f, 0.0f, 0.0f, 0.0f,
                  0.0f, 0.0f, 0.0f, 1.0f);


The sample transposes these behind the scenes, but I do not, because I multiply in the reverse order. Is that right?
I can get really close, but usually only one corner has the correct curve while the rest spills past the final viewport.

The SDK makes it sound like I want to render to a texture that is the size of the modified (distortion-scaled) viewport, but then it seems to say that isn't necessary thanks to the shaders?

Thanks for reading! I hope someone can point me in the right direction. Unfortunately, due to time constraints, I'm okay with quick-and-dirty solutions just to get it working; I can optimize later.

Comments

  • tlopes Posts: 163
    Hi,

    I also run my engine using a LH Y-up coordinate system (with LH view and LH projection matrices). The Rift SDK likes to use RH (and also Y-up) view and projection matrices for everything, but you can convert from one to the other fairly easily. Here are all of the changes that I've made to my code and to the Rift SDK to enable everything to work:

    In file LibOVR\Src\Util\Util_Render_Stereo.cpp
    In the function void StereoConfig::updateEyeParams()
    Change this line from:
    Matrix4f projCenter = Matrix4f::PerspectiveRH(YFov, Aspect, 0.01f, 1000.0f);
    
    to:
    Matrix4f projCenter = Matrix4f::PerspectiveLH(YFov, Aspect, 0.01f, 1000.0f); // RH -> LH
    

    Note that the ProjectionCenterOffset calculation should stay the same as it was. Same goes for the projLeft and projRight calculations.
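    For context, the surrounding code in updateEyeParams() is roughly the following (paraphrased from memory of the 0.2 SDK, not verbatim), so only the perspective call changes:

    Matrix4f projCenter = Matrix4f::PerspectiveLH(YFov, Aspect, 0.01f, 1000.0f); // the only edited line
    Matrix4f projLeft   = Matrix4f::Translation( ProjectionCenterOffset, 0, 0) * projCenter;
    Matrix4f projRight  = Matrix4f::Translation(-ProjectionCenterOffset, 0, 0) * projCenter;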

    When converting between my engine's matrices (which are all row-major because it uses D3D9 and the D3DX helper library) and the OVR SDK's matrices, you need to perform a transpose(). After that, they're so similar that you can (and I do) do a memcpy() to get matrix data back and forth.
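    As a concrete sketch of that conversion (D3DXMATRIX stands in for whatever row-major 4x4 type your engine uses; I'm folding the transpose into the copy rather than calling a separate transpose):

    D3DXMATRIX ToEngine(const OVR::Matrix4f& m)
    {
        D3DXMATRIX out;
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out(r, c) = m.M[c][r]; // transpose while copying
        return out;
    }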

    The next thing that I had to change to get the graphics code up and running was to multiply the viewAdjust matrices by my worldscalar (because in my game's world, 1 game unit does not equal 1 meter). If you do all of this then your stereoscopic rendering should work fine.
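    The world-scale change is just scaling the translation baked into ViewAdjust before you use it, something like this sketch ('worldScalar' being your game-units-per-meter value and 'eyeParams' the eye's StereoEyeParams):

    // ViewAdjust is essentially a +/- half-IPD translation along X, in meters,
    // so scale its translation terms into game units.
    OVR::Matrix4f viewAdjust = eyeParams.ViewAdjust;
    viewAdjust.M[0][3] *= worldScalar;
    viewAdjust.M[1][3] *= worldScalar;
    viewAdjust.M[2][3] *= worldScalar;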

    The next thing that needs to be changed to support LH coordinate systems is in the head-tracking code. This is the function that they call in the OculusRoomTiny demo:
    hmdOrient.GetEulerAngles<Axis_Y, Axis_X, Axis_Z>(&yaw, &EyePitch, &EyeRoll);
    
    I instead use this to get the proper left-handed, Y-up yaw, pitch, and roll:
    hmdOrient.GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z, OVR::Rotate_CCW, OVR::Handed_L>(&hmdOnlyYaw, &hmdPitch, &hmdRoll);
    hmdPitch *= -1.0f; // RH to LH conversion requires negating one of the axes (in this case, the X axis)
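    In your XMMath terms, feeding those into a row-major camera looks roughly like this sketch ('cameraYaw' and 'cameraPos' stand in for your own camera state):

    // Build the head rotation in the LH convention and fold it into the view matrix.
    XMMATRIX headRot = XMMatrixRotationRollPitchYaw(hmdPitch, cameraYaw + hmdOnlyYaw, hmdRoll);
    XMMATRIX world   = headRot * XMMatrixTranslationFromVector(cameraPos);
    XMMATRIX view    = XMMatrixInverse(nullptr, world);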
    
    I hope all of this helps you out. I also hope a few things get changed in future SDK versions: namely, making LH programs work with less SDK modification, and letting us pass in our own near-clip and far-clip values instead of having them hardcoded in the SDK!
  • KennyRules Posts: 3
    Oh, awesome! That worked for me; it's now mostly showing up, compared to nothing before.

    I did also have to render to a larger texture for each eye, but that was covered in the SDK; I misread it and thought the shader variables magicked the need for it away.

    Thanks again!
  • tlopes Posts: 163
    KennyRules wrote:
    Oh, awesome! That worked for me; it's now mostly showing up, compared to nothing before.

    I did also have to render to a larger texture for each eye, but that was covered in the SDK; I misread it and thought the shader variables magicked the need for it away.

    Thanks again!
    No problem. Now have fun getting it from "mostly showing up" to "man that looks great"! :P