"mrboggieman" wrote:
Many thanks for the reply, but could you expand on baking in the eye offset, technically speaking?
If you use two cameras, one for each eye, how do you keep the separation distance between the eyes consistent whilst looking around? I assume you attach them at a fixed distance and then pivot them around the midpoint, but doesn't that only keep the correct separation for the middle pixel of the resulting image, i.e. for the direction you are facing (sketched below)?
I have thought about ray tracing this way using sphere maps instead of cube maps, but there would be too much distortion.
There are some 360 stereo demos in this thread: viewtopic.php?f=28&t=5285, but they don't feel right when I view them in the Oculus. I don't know if that is because there is a time delay between the eyes, so the resulting imagery differs and makes you feel sick, or whether it is the varying separation distance as you look around.
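To make the separation falloff in the quoted question concrete, here is a minimal sketch (plain Python, with an assumed 64 mm interpupillary distance; not anyone's actual engine code). It pivots an eye pair around a central head point and measures the baseline component perpendicular to a view ray at various angles away from the facing direction; the full separation only holds straight ahead and shrinks to zero at 90 degrees:

```python
import math

IPD = 0.064  # assumed interpupillary distance in metres (64 mm)

def eye_positions(yaw):
    """Left/right eye positions when the pair pivots around the head centre."""
    dx, dz = math.cos(yaw), math.sin(yaw)   # facing direction in the XZ plane
    rx, rz = dz, -dx                        # right vector, perpendicular to facing
    left  = (-rx * IPD / 2, -rz * IPD / 2)
    right = ( rx * IPD / 2,  rz * IPD / 2)
    return left, right

def effective_baseline(yaw, view_angle):
    """Baseline component perpendicular to a view ray offset from the facing direction."""
    (lx, lz), (rx_, rz_) = eye_positions(yaw)
    bx, bz = rx_ - lx, rz_ - lz             # full eye-to-eye baseline vector
    vx, vz = math.cos(yaw + view_angle), math.sin(yaw + view_angle)
    # Only the baseline component perpendicular to the view ray produces stereo disparity.
    return abs(bx * -vz + bz * vx)

for deg in (0, 30, 60, 90):
    b = effective_baseline(0.0, math.radians(deg))
    print(f"{deg:3d} deg off-centre: effective separation {b * 1000:.1f} mm")
```

Running this prints 64 mm at the centre, about 55 mm at 30 degrees, 32 mm at 60 degrees, and 0 mm at 90 degrees, which is exactly the "only correct for the middle pixel" behaviour described above.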
So the equidistant concept is bound to produce stitching issues. That parallax problem with stereo rigs is why stereo panos shot with twin rigs require lots of shots from each camera: more shots mean smaller errors between adjacent pairs.
An alternative arrangement is to have one camera rotating on its no-parallax point (NPP), while the other camera rotates offset from the axis, with a greater parallax error than in the equidistant setup. The first camera can then stitch perfectly, and the second camera needs more shots (maybe twice as many) in its sequence to reach the same stitching quality. Here is an example of this kind of rig.
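As a rough sketch of how a shot list for that kind of rig might be laid out (hypothetical shot counts and an assumed 64 mm offset, not the rig pictured), one camera stays on the NPP while the offset camera takes roughly twice as many, finer-spaced shots so its adjacent-frame parallax stays manageable:

```python
import math

NPP_SHOTS    = 8        # assumed shot count for the on-axis (NPP) camera
OFFSET_SHOTS = 16       # offset camera shoots roughly twice as often
OFFSET_R     = 0.064    # assumed camera offset from the rotation axis, in metres

def shot_list(count, radius):
    """Yaw angle and camera position for each shot around the rotation axis."""
    shots = []
    for i in range(count):
        yaw = 2 * math.pi * i / count
        # The camera sits `radius` metres from the axis at this yaw.
        x, z = radius * math.cos(yaw), radius * math.sin(yaw)
        shots.append((math.degrees(yaw), x, z))
    return shots

npp_cam    = shot_list(NPP_SHOTS, 0.0)        # zero offset: every shot shares one nodal point
offset_cam = shot_list(OFFSET_SHOTS, OFFSET_R)

# Adjacent-shot parallax scales with how far the camera moves between frames
# (the chord length between neighbouring positions on the offset circle).
step = 2 * OFFSET_R * math.sin(math.pi / OFFSET_SHOTS)
print(f"NPP camera moves 0 mm between shots; offset camera moves {step * 1000:.1f} mm")
```

The NPP camera's shots all share one nodal point, so they stitch cleanly with few frames, while the offset camera's stitching error is governed by that chord length, which is why doubling its shot count is the lever for matching the equidistant rig's quality.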