Tom Heath - Development Engineer

Level 4
– I’m Tom Heath, I’ve been with Oculus since 2013, and am very enthusiastic about VR! Those early days saw a lot of developing and pioneering for VR, and I still continue with that to this day, especially exploring locomotion in VR. My role at Oculus includes a lot of time working with developers worldwide to make the best VR products possible – it’s great to work with the array of talent that exists. My background is in the games industry, where I spent 20 years in a variety of senior management, production and programming roles, and I’m also interested in gaming educational products. Really excited to be answering questions and chatting with you today!

Level 4
Hi Tom,

First of all, thank you for doing this AMA!

My question is regarding a different kind of locomotion than walking. I work with driving simulators and have been using the Oculus since DK1. Many larger driving simulators have access to motion systems, ranging from small 3-DOF actuator systems up to large 6-DOF Stewart platforms. In recent years, smaller motion systems targeting the enthusiast consumer market have been released as well, some of them with quite impressive performance.

The problem with these systems when combined with the Oculus Rift is that the inertial forces generated by the motion system confuse the inertial sensors inside the Rift. This was no problem with the DK1, since it lacked the tracking camera: as a developer you could just subtract the external accelerations from the resulting poses (as suggested by [Foxlin]). But since the introduction of the tracking camera with DK2, this no longer works, probably due to the mismatch between the data from the internal sensors and the tracking camera. If you apply a rotational acceleration to the external motion system while holding your head stationary relative to the camera, the resulting image is very jerky: the Rift first believes it is being rotated, but then the tracking camera picks up the HMD pose and applies a quite hard correction. I have tried to plot the resulting orientation pose from the SDK, and what you get in this case is a sawtooth plot, instead of the correct orientation pose, which would be a fixed value.

Would it be possible to add a function to the SDK so developers like me can supply the SDK with the inertial forces from the external motion system, so that these can be subtracted from the internal forces detected by the Rift?
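To illustrate the sawtooth behaviour and the proposed fix, here is a minimal one-axis sketch of Foxlin-style differential inertial compensation. This is not Oculus SDK code; all names, rates, and update periods are illustrative assumptions. It simulates a platform yawing at a constant rate while the head stays fixed relative to a platform-mounted camera: naive gyro integration drifts between camera updates and gets yanked back (the sawtooth), while subtracting a platform-mounted reference IMU's rate keeps the pose stable.

```python
# Sketch only (hypothetical, not SDK API): 1-DOF yaw simulation of
# differential inertial compensation per Foxlin (2000).

DT = 0.001            # IMU sample period, s (assumed 1 kHz)
CAMERA_PERIOD = 60    # camera pose update every 60 IMU samples (~16 ms)
PLATFORM_RATE = 0.5   # platform yaw rate, rad/s (assumed)

def simulate(compensate):
    """Integrate the HMD gyro; periodically snap to the camera pose.

    The HMD gyro senses the platform's rotation even though the head is
    still relative to the camera. Without compensation the estimate
    drifts between camera updates and is hard-corrected -> sawtooth.
    With compensation, the platform reference IMU's rate is subtracted
    before integration, so the estimate stays at the true value (0).
    """
    yaw = 0.0
    trace = []
    for i in range(1, 1001):
        gyro = PLATFORM_RATE          # what the HMD gyro reports
        if compensate:
            gyro -= PLATFORM_RATE     # subtract platform reference IMU
        yaw += gyro * DT
        if i % CAMERA_PERIOD == 0:
            yaw = 0.0                 # camera sees the head unmoved
        trace.append(yaw)
    return trace

naive = simulate(compensate=False)   # sawtooth: drifts, then snaps back
fixed = simulate(compensate=True)    # flat: stays at the true pose
```

Plotting `naive` reproduces the sawtooth described above (amplitude roughly platform rate times camera period), while `fixed` is a constant zero, which is what an SDK hook for external inertial data would enable.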

Eric Foxlin. Head tracking relative to a moving vehicle or simulator platform using differential inertial sensors.
Proceedings of SPIE: Helmet and Head-Mounted Displays V, 4021:133–144, 2000. doi: 10.1117/12.389141

My previous forum thread about this problem from back in February 2015:

Level 3
Hi Tom, as a locomotion expert, I'd love your thoughts on the SprintR VR foot controller (https:// ), also seen in this video:

Level 2
Hi Tom. I'm using Unity to create a 3D 360 video experience for the Oculus Rift. When I add objects to a scene, they appear out of focus (like double vision). Do you know if there is a way to view stereoscopic video and 3D objects at the same time without the double vision? Thanks for any help you can provide.

Not applicable
Hi Tom! I'm not exactly sure if this is the correct place to post such a question, but since Oculus staff's answers have been a little lacking, maybe you could poke someone over there to give some answers on the Gear VR input issues?

Some of us are waiting for a fix on this so we can release updates to our live apps.

Some resources:

Thank you!

Level 4
Have you heard any news on Oculus Go?

And what is the software on the Go?

Level 3
Hi Tom, thanks for doing this AMA!

Since you've been doing this for nearly 5 years at this point, you've had the opportunity to see a lot of evolution in the VR space. Are there any early bits of "VR Wisdom" that are turning out to no longer be true? 

Level 5

Hi Tom!

With some pretty good advancements lately in markerless/video-based human pose replication, is that something Oculus might add, say for a Rift CV2?
Even just as an option for social avatar representations, since it might not be low-latency or accurate enough for gameplay for a while yet?

EDIT: I'm thinking more about the legs and elbows and other areas not tracked more directly already.

Cheers, Fredik

Level 2
Will Oculus ever adopt a mixed-reality form factor like HoloLens? I mean a device which can become transparent at one time and opaque at another.

Level 4
Hi – I’m really excited by and admiring of all new innovation, including such foot controllers. I have tried this one, and another solution in a similar vein. I find them very interesting, and there is definitely some value in such a control method. It’s my opinion that it doesn’t completely eliminate the undesirable effects of more extreme motion, but the enhanced sense of control given to the user definitely helps comfort.