
How to implement Room scale with locomotion

Welby Posts: 1,063 Oculus Start Member
Hi,

I'm trying to implement artificial locomotion: you can simply move around with the thumbstick of the Oculus Touch. However, if you want to move around your room-scale play space, you can do that as well.

Here's the problem: I'm using the standard OVRPlayerController to do this, and everything works fine, but when I move around in room scale the collider stays back in its original position (of course, because only the camera is moving, not the collider itself).

So I'm wondering, what's the quickest and best way to accomplish this? Should I completely change the way I move my character so that its collider moves with it?

The problem is, if I move the collider with the camera while using the current system, everything goes nuts, because OVRPlayerController uses a CharacterController, which is physics-driven, so if I force the collider to move, the player easily ends up bouncing around.

Instead, I was thinking of something like a raycast to make sure the player is always at the right height, and translating him directly instead of relying on physics.
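Roughly what I had in mind (an untested sketch; the OVRInput call is from the Oculus Integration, everything else, like the rig/camera references, moveSpeed and groundMask, is just a placeholder):

    using UnityEngine;

    // Sketch: translate the rig directly for thumbstick locomotion and use a
    // downward raycast to keep it at floor height, instead of relying on
    // CharacterController physics.
    public class RaycastLocomotion : MonoBehaviour
    {
        public Transform cameraRig;   // root of the tracking space (e.g. the OVRCameraRig)
        public Transform centerEye;   // the HMD camera, used for the move direction
        public float moveSpeed = 2.0f;
        public LayerMask groundMask;

        void Update()
        {
            // Thumbstick input, projected onto the horizontal plane of the head
            Vector2 stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);
            Vector3 forward = Vector3.ProjectOnPlane(centerEye.forward, Vector3.up).normalized;
            Vector3 right = Vector3.ProjectOnPlane(centerEye.right, Vector3.up).normalized;
            cameraRig.position += (forward * stick.y + right * stick.x) * moveSpeed * Time.deltaTime;

            // Keep the rig glued to the ground: cast down from above the head
            Vector3 origin = centerEye.position + Vector3.up * 0.5f;
            RaycastHit hit;
            if (Physics.Raycast(origin, Vector3.down, out hit, 10f, groundMask))
            {
                Vector3 p = cameraRig.position;
                p.y = hit.point.y;    // rig origin sits on the floor
                cameraRig.position = p;
            }
        }
    }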

Suggestions?

TL;DR: I need a system that gives me both artificial locomotion and room scale in VR while still being able to detect triggers and other stuff.

Comments

  • Slayemin Posts: 37 Oculus Start Member
    The other big problem you'll run into is that players can walk through solid objects using room-scale locomotion. The VR environment needs to handle collisions. I came up with a solution where, each frame, I measure the delta movement vector of the player in their room-scale environment and apply that as an input request to the player character. The character attempts to satisfy that request by moving to that location. If there isn't a collision, the request is satisfied and the player camera follows the HMD location. If there is a collision, the request is denied and the camera and character don't move forward. In effect, you can be walking forward in your room while your VR POV is blocked by colliding geometry. (There's a rough sketch of this at the end of the post.)
    "But wait!" many of you will shout in protest, "That's terrible! The disparity between physical action and visual feedback is going to cause motion sickness in the users! You should be ashamed!"
    You're so wrong. That is a wild, untested assumption. The truth is that it doesn't happen very often, and when it does, it only serves to reinforce the experienced reality of virtual reality by validating expectations of environment solidity. Once users are convinced that things are solid and impassable, they won't keep testing it. And even if it does create some motion sickness: is it better to block users at a wall and cause a little motion sickness, or to let them clip through the wall, break immersion, fall to their deaths and experience even more motion sickness? Clearly, a little bit of motion sickness is far better than a lot of motion sickness and a broken game design.

    Anyways, I've got this working in my game "Spellbound" if you want to try it out for yourself.
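    In Unity terms, the idea looks roughly like this (an untested sketch, not the actual implementation; it assumes the tracking space that carries the camera is a child of the CharacterController object, as with the standard OVRPlayerController, and the field names are just placeholders):

        using UnityEngine;

        // Sketch of the per-frame delta idea: the HMD's movement inside the playspace
        // is fed to the CharacterController as a move request. The tracking space is
        // pre-compensated so that, if the request is blocked by geometry, the camera
        // does not advance even though the player keeps walking in their room.
        [RequireComponent(typeof(CharacterController))]
        public class RoomScaleCollision : MonoBehaviour
        {
            public Transform trackingSpace;  // child of this object that carries the camera
            public Transform centerEye;      // the HMD camera

            CharacterController controller;
            Vector3 lastHeadPosition;

            void Start()
            {
                controller = GetComponent<CharacterController>();
                lastHeadPosition = centerEye.position;
            }

            void Update()
            {
                // How far the head moved in the world since last frame (room-scale walking)
                Vector3 delta = centerEye.position - lastHeadPosition;
                delta.y = 0f;

                // Pull the tracking space back by the full request first; since it is a
                // child of this object, Move() below carries it forward again by exactly
                // the amount of movement that is actually granted.
                trackingSpace.position -= delta;

                // Ask the CharacterController to satisfy the request; it stops at walls,
                // so the camera only advances as far as the capsule does.
                controller.Move(delta);

                lastHeadPosition = centerEye.position;
            }
        }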