
Use GearVrController with Selection Ray - Unity

davidbeloosesky
Explorer
Hi, I figured out how to add a GearVrController to a scene.
(Adding an OVRCameraRig, and adding a GearVRController under the RightHandAnchor.)
The controller follows the hand movements. 

But how can I add selection options + a ray from the device to the scene?
I mean a ray that can select items, like in the Oculus Store below:

[Screenshot: selection ray pointing at an item in the Oculus Store]





oculus_gabor
Protege
I see, that code is for using the Gear VR touchpad, or Gear VR Controller touchpad to scroll a scrollable area, like a text field with long text. It's not something you can hook into.

myBadStudios
Protege
@oculus_gabor
First off, just wanna say thank you again for taking the time to make this thing work. The moment I put the Rift on my head and I saw that orientation demo I said I am not making anything that is not made for the Rift! I was about 90% done with a (Darn it, I can never remember that acronym Universal Windows SomethingOrOther) game and it was made to be mobile and Mac Store compliant also and I just scrapped it and am now down to about 50% complete on the VR version thanks to all the extra stuff that I now wanna add since the static camera is becoming a VR experience :smiley:

I am THAT committed to making Rift games now that I am prepared to sacrifice any and all other platforms (maybe make an exception for the GearVR)... but it is super disheartening to discover that something this basic is so hard to pull off so yeah, truly appreciate the assistance.

Now the naysayer in me has to point out: What happens when the next version of the SDK is released? Will all your effort be for nothing again like the other demos and samples out there? 😮 All I can say is I hope my game is done by then :worried:

Now, back on topic. I used your sample scene and just dragged my prefab in there so if and when the time comes for you to figure out "So why the heck is this not working for him???" then I can always send you back your project with my prefab in it or I could just send you the prefab and you can add it in yourself. Either way, you getting your hands on that sample project will be super easy as it will be a small upload.

To answer your questions: Yes, it is a world space canvas and yes, I use the centre eye as the event camera. I thought that maybe the fact that I scaled my canvas down to 0.01 might be the issue but I see you did the same with yours. Seems we are both aware of Unity's sorely lacking support for showing text in world space without scaling down a huge canvas... So with that idea proving not to be the issue I was again stumped. 

I thought I saw, when looking at the code, that you build up a list of items the ray intersects with and then calculate from there if this is something to be concerned with or not. I was wondering if that list doesn't get updated in time or at all and retains a pointer to an old selected object or something and thus it sends the input to the wrong object. I was going to go look into that as I could not think of what else it could possibly be.

You saying the cast ray and the visible ray might not match up... interesting... I wonder what might be causing that... Before I found your samples (I actually did this before even looking for any demos/samples at all) I created the raycast from the hand in its forward direction. In the first version of your script I saw, you made provision for using an existing line renderer or creating one if not present. I removed that code and just made the script always create the line renderer. I am a big fan of that option... It also means you know for sure what the start and end point of your ray and raycast are. I didn't do it with your latest code, though, 'cause I wanted to see it working before I started tinkering with it. 
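
For what it's worth, here is roughly what my version looked like (a from-memory sketch, not my actual code; the names are mine, not from your sample):

using UnityEngine;

// Always creates the LineRenderer and draws from the hand anchor along its forward direction.
public class ControllerRay : MonoBehaviour
{
    public Transform handAnchor;   // e.g. the RightHandAnchor on the OVRCameraRig
    public float maxLength = 10f;

    private LineRenderer lineRenderer;

    void Awake()
    {
        // No "use an existing line renderer if present" branch - just create one every time.
        lineRenderer = gameObject.AddComponent<LineRenderer>();
        lineRenderer.positionCount = 2;
        lineRenderer.startWidth = 0.01f;
        lineRenderer.endWidth = 0.01f;
    }

    void Update()
    {
        // The visible ray and any raycast share the same start point and direction.
        Vector3 start = handAnchor.position;
        lineRenderer.SetPosition(0, start);
        lineRenderer.SetPosition(1, start + handAnchor.forward * maxLength);
    }
}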

Your code is clean, minimal, and easy to follow and understand even for someone completely new to your API, so I look forward to your next update and hope it will finally put this issue to bed. I'm looking forward to seeing the code even without the documentation that goes with it; your code is easy enough to follow without it. 

...but if I might make one little suggestion (and perhaps I should do this also just to prove your theory about the mismatched ray): could you modify that raycast code so that when it points to something the ray ends where it is pointing, instead of pointing miles out into the distance? That single change would make it infinitely clearer what is being pointed at, wouldn't you agree?
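
Something like this is what I have in mind, reusing the handAnchor / lineRenderer / maxLength names from my sketch above (rough and untested):

// Clamp the visible ray to the hit point when the raycast hits something,
// otherwise fall back to the fixed length.
RaycastHit hit;
Vector3 start = handAnchor.position;
Vector3 end = start + handAnchor.forward * maxLength;
if (Physics.Raycast(start, handAnchor.forward, out hit, maxLength))
{
    end = hit.point;   // the ray now ends on whatever is being pointed at
}
lineRenderer.SetPosition(0, start);
lineRenderer.SetPosition(1, end);

(Physics.Raycast only covers 3D colliders, of course; the UI side would still need the graphic raycasting to behave.)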

Again, thanks a lot for your efforts

myBadStudios
Protege
PS: I did notice this:
void RayHitSomething(Vector3 hitPosition, Vector3 hitNormal) {
    if (lineRenderer != null) {
        lineRenderer.SetPosition(1, hitPosition);
    }
}

...but although it's not clearly visible in the video, when the headset is on you can clearly see the ray going straight through and pointing to the middle of nowhere. It seems you THOUGHT you were doing just that, but for whatever reason it just ain't playing ball...

Well, not with UI elements at least. I noticed the callbacks being called and the text getting updated when the ray intersects 3D objects, but nothing happens with UI objects. Also, just in case you missed it, the camera rig has its own canvas at exactly the same position as the camera.

myBadStudios
Protege
Good news: I just got my first successful demo out of the way. Everything works fine now! 😄 Still want to run a few more tests to verify but so far it all works just dandy 😄 

Bad news: I got no idea what I did to make it work. Sigh 😞  

Basically, I noticed there is this extra canvas on the rig so I took that out. Then I noticed that I basically duplicated the scripts that were on your canvas on mine so I started removing all the duplicates. I.e. you had a canvas and I had a canvas and since my object was the only one I wanted to interact with I removed your canvas. I just got rid of all duplicated stuff like that.

Next I removed your code to interact with world objects and left it as GUI-interactable only. Finally (and I think this might have been the big one), my canvas had a GraphicRaycaster on it where you used the OVRRaycaster. I had actually moved that from the camera rig to my canvas, and that was when I noticed my scene had both active at the same time. So I disabled the GraphicRaycaster, hit play and went looking for trouble... but I couldn't find any. Worked smooth as a baby's bottom. :smile: Yay! 🙂 Finally, I can press buttons! 😄 Yoohoo! 😄 

oculus_gabor
Protege
Hey yo!

Good news: I just got my first successful demo out of the way.
Glad to hear you have something up and running! Congrats, that's awesome!

Finally (and I think this might have been the big one) my canvas had a GraphicRaycaster on it where you used the OVRRaycaster. 
Indeed, the GraphicRaycaster / OVRRaycaster mix-up was the issue; this has to do with how Unity's UI system works. You can't have two raycasters on the same canvas. This will be better explained in the updated blog post. Sorry this is so convoluted.
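
If it helps, a quick sanity check like the one below will flag the problem. This is just a sketch I'm typing here, not something that ships with the utilities:

using UnityEngine;
using UnityEngine.EventSystems;

// Attach to a world space canvas: warns if more than one raycaster
// (e.g. Unity's GraphicRaycaster plus an OVRRaycaster) sits on the same object.
public class DuplicateRaycasterCheck : MonoBehaviour
{
    void Awake()
    {
        BaseRaycaster[] raycasters = GetComponents<BaseRaycaster>();
        if (raycasters.Length > 1)
        {
            Debug.LogWarning("Canvas '" + name + "' has " + raycasters.Length +
                " raycasters; keep only one (the OVRRaycaster for controller input).", this);
        }
    }
}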

Your code is clean, very little and easy to follow and understand even for someone completely new to your API so I look forward to your next update and hope it will finally put this issue to bed. 
The sample code is clean, but it's not architected well. Interaction between different systems is not well planned out, and random things need to be assigned in the editor with no defaults. All of these issues are addressed in the updated blog post; I managed to make it simple enough that all that's required to interact with UI is to drag 'n drop three components into the scene.

What happens when the next version of the SDK is released? Will all your effort be for nothing again like the other demos and samples out there?
For the most part, our SDK is surprisingly solid. The core of the input module is actually Andy Borell's code from 2015 (written against some ancient version of the SDK); you can find details about how it works here. All I had to do was update the SDK and replace the gaze pointer with a ray, very similar to this. Updating the SDK had no issues 🙂
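
To give you an idea, the core of the ray version boils down to something like this (paraphrased, not the exact blog code; trackingSpace here stands for the OVRCameraRig's TrackingSpace transform):

// Build a world space ray from the active controller's tracked pose.
OVRInput.Controller controller = OVRInput.GetActiveController();
Vector3 localPosition = OVRInput.GetLocalControllerPosition(controller);
Quaternion localRotation = OVRInput.GetLocalControllerRotation(controller);

Vector3 origin = trackingSpace.TransformPoint(localPosition);
Vector3 direction = trackingSpace.TransformDirection(localRotation * Vector3.forward);
Ray pointerRay = new Ray(origin, direction);
// The input module then raycasts with pointerRay instead of a ray from the center eye (gaze).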

The holidays are going to push the blog update out a bit; I think it's landing sometime in early January.

tamer_ozturk2
Explorer


The holidays are going to push the blog update out a bit; I think it's landing sometime in early January.


We shouldn't wait a month for a blog post update; you should also post the new scripts here or somewhere else.

tamer_ozturk2
Explorer
Okay, the new 1.21 samples framework is out and the OVRInputModule there is fixed. If it is your doing, oculus_gabor, thank you.

oculus_gabor
Protege
Blog update is live: https://developer.oculus.com/blog/easy-controller-selection/

vivalavida
Honored Guest


Blog update is live: https://developer.oculus.com/blog/easy-controller-selection/


Thank you so much for this. Why isn't basic UI interaction with the GearVR controller part of the utilities?
(I've just finished the Daydream build and that has been so much smoother. Main pain points with GearVR so far are:
- the ability to test in the Unity Editor
- UI interaction with the controller)

oculus_gabor
Protege
Hi @vivalavida, thanks for the feedback, it's really useful!

I don't know the answers to your questions, but when I have some time I'll try to get the issues you pointed out resolved.