Hi, I've figured out how to add a GearVrController to a scene
(adding an OVRCameraRig, then adding a GearVRController under the RightHandAnchor).
The controller follows the hand movements.
But how can I add selection options plus a ray from the device to the scene?
I mean a ray that can select items, like in the Oculus Store below:
EDIT: The documentation mentioned above is now live at https://developer.oculus.com/blog/adding-gear-vr-controller-support-to-the-unity-vr-samples/
It works like a charm!
What is needed to add "clicking" functionality on buttons using this controller ray?
EDIT: The documentation mentioned above is now live at: https://developer.oculus.com/blog/adding-gear-vr-controller-support-to-unitys-ui/
Oculus already has samples of how to interact with Unity's UI using a gaze pointer at: https://developer.oculus.com/blog/unitys-ui-system-in-vr/
If you use the scripts / UI system from that post, you'll have to remove references to OVRGazePointer.instance from the OVRInputModule.cs script, as you probably won't have the gaze pointer in your scene. Even if you don't remove it, you'll have to add null checks.
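For example, a null-guarded access might look like the sketch below; `RequestShow` is a method from the gaze-pointer sample, so treat the exact call as illustrative:

```csharp
// Sketch: guard gaze-pointer accesses in OVRInputModule.cs so the module
// still works when no OVRGazePointer exists in the scene.
if (OVRGazePointer.instance != null)
{
    OVRGazePointer.instance.RequestShow();
}
```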
The ray used to interact with UI is constructed in the GetGazePointerData function of OVRInputModule.cs; it looks like this:
You need to replace leftData.worldSpaceRay with the pick ray of the controller. Once you do that, this should be a drag-and-drop solution. You might want to do something like:
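As a rough sketch of that idea (the helper class and transform parameters here are my own, not from the Oculus sample; the OVRInput calls are real API, but verify the field names against your SDK version):

```csharp
using UnityEngine;

public static class PickRayProvider
{
    // Returns the controller's pick ray when a Gear VR remote is connected,
    // otherwise falls back to a gaze ray from the center eye camera.
    public static Ray GetPickRay(Transform trackingSpace, Transform centerEye)
    {
        OVRInput.Controller active = OVRInput.GetActiveController();
        bool hasRemote = (active & (OVRInput.Controller.LTrackedRemote |
                                    OVRInput.Controller.RTrackedRemote)) != 0;
        if (hasRemote)
        {
            // Controller pose is reported in tracking space; convert to world space.
            Vector3 localPos = OVRInput.GetLocalControllerPosition(active);
            Quaternion localRot = OVRInput.GetLocalControllerRotation(active);
            Vector3 worldPos = trackingSpace.TransformPoint(localPos);
            Vector3 worldDir = trackingSpace.TransformDirection(localRot * Vector3.forward);
            return new Ray(worldPos, worldDir);
        }
        // Gaze fallback: ray from the center eye along its forward vector.
        return new Ray(centerEye.position, centerEye.forward);
    }
}
```

Inside GetGazePointerData you would then assign `leftData.worldSpaceRay = PickRayProvider.GetPickRay(trackingSpace, centerEye);` with references to your rig's transforms.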
That way you have support for Gear VR Controller picking rays, and gaze rays when no controller is present.
Holy cow, after getting lost in the blogs and the messy Oculus docs, here I finally find the answer to begin development with Oculus.
You should add this to the blog posts to help newcomers, so people know the basic code before diving into the sample framework mess.
Dude, I cannot thank you enough! I'm gonna download this immediately and give this a go. Again, thank you ever so much!
Unfortunately, if no controller is connected, gaze input doesn't work with Gear VR.
RayPointer.cs' UpdateCastRayIfPossible() function needs to be modified to make it work.
Or
RayPointer.cs needs another if check for the active controller to fall back to:
OVRInput.Controller.Touchpad
The question is, which one and how?
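For illustration only (the real answer came in the updated sample mentioned below), a fallback check of the second kind might look like this; the enum names are from OVRInput, but the surrounding RayPointer integration is an assumption:

```csharp
// Sketch: prefer a tracked remote; fall back to the headset touchpad
// (gaze) when none is connected. Intended for RayPointer's update logic.
OVRInput.Controller active = OVRInput.GetConnectedControllers() &
    (OVRInput.Controller.LTrackedRemote | OVRInput.Controller.RTrackedRemote);
if (active == OVRInput.Controller.None)
{
    // No remote present: drive the ray from head gaze and read
    // clicks from the headset touchpad instead.
    active = OVRInput.Controller.Touchpad;
}
```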
@tamer.ozturk2, you are right, that sample has no gaze fallback currently. I put it together pretty hastily for touch controllers specifically. There are also some issues with how the active controller is being tracked and a few other API pain points.
I've just finished writing a more in-depth sample with fallback support for gaze, as well as a bunch of API improvements. This blog post will be updated with the new code and instructions. I'll be writing the blog update on Friday, and it should be live in about two weeks. I'll let you know as soon as it's online!
Thank you, could you kindly share the code for the new sample here, so we don't have to wait two weeks?
Here's the thing: I have an asset on the Asset Store that is supposed to block the game from starting until after you have passed my kit. Some of my customers have asked me "Will this work in VR?" and I said "I have no idea, I never tried it", so when I got my Rift the first thing I did was see if my kit could work. After spending the 30 seconds updating the UI to work in world space, I immediately noticed a HUGE stumbling block that will prevent my kit from working in VR:
I am asking people to enter text into a text field and, well, for that I kinda need a keyboard...
So I went and created a world space keyboard that can be skinned and customised to contain any combination of letters that the player chooses and all of this can be done in 2 lines of code and changing the background image of 1 prefab. I made that keyboard so skinnable and so customisable and so easy to use that there is nothing out there that can beat it as far as I have seen...
Only problem is... although it works great as a skinned keyboard to replace the native keyboards on mobiles etc., as soon as I go into VR I have no means of clicking that keyboard. This means I can now release this asset only as a "skinnable native keyboard replacement", not the "VR Keyboard" that I intended. I want to include it as a free update to my existing kit, but now I am forced to tell my customers "Here is a keyboard you can use in VR. Use this and you can now use this asset of mine in VR!!!!!! ...you just have to figure out for yourself how to actually press the buttons"
Not good
This is what it is doing at the moment...
https://youtu.be/coIy2F0QJBI
Thank you, do you mind sharing the code as is, so we don't have to wait for two weeks? I like to read code for learning purposes anyhow.
@myBadStudios I just took a look at your video; it's really odd that the selection ray would be so far off. There are two potential issues I could think of:
First, some configuration of the visual ray might not match up with the ray actually being cast into the world (this is likely the cause).
Second, I assume the canvas that the keyboard belongs to is a world-space canvas? Is the Event Camera of the canvas set to the center eye anchor?
The code I attached to this forum post was a bit hacky (proof-of-concept quality). When the blog is updated with the new code, if that code still breaks your input I'll take a closer look and help you resolve the input issue.
Okay, great news. Sorry for the triple post, by the way, but the forum has its own problems, as I see.
Could you at least add swipe controls to the blog post, or a new post, as well?
From: https://developer.oculus.com/documentation/unity/latest/concepts/unity-ovrinput/
OVRInputModule has the following variables defined but not used anywhere, so I thought there is (or was supposed to be) some swipe-related code in the module for interacting with UI/non-UI items.
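In the meantime, swipes on the Gear VR touchpad can be read directly through OVRInput. A minimal sketch, assuming you only care about horizontal swipes (the threshold value is arbitrary, and the detection logic is my own, not the module's missing code):

```csharp
using UnityEngine;

// Sketch: minimal swipe detection on the Gear VR touchpad via OVRInput.
public class TouchpadSwipe : MonoBehaviour
{
    const float SwipeThreshold = 0.5f; // arbitrary; tune to taste
    Vector2 downPos;

    void Update()
    {
        // Remember where the finger touched down on the pad.
        if (OVRInput.GetDown(OVRInput.Touch.PrimaryTouchpad))
            downPos = OVRInput.Get(OVRInput.Axis2D.PrimaryTouchpad);

        // On release, compare against the touch-down position.
        if (OVRInput.GetUp(OVRInput.Touch.PrimaryTouchpad))
        {
            Vector2 delta = OVRInput.Get(OVRInput.Axis2D.PrimaryTouchpad) - downPos;
            if (delta.x > SwipeThreshold) Debug.Log("Swipe right");
            else if (delta.x < -SwipeThreshold) Debug.Log("Swipe left");
        }
    }
}
```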
First off, just wanna say thank you again for taking the time to make this thing work. The moment I put the Rift on my head and saw that orientation demo, I said I am not making anything that is not made for the Rift! I was about 90% done with a (darn it, I can never remember that acronym: Universal Windows SomethingOrOther) game that was made to be mobile and Mac Store compliant also, and I just scrapped it. I am now down to about 50% complete on the VR version, thanks to all the extra stuff that I now wanna add since the static camera is becoming a VR experience.
I am THAT committed to making Rift games now that I am prepared to sacrifice any and all other platforms (maybe make an exception for the GearVR)... but it is super disheartening to discover that something this basic is so hard to pull off so yeah, truly appreciate the assistance.
Now the naysayer in me has to point out: What happens when the next version of the SDK is released? Will all your effort be for nothing again like the other demos and samples out there? :O All I can say is I hope my game is done by then
Now, back on topic. I used your sample scene and just dragged my prefab in there so if and when the time comes for you to figure out "So why the heck is this not working for him???" then I can always send you back your project with my prefab in it or I could just send you the prefab and you can add it in yourself. Either way, you getting your hands on that sample project will be super easy as it will be a small upload.
To answer your questions: yes, it is a world-space canvas, and yes, I use the centre eye as the event camera. I thought that maybe the fact that I scaled my canvas down to 0.01 might be the issue, but I see you did the same with yours. Seems we are both aware of Unity's sorely lacking support for showing text in world space without scaling down a huge canvas... So with that idea proving not to be the issue, I was again stumped.
I thought I saw, when looking at the code, that you build up a list of items the ray intersects with and then calculate from there if this is something to be concerned with or not. I was wondering if that list doesn't get updated in time or at all and retains a pointer to an old selected object or something and thus it sends the input to the wrong object. I was going to go look into that as I could not think of what else it could possibly be.
You saying the cast ray and the visible ray might not match up... interesting... I wonder what might be causing that... Before I found your samples (I actually did this before even looking for any demos/samples at all), I created the raycast from the hand along its forward direction. In the first version of your script, I saw you made provision for using an existing line renderer or creating one if not present. I removed that code and just made the script always create the line renderer; I am a big fan of that option. It also means you know for sure the start and end point of your ray and raycast. I didn't do it with your latest code, though, because I wanted to see it working before I started tinkering with it.
Your code is clean, very little and easy to follow and understand even for someone completely new to your API so I look forward to your next update and hope it will finally put this issue to bed. Looking forward to seeing the code even without the documentation that goes with it. Your code is easy enough to follow without it.
...but if I might make one little suggestion (and perhaps I should do this also, just to prove your theory about the mismatched ray): could you modify that raycast code so that when it points at something, the ray ends where it is pointing instead of extending miles out into the distance? That single change would make it infinitely clearer what is being pointed at, wouldn't you agree?
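The suggestion above is easy to sketch: clamp the line renderer's end point to the raycast hit. This is a standalone illustration, not the sample's actual script; it assumes a two-point LineRenderer on the pointer object:

```csharp
using UnityEngine;

// Sketch: end the visible ray at whatever it hits, rather than always
// drawing out to the maximum length.
[RequireComponent(typeof(LineRenderer))]
public class ClampedRay : MonoBehaviour
{
    public float maxLength = 10f;
    LineRenderer line;

    void Awake() { line = GetComponent<LineRenderer>(); }

    void Update()
    {
        Ray ray = new Ray(transform.position, transform.forward);
        RaycastHit hit;
        // Stop the line where the ray hits something; otherwise use max length.
        float length = Physics.Raycast(ray, out hit, maxLength) ? hit.distance : maxLength;
        line.SetPosition(0, ray.origin);
        line.SetPosition(1, ray.origin + ray.direction * length);
    }
}
```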
Again, thanks a lot for your efforts
Well, not with UI elements at least. I noticed you calling the callbacks and the text getting updated when the ray intersects 3D objects, but nothing happens with UI objects. Also, just in case you missed it, the camera rig has its own canvas at exactly the same position as the camera.
Bad news: I got no idea what I did to make it work. Sigh
Basically, I noticed there is this extra canvas on the rig so I took that out. Then I noticed that I basically duplicated the scripts that were on your canvas on mine so I started removing all the duplicates. I.e. you had a canvas and I had a canvas and since my object was the only one I wanted to interact with I removed your canvas. I just got rid of all duplicated stuff like that.
Next I removed your code to interact with world objects and left it as GUI interact-able only. Finally (and I think this might have been the big one) my canvas had a GraphicRaycaster on it where you used the OVRRaycaster. I had actually moved that from the camera rig to my canvas and that was when I noticed my scene had both active at the same time. So I disabled the GraphicRaycaster, hit play and went looking for trouble... but I couldn't find any. Worked smooth as a baby's bottom.
The holidays are going to push the blog update out a bit; I think it's landing sometime early January.
(I've just finished the Daydream build and that has been so much smoother.
Main pain points with Gear VR so far are:
- ability to test in the Unity Editor
- UI interaction with the controller)
I don't know the answers to your questions, but when I have some time I'll try to get the issues you pointed out resolved.
Could you please maybe show which script handles this?
I was unable to find any references to scroll-related variables.