Voice SDK (Feedback/Issues)

mouse_bear
Retired Support

Do you have any feedback or issues regarding the Voice SDK? Use this thread to discuss them, as we'll have members of the team reviewing it!

 

Read the blog on Presence Platform (of which the Voice SDK is a part) here: https://developer.oculus.com/blog/introducing-presence-platform-unleashing-mixed-reality-and-natural...

66 REPLIES

As far as I know, that's down to the 'Min Keep Alive Time in Seconds' variable on your AppVoiceExperience component: "The amount of time in seconds an activation will be kept open after volume is under the keep alive threshold".

After Activate() is called, it will listen indefinitely until the minimum volume threshold is hit. Once that threshold is hit, it will send up to 20 seconds of audio to Wit for processing.

There isn't a simple workaround for it. You could potentially use an asset from the Asset Store that handles transcription locally on device and trigger Wit activation from that. The only catch is that this technique impacts performance on Quest, so depending on your performance budget it may not be viable for your project.
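If you need the listening window to be shorter than the default behaviour described above, a rough sketch of one way to bound it yourself is below. It assumes your SDK version exposes Deactivate() alongside the Activate() call mentioned in this thread, and that AppVoiceExperience lives in the Oculus.Voice namespace; both are assumptions that may differ by version.

using System.Collections;
using UnityEngine;
using Oculus.Voice; // assumption: the namespace may differ depending on your SDK version

public class TimedActivation : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience appVoiceExperience; // assign in the Inspector
    [SerializeField] private float maxListenSeconds = 5f;           // hypothetical cap, tune to taste

    // Starts a Wit activation and forces it closed after maxListenSeconds,
    // on top of the volume-threshold / keep-alive behaviour described above.
    public void ActivateWithTimeout()
    {
        appVoiceExperience.Activate();
        StartCoroutine(DeactivateAfterDelay());
    }

    private IEnumerator DeactivateAfterDelay()
    {
        yield return new WaitForSeconds(maxListenSeconds);
        appVoiceExperience.Deactivate();
    }
}

Tune maxListenSeconds to whatever upper bound makes sense for your app.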

Good to know! Thank you!

Kijimu7
Explorer

Hi, it seems like a simple thing, but I cannot call a method from the AppVoiceExperience class in another class. I get an error that says:

 

Assets\Apartment_Door\Scripts\DoorController.cs(19,12): error CS0246: The type or namespace name 'AppVoiceExperience' could not be found (are you missing a using directive or an assembly reference?)

 

 

-----------------------------------------

private Animation doorAnim;
private BoxCollider doorCollider;
public AppVoiceExperience AppVoiceExperience; //this line is getting the error

if (AppVoiceExperience.Activate() && playerInZone)

 

What I am trying to do is call the Activate() method from the AppVoiceExperience class.

 

Thank you!
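For anyone hitting the same CS0246 error, here is a minimal sketch of one way the call could be wired up. It assumes AppVoiceExperience lives in the Oculus.Voice namespace (the exact namespace depends on your SDK version), and it accounts for Activate() returning void, so the call can't sit inside an if condition the way it does above.

using UnityEngine;
using Oculus.Voice; // assumption: adding the right using directive resolves CS0246; the namespace may vary by SDK version

public class DoorController : MonoBehaviour
{
    private Animation doorAnim;
    private BoxCollider doorCollider;
    private bool playerInZone;

    public AppVoiceExperience appVoiceExperience; // assign in the Inspector

    // Activate() returns void, so gate the call on the zone check
    // rather than using it inside the if condition.
    public void StartListening()
    {
        if (playerInZone)
        {
            appVoiceExperience.Activate();
        }
    }
}

As the error message itself notes, the class can't be found until the right using directive (or assembly reference) is in place.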


Posting this as feedback, and for comments...

So I found that going through the lengthy process of manual training and relying on the responses from Wit.ai was making things in my app very unpredictable.

 

For example, asking the user "What's your name?" and wanting to get the reply back as a custom 'name_response' intent with an entity 'name_of_person' was impossible without training Wit.ai on a huge number of first names (which of course is impractical). These single-word responses just came back as 'out of scope' until I actually taught the AI that 'John' is a name, 'Jill' is a name, 'Rajesh' is a name, etc.


So I had to cheat and create a wildcard option in WitResponseMatcher.cs:

 

        private bool IntentMatches(WitResponseNode response)
        {
            var intentNode = response.GetFirstIntent();
            if (string.IsNullOrEmpty(intent))
            {
                return true;
            }

            // JDP: add '*' wildcard to ignore intent
            if (intent == intentNode["name"].Value || intent == "*")
            {
                var actualConfidence = intentNode["confidence"].AsFloat;
                if (actualConfidence >= confidenceThreshold || intent == "*")
                {
                    return true;
                }

                Debug.Log($"{intent} matched, but confidence ({actualConfidence.ToString("F")}) was below threshold ({confidenceThreshold.ToString("F")})");
            }

            return false;
        }
    }

 

 

Then, using the path "text", I was able to get back whatever the user says.

Screenshot 2021-11-23 135933.jpg

Seems very hacky! (I especially hate changing SDK scripts...) It's fine for a hackathon, but I'm not sure how else I could have done it?
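For reference, a rough sketch of pulling the raw transcription from that "text" path in a response callback, without modifying SDK scripts. The WitResponseNode type and the namespace shown here are assumptions based on SDK versions from that era and may differ in yours; the indexer and .Value access follow the same pattern used in the WitResponseMatcher snippet above.

using UnityEngine;
using Facebook.WitAi.Lib; // assumption: the namespace of WitResponseNode varies by SDK version

public class TranscriptLogger : MonoBehaviour
{
    // Wire this method to the response event on your AppVoiceExperience component.
    public void OnWitResponse(WitResponseNode response)
    {
        // "text" is the top-level transcription field in the wit.ai response JSON.
        string transcript = response["text"].Value;
        Debug.Log($"User said: {transcript}");
    }
}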

There's actually already the equivalent of a wildcard for intents: just provide an empty value for the intent. We could probably make that UI a little clearer about that. When you're training your app, make sure you use the wit/contact built-in entity; it is already trained to recognize names. In the screenshot below, I didn't have to train 'Rajesh'; when using the wit/contact entity, it was recognized out of the box.

yolan_0-1637682686873.png

 

Thanks @yolan   I didn't realise that's what wit/contact was for 🙂
The name thing was more of an example. I think I tried leaving the field blank, but it wasn't working, probably because wit.ai would sometimes mistakenly return an erroneous intent name, which would then match and, I think, make the code fail on the confidenceThreshold. That's why I had to be more explicit. (That's what I meant by "responses from Wit.ai were making things in my app very unpredictable".)
I also found the ability to code for essentially any return value (and pick up the text) valuable, in that I could do my own Wit-like comparisons in Unity, freed from wit.ai doing the choosing for me.
Essentially, in that mode I'm just using the Voice SDK to do speech-to-text.

Dry_asa_dead_dingo
Honored Guest

Hi, I am a quadriplegic and unable to move my fingers. I was wondering when voice control will be available in Australia, because when I bought the Quest I missed the note about voice control only being available in the US.

Is it possible to use a US VPN? Your help would be appreciated.