10-27-2021 04:29 PM - edited 10-30-2021 01:55 AM
Do you have any feedback or issues regarding the Voice SDK? Use this thread to discuss; we'll have members of the team reviewing it!
Read the blog on Presence Platform (of which the Voice SDK is a part) here: https://developer.oculus.com/blog/introducing-presence-platform-unleashing-mixed-reality-and-natural...
11-20-2021 04:10 PM
As far as I know, that's down to the 'Min Keep Alive Time in Seconds' variable on your AppVoiceExperience component: "The amount of time in seconds an activation will be kept open after volume is under the keep alive threshold".
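A minimal sketch of adjusting that at runtime; the `Oculus.Voice` namespace, the `RuntimeConfiguration` property, and the `minKeepAliveTimeInSeconds` field name are assumptions based on the inspector label, and may differ between SDK versions:

```csharp
using UnityEngine;
using Oculus.Voice; // namespace assumed; older packages used Facebook.WitAi

public class KeepAliveTuner : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience appVoiceExperience;

    void Start()
    {
        // Field name assumed from the inspector label "Min Keep Alive Time
        // in Seconds"; check your SDK version's WitRuntimeConfiguration
        // for the exact member.
        appVoiceExperience.RuntimeConfiguration.minKeepAliveTimeInSeconds = 1.0f;
    }
}
```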
11-20-2021 05:19 PM
After Activate() is called, it will listen indefinitely until the minimum volume threshold is hit. Once the minimum volume threshold is hit, it will send up to 20 seconds of audio to Wit for processing.
11-20-2021 05:23 PM
There isn't a simple workaround for it. You could potentially use an asset from the Asset Store that handles transcription locally on device and trigger Wit activation from that. The only catch is that this technique impacts performance on Quest, so depending on your performance budget it may not be viable for your project.
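A sketch of that wiring, assuming a hypothetical on-device keyword asset that can invoke a callback when it hears a wake word; the `Oculus.Voice` namespace and the `Active` property on AppVoiceExperience are assumptions based on the SDK version in use:

```csharp
using UnityEngine;
using Oculus.Voice; // namespace assumed; older packages used Facebook.WitAi

public class WakeWordActivator : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience appVoiceExperience;

    // Hook this up as the callback of your on-device keyword/transcription
    // asset (hypothetical), fired whenever it hears the wake word.
    public void OnWakeWordDetected()
    {
        if (!appVoiceExperience.Active)
        {
            appVoiceExperience.Activate(); // start streaming audio to Wit
        }
    }
}
```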
11-20-2021 10:19 PM - edited 11-20-2021 10:19 PM
Good to know! Thank you!
11-20-2021 10:26 PM - edited 11-20-2021 10:39 PM
Hi, this seems like a simple thing, but I cannot call a method from the AppVoiceExperience class in another class. I get an error that says:
Assets\Apartment_Door\Scripts\DoorController.cs(19,12): error CS0246: The type or namespace name 'AppVoiceExperience' could not be found (are you missing a using directive or an assembly reference?)
-----------------------------------------
private Animation doorAnim;
private BoxCollider doorCollider;
public AppVoiceExperience AppVoiceExperience; //this line is getting the error
if (AppVoiceExperience.Activate() && playerInZone)
What I am trying to do is call the Activate method from the AppVoiceExperience class.
Thank you!
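For anyone hitting the same CS0246: it usually means the script is missing a using directive for the namespace that AppVoiceExperience lives in. A minimal sketch, assuming the class is under `Oculus.Voice` (older packages used `Facebook.WitAi`; check where the class is defined in your SDK version). Note also that Activate() returns void, so it can't be used directly inside an if condition:

```csharp
using UnityEngine;
using Oculus.Voice; // namespace assumed; resolves the CS0246 error

public class DoorController : MonoBehaviour
{
    private Animation doorAnim;
    private BoxCollider doorCollider;
    public AppVoiceExperience appVoiceExperience;
    private bool playerInZone;

    void Update()
    {
        // Activate() returns void, so check the gate first, then activate.
        if (playerInZone && !appVoiceExperience.Active)
        {
            appVoiceExperience.Activate();
        }
    }
}
```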
11-23-2021 06:03 AM - edited 11-23-2021 06:04 AM
Posting this as feedback, and for comments...
So I found that going through the lengthy process of manual training, and relying on the responses from Wit.ai, was making things in my app very unpredictable.
For example, asking the user "What's your name?" and wanting the reply back as a custom 'name_response' Intent with an Entity 'name_of_person' was impossible without training Wit.ai on a whole lot of first names (which of course is impractical). These single-word responses just came back as 'out of scope' until I actually taught the AI that 'John' is a name, 'Jill' is a name, 'Rajesh' is a name, etc.
So I had to cheat and created a wildcard option in WitResponseMatcher.cs:
private bool IntentMatches(WitResponseNode response)
{
    var intentNode = response.GetFirstIntent();
    if (string.IsNullOrEmpty(intent))
    {
        return true;
    }
    // JDP: add '*' wildcard to ignore intent
    if (intent == intentNode["name"].Value || intent == "*")
    {
        var actualConfidence = intentNode["confidence"].AsFloat;
        if (actualConfidence >= confidenceThreshold || intent == "*")
        {
            return true;
        }
        Debug.Log($"{intent} matched, but confidence ({actualConfidence.ToString("F")}) was below threshold ({confidenceThreshold.ToString("F")})");
    }
    return false;
}
Then, using the path "text", I was able to get back whatever the user says.
It seems very hacky! (I especially hate changing SDK scripts...) So it's fine for a hackathon, but I'm not sure how else I could have done it?
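For reference, a sketch of pulling the raw transcription out of the response via that "text" path; the `Facebook.WitAi.Lib` namespace for WitResponseNode is an assumption based on the SDK version in use:

```csharp
using Facebook.WitAi.Lib; // WitResponseNode location assumed for this SDK version
using UnityEngine;

public class TranscriptHandler : MonoBehaviour
{
    // Hook this to the response event (e.g. via a matcher using the '*'
    // wildcard above, or the OnResponse event on the voice component).
    public void OnWitResponse(WitResponseNode response)
    {
        string whatTheUserSaid = response["text"].Value; // the "text" path
        Debug.Log($"User said: {whatTheUserSaid}");
    }
}
```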
11-23-2021 07:53 AM
There's actually already the equivalent of a wildcard for intents: just provide an empty value for intent. We could probably make that a little clearer in the UI. When you're training your app, make sure you use the wit/contact built-in entity; it is already trained to recognize names. In the screenshot below I didn't have to train 'Rajesh': using the wit/contact entity, it was recognized out of the box.
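A sketch of reading the recognized name out of the response, assuming Wit's usual "wit$&lt;entity&gt;:&lt;role&gt;" keying for built-in entities (the exact key can be confirmed in the JSON view of your app's Wit.ai console; the WitResponseNode namespace is also assumed):

```csharp
using Facebook.WitAi.Lib; // namespace assumed for WitResponseNode
using UnityEngine;

public class NameHandler : MonoBehaviour
{
    public void OnWitResponse(WitResponseNode response)
    {
        // Built-in entities are typically keyed "wit$<entity>:<role>",
        // so wit/contact usually arrives under "wit$contact:contact".
        var contact = response["entities"]["wit$contact:contact"][0]["value"].Value;
        if (!string.IsNullOrEmpty(contact))
        {
            Debug.Log($"Recognized name: {contact}");
        }
    }
}
```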
11-23-2021 08:11 AM
Thanks @yolan, I didn't realise that's what wit/contact was for 🙂
The name thing was more of an example. I think I tried leaving the field blank, but it wasn't working, I think because wit.ai would sometimes mistakenly return an erroneous intent name, which would then match and (I think) make the code fail on the confidenceThreshold check. That's why I had to be more explicit. (That's what I meant by "responses from Wit.ai were making things in my app very unpredictable".)
I also found the ability to code for essentially any return value (and pick up the text) valuable, in that I could do my own Wit-like comparisons in Unity, freed from wit.ai doing the choosing for me.
Essentially in that mode I'm just using the VoiceSDK to do speech-to-text
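A sketch of that speech-to-text-only mode, assuming the SDK's `events` object exposes an `OnFullTranscription` event carrying the final transcript as a string (event and namespace names may differ between SDK versions):

```csharp
using UnityEngine;
using Oculus.Voice; // namespace assumed; older packages used Facebook.WitAi

public class DictationOnly : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience appVoiceExperience;

    void OnEnable()
    {
        // Event name assumed; fires once Wit has finalized the
        // transcription for an activation.
        appVoiceExperience.events.OnFullTranscription.AddListener(OnTranscription);
    }

    void OnDisable()
    {
        appVoiceExperience.events.OnFullTranscription.RemoveListener(OnTranscription);
    }

    private void OnTranscription(string text)
    {
        Debug.Log($"Transcribed: {text}");
        // Do your own matching here instead of relying on Wit intents.
    }
}
```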
01-10-2022 10:01 PM
Hi, I am a quadriplegic and unable to move my fingers. I was wondering when voice control will be available in Australia, because when I bought the Quest I missed the note about voice control only being available in the US.
Is it possible to use a US VPN? Your help would be appreciated.