I am creating an experience that involves a good deal of social interaction, and I was trying to use expressive avatars with mouth movements on Oculus Go. Intuitively I imagined this would be too heavy, but I figured I'd give it a shot. Looking at the profiler, the expressive avatar phoneme analysis call (ovrLipSyncDll_ProcessFrameEx) takes up a great deal of DSP CPU, and that call is outside the realm of my optimization because it's in a DLL. That essentially makes expressive avatar phoneme analysis unusable on Oculus Go.
My question is: am I right in thinking that voice and phoneme analysis with expressive avatars is just too much for Oculus Go right now? Is there an alternate path forward, or a way to optimize this so it's possible? I'm trying to get at least 5 avatars in there; it currently holds frame rate with 1, maybe 2.
If you've worked on getting expressive avatars into your Oculus Go project, let me know!