Hi @MikeF @NinjaGaijin,
I've got the 1.36 Utilities Integration on a branch of my repo, prepping a migration to the Expressive Avatars update, and I'm trying to get a feel for the differences and limitations of the new features. As far as I can tell, it's not yet possible to serialize and manipulate the new mouth features for remote avatars. Is that correct? The OvrAvatar.cs script itself seems to show that those features only come online when the avatar is driven by a local driver component.
I was curious how this was going to be achieved at scale, since it's driven by a fair bit of DSP, and it appears that for the time being it simply isn't done at scale. Will it eventually replace the Mouth Vertex Animation system? I imagine that system is probably just being left in for backward compatibility's sake. How does the team anticipate supporting this eventually, since it seems it will have to be supported at some point? I like the simplicity of sending a normalized float to the VoiceAmplitude member in the Mouth Vertex Animation system, but I figure the new system would probably want an audio buffer fed to it at regular intervals instead.
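For context, here's roughly what I mean by "sending a normalized float": a language-agnostic sketch (in Python for brevity; the real thing would be C# in Unity) of deriving a 0..1 loudness value from a buffer of float audio samples before assigning it to VoiceAmplitude. The `gain` factor is an arbitrary illustrative constant, not anything from the SDK.

```python
import math

def normalized_amplitude(samples, gain=4.0):
    """Rough 0..1 loudness from float samples in [-1, 1]
    (e.g. a buffer like Unity's OnAudioFilterRead provides).
    `gain` is a hand-tuned scale factor, purely illustrative."""
    if not samples:
        return 0.0
    # RMS gives a perceptually reasonable short-term loudness estimate
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(1.0, rms * gain)
```

In the old system the per-frame usage amounts to something like `avatar.VoiceAmplitude = normalized_amplitude(buffer)`, whereas the new expressive lip sync presumably needs the raw buffer itself, which is exactly what raises my serialization question for remote avatars.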
Just curious if I could get some more detail here; any corrections to my current understanding are welcome. Thanks!
Samsung Galaxy S8 (Snapdragon 835), Gear VR (2017), Oculus Go (64GB), Unity 2018.3.14f1