Hey @MikeF, thanks for the quick response.
OK, so if I understand you correctly: as long as the LocalAvatar being serialized for network replication is the one with CanOwnMicrophone enabled, which is doing the blendshape work locally on an invisible first-person version, then that blendshape state gets serialized into the SDK Avatar packets?
The only reason I'm skeptical is that, on the previous incarnation of the Avatars SDK, I was already updating VoiceAmplitude locally with the value from our VoIP client (with Mouth Vertex Animation enabled), and yet it wasn't getting picked up by Photon as part of the packet serialization. I ended up having to send it as a separate IPunObservable component observed by my PhotonView in order to get the remote avatars' mouth vertices moving. But I'll give it a try with the new tech and see what happens.
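For reference, the workaround looks roughly like this. It's just a minimal sketch: the class name and the avatar reference are illustrative, and it assumes the OvrAvatar component exposes VoiceAmplitude the way the old SDK did.

```csharp
using Photon.Pun;
using UnityEngine;

// Rough sketch of the workaround: stream VoiceAmplitude through a PUN 2
// observed component so the remote avatar's Mouth Vertex Animation gets driven.
public class VoiceAmplitudeSync : MonoBehaviourPun, IPunObservable
{
    public OvrAvatar avatar; // LocalAvatar on the owner, remote avatar elsewhere

    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // Owner side: send the amplitude we set from the Vivox client.
            stream.SendNext(avatar.VoiceAmplitude);
        }
        else
        {
            // Remote side: apply the received amplitude to the remote avatar.
            avatar.VoiceAmplitude = (float)stream.ReceiveNext();
        }
    }
}
```

(Attached to the same GameObject as the PhotonView and added to its observed components list.)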
Our stack uses PUN 2 for replication and Vivox for VoIP, so I'm glad to hear you were already prototyping in-house without the bespoke Oculus VoIP. I'm okay with the latency you mention. Utilities-wise, we're only using Platform and Avatar (and I guess LipSync now too).
It's also a little tricky because our app has a mirror in it, so we'd need the blendshapes to be driven on that separate mirror LocalAvatar as well, and I can't set CanOwnMicrophone on it if I'm already spending that CPU on the invisible version that's going out to everyone else. It would be good to have some flexibility here moving forward (i.e. right now only one LocalAvatar can run the blendshape work, determined by CanOwnMicrophone).
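In the meantime, the only thing I can think of is letting the CanOwnMicrophone avatar do the work and copying the resulting blendshape weights onto the mirror avatar's mesh each frame. A rough sketch, assuming both meshes expose the same blendshapes in the same order (the renderer references are just placeholders for however the meshes end up exposed):

```csharp
using UnityEngine;

// Rough sketch: copy lip-sync blendshape weights from the mic-owning
// LocalAvatar's mesh onto the mirror avatar's mesh so only one avatar
// pays the lip-sync CPU cost.
public class MirrorBlendshapeCopy : MonoBehaviour
{
    public SkinnedMeshRenderer source; // mesh on the CanOwnMicrophone LocalAvatar
    public SkinnedMeshRenderer target; // matching mesh on the mirror LocalAvatar

    void LateUpdate()
    {
        if (source == null || target == null) return;

        int count = Mathf.Min(source.sharedMesh.blendShapeCount,
                              target.sharedMesh.blendShapeCount);
        for (int i = 0; i < count; i++)
        {
            // Assumes both meshes have the same blendshapes in the same order.
            target.SetBlendShapeWeight(i, source.GetBlendShapeWeight(i));
        }
    }
}
```

That keeps the lip-sync work on a single avatar, at the cost of a per-frame weight copy for the mirror.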
Samsung Galaxy S8 (Snapdragon 835), Gear VR (2017), Oculus Go (64GB), Unity 2018.3.14f1