I'm sure you've seen the latest updates to Oculus Avatars, which introduced OVRLipsync-driven mouth movement, eye-gaze simulation, and micro-expressions.
I wanted to flag something we came up against while working on this update, in case it is causing issues for folks building multiplayer Quest and Go experiences.
Android only allows a single process to access the microphone at a time. This wasn't an issue when networking avatars previously, as the mic input wasn't being used. But with the expressive update, we specifically need to run the mic through the OVRLipsync plugin to generate blend shapes and drive the mouth movement.
Trying to hook up the mic to both VoIP and Lipsync therefore causes an inevitable race condition: whichever system loses reads nothing but zeros. So you end up with either no networked audio, or no blend shapes.
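To make the failure mode concrete, here is a toy model (in Python, purely illustrative — the class and values are invented for this sketch, not any real Android API) of an exclusive-access mic where the first reader wins and every later reader gets silence:

```python
class ExclusiveMic:
    """Toy model of Android's one-client-at-a-time mic access:
    the first system to read claims the mic; any other reader
    gets a buffer of zeros."""

    def __init__(self):
        self._owner = None

    def read(self, client, n=4):
        if self._owner is None:
            self._owner = client      # first reader wins the race
        if client is self._owner:
            return [7] * n            # stand-in for real PCM samples
        return [0] * n                # the loser gets a bunch of zeros


mic = ExclusiveMic()
voip_frame = mic.read("voip")         # VoIP won the race: real samples
lipsync_frame = mic.read("lipsync")   # Lipsync lost: all zeros
```

Which system wins depends on initialization order, so the symptom you see (silent voice chat vs. a frozen mouth) can vary from run to run.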
Fortunately, both Oculus VoIP and Photon offer a workaround: let one system own the mic and relay the captured buffer to the other consumers. For Oculus VoIP, this is done with SetMicrophoneFilterCallback(), as documented here: https://developer.oculus.com/documentation/platform/latest/concepts/dg-cc-voip/
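The relay pattern itself is simple. Here is a language-agnostic sketch in Python (the real callback is registered in C# via the Oculus Platform SDK; the class and method names below are invented for illustration): the filter callback fires for every captured frame, copies it into a queue for the lipsync consumer, and passes the frame through to the VoIP encoder untouched.

```python
from collections import deque


class MicRelay:
    """Single owner of the mic stream: VoIP keeps receiving frames
    directly, while the filter callback copies each frame into a
    bounded queue that the lipsync consumer drains on its own schedule."""

    def __init__(self, max_frames=64):
        # Bounded so a stalled lipsync consumer drops old frames
        # instead of growing memory unboundedly.
        self._lipsync_queue = deque(maxlen=max_frames)

    def on_mic_frame(self, pcm_frame):
        # Analogous to the callback registered with
        # SetMicrophoneFilterCallback(): copy (don't alias) the buffer,
        # since the VoIP layer may reuse it after we return.
        self._lipsync_queue.append(list(pcm_frame))
        return pcm_frame  # unmodified frame continues to the VoIP encoder

    def drain_for_lipsync(self):
        # Lipsync pulls whatever has accumulated since the last drain.
        frames = list(self._lipsync_queue)
        self._lipsync_queue.clear()
        return frames
```

The key point is that only one component ever opens the mic; everything else consumes copies downstream, so the race never happens.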
We're in the process of documenting in more detail how this can then be wired up to Lipsync and Avatars. In the meantime, please refer to Social Starter in the Avatar SDK Unity samples, which implements the Avatar / Lipsync / VoIP stack correctly.