Expressive Avatars: Lipsync, VoIP and Android Mic permissions

Ross_Beef Posts: 170 Oculus Staff
Hey folks,
I'm sure you've seen the latest updates to Oculus Avatars, which introduced OVRLipsync-driven mouth movement, eye-gaze simulation, and micro-expressions.
I wanted to flag something we came up against while working on this update, in case it's causing issues for folks building multiplayer Quest and Go experiences.

Android only allows access to the microphone from a single process. This wasn't an issue when networking avatars previously, as the mic input wasn't being used. But with the expressive update, we specifically need to run the mic through the OVRLipsync plugin to generate blend-shapes and drive the mouth shapes.
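
For context, the usual single-owner mic path that lipsync relies on looks roughly like this in Unity (a minimal sketch using Unity's standard Microphone API, and assuming an OVRLipSyncContext component on the same AudioSource; this is an illustration, not the plugin's exact internals):

```csharp
using UnityEngine;

// Minimal sketch: a single consumer opening the mic for lipsync.
// Once this AudioClip owns the mic, a second consumer (e.g. a VoIP
// plugin) opening it on Android will just read zeros.
[RequireComponent(typeof(AudioSource))]
public class MicToLipsync : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.clip = Microphone.Start(null, true, 1, 16000); // default device, 1s ring buffer
        source.loop = true;
        while (Microphone.GetPosition(null) <= 0) { }         // wait for capture to begin
        source.Play(); // an OVRLipSyncContext on this object can now tap the audio stream
    }
}
```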

Trying to hook up the mic to both VoIP and Lipsync therefore causes an inevitable race condition. The loser gets a bunch of zeros.
So either there's no networked audio, or no blend-shapes. :disappointed:

Fortunately, Oculus VoIP and Photon both offer a workaround, in the form of the ability to relay the mic buffer via a filter callback: SetMicrophoneFilterCallback()
(for Oculus VoIP), as documented here: https://developer.oculus.com/documentation/platform/latest/concepts/dg-cc-voip/

We're in the process of documenting in more detail how this can then be wired up to Lipsync and Avatars, but in the meantime, please refer to Social Starter in the Avatar SDK Unity samples, which implements the Avatar / Lipsync / VoIP stack correctly.
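
To give a rough idea of the shape of the wiring, here's a minimal sketch (it uses the Voip.SetMicrophoneFilterCallback() entry point and CAPI.FilterCallback delegate from the Platform SDK docs linked above; the hand-off into lipsync is only sketched in comments, since Social Starter remains the reference implementation):

```csharp
using System;
using AOT;
using UnityEngine;
using Oculus.Platform;

public class VoipMicRelay : MonoBehaviour
{
    void Start()
    {
        // Ask Oculus VoIP to hand us each outgoing mic buffer. VoIP keeps
        // ownership of the mic; we just get a copy of the samples.
        Voip.SetMicrophoneFilterCallback(MicFilter);
    }

    [MonoPInvokeCallback(typeof(CAPI.FilterCallback))]
    static void MicFilter(short[] pcmData, UIntPtr pcmDataLength, int frequency, int numChannels)
    {
        // This runs off the main thread: copy the buffer into a queue here,
        // then drain the queue on the main thread and feed the samples to
        // lipsync / the avatar (e.g. via OvrAvatar's voice-data update path).
        int count = (int)pcmDataLength;
        short[] copy = new short[count];
        Array.Copy(pcmData, copy, count);
        // ... enqueue 'copy' together with 'frequency' and 'numChannels' ...
    }
}
```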

Comments

  • beaulima9933 Posts: 50 Oculus Start Member
    @Ross_Beef
    I was just going to write this!
    ETA for Unreal integration??
    Thanks!
  • jphilipp Posts: 26
    Brain Burst
    I wanted to ask: on Quest, can Lipsync still perform well if there are, say, 6 people chatting, or would it start to show lag?

    And does it hook into only one's local mic and then transmit just the resulting phoneme data to others, or does everyone's local client analyze all (e.g. 6) audio streams?

    Thanks!
    - Anyland dev -
  • gizmhail Posts: 4
    NerveGear
    We have indeed had some difficulty making lipsync and Photon Voice VoIP coexist in our project.

    The closest we got to success was activating the lipsync microphone capture, capturing the MouthAnchor AudioSource clip, and sending it to the Photon Recorder.

    It works, but there's an audible/visible delay between the VoIP audio and the avatar's lip movement (around 0.25 to 0.5 seconds).

    I have 2 questions:

    1) Is this kind of lag expected in the solution you will be documenting? Or is it our solution that leads to these delays? (It works somewhat in reverse compared to the one you imply: in our implementation the microphone is used by lipsync first, not by VoIP.)

    2) Prior to reading this topic, we thought this lag was normal since the voice and the avatar mesh sync travel through different paths, so we tried to find another solution.
    We don't capture the lipsync locally (the microphone is given entirely to VoIP); instead, on the receiver side, we want to forward the received audio to the avatar, so that it blends the avatar movements sent over the network with the lip movements computed locally from that (networked) audio.

    We have not yet succeeded in making this work, so: 2.1) is this something you think is possible? And 2.2) I'll describe our hack, to see whether we're just missing an important step :)

    On the RemoteAvatar GameObject, we removed the remote driver component and added a local driver component (please bear with us, even if it sounds crazy ;) ).
    We also set CanOwnMicrophone to false on OvrAvatar. With these settings, an OvrLipSyncContext is created on the Mouth of the remote avatar.
    To keep the OvrAvatar working properly, we added a child GameObject to RemoteAvatar with a remote driver component on it, and used that to fill the driver field in the RemoteAvatar's OvrAvatar component.
    It "works" because the OvrAvatar class checks the kind of driver (local/remote) on its own GameObject to initialize the audio part, but uses the driver field for everything else (I did say it was a hack ;) ).
    From there, we expected that sending the audio data received over the network to the OvrAvatar UpdateVoiceData method would do the job (we checked that it is properly sent, with a success result, to the OvrLipSyncContext); the forwarding step is sketched below. However, no lip movement happens.
    I wondered whether something extra must be done to blend the networked avatar description with the local lipsync, so I tried disabling the avatar moves to free the lipsync from a potential override by the avatar, but that didn't work either.
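
    For reference, the forwarding step looks roughly like this in our project (component and callback names are our own, and we're assuming the UpdateVoiceData(short[], int) overload from the Avatar SDK sources we have):

    ```csharp
    using UnityEngine;

    public class RemoteLipsyncFeeder : MonoBehaviour
    {
        public OvrAvatar remoteAvatar; // the hacked RemoteAvatar described above

        // Called by our VoIP layer with each decoded audio frame
        // (callback name and float sample format are specific to our project).
        public void OnVoipAudioReceived(float[] samples, int channels)
        {
            // Convert the decoded floats back to 16-bit PCM for UpdateVoiceData.
            short[] pcm = new short[samples.Length];
            for (int i = 0; i < samples.Length; i++)
                pcm[i] = (short)(Mathf.Clamp(samples[i], -1f, 1f) * short.MaxValue);
            remoteAvatar.UpdateVoiceData(pcm, channels);
        }
    }
    ```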

    Thanks in advance for any suggestions :)

  • HDelattre Posts: 2
    NerveGear
    I managed to get the workaround using SetMicrophoneFilterCallback() working decently. I had to resample the audio (it comes in at 48000 Hz, but the lipsync component expects 16000) and amplify it a bit, but my Quest avatar is now animating its mouth fairly well (not as well as with the regular voice capture implementation, but better than nothing). I'm also seeing a small delay between voice and animation, similar to gizmhail's. I'm going to try to tweak it, but I can at least verify that it's a viable workaround. The resample/amplify step is sketched below.
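
    In case it helps, here's a minimal sketch of that step (it assumes mono 16-bit PCM at 48 kHz coming from the filter callback, and the gain value is arbitrary; tune both for your setup):

    ```csharp
    using UnityEngine;

    public static class LipsyncAudioUtil
    {
        // 48000 / 16000 == 3, so keep every third sample. There's no low-pass
        // filter here, which is acceptable for lipsync analysis but would be
        // too crude for audio you intend to play back.
        public static short[] DownsampleTo16k(short[] pcm48k, float gain = 2f)
        {
            short[] pcm16k = new short[pcm48k.Length / 3];
            for (int i = 0; i < pcm16k.Length; i++)
            {
                float s = pcm48k[i * 3] * gain; // amplify while we convert
                pcm16k[i] = (short)Mathf.Clamp(s, short.MinValue, short.MaxValue);
            }
            return pcm16k;
        }
    }
    ```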