I'm using LipSync in my Gear VR project, but it's taking up about 25% of my frame time decoding phonemes into blendshapes. It looks great, but it drops frames on my S6. On my S7 it doesn't drop frames, but the sound can become choppy, with lots of pops and cracks. I suspect this is due to how much performance is gobbled up by the dynamic sound analyzer.
Are there any optimizations planned for the LipSync SDK? It hasn't been updated in a while. One thing I can think of is the ability to generate lipsync files in the editor to go with each sound effect. That way, at runtime it could just play back preset blendshape weights per frame instead of analyzing the audio on the fly. Maybe an editor tool where you select a batch of audio files and it generates a lipsync file for each one with the same name.
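To make the idea concrete, here's a minimal sketch of the bake-then-playback approach. This is purely illustrative, not part of the SDK: `analyze_frame` stands in for whatever the dynamic analyzer computes, and the `.lipsync` JSON format, the viseme names, and the 30 fps keyframe rate are all assumptions I've made up for the example.

```python
import json
from pathlib import Path

KEYFRAME_RATE = 30  # baked keyframes per second (assumed, not from the SDK)

def analyze_frame(samples):
    """Stand-in for the SDK's dynamic analyzer: maps one chunk of audio
    samples to blendshape weights. Returns silence here for illustration."""
    return {"viseme_aa": 0.0, "viseme_oh": 0.0}

def bake(audio_path, frames):
    """Editor-time step: run the analyzer once over every audio frame and
    write the resulting keyframes to a .lipsync file next to the clip,
    with the same base name."""
    keyframes = [analyze_frame(f) for f in frames]
    out = Path(audio_path).with_suffix(".lipsync")
    out.write_text(json.dumps(keyframes))
    return out

def weights_at(keyframes, time_seconds):
    """Runtime step: a cheap array lookup instead of audio analysis —
    pick the baked keyframe nearest the current playback time."""
    i = min(int(time_seconds * KEYFRAME_RATE), len(keyframes) - 1)
    return keyframes[i]
```

At runtime the per-frame cost collapses to indexing a list and assigning the weights to the mesh, which is why baking should fix both the dropped frames and the audio pops.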
The only time I'd need dynamic lipsync is for voice chat, which my game does not have.
Are there any future plans for the LipSync SDK?