Wwise max voices & priorities — Oculus

Hello,

When the maximum number of allowed voices is exceeded, how does the Oculus Spatializer for Wwise choose which ones to process? I would assume it looks at the Wwise "Playback Priority" of each sound before applying HRTF processing, but it seems to simply process the newest sound and drop the oldest.

How can I control this?

I'm currently on Wwise 2017.1.1 and Oculus Spatializer 1.17.

Thanks!
Christian

Answers

  • petergiokaris Posts: 170 Oculus Staff
    edited November 2017
    Hi Christian,

    We have not included a priority system in our spatializer, in order to reduce complexity when a spatialized resource is requested by a playing voice in Wwise. We do not steal a currently playing sound (unless it is a reflected sound that has stopped playing and is in its reflection/reverb tail, at which point we will prioritize the incoming sound, stop the resource from finishing the reflection portion, and assign the resource to the incoming voice).

    The maximum number of spatialized/ambisonic voices that can be played at once is 64. One thing you may want to do is add a simple priority system to the part of your engine that fires off the Wwise events (a rough sketch of that idea follows at the end of this post).

    We haven't completely moved away from the idea of having a priority system in place. If you have any suggestions for how you would like to see a priority system work for spatialized resources, please let us know. It could be something as simple as telling a sound to steal a voice because it is high-priority and knock out the oldest playing voice, for example.

    Best,
    Peter
    Peter Giokaris
    Senior Software Engineer
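    A minimal sketch of the engine-side priority gate described above, assuming the standard Wwise C++ SDK calls AK::SoundEngine::PostEvent and AK::SoundEngine::StopPlayingID. The class, the voice-ended hook, the priority values, and the 64-voice cap are placeholders, not part of the Oculus Spatializer or the Wwise API:

    ```cpp
    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    #include <cstdint>
    #include <map>

    struct ActiveVoice
    {
        int      priority;
        uint64_t startFrame;   // used to break priority ties (oldest loses)
    };

    class SpatialVoiceBudget
    {
    public:
        explicit SpatialVoiceBudget(size_t maxVoices = 64) : m_max(maxVoices) {}

        // Call from the AK_EndOfEvent callback so finished voices free their slot.
        void OnVoiceEnded(AkPlayingID id) { m_voices.erase(id); }

        AkPlayingID Post(const char* eventName, AkGameObjectID obj,
                         int priority, uint64_t frame)
        {
            if (!m_voices.empty() && m_voices.size() >= m_max)
            {
                // Pick the lowest-priority voice, oldest first on ties.
                auto victim = m_voices.begin();
                for (auto it = m_voices.begin(); it != m_voices.end(); ++it)
                {
                    if (it->second.priority < victim->second.priority ||
                        (it->second.priority == victim->second.priority &&
                         it->second.startFrame < victim->second.startFrame))
                    {
                        victim = it;
                    }
                }

                // If nothing playing is lower priority, drop the new sound instead.
                if (victim->second.priority >= priority)
                    return AK_INVALID_PLAYING_ID;

                AK::SoundEngine::StopPlayingID(victim->first);
                m_voices.erase(victim);
            }

            AkPlayingID id = AK::SoundEngine::PostEvent(eventName, obj);
            if (id != AK_INVALID_PLAYING_ID)
                m_voices[id] = ActiveVoice{ priority, frame };
            return id;
        }

    private:
        size_t m_max;
        std::map<AkPlayingID, ActiveVoice> m_voices;
    };
    ```

    The idea is simply that the engine, not the spatializer, decides which sound gets a slot: high-priority sounds steal from the lowest-priority or oldest playing voice, and anything that cannot outrank the current mix is never posted.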
  • Charlie.ata Posts: 22
    Brain Burst
    Hi Peter.

    An idea would be to grab the virtualized voices from Wwise and exclude them from the HRTF process.
  • CryChristian Posts: 8
    NerveGear
    edited November 2017
    I believe excluding virtual voices, plus checking the playback priority of active voices, would make a huge difference for larger games.

    If a smaller game never exceeds the 64 voices, there could be a checkbox to disable the priority awareness and/or the virtual voice handling. Perhaps even separate checkboxes, one for each additional 'awareness'.

    Actually, another idea could be to process Events rather than individual Sound Objects. Each Event can contain multiple sound objects, but all of them play at one single 3D spot in the game (see the sketch at the end of this post). That would remove a lot of redundant processing.

    Anyway, just some thoughts. What are the chances of any of this coming to life, and what would be a rough time estimate? Any quick help on this would be highly appreciated.

    Best,
    Christian
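    A small illustration of the Event-based point above, assuming the standard Wwise C++ SDK: all sound objects triggered by one Event can share a single game object, and therefore a single 3D position. The game object ID and Event name below are made-up placeholders:

    ```cpp
    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Hypothetical game object reserved for this Event (registered once at init
    // with AK::SoundEngine::RegisterGameObj(kExplosionObj, "Explosion")).
    static const AkGameObjectID kExplosionObj = 100;

    void PlayExplosionAt(const AkVector& where)
    {
        // One position for the whole Event: every sound object the Event
        // triggers inherits it from the shared game object.
        AkSoundPosition pos;
        pos.SetPosition(where);
        pos.SetOrientation(/*front*/ 0.f, 0.f, 1.f,
                           /*top*/   0.f, 1.f, 0.f);
        AK::SoundEngine::SetPosition(kExplosionObj, pos);

        // "Play_Explosion" is a placeholder Event that may contain several
        // sound objects, all spatialized at the same spot.
        AK::SoundEngine::PostEvent("Play_Explosion", kExplosionObj);
    }
    ```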
  • CryChristian Posts: 8
    NerveGear
    Is there any chance of excluding virtual voices? That would help a lot.
  • CryChristian Posts: 8
    NerveGear
    This discussion has only been partially answered; the post from Charlie.ata regarding virtual voices hasn't been addressed. In fact, excluding virtual voices is probably the safest, simplest, and most effective way to make sure the Oculus Spatializer only processes what you actually hear. Processing virtual voices is 100% unnecessary imho. How would anybody not benefit from excluding virtual voices from the HRTF process, keeping the 64 HRTF voices exclusively for the sounds that are really playing? Thank you for your time and support. I'm sure you are very busy, but I believe this would improve the Spatializer a lot.