
3D Sound

voodooRod
Honored Guest

Now that there is virtual 3D capability, why not couple it with 3D sound? Check out this demo of true 3D sound using Unreal Engine 3. You will need headphones; any will do.

http://www.youtube.com/watch?v=IyJqHDIRMhU 

7 REPLIES

spyro
Expert Protege
Sounds way better than flat stereo, but I miss the interaction of the sound with the environment. There is no reverb, and no reflection off the rocks.

voodooRod
Honored Guest
@Spyro The SDK can do surface reflections and absorption based on the material type. You just have to set up boundaries (virtual rooms) and assign the material settings.

The demo is mainly to show off true binaural sound with multiple objects in a gaming environment.

Gateshot
Explorer
I read a post by Denny Unger about this over at MTBS3D. Someone requested this feature in The Gallery, and he basically discussed the hardware challenges of binaural audio and the limits of the game engine.

ganzuul
Honored Guest
Hmm... Usually unspecified 'technology' means a rehash of something old. In this case, HRTF:
http://en.wikipedia.org/wiki/Head-related_transfer_function

OpenAL already has this integrated. The real difficulty in using these techniques is that sound, and our relationship to it, is much more complex than vision. The Virtual Barbershop is a fantastic introduction for starting to think about and analyse what's involved.
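To make that concrete: in its simplest form, HRTF rendering is just convolving a mono source with a left/right pair of head-related impulse responses. Here is a minimal Python sketch of that idea (not OpenAL's actual implementation; the filenames are placeholders, and the HRIRs could come from e.g. the freely available MIT KEMAR measurements):

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, mono = wavfile.read("source_mono.wav")   # dry mono source (placeholder name)
_, hrir_l = wavfile.read("hrir_left.wav")      # left-ear impulse response for one direction
_, hrir_r = wavfile.read("hrir_right.wav")     # right-ear impulse response, same direction

mono = mono.astype(np.float64)
left = fftconvolve(mono, hrir_l.astype(np.float64))
right = fftconvolve(mono, hrir_r.astype(np.float64))

binaural = np.stack([left, right], axis=1)     # interleave into a stereo signal
binaural /= np.max(np.abs(binaural))           # normalise to avoid clipping
wavfile.write("binaural.wav", rate, (binaural * 32767).astype(np.int16))

Play the result on headphones and the source appears to come from the direction the HRIR pair was measured at; a real engine just swaps HRIRs as the head and sources move.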

A great demonstration of the complexity of the relationship between space and sound is this convolution reverb plugin:
http://factorial.hu/plugins/lv2/ir

It's FOSS software and it works fine in e.g. a VirtualBox VM, so you don't have to dual-boot Linux just to play with it. Setting up a Linux DAW isn't exactly easy for the uninitiated, but the Ubuntu forums will tell you how to do it.

So you can play a rising sine-tone 'sweep' in a room, record it (ideally with a tetrahedral soundfield microphone), 'deconvolve' the recording, and load the resulting .wav file into the above software. If you play this 'transient' .wav directly, it sounds like a gunshot, or like a thunderclap followed by rolling thunder if the room is very big. Then, when you put your audio signal through it, it will sound exactly as if it were in the room you recorded. Magic? Almost. Maybe yes.
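For anyone who wants to try that measurement chain without setting up a DAW, here is a rough Python sketch of the same idea. The filenames are placeholders, and the regularised spectral division is just one common way to do the 'deconvolve' step:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, sweep = wavfile.read("sweep_dry.wav")        # the test sweep you played
_, recorded = wavfile.read("sweep_in_room.wav")    # the same sweep, recorded in the room
sweep = sweep.astype(np.float64)
recorded = recorded.astype(np.float64)

# 'Deconvolve': regularised spectral division recovers the room's impulse response.
n = len(sweep) + len(recorded) - 1
S = np.fft.rfft(sweep, n)
R = np.fft.rfft(recorded, n)
eps = 1e-8 * np.max(np.abs(S)) ** 2                # guards against division by ~0
ir = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + eps), n)

# Played directly, `ir` is the gunshot-like transient described above.
wavfile.write("room_ir.wav", rate, (ir / np.max(np.abs(ir))).astype(np.float32))

# Convolving any dry signal with it places that signal in the measured room.
_, dry = wavfile.read("voice_dry.wav")
wet = fftconvolve(dry.astype(np.float64), ir)
wet /= np.max(np.abs(wet))
wavfile.write("voice_in_room.wav", rate, (wet * 32767).astype(np.int16))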

This is some deep DSP black mojo. Essentially it means that any finite space can be largely represented as a .wav file of similar length; a multi-channel soundfield recording of thunder could be used to reconstruct a 3D representation of the area in which it is heard.
Our brains habitually fill in the missing details, such as there being no ceiling outdoors, and making assumptions like that is extremely difficult to do in software. However, if the 3D representation is a given, it may become possible.

Moreover, your body itself reflects sound, so you get that quantum-like effect of not being able to observe without interfering. We all have bodies of different sizes and densities, and some frequencies travel mostly through us, getting absorbed to some degree along the way. If you're clever about the configuration, you could perhaps give the player the approximate feeling of having a different body.


EDIT:
http://www.mtbs3d.com/phpBB/viewtopic.php?f=138&t=16870
Second video from the top.

NegativeCamber
Honored Guest
"ganzuul" wrote:
Hmm... Usually unspecified 'technology' means a rehash of something old. In this case, HRTF:
http://en.wikipedia.org/wiki/Head-related_transfer_function

OpenAL already has this integrated. The real difficulty in using these techniques is that sound and our relationship to it is much more complex than vision. The virtual barbershop is a fantastic introduction to start thinking about and analysing what's involved.

A great demonstration of the complexity of the relationship between space and sound is this:


It's FOSS software and it works fine in e.g. a VirtualBox VM, so you don't have to dual-boot Linux just to play with it.
http://factorial.hu/plugins/lv2/ir
Setting up a Linux DAW isn't really easy for the uninitiated, but the Ubuntu forums will tell you how to do it.

So you can play a rising sine tone 'sweep' in a room, record it with ideally a tetrahedral soundfield microphone, 'deconvolve' the recoding and put the resulting .wav file in the above software. If you play this 'transient' .wav directly it sounds like a gunshot, or a thunder clap followed by rolling thunder if the room is very big. Then when you put your audio signal through it, it will sound exactly like it is in the room that you recorded. Magic? Almost. Maybe yes.

This is some deep DSP black mojo. Essentially it means that any finite space can be largely represented as a .wav file of similar length. - A multi-channel soundfield recording of thunder could be used to reconstruct a 3D representation of the area that it is heard in.
Our brains habitually fill in the missing details, such as there being no ceiling outdoors, and making assumptions like that is extremely difficult to do in software. However if the 3D representation is a given it may become possible.

Moreover, your body reflects sound so you get that quantum effect of not being able to observe without interfering. We all have different bodies of different sizes and densities, and some frequencies travel mostly through us, getting absorbed to some degree on the way. If you're clever about configuration, you could perhaps accomplish the effect of giving the player the approximate feeling of having a different body.


ED:
http://www.mtbs3d.com/phpBB/viewtopic.php?f=138&t=16870
Second video from the top.


What you are describing is convolution reverb using impulse responses, a technique that is only useful for capturing physical spaces. It's useful if you want to capture your favourite reverb from an old church, but in the virtual world it's pretty useless. In the virtual world we already know the 3D space around us, so with DSP we should be able to calculate reverb on the fly, given the landscape and its materials' sonic properties.
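To sketch what 'calculate reverb on the fly' could look like in its very simplest form, here is a toy Python example of the image-source method: mirror the source across each wall, and let every mirror image contribute a delayed, attenuated echo to a synthetic impulse response. All the numbers are made-up illustration values, not from any SDK:

import numpy as np

SPEED_OF_SOUND = 343.0                 # m/s
rate = 48000
room = np.array([8.0, 6.0, 3.0])       # shoebox room dimensions in metres
src = np.array([2.0, 3.0, 1.5])        # sound source position
lst = np.array([5.0, 2.0, 1.5])        # listener position
absorption = 0.3                       # fraction of energy a wall absorbs per bounce

ir = np.zeros(rate)                    # one second of synthetic impulse response

def add_path(image_pos, gain):
    # Add one propagation path (direct or reflected) as a delayed, attenuated spike.
    dist = np.linalg.norm(image_pos - lst)
    delay = int(round(dist / SPEED_OF_SOUND * rate))
    if delay < len(ir):
        ir[delay] += gain / max(dist, 0.1)    # 1/r distance attenuation

add_path(src, 1.0)                     # the direct path
for axis in range(3):                  # first-order reflections: one per wall
    for wall in (0.0, room[axis]):
        image = src.copy()
        image[axis] = 2.0 * wall - src[axis]  # mirror the source across the wall
        add_path(image, 1.0 - absorption)

# Convolving dry audio with `ir` now yields the direct sound plus six early
# reflections; higher-order images and material-dependent absorption would
# extend this toward a full reverb tail.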

ganzuul
Honored Guest
The second video behind the link I added explains how they did it. They actually use a lot of the terminology we are familiar with from IR, so my example was apparently very relevant. 😃