
When will video games get "real" sound?

Turkish said:
curl-6 said:
That video was freaking awesome; despite not even being that mellow, it really gave me ASMR.
I would love to have that level of sound quality in a game, nothing I've heard to date comes close.


I know what you're getting at, the binaural effect, but I can't recall a game that used it with as much precision and nuance as the barber video. As good as Silent Hill sounds, it and other games seem to me to have fewer degrees of variation between directions.




Binaural audio is difficult to do properly in a video game because the sounds are almost all digitally created rather than natural.

To record binaural audio, you need a human head analog with two mics attached at the ears. You can't really do that to record, say, a building collapsing around you or a 100-foot-tall dragon breathing fire at you.
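For what it's worth, games can fake the effect in software instead of recording it: take a mono sound and convolve it with a pair of head-related impulse responses (HRIRs), one per ear, chosen for the direction the sound is coming from. A minimal sketch in Python - the impulse responses here are made-up placeholders standing in for measured HRIR data:

import numpy as np

SAMPLE_RATE = 48000  # Hz

def binaural_render(mono, hrir_left, hrir_right):
    # Convolve a mono source with per-ear head-related impulse responses
    # (HRIRs) to place it in space for headphone playback.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)  # stereo buffer, one column per ear

# Placeholder HRIRs (not measured data): the right ear hears the sound slightly
# later and quieter, as it would for a source off to the listener's left.
itd_samples = int(0.0006 * SAMPLE_RATE)  # ~0.6 ms interaural time difference
hrir_left = np.zeros(256)
hrir_left[0] = 1.0
hrir_right = np.zeros(256)
hrir_right[itd_samples] = 0.7

mono_source = np.random.randn(SAMPLE_RATE) * 0.1  # one second of test noise
stereo = binaural_render(mono_source, hrir_left, hrir_right)

Actual spatializers use measured HRTF sets and interpolate between directions, but the principle is the same, which is why a headphone "binaural" mode never needs a dummy head at playback time.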



The rEVOLution is not being televised

ignore me 



Turkish said:
JoeTheBro said:

First off I'll need you to listen to this and be amazed.

That is freaking cool, isn't it?

 

Now that I have you interested, I'll get back on topic. Remember when games were like this?

Well that is how audio currently works. Pre-recorded sounds are manipulated within 3D space to sound like they are coming from onscreen objects. That's literally the equivalent of using videos of real people in games.

So how long will it be before games render sound on the fly? The benefits of such a system are obvious. Sadly yes, it will be the death of voice acting.

lol

What does voice acting have to do with a game rendering sounds on the fly? Why did you post that youtube video to support your argument? The audio in that video is also pre-recorded. It's called binaural recording, a method of recording sound with two microphones: http://en.wikipedia.org/wiki/Binaural_recording

I'm not sure how this is connected to what you have in mind.

When Uncharted is rendering Nathan Drake, is Nolan North actually on screen? No. Before too long, audio will be the same way, with voice synthesizers reading scripts in real time to give a heightened sense of reality. The video is mostly unrelated, but it's cool, and that kind of output will be as simple as changing a setting in future games.



Actually, it doesn't mean the end of voice acting and recorded music. All research in this area today is focused on using recorded audio (music, effects, voice), calculating the interaction of each individual sound wave with the environment (distance, echoes, materials, etc.), and changing the sound in real time to add the correct distortion. So we get the same quality we have today with a completely new dimension of realism and immersion. Of course, this tech is at least 5 or even 10 years away.
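To make that concrete, the kind of runtime processing being described looks roughly like this: take the dry recording and apply inverse-distance attenuation, a travel-time delay, and (here) a single early reflection dulled according to the material it bounced off. A rough sketch, with the material absorption values, geometry and numbers all invented for illustration:

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48000     # Hz

# Hypothetical absorption factors per material (0 = reflects everything, 1 = absorbs everything).
MATERIAL_ABSORPTION = {"concrete": 0.05, "wood": 0.2, "carpet": 0.6}

def one_pole_lowpass(signal, cutoff_hz):
    # Very simple one-pole low-pass filter to mimic high-frequency loss at a surface.
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    prev = 0.0
    for i, x in enumerate(signal):
        prev = (1.0 - alpha) * x + alpha * prev
        out[i] = prev
    return out

def render_source(dry, distance_m, wall_material, extra_path_m):
    # Direct path: inverse-distance attenuation plus travel-time delay.
    gain = 1.0 / max(distance_m, 1.0)
    delay = int(distance_m / SPEED_OF_SOUND * SAMPLE_RATE)
    direct = np.concatenate([np.zeros(delay), dry * gain])

    # One early reflection: longer path, weaker, and dulled by the wall it bounced off.
    refl_dist = distance_m + extra_path_m
    refl_gain = (1.0 - MATERIAL_ABSORPTION[wall_material]) / max(refl_dist, 1.0)
    refl_delay = int(refl_dist / SPEED_OF_SOUND * SAMPLE_RATE)
    reflection = np.concatenate([np.zeros(refl_delay),
                                 one_pole_lowpass(dry, 2000.0) * refl_gain])

    # Mix direct sound and reflection into one output buffer.
    out = np.zeros(max(len(direct), len(reflection)))
    out[:len(direct)] += direct
    out[:len(reflection)] += reflection
    return out

# Example: a 0.2 s burst heard 10 m away, with a reflection off a carpeted wall.
dry_sample = np.random.randn(int(0.2 * SAMPLE_RATE)) * 0.1
wet_sample = render_source(dry_sample, distance_m=10.0, wall_material="carpet", extra_path_m=6.0)

A real implementation would trace many reflection paths against the level geometry every frame; the point is that the source material stays recorded and only the propagation is computed live.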

JoeTheBro said:

When Uncharted is rendering Nathan Drake, is Nolan North actually on screen? No. Before too long, audio will be the same way, with voice synthesizers reading scripts in real time to give a heightened sense of reality. The video is mostly unrelated, but it's cool, and that kind of output will be as simple as changing a setting in future games.

Synthesized voices aren't even capable of being convincing today. Getting them to convey emotion is something even more distant than the tech I posted above. Of course, when things reach that level, it will save a lot of money by not needing actors :D




I played lots of games that gave me spatial feedback back in the day... in TOCA Race Driver 3 I could hear the car behind me and know before it happened whether he was making a move from the right or the left to overtake me... while playing Counter-Strike: Source I could hear the footsteps of the guy coming from behind to knife me... using a 5.1 sound system with the speakers around my gaming seat, of course...



Proudest Platinums - BF: Bad Company, Killzone 2 , Battlefield 3 and GTA4

That barber shop is actually a demo for QSoundLabs products - it's a nice demo, but you have to realize it was "sound designed" just like anything else (someone who makes a living doing "sound" speaking here). You can listen to some more demos at:

http://www.qsound.com/demos/binaural-audio.htm
http://www.qsound.com/demos/3d-audio.htm

What I thought you meant when you asked about "real" sound is when we'll get sound actually created by in-game engines according to the laws of physics.

What I mean by that is this - take footsteps, for example - the current approach is to take recordings of them on various surfaces and run them through the audio engine, setting parameters such as level, pan, EQ and reverberation.

The "real" sound would be in-game physics engine calculating interaction between footwear and surface and "creating" sound at that moment, which would then behave in game world just as it behaves in real world.

I think we'll need to wait quite some time for this to happen, and in my opinion moving away from polygon-based worlds to fully voxel-based engines is a first step.
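For reference, that "current approach" paragraph maps pretty directly onto code like the sketch below: the surface picks a bank of recordings and a set of mixer parameters. The file names, numbers and the audio_engine.play() interface are stand-ins for illustration, not any real engine's API.

import random

# Hypothetical per-surface tables: which recordings to pick from and how to set
# level, pan, filtering and reverb when a footstep is triggered by the animation system.
FOOTSTEP_BANKS = {
    "gravel": {"samples": ["step_gravel_01.wav", "step_gravel_02.wav"],
               "gain_db": -6.0, "lowpass_hz": 8000.0, "reverb_send": 0.2},
    "metal":  {"samples": ["step_metal_01.wav", "step_metal_02.wav"],
               "gain_db": -3.0, "lowpass_hz": 12000.0, "reverb_send": 0.5},
}

def trigger_footstep(surface, source_pos, listener_pos, audio_engine):
    # Current approach: pick a pre-recorded footstep for this surface and hand it
    # to the audio engine with level, pan, filter and reverb parameters.
    bank = FOOTSTEP_BANKS[surface]
    sample = random.choice(bank["samples"])   # vary the sample to avoid the machine-gun effect
    dx = source_pos[0] - listener_pos[0]
    pan = max(-1.0, min(1.0, dx / 10.0))      # crude left/right placement
    audio_engine.play(sample,
                      gain_db=bank["gain_db"],
                      pan=pan,
                      lowpass_hz=bank["lowpass_hz"],
                      reverb_send=bank["reverb_send"])

Nothing in there knows anything about the shoe or the floor beyond a label; that's the gap the physics-driven approach would close by synthesizing the sound from the actual contact event.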



JoeTheBro said:
the_dengle said:
It would also be the death of live orchestrated OSTs.

I hope it never happens.


Why would it be the death of OSTs? Even though we have Avatar-level CGI, we still have traditional actors the majority of the time.

We hardly ever get live instruments as it is. I hope that games like Journey encourage more publishers to push for live music, but if developers want their games to render sound in real time, this is the end of any recorded soundtrack at all. All of the music will be rendered in-game -- as the OP points out, it's the end of voice acting as well.

Game over, synthesized music wins. No, I can't accept that.



HoloDust said:
That barber shop is actually a demo for QSoundLabs products - it's a nice demo, but you have to realize it was "sound designed" just like anything else (someone who makes a living doing "sound" speaking here). You can listen to some more demos at:

http://www.qsound.com/demos/binaural-audio.htm
http://www.qsound.com/demos/3d-audio.htm

What I thought you meant when you asked about "real" sound is when we'll get sound actually created by in-game engines according to the laws of physics.

What I mean by that is this - take footsteps, for example - the current approach is to take recordings of them on various surfaces and run them through the audio engine, setting parameters such as level, pan, EQ and reverberation.

The "real" sound would be the in-game physics engine calculating the interaction between footwear and surface and "creating" the sound at that moment, which would then behave in the game world just as it behaves in the real world.

I think we'll need to wait quite some time for this to happen, and in my opinion moving away from polygon-based worlds to fully voxel-based engines is a first step.


That is pretty much what I'm talking about. I included the video because it's cool and is somewhat on topic.