Perception vs. Reality: A Sound Conundrum
Win Jeanfreau | Aperion Audio
The pursuit of “perfect” sound is more than a preoccupation at Aperion… it’s an obsession.
Unfortunately, it’s a lot like the pursuit of the perfect glass of wine (oenophiles, chime in here) or the perfect golf swing. To make the pursuit even more challenging, there are technical standards and equipment that measure elements of a speaker’s performance… but not necessarily the nuances of the listening experience that we use to define perfection.
The science of what we perceive as sound occurs only after the brain selects which pressure waves to pay attention to. That alone could occupy a chapter in a book on the topic. What I’m referring to is the need for the brain to filter out the massive sonic input we all experience every moment of every day, to then focus on that input most likely to cause us injury. That “processing” job is an unconscious endeavor and a powerful filter.
Most of us have experienced moments when, even during a conversation with a loved one, we find our ears hearing other input, only to be caught unaware that a question was asked of us during our sonic wandering. That same filtering process is actively at work during music listening and movie watching.
Here are a few of the important “processing” jobs you routinely but unconsciously accomplish when listening to sound:
Your ears can just barely detect sound at 0 dB, yet they can also handle sounds at 120 dB, which carry a trillion times that energy! For the ears and brain to pull off this remarkable feat, the filter they put in place is relatively insensitive to changes in sound energy levels. For example, a speaker receiving 100 watts of power will sound only about four times as loud as when it’s receiving 1 watt. As a result, you don’t need to concern yourself with amplifier power nearly as much as you might think. 70 watts versus 100 watts? The difference is only about 1.5 dB, and since a change in loudness is only perceived at roughly every 3 dB of change, that’s not a compelling reason to pony up the dollars for the extra watts.
The sound intensity range you’re able to make sense of spans this entire scale, from 0 to 120 dB. It’s interesting to note that although each 10 dB increase represents 10 times the energy, it creates the perception of only twice the loudness.
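The arithmetic behind these claims is easy to check. Here is a minimal Python sketch using the article’s rule of thumb that every 10 dB increase sounds roughly twice as loud (an approximation, not an exact law):

```python
import math

def db_difference(p1_watts: float, p2_watts: float) -> float:
    """Level difference in decibels between two power levels."""
    return 10 * math.log10(p2_watts / p1_watts)

def perceived_loudness_ratio(db_change: float) -> float:
    """Rule of thumb: every +10 dB sounds roughly twice as loud."""
    return 2 ** (db_change / 10)

# 70 W vs. 100 W: only about 1.5 dB apart
print(round(db_difference(70, 100), 2))  # 1.55

# 1 W vs. 100 W: a 20 dB jump, perceived as only ~4x louder
print(round(perceived_loudness_ratio(db_difference(1, 100)), 1))  # 4.0

# 0 dB to 120 dB spans a trillion-fold energy range
print(10 ** (120 / 10))  # 1e12
```

The same math explains why merely doubling amplifier power (a 3 dB gain) sits right at the threshold of a perceptible loudness change.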
Another unique property of our “hearing” related to loudness is that our ears become increasingly sensitive to bass when the sound is loud and sensitive to the midrange when everything quiets down. The purpose of the “loudness” button on your receiver is to compensate for this by boosting the bass at lower listening levels. This context-sensitivity was probably quite useful for cavemen by allowing them to derive useful bass information when encountering stampeding wooly mammoths — yet be able to tune into the slight rustle of a skulking saber-toothed tiger.
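The “loudness” button’s behavior can be sketched as a bass boost that grows as the volume drops. The reference level and the dB-per-dB slope below are made-up illustrative numbers, not a published loudness-compensation curve:

```python
def loudness_bass_boost_db(listening_level_db: float,
                           reference_db: float = 85.0,
                           boost_per_db: float = 0.4) -> float:
    """Illustrative 'loudness button': boost the bass shelf as the
    listening level falls below a reference level. Both the 85 dB
    reference and the 0.4 dB-per-dB slope are assumptions chosen
    only to show the shape of the idea."""
    deficit = max(0.0, reference_db - listening_level_db)
    return deficit * boost_per_db

# At or above the reference level, no compensation is needed
print(loudness_bass_boost_db(85.0))  # 0.0
# Turn the volume down 20 dB and the bass shelf comes up to match
print(loudness_bass_boost_db(65.0))  # 8.0
```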
We take for granted that, for those of us with two working ears, when we hear a sound we can immediately turn our heads and face it. What’s remarkable is that this simple gift for locating sound requires our brains to perform a hard-to-believe number of calculations. This gift for location is only part of a spatial model constructed in our brains that updates constantly using sound combined with sight inputs.
To maintain this directional audio construct we constantly gather information from a variety of sources:
These sources include the kind of space we are in and the unique “signature” of the sound: the “fingerprint” our brain assigns it within a few milliseconds of its arrival, and how that fingerprint relates to the family of other sounds bearing the same signature that arrive in the form of reflections. Associating the initial source of a sound with its related reflections allows our ears and brain to ignore the cacophony of other sounds around us. By calculating the direction of these delayed arrivals, how long they were delayed and the way their signature has been “smeared,” we are able to tell a lot about what kind of environment we are in.
For example, we can learn the size of the room we’re in, the nature of the surfaces reflecting the sound (hard or soft) and their proximity to us. To do this, we had to determine the direction of the original sound and the way it echoed around the room. Our brain took at least three different kinds of information into consideration to calculate that direction. First, one ear heard the sound as louder simply because our head created a “sound shadow” and blocked the sound to the farther ear. Second, the part of our ear that sticks out from our head modified the sound in ways that clued us in to the direction it came from. And last, our brain calculated the phase: the delay between the wave arriving at the left ear versus the right ear. This is the formula our brain unconsciously applied to what we just experienced… whoever said we weren’t good at math!
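That third cue, the delay between the two ears, can be approximated with a simple spherical-head model. The sketch below uses Woodworth’s classic formula; the head radius is a typical assumed value, not a measurement from the article:

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation: the delay (in seconds)
    between a sound's arrival at the near ear and the far ear, for a
    source at the given angle off dead center."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# Straight ahead: both ears hear it at once
print(interaural_time_difference(0))  # 0.0
# Directly to one side: roughly two-thirds of a millisecond
print(round(interaural_time_difference(90) * 1000, 2))  # 0.66
```

Our brains resolve differences far smaller than that maximum, which is why the “hard-to-believe number of calculations” above yields such precise localization.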
As mentioned above, these are only some of the elements that create the sound we hear.
Other points we consider when designing speakers include, but are not limited to, the following:
- For familiar sounds, we are very sensitive to “tonal balance,” that is, the treble, bass and midrange parts of the sound need to be in the right proportion to one another. If a speaker’s frequency response graph is “flat”, that tells us that it’s reproducing the sound with the right balance (at least for the position of the measuring microphone). This one measurement has proven to be the most important element for most of us in our perceived “accuracy” for an audio system.
- Time delay is a serious issue. Here we’re addressing the arrival of sound identified as a reflection rather than as part of the original sound’s signature. The scientific jury is still out on the exact thresholds, but if the delayed arrival comes soon enough, say from reflections off the grille frame or from drivers that aren’t mounted flush, it is heard as part of the signature and perceived as a distortion of the tonal balance.
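Whether a reflection fuses with the direct sound or registers as room ambience depends largely on how late it arrives, which falls straight out of the extra distance it travels. A quick sketch (the example distances are illustrative, not from the article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def reflection_delay_ms(extra_path_m: float) -> float:
    """Delay in milliseconds a reflection picks up from traveling
    extra distance compared to the direct sound."""
    return extra_path_m / SPEED_OF_SOUND * 1000

# A grille frame a few centimeters from the driver: well under 1 ms,
# so the reflection fuses with the direct sound and colors the tone.
print(round(reflection_delay_ms(0.05), 3))  # 0.146
# A side wall adding ~2 m of path: several milliseconds later,
# more likely heard as the room rather than as the source.
print(round(reflection_delay_ms(2.0), 2))   # 5.83
```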
- Our brain mostly ignores reflected signals when evaluating the balance of sound. Bass reflections get treated a little differently. This plays into the decisions we make regarding placement of subwoofers in relation to tweeters.
- One unique property of our “hearing” is that we cannot locate bass sounds unless we correctly associate a bass note’s overtones and then locate them in space. This allows for speakers that specialize in low bass—subwoofers—to be placed away from the main speakers and successfully fool us into believing that the bass is coming from the small speakers that reproduce the overtones… the sounds we “hear.”
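The physics behind this trick is wavelength: deep bass wavelengths dwarf the spacing between our ears, so the level and phase cues described earlier carry almost no directional information. A quick check in Python (the 80 Hz figure reflects a common subwoofer crossover region, my assumption, not a number from the article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength of a sound wave in air."""
    return SPEED_OF_SOUND / frequency_hz

# Deep bass: wavelengths of several meters, far larger than a head,
# so interaural cues are too small to localize the source.
print(round(wavelength_m(80), 2))    # 4.29
# Midrange overtones: head-sized wavelengths, easy to localize.
print(round(wavelength_m(3000), 3))  # 0.114
```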
In the final analysis, what we hear and how we perceive and evaluate it is such a unique combination of quantitative and subjective inputs that coming to any agreement on what perfect sound is, is a nearly impossible undertaking. However, there are those transcendental moments, like at a live performance by a favorite artist, when even a crowd of strangers comes to a collective agreement that perfection has been achieved. If you would like to know more about psychoacoustics, http://en.wikipedia.org/wiki/Psychoacoustics is a good place to start.
The content & opinions in this article are the author’s and do not necessarily represent the views of HomeToys