
So the microphone is using the same exact piece of information to make its
measurement (compression and rarefaction of air molecules). Therefore, the
ear can't possibly have more information available to it than the microphone.
So, in light of this explanation, how could it not be telling the whole
story?


The problem is not the information. The problem is the measurement of the
information. First, you're not strictly correct - the microphone does not
have the same information available to it, because it's not shaped like an
ear. And even if it were, not all human ears are shaped the same. But that's
not the relevant point here.


The pinna introduces distortion, actually. That's the point of it. It
improves high frequency response for sounds coming from in front of you
(this is good) but deliberately attenuates high frequencies for sounds
coming from behind you. As such, it assists the brain with localization.
Importantly, it's also responsible for the brain's ability to estimate the
elevation of the source. You'll note that it's not symmetrical from top
to bottom. Early auditory areas deep in the brain spend the bulk of their
resources on these computations (the inferior colliculus perhaps the most
prominent among them - and note that the signal hasn't even reached the
cortex at that point).
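
To make the front/back part concrete, here's a toy sketch in Python
(assuming numpy and scipy are installed). The first-order filter and the
4 kHz cutoff are illustrative choices of mine, nothing like a real measured
pinna response - the point is just that rear sources lose their high
frequencies while front sources pass through untouched.

    import numpy as np
    from scipy.signal import butter, lfilter

    def pinna_shadow(signal, sample_rate, source_behind):
        """Toy model of pinna filtering: front sources pass through,
        rear sources are low-passed to mimic high-frequency shadowing.
        The 4 kHz cutoff is illustrative, not a physiological value."""
        if not source_behind:
            return signal
        b, a = butter(1, 4000, btype="low", fs=sample_rate)
        return lfilter(b, a, signal)

    # A one-second noise burst "heard" from the front vs. from behind.
    fs = 44100
    noise = np.random.default_rng(0).standard_normal(fs)
    front = pinna_shadow(noise, fs, source_behind=False)
    behind = pinna_shadow(noise, fs, source_behind=True)
    # The rear version has much less energy above the cutoff - exactly
    # the kind of cue the brain exploits for front/back localization.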

This is an example of the auditory system, like all of the other sensory
systems, intentionally introducing distortion into the signal in order to pull
out attributes of the stimulus that matter to the animal. The visual system is
probably even more guilty of employing this strategy. It's a common trend, all
the way from humans to invertebrates.

So yes, it's a GOOD THING that microphones don't use these tricks. We
want accuracy, so ideally a measurement microphone collects sound from all
directions equally.
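
The idealized patterns behind that statement are simple formulas. Here's a
minimal sketch (Python with numpy; these are textbook idealizations, not
measurements of any particular microphone) comparing an omnidirectional
pattern, the accuracy-first choice, against a cardioid, which deliberately
favors the front:

    import numpy as np

    def polar_gain(theta_rad, pattern):
        """Idealized polar patterns: 'omni' responds equally at every
        angle, 'cardioid' trades that for front-favoring directionality.
        Textbook formulas, not data from a real microphone."""
        if pattern == "omni":
            return np.ones_like(theta_rad)
        if pattern == "cardioid":
            return 0.5 * (1 + np.cos(theta_rad))
        raise ValueError(pattern)

    angles = np.radians([0.0, 90.0, 180.0])   # front, side, rear
    print(polar_gain(angles, "omni"))         # [1. 1. 1.]
    print(polar_gain(angles, "cardioid"))     # approx [1. 0.5 0.]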

The real problem is that microphones are not perfect and can't
send a perfect signal to be analyzed. There is always some distortion of
the original signal.


But substantially less than the human auditory system introduces.
Microphones tend to have a reasonably flat response from 20 Hz to 20 kHz
(the good ones, at least). The human auditory system has an awful response,
peaking around 1 kHz or less (the dominant part of human speech,
incidentally) and responding poorly above 15 kHz and below about 100 Hz.

Additionally, microphones have a cleaner transduction mechanism, not
having to rely on a network of bones attached to an asymmetric diaphragm.
The auditory system also inherently produces its own distortion, known as
otoacoustic emissions, which are much more significant than the distortion
produced by decent microphones.

Finally, and most importantly, the microphone makes an electrical
measurement that's limited only by the inductance of the coil (which is why
it can have such a wide spectral range). The auditory system, by contrast,
relies on a network of neurons, each tuned to a relatively wide band of
frequencies, to encode the signal - essentially performing a rough Fourier
transform. Then, before the signal is even transmitted to the brain,
computations subtract adjacent frequency channels from each other (a form
of lateral inhibition - another bit of distortion added to the system). As
a result, the signal being sent to the brain is a far cry from the signal
that reached the eardrum.
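
Here's what that processing chain looks like as a deliberately crude sketch
(Python with numpy/scipy; the center frequencies, the broad 1.3x bandwidths,
and the 0.5 inhibition weight are all illustrative assumptions of mine, not
physiological values): a bank of wide bandpass filters stands in for the
cochlea's rough Fourier transform, and subtracting neighboring channels
stands in for lateral inhibition.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def cochlea_sketch(signal, fs, centers_hz):
        """Crude cochlea model: wide bandpass channels, then lateral
        inhibition subtracting a fraction of each channel's neighbors.
        All tuning values here are illustrative, not physiological."""
        channels = []
        for fc in centers_hz:
            lo, hi = fc / 1.3, fc * 1.3    # deliberately broad tuning
            sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
            channels.append(np.abs(sosfilt(sos, signal)))   # rectified
        energy = np.array(channels)

        inhibited = energy.copy()
        for i in range(len(centers_hz)):
            if i > 0:
                inhibited[i] -= 0.5 * energy[i - 1]
            if i < len(centers_hz) - 1:
                inhibited[i] -= 0.5 * energy[i + 1]
        # Neurons can't fire negatively, so clip at zero.
        return np.clip(inhibited, 0, None)

    fs = 16000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone
    out = cochlea_sketch(tone, fs, centers_hz=[250, 500, 1000, 2000, 4000])

Even in this toy version, what comes out is a set of contrast-enhanced
channel energies, not anything you could mistake for the waveform that hit
the eardrum.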

In short, microphones do a much better job of capturing the original
signal than the human auditory system does. Not only because they use more
precise materials and mechanisms, but also because they're designed for
perfect reproduction - the auditory system is not.