Posted to rec.audio.high-end
From: Audio Empire
Subject: Mind Stretchers

On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote:

> On 6/17/2012 6:34 PM, Audio Empire wrote:
>> On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote:

>> [snip]

>>> So add a second microphone, and you have the same signals recorded from
>>> a different position in space. As long as you know the microphone
>>> positions, it's easy to determine the relative positions of the two
>>> instruments, aurally or mathematically. Yet when you add in the effects
>>> of the reverberant sound field, you have a whole new set of signals of
>>> varying strengths and arrival times, and thus phase differences. As a
>>> listener, in place of the microphones, even minor head movements allow
>>> you to localize the instruments by sampling different angular
>>> presentations (i.e. the HRTF effect) and analyzing multiple wave fronts.
>>> This depth of information is simply not captured in a stereo recording.


>> Sure it is. It's captured by only two mikes (ideally), but in the right
>> circumstances, that's enough.


> How is it captured? I'm not referring to *soundstage* depth; clearly
> that only requires two mics. Rather, I'm talking about information
> density. A listener, with only minute head movements, samples a number
> of different wavefronts, providing an information density much greater
> than that achieved by any fixed recording setup, whether stereo,
> multichannel, or binaural.


I think that the soundfield, by the time it reaches the audience, has
coalesced into a single whole that is perceived in a certain way from each
location within the audience. The human brain allows us to search within that
soundfield and pick out certain sounds upon which to concentrate, but that's
part of the human ear/intelligence interface that lets us pick certain
sounds out of a plethora of background noise. That is, it's a survival skill
that allows us to pick out the snap of a twig against a waterfall, or a
mother to distinguish her lost child crying in a crowd. It is not a result
of the orchestra being composed of many different soundfields which moving
our heads allows us to intersect and sample, and which microphones miss
because they are locked in a single location.
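
For what it's worth, the "easy to determine mathematically" point in your
first quoted paragraph is one we agree on. A toy sketch of it in Python,
assuming a single far-field source and two omni mics at a known spacing
(the sample rate, spacing, and function name here are mine, purely for
illustration):

import numpy as np

FS = 48_000   # sample rate, Hz
C = 343.0     # speed of sound, m/s
D = 0.30      # mic spacing, m

def bearing_from_pair(left, right):
    """Estimate source bearing (degrees) from the inter-mic time
    difference. Positive angle means the source is toward the left
    mic. Far-field assumption: path difference = D * sin(theta)."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # samples by which right lags left
    sin_theta = np.clip(C * (lag / FS) / D, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulate a source 20 degrees left of center: the right mic hears a
# slightly delayed copy of what the left mic hears.
delay = round(D * np.sin(np.radians(20.0)) / C * FS)  # ~14 samples
src = np.random.default_rng(0).standard_normal(FS)
left, right = src[delay:], src[:len(src) - delay]
print(bearing_from_pair(left, right))  # ~19.5 (delay quantized to samples)

Add reverberation, though, and that single clean correlation peak smears
into many, which is where our disagreement below starts.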

>>> That is the information that is missing; that's the information that
>>> allows us to establish accurate positional data.


>> I maintain that in a properly made recording, it's not missing.


> I believe the information to which I'm referring is missing from the
> recording. Where, in a stereo recording, is information from multiple
> wavefronts, both normal and off-angle, recorded?


It's not necessary, as there aren't multiple wavefronts; or if there are,
both our microphones and our ears intersect all of them arriving at that
point in space.
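
To put that in concrete terms: the pressure at any one point is just the
sum of every arrival, direct and reflected, and a diaphragm at that point
captures the whole sum. A toy sketch, with made-up distances (all names
and geometry here are mine):

import numpy as np

FS = 48_000   # sample rate, Hz
C = 343.0     # speed of sound, m/s

def pressure_at_point(arrivals, t):
    """Superpose every wavefront reaching one point in space.
    `arrivals` is a list of (signal_fn, distance_m) pairs: the direct
    sound plus each reflection treated as its own arrival, delayed by
    r/C and attenuated by 1/r (spherical spreading)."""
    total = np.zeros_like(t)
    for signal_fn, r in arrivals:
        total += signal_fn(t - r / C) / r
    return total

def tone(t):
    # 440 Hz tone that starts at t = 0
    return np.where(t >= 0.0, np.sin(2 * np.pi * 440.0 * t), 0.0)

t = np.arange(0, 0.05, 1.0 / FS)

# Direct path at 3 m plus two wall bounces at 5 m and 8 m: a mic (or an
# eardrum) at that spot records this one summed waveform.
mic_signal = pressure_at_point([(tone, 3.0), (tone, 5.0), (tone, 8.0)], t)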

> There is no doubt that there is sufficient information in a stereo
> recording to create a left/right soundstage, as well as depth
> localization, and at least an illusion of height, although I admit I
> don't have a firm geometric/visual conception of quite how that is achieved.


Subtle phase differences give our ears the (relative) height of a sound
source. They are captured by microphones too in a true stereo recording.
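
And those inter-channel phase relationships really are sitting in the
recording. A minimal sketch of pulling one out (the function name and
test tone are mine, not from any real recording):

import numpy as np

FS = 48_000  # sample rate, Hz

def interchannel_phase(left, right, freq):
    """Phase (radians) by which `right` lags `left` at one frequency,
    read off the cross-spectrum of the two recorded channels."""
    cross = np.fft.rfft(left) * np.conj(np.fft.rfft(right))
    return np.angle(cross[round(freq * len(left) / FS)])

# Two channels of a 1 kHz tone, right lagging left by 30 degrees:
t = np.arange(FS) / FS
left = np.sin(2 * np.pi * 1000.0 * t)
right = np.sin(2 * np.pi * 1000.0 * t - np.radians(30.0))
print(np.degrees(interchannel_phase(left, right, 1000.0)))  # ~30.0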

> But the ability to sample a virtually endless number of stereophonic
> (relative to listener reception) wavefronts, available to an audience
> member, does not translate to a recording made from any fixed perspective.


If you accept the premise, then your conclusion is correct. However, from my
knowledge and experience, I find that your premise isn't correct.