Posted to rec.audio.high-end
From: Audio Empire
Subject: Need advice for a small room

On Fri, 11 May 2012 06:02:09 -0700, Gary Eickmeier wrote
(in article ):

"Audio Empire" wrote in message
...

While I agree with you in theory, the reality is that the "sweet
spot" concept is part of the baggage that stereo recording
methodology carries with it. I've mentioned this before, but it
doesn't hurt to reiterate:

When you are at a live, unamplified concert and you move around
with respect to the stage (or other locus of performance), your ears
go with you and your perspective changes with location. When
listening to a recording and moving around the room, your
surrogate ears, the microphones (and ultimately, in the case of
multi-miked, multi-channel recordings, the final mix), keep one set
perspective because the "mikes" DON'T move. That being the case,
there is only ONE set place where the perspective is correct, IOW,
there is only one place in front of the speakers (right to left)
where the listener is on the same axis as the surrogate ears.
Naturally, this is the place where the imaging and soundstage snap
into sharp focus. If you aren't in that spot, you still hear a
sound-field, but the focus will be gone. An analogy would be a 3-D
visual image. As you move the right-eye image and the left-eye
image closer together and further apart, there is only one relative
position where these two images coalesce in your mind as a single
3-D image (assuming, of course, you are wearing the glasses) with
depth as well as height and width. That's because your surrogate
eyes are a pair of camera lenses set a fixed distance apart. When
you view those images, they must appear to your brain to be
separated by that same distance, or you won't see the stereo
effect - with or without the glasses.


No. Not analogous.

There need not be a single "sweet spot" nor a single perspective
that was viewed by some single stereo microphone.

The example in my Mars paper - which I think I sent you - was of an
imaginary recording made with one microphone per instrument, then
played back on speakers that have radiation patterns similar to the
instruments they are reproducing, and that are placed in positions
geometrically similar to the originals. Such an ideal system
creates a sound field that is spatially a duplicate of the
original. You can move around in it just like live. Its realism
depends on the playback room being similar in size to the original,
or to a room the original performance would work well in. And most
notably, neither the recording nor the reproduction has anything to
do with the number of ears on your head, the spacing between them,
or any single position of any microphone during the recording.
Stereo is not a head-related system like binaural; it has nothing
to do with the human hearing mechanism, but rather with the
creation of sound fields in rooms.
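
To put rough numbers on that idea: with one speaker standing in for
each instrument, the level and arrival time of every source at any
seat follow from distance alone, so the perspective shifts as the
listener moves, just as it would live. The Python below is only a
back-of-the-envelope sketch; the instrument coordinates, listener
positions, and function name are invented purely for illustration.

import math

SPEED_OF_SOUND = 343.0  # metres per second

# Hypothetical layout: one speaker standing in for each instrument,
# at the spot where that instrument stood (coordinates in metres).
sources = {"violin": (-2.0, 5.0), "viola": (0.0, 5.0), "cello": (2.0, 5.0)}

def perspective(listener_xy):
    """Arrival delay (ms) and rough 1/r level (dB re 1 m) of each source
    at the given listener position -- pure geometry, no microphone involved."""
    lx, ly = listener_xy
    result = {}
    for name, (sx, sy) in sources.items():
        r = math.hypot(sx - lx, sy - ly)
        result[name] = (round(1000.0 * r / SPEED_OF_SOUND, 2),
                        round(-20.0 * math.log10(r), 1))
    return result

# Move the listener and the perspective moves with you, as at a live concert:
print(perspective((0.0, 0.0)))   # centre seat
print(perspective((2.0, 0.0)))   # two metres to the right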

It is confusing when we begin to simplify the system down to fewer
channels, especially if it gets all the way down to two channels,
because then we begin to think that the two speakers are
reproducing EAR SIGNALS, which they are not. The only aspect of
playback that relates to the human hearing mechanism is the summing
localization that is employed to create the phantom imaging between
speakers. Discrete surround sound with the center channel gets us
some of the way away from that confusion, but the nature of the
system remains the same, and the confusion will always be with us.
But even with a simplified-down system there needn't be a single
sweet spot or single perspective on the instruments, if you employ
a proper radiation pattern, speaker positioning, and a good room.
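
A common first-order model of that summing localization is the
stereophonic "tangent law", which predicts where the phantom image
lands for a listener seated symmetrically between the speakers. The
sketch below assumes a conventional +/- 30 degree speaker layout;
the function name and the 6 dB example pan are arbitrary choices,
and the law itself is only an approximation that holds near the
symmetric seat and at low frequencies.

import math

def phantom_angle_deg(g_left, g_right, half_angle_deg=30.0):
    """Tangent-law estimate of the phantom image angle (degrees, positive
    toward the left speaker) for a listener seated symmetrically between
    speakers at +/- half_angle_deg."""
    ratio = (g_left - g_right) / (g_left + g_right)
    return math.degrees(math.atan(ratio * math.tan(math.radians(half_angle_deg))))

# A source panned 6 dB toward the left channel sits roughly 11 degrees
# left of centre; equal gains put it dead centre, and feeding only one
# channel puts it at that speaker:
print(round(phantom_angle_deg(1.0, 10 ** (-6 / 20)), 1))
print(phantom_angle_deg(1.0, 1.0))
print(round(phantom_angle_deg(1.0, 0.0), 1))

Note that the prediction only applies if the listener actually sits
at the symmetric position, which is the same sweet-spot constraint
being discussed.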


While much of what you say is true, with two channels, there is only a
very narrow range of listening positions where the aural images are in
focus. This has to be so. Microphones aren't ears; they don't even ACT
like ears, and in fact we don't want them to act like ears, because if
they did we would have binaural recordings, not stereo recordings.
But they do build up a snapshot of the performance from a fixed
perspective. It doesn't matter whether this perspective is the result
of a coincident or near-coincident microphone technique such as M-S or
ORTF, of widely-spaced omnis, or of a studio-mixed sound-field built
from the outputs of dozens of microphones recorded to dozens of
separate channels and mixed down to two. The result on the listener's
end is the same: a fixed perspective that does not move when the
listener moves.

You are right again when you say that the only way around this is to
have a microphone and channel per instrument, and a speaker on the
listening end per instrument, each placed exactly where the original
instrument stood during the recording process. This would give the
playback image specificity similar to that of a real performance. Bell
Labs noted this in their 1933 stereophonic experiments. They started
with one channel per instrument (not recorded, of course, merely
piped-in by hard-wire from another, remote location) and kept reducing
the number of channels (on both ends) until but two remained. They
noted that it was entirely practical to convey the stereophonic
effect with merely two channels, but they also added the caveat that
with two channels, the optimum stereo effect was achieved only at the
point in front of the speakers where the sound-fields from the two
channels intersect.
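
Some back-of-the-envelope arithmetic illustrates that caveat. Assume,
purely for illustration, speakers 2 m apart and a listening line 2.5 m
in front of them (these figures are not from the Bell Labs work):

import math

SPEED_OF_SOUND = 343.0  # metres per second

def arrival_skew_ms(offset_m, span_m=2.0, distance_m=2.5):
    """How much later (ms) the far speaker's sound arrives than the near
    speaker's for a listener sitting offset_m to one side of the centre
    line, with speakers span_m apart on a line distance_m in front."""
    half = span_m / 2.0
    near = math.hypot(half - offset_m, distance_m)
    far = math.hypot(half + offset_m, distance_m)
    return 1000.0 * (far - near) / SPEED_OF_SOUND

for offset in (0.0, 0.25, 0.5, 1.0):
    print(f"{offset:4.2f} m off centre -> {arrival_skew_ms(offset):.2f} ms skew")

With that geometry the skew is zero on the centre line, about half a
millisecond at 0.25 m off centre, a millisecond at 0.5 m, and two
milliseconds at 1 m - on the order of, and then well beyond, the
largest natural interaural delays, so the precedence effect pulls the
phantom images toward the near speaker and the stereo focus collapses.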