
Ping Scott Dorsey, The New Stereo Soundbook, Time


Gary Eickmeier
August 17th 13, 06:08 PM
Scott,

I am more than half way through the book now, haven't found a single thing
that I didn't already know. Why did you want me to get this? I may have a
rant later on about some of the misconceptions I have found in here, which I
am taking notes on as I go, but for now let me keep it very simple with a
question for the group.

After all of the types of microphones and stereo recording methods, he goes
into multi-miking and mixing. He relates how you need to pan the spot miked
instruments into the same position as the master stereo pickup places them,
but he says nothing about time delay.

If you do a stereo pickup at the front of the orchestra PLUS several spot
mikes nearer to the instruments, then if you do not delay the spotted
instruments it will be the same as if you advanced their sound by several
milliseconds, no? This would not be a trivial error and I thought it was
well-known.
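[A back-of-the-envelope sketch of the effect Gary is asking about, added for illustration; it is not from the book. The only assumption is the speed of sound in air, about 343 m/s at room temperature: an undelayed spot mike leads the main pair by roughly the path-length difference divided by that speed.]

```python
# Illustration only: how far "ahead" an undelayed spot mike puts an
# instrument relative to the main stereo pair.
SPEED_OF_SOUND_M_S = 343.0  # air at roughly 20 C

def spot_advance_ms(main_pair_distance_m, spot_distance_m):
    """Time by which the undelayed spot leads the main pair, in ms."""
    return (main_pair_distance_m - spot_distance_m) / SPEED_OF_SOUND_M_S * 1000.0

# e.g. main pair 12 m from a section, spot mike 1 m from it:
print(round(spot_advance_ms(12.0, 1.0), 1))  # 32.1 ms ahead
```

So "several milliseconds" is, if anything, conservative: at orchestral distances the spots can lead by tens of milliseconds.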

Or was this book written before such techniques became available?

Gary Eickmeier

PStamler
August 18th 13, 07:32 AM
On Saturday, August 17, 2013 12:08:58 PM UTC-5, Gary Eickmeier wrote:

> Or was this book written before such techniques became available?

It was.

Peace,
Paul

Scott Dorsey
August 18th 13, 04:25 PM
Gary Eickmeier > wrote:
>
>I am more than half way through the book now, haven't found a single thing
>that I didn't already know. Why did you want me to get this? I may have a
>rant later on about some of the misconceptions I have found in here, which I
>am taking notes on as I go, but for now let me keep it very simple with a
>question for the group.

I suggested you get it (or read it at a library) because it is the standard
undergraduate textbook on the subject. It is a very good introduction to
stereophony in general, and you have made some statements which imply that
you either do not understand or do not believe in conventional stereophony.

Your reading this book at least gives you a point of reference where you can
discuss the subject with other people. You can say, "Streicher says this,
but I believe he is wrong." My sneaking suspicion is that the misconceptions
are more apt to be yours than his, but until everybody has a standard frame
of reference nobody can be sure either way.

The fact that it is the standard reference does not necessarily mean every
word is correct, but it does mean there has been a lot of review on it,
making it as authoritative as Thomas and Finney or Terman.

>After all of the types of microphones and stereo recording methods, he goes
>into multi-miking and mixing. He relates how you need to pan the spot miked
>instruments into the same position as the master stereo pickup places them,
>but he says nothing about time delay.

Delaying spot mikes was not really possible at all until a decade ago except
in a rather crude way. It's really only been in the last few years that it
has started to become widely used in classical recording, and it is still
almost unknown outside of the classical recording community.

None of the big high-budget aggressively-spotted recordings that people are
routinely familiar with (like von Karajan's second Beethoven set) have
employed time alignment of spot mikes. And sadly, as the delay systems have
become practical, those big sessions have pretty much all disappeared.

When the first edition of Streicher's book was written, the only possible
delay anybody had was sel-sync and it was a thing to be used only when
absolutely desperate. When the second edition came out, I think delay was
technically possible but nobody actually used it. Only now are people
starting to learn how to use it.

>If you do a stereo pickup at the front of the orchestra PLUS several spot
>mikes nearer to the instruments, then if you do not delay the spotted
>instruments it will be the same as if you advanced their sound by several
>milliseconds, no? This would not be a trivial error and I thought it was
>well-known.

Yes, but it was not a thing anybody could do anything about, save for
placing the main pair away from the orchestra at approximately the distance
that, with the machine in sel-sync mode on playback, advanced the main pair
by the same time as that delay. I saw this technique used once in my life
when I was an assistant in an attempt to get around a severe acoustical problem,
and it had many drawbacks.

So for many decades, people just lived with the degradation when they brought
spots up, and they used the tightest spot mikes possible so that that
degradation was localized.

>Or was this book written before such techniques became available?

Long before such techniques became available, and long long before they
became popular in the classical world. They are still mostly unknown
outside of the classical world, in part because with physically smaller
ensembles the delay times are shorter. Still, it can really change the
sound of spot-miked drumkits.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Gary Eickmeier
August 18th 13, 05:20 PM
Scott Dorsey wrote:
> Gary Eickmeier > wrote:
>>
>> I am more than half way through the book now, haven't found a single
>> thing that I didn't already know. Why did you want me to get this? I
>> may have a rant later on about some of the misconceptions I have
>> found in here, which I am taking notes on as I go, but for now let
>> me keep it very simple with a question for the group.
>
> I suggested you get it (or read it at a library) because it is the
> standard undergraduate textbook on the subject. It is a very good
> introduction to stereophony in general, and you have made some
> statements which imply that
> you either do not understand or do not believe in conventional
> stereophony.

I well understand that my stereo theories differ from the "two ears two
speakers" norm. But if this book represents the norm that most agree with, I
have a longer row to hoe than I imagined. This has got to stop. But you are
correct, if it gives a common starting point from which we can diverge and
discuss, then it is worth the trip.


> So for many decades, people just lived with the degradation when they
> brought spots up, and they used the tightest spot mikes possible so
> that that degradation was localized.

Yes - I am thinking that it would have the tendency to not only make the
spotted instruments clearer, but also move them to the front of the group,
which is not what you want. This of course would have to be submitted to some
stringent listening tests to see exactly how audible it is and what it does,
but that is my supposition.

Thanks,
Gary Eickmeier

William Sommerwerck
August 18th 13, 06:58 PM
Strictly speaking, all conventional stereophony is "wrong", because it doesn't
provide correct directional cues. Only binaural and Ambisonic recordings do
(AFAIK).

A friend of mine has a proprietary recording process that uses heavy
multi-miking, combined with delay compensation (and likely other processing he
won't describe). The result sounds a lot like an ideal single-point recording.

If you get "Stereophile", note my letter in the September issue.

William Sommerwerck
August 18th 13, 07:00 PM
"William Sommerwerck" wrote in message ...

Whoops. That was supposed to go directly to Scott. Sorry about that.

Gary Eickmeier
August 18th 13, 08:04 PM
William Sommerwerck wrote:
> Strictly speaking, all conventional stereophony is "wrong", because
> it doesn't provide correct directional cues. Only binaural and
> Ambisonic recordings do (AFAIK).
>
> A friend of mine has a proprietary recording process that uses heavy
> multi-miking, combined with delay compensation (and likely other
> processing he won't describe). The result sounds a lot like an ideal
> single-point recording.
> If you get "Stereophile", note my letter in the September issue.

Well, I'm glad it went into the group, because it shows that "stereo," or at
least some topics within it, is still open for discussion. Do you have a
file of your friend's recording or recordings? I just bought a copy of TAS,
but I don't normally read these mags any more. Can you give us the letter
here, or would that violate something?

Gary Eickmeier

William Sommerwerck
August 18th 13, 08:55 PM
"Gary Eickmeier" wrote in message ...

> Can you give us the letter here, or would that violate something?

It's just a spew about the general disregard of audiophiles for absolute
accuracy (rather than euphony).

Scott Dorsey
August 18th 13, 09:43 PM
William Sommerwerck > wrote:
>Strictly speaking, all conventional stereophony is "wrong", because it doesn't
>provide correct directional cues. Only binaural and Ambisonic recordings do
>(AFAIK).

It does provide correct directional cues in one axis, within the
stereo angle. Is this sufficient? Depends on your goal and on the
source material too.

>A friend of mine has a proprietary recording process that uses heavy
>multi-miking, combined with delay compensation (and likely other processing he
>won't describe). The result sounds a lot like an ideal single-point recording.

That can be a useful advantage in an application where getting a good
single-point recording is impossible. I am skeptical about heavy
multi-miking, though, just because some instruments don't have any place
up close where they sound right (like violins). On the other hand,
many of those instruments are among the least likely to need aggressive
spotting.
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

William Sommerwerck
August 19th 13, 01:23 AM
"Scott Dorsey" wrote in message ...
William Sommerwerck > wrote:

>> Strictly speaking, all conventional stereophony is "wrong",
>> because it doesn't provide correct directional cues. Only
>> binaural and Ambisonic recordings do (AFAIK).

> It does provide correct directional cues in one axis, within the
> stereo angle. Is this sufficient? Depends on your goal and on the
> source material, too.

This can be true when pan-potting. But like Oliver Twist, I want more.


>> A friend of mine has a proprietary recording process that uses
>> heavy multi-miking, combined with delay compensation (and likely
>> other processing he won't describe). The result sounds a lot like an
>> ideal single-point recording.

> That can be a useful advantage in an application where getting a
> good single-point recording is impossible. I am skeptical about
> heavy multi-miking, though, just because some instruments don't
> have any place up close where they sound right (like violins). On
> the other hand, many of those instruments are among the least
> likely to need aggressive spotting.

Good point. When I said "a lot like an ideal single-point recording", I meant
that it sounded spatially coherent -- not like a bunch of individual sounds
pasted together.

Ron C[_2_]
August 19th 13, 01:34 AM
On 8/18/2013 8:23 PM, William Sommerwerck wrote:
> "Scott Dorsey" wrote in message ...
> William Sommerwerck > wrote:
>
>>> Strictly speaking, all conventional stereophony is "wrong",
>>> because it doesn't provide correct directional cues. Only
>>> binaural and Ambisonic recordings do (AFAIK).
>
>> It does provide correct directional cues in one axis, within the
>> stereo angle. Is this sufficient? Depends on your goal and on the
>> source material, too.
>
> This can be true when pan-potting. But like Oliver Twist, I want more.

Somehow I can't help but think of these Rolling Stones lyrics:
~~
You can't always get what you want
But if you try sometimes well you might find
You get what you need
~~

It does seem like those who care about that degree
of precision are becoming a vanishing breed.
Weep as ye need.
==
Later...
Ron Capik
--

Gary Eickmeier
August 19th 13, 06:30 AM
William Sommerwerck wrote:
> "Scott Dorsey" wrote in message ...
> William Sommerwerck > wrote:
>
>>> Strictly speaking, all conventional stereophony is "wrong",
>>> because it doesn't provide correct directional cues. Only
>>> binaural and Ambisonic recordings do (AFAIK).

Stereophony can't be "wrong" and there are no "correct directional cues."
The playback is a re-staging of the recorded material, with directions
established by your speakers and room. We are not recording "cues" for the
ears, nor are we recording "ear signals." We are recording and reproducing
sound in rooms. Binaural is an attempt to record ear signals, or cues, but
no, it doesn't quite work out that way because it doesn't externalize and
you can't move your head around to get it to work right. Ambisonics is an
attempt to encode and play back all directional information, but it still
must be played in a real room to externalize, which makes it, too, a
reconstruction of the recorded event.

> When I said "a lot like an ideal single-point recording",
> I meant that it sounded spatially coherent -- not like a bunch of
> individual sounds pasted together.

Single point miking is not inherently "correct" nor do miking techniques
have anything to do with the number of ears on our heads or even the human
hearing mechanism. This means there is nothing inherently right or wrong
with multi-miking, nor is there any reason that it can't also record the
ambience surrounding the performers. It would be good to have a master
stereo pair or trio or else keep the line of individual mikes generally the
same distance from the orchestra, but a multi-miked recording is not
inherently wrong. Think of a panorama photo taken by shooting individual
images of instruments or sections of the orchestra, then printing the
pictures so that they meet perfectly or overlap and provide a realistic
panorama of the group. Each camera records both the instrument or section
and the background.

If you do record the background along with the individual instruments, then
you get a "you are there" feel to the playback. If you isolate the
instruments and record no ambience of the original room, then you have a
"they are here" feel to the playback because you will be placing the players
in your local acoustics. This is valid as well, and can sound very
realistic.

Gary Eickmeier

Scott Dorsey
August 19th 13, 01:38 PM
William Sommerwerck > wrote:
>"Scott Dorsey" wrote in message ...
>
>> That can be a useful advantage in an application where getting a
>> good single-point recording is impossible. I am skeptical about
>> heavy multi-miking, though, just because some instruments don't
>> have any place up close where they sound right (like violins). On
>> the other hand, many of those instruments are among the least
>> likely to need aggressive spotting.
>
>Good point. When I said "a lot like an ideal single-point recording", I meant
>that it sounded spatially coherent -- not like a bunch of individual sounds
>pasted together.

And that's a big worry to my mind... if it's not tonally accurate, I don't
care if it's spatially accurate.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

William Sommerwerck
August 19th 13, 03:57 PM
"Gary Eickmeier" wrote in message ...
William Sommerwerck wrote:

> Stereophony can't be "wrong" and there are no "correct directional cues."

But there are. These are well-established by binaural and Ambisonic theory.
Ever heard of Makita localization? (It has nothing to do with rummaging
through power tools at Home Despot.)


> The playback is a re-staging of the recorded material, with directions
> established by your speakers and room. We are not recording "cues"
> for the ears, nor are we recording "ear signals." We are recording and
> reproducing sound in rooms.

That's why so many recordings are so bad.

William Sommerwerck
August 19th 13, 04:00 PM
"Scott Dorsey" wrote in message ...
William Sommerwerck > wrote:
>"Scott Dorsey" wrote in message ...


>>> That can be a useful advantage in an application where getting a
>>> good single-point recording is impossible. I am skeptical about
>>> heavy multi-miking, though, just because some instruments don't
>>> have any place up close where they sound right (like violins). On
>>> the other hand, many of those instruments are among the least
>>> likely to need aggressive spotting.

>> Good point. When I said "a lot like an ideal single-point recording",
>> I meant that it sounded spatially coherent -- not like a bunch of
>> individual sounds pasted together.

> And that's a big worry to my mind... if it's not tonally accurate, I don't
> care if it's spatially accurate.

Okay. If you want the violins to sound right, then they have to be played in
an acoustically appropriate environment, and that environment has to be
correctly recorded and reproduced.

Arny Krueger[_5_]
August 19th 13, 07:11 PM
"Gary Eickmeier" > wrote in message
...

> I am more than half way through the book now, haven't found a single thing
> that I didn't already know. Why did you want me to get this? I may have a
> rant later on about some of the misconceptions I have found in here, which
> I am taking notes on as I go, but for now let me keep it very simple with
> a question for the group.

> After all of the types of microphones and stereo recording methods, he
> goes into multi-miking and mixing. He relates how you need to pan the spot
> miked instruments into the same position as the master stereo pickup
> places them, but he says nothing about time delay.

> If you do a stereo pickup at the front of the orchestra PLUS several spot
> mikes nearer to the instruments, then if you do not delay the spotted
> instruments it will be the same as if you advanced their sound by several
> milliseconds, no? This would not be a trivial error and I thought it was
> well-known.

Have you ever mixed a recording that was made this way, and compared what it
sounds like with the mic delayed and not delayed?

IME adding delay to the spot mics does not usually make a lot of difference
since the signal from the spot mic is usually quite a bit louder than the
signal for that source from the overall mic pair. The purpose of spot micing
is to make the spot miced source more apparent, and advancing it in time
tends to make it more audible. Yes, you end up with a recording with the
spot miced instrument delayed by different amounts, but the ear tends to
fuse them into one apparent source. There are also a number of copies of
each instrument with different timings due to reflections within the room
and there isn't a lot you can do about that in a normal live room once you
have optimized your mic selections, locations and orientations.

Also, in a live recording with spot mics it is difficult or impossible to
avoid bleed between the spot mics, so your mixdown recording will probably
contain a significant contribution from each instrument from several spot
mics.

Gary Eickmeier
August 19th 13, 08:51 PM
William Sommerwerck wrote:
> "Gary Eickmeier" wrote in message
> ... William Sommerwerck wrote:
>
>> Stereophony can't be "wrong" and there are no "correct directional
>> cues."
>
> But there are. These are well-established by binaural and Ambisonic
> theory. Ever heard of Makita localization? (It has nothing to do with
> rummaging through power tools at Home Despot.)

So how do these directional cues get to us on playback? Is there some
standard speaker separation or angle that we are supposed to adhere to so
that the recorded angles come out right? What if you sit closer or farther
away? The angle changes, no? So how does this "cue" tell you that the
trumpet is 23 degrees left of center?
>
>
>> The playback is a re-staging of the recorded material, with
>> directions established by your speakers and room. We are not
>> recording "cues" for the ears, nor are we recording "ear signals." We are
>> recording
>> and reproducing sound in rooms.
>
> That's why so many recordings are so bad.

What is why they are so bad?

Gary

Gary Eickmeier
August 19th 13, 08:54 PM
William Sommerwerck wrote:

> Okay. If you want the violins to sound right, then they have to be
> played in an acoustically appropriate environment, and that
> environment has to be correctly recorded and reproduced.

Thou hast said it.

Gary

Gary Eickmeier
August 19th 13, 09:06 PM
Arny Krueger wrote:

> Have you ever mixed a recording that was made this way, and compared
> what it sounds like with the mic delayed and not delayed?

No - that's why I say we need to confirm it with listening tests.

> IME adding delay to the spot mics does not usually make a lot of
> difference since the signal from the spot mic is usually quite a bit
> louder than the signal for that source from the overall mic pair. The
> purpose of spot micing is to make the spot miced source more
> apparent, and advancing it in time tends to make it more audible.
> Yes, you end up with a recording with the spot miced instrument
> delayed by different amounts, but the ear tends to fuse them into one
> apparent source. There are also a number of copies of each instrument
> with different timings due to reflections within the room and there
> isn't a lot you can do about that in a normal live room once you have
> optimized your mic selections, locations and orientations.

I am theorizing that if the spotted instruments do not have the ambience on
their track then they should sound like several up front performers all
lined up, not sinking into the recorded acoustic space.
>
> Also, in a live recording with spot mics it is difficult or
> impossible to avoid bleed between the spot mics, so your mixdown
> recording will probably contain a significant contribution from each
> instrument from several spot mics.

Yes, leakage, but each mike's proximity to the instrument it is covering
might swamp (mask) that, right?

Gary

William Sommerwerck
August 19th 13, 11:07 PM
"Gary Eickmeier" wrote in message ...
William Sommerwerck wrote:
> "Gary Eickmeier" wrote in message
> ... William Sommerwerck wrote:
>

>>> Stereophony can't be "wrong" and there are no "correct directional
>>> cues."

>> But there are. These are well-established by binaural and Ambisonic
>> theory. Ever heard of Makita localization? (It has nothing to do with
>> rummaging through power tools at Home Despot.)

> So how do these directional cues get to us on playback? Is there some
> standard speaker separation or angle that we are supposed to adhere to so
> that the recorded angles come out right? What if you sit closer or farther
> away? The angle changes, no? So how does this "cue" tell you that the
> trumpet is 23 degrees left of center?

This is why you need to hear good Ambisonic reproduction. Ambisonics does not
include all possible directional cues (delay is the principal missing cue),
but it includes many others, of which Makita is one.

It turns out that when you satisfy a sufficient number of cues, the "sweet
spot" becomes /huge/. You can walk around the room -- including standing
directly in front of a speaker -- without major image shifts. Regular
stereophony can't do this, because it doesn't correctly implement directional
cues.

Perhaps the strangest aspect of Ambisonics occurs when you stand outside the
speaker array. (I did this once in a large room.) It sounds as if someone has
removed the roof, and you are "hearing into" the concert hall.

Mike Rivers[_2_]
August 19th 13, 11:34 PM
On 8/19/2013 4:06 PM, Gary Eickmeier wrote:

>> Have you ever mixed a recording that was made this way, and compared
>> what it sounds like with the mic delayed and not delayed?
>
> No - that's why I say we need to confirm it with listening tests.

Well, pack up your mics, find an orchestra, and let us know your
conclusions. Or listen to several modern classical recordings and see if
you can pick out the spot mics.

> I am theorizing that if the spotted instruments do not have the ambience on
> their track then they should sound like several up front performers all
> lined up, not sinking into the recorded acoustic space.

I believe that you could make them sound like that, but a crafty
engineer will assure that they don't, unless it's appropriate for the
music. When a player in a big band stands up for a solo, you want to
"hear" him standing up and playing up front. In a classical orchestra,
the violas should know their place and stay there.


--
For a good time, call http://mikeriversaudio.wordpress.com

Frank Stearns
August 20th 13, 02:38 AM
Mike Rivers > writes:

>On 8/19/2013 4:06 PM, Gary Eickmeier wrote:

>>> Have you ever mixed a recording that was made this way, and compared
>>> what it sounds like with the mic delayed and not delayed?
>>
>> No - that's why I say we need to confirm it with listening tests.

>Well, pack up your mics, find an orchestra, and let us know your
>conclusions. Or listen to several modern classical recordings and see if
>you can pick out the spot mics.

>> I am theorizing that if the spotted instruments do not have the ambience on
>> their track then they should sound like several up front performers all
>> lined up, not sinking into the recorded acoustic space.

>I believe that you could make them sound like that, but a crafty
>engineer will assure that they don't, unless it's appropriate for the
>music. When a player in a big band stands up for a solo, you want to
>"hear" him standing up and playing up front. In a classical orchestra,
>the violas should know their place and stay there.

Mike is getting pretty spot on (no pun).

Wish I had more time at the moment because some of the replies on this thread are
making me remove still more hair with my own hands.

Arrghhh. Spots are NOT yes/no, black/white. Using spots requires care and awareness.
You just don't arithmetically set their delay, set their faders inline with the
stereo pair, and then forget them. They MUST be integrated with care and craft. If
you NOTICE the spots, you're NOT doing it right.

Look, I do acoustic and classical music primarily. I spot moderately because when
done right those spots help in adverse playback conditions, yet will not cause harm
under good conditions.

With spots, delay, level, pan, EQ, very light compression, and even a little reverb
to blend in with the room reverb the main pair is getting are all key.

And you'd be surprised just how little spot level is required to get just a bit more
"outline" or a little better "stage lighting" throughout an ensemble. And that's
really a pretty good analogy. Sit through a rehearsal when none of the stage lights
are on and the players are in their scruffy jeans and green sweaters. Then see the
show when everyone is dressed up and the stage is lit (assuming the lighting
designer is doing things right). Those improved visuals mitigate any drabness and
add visual vitality -- just like proper spotting can for the audio. (Yeah, I know,
this from the guy who closes his eyes during performances -- but you get the general
idea.)

In post you listen as you mix, hopefully with musical awareness of the piece, the
performers, the hall, and even the performance that day. My spot levels (and often
other parameters) are automated to do their best job, only as needed, and no more.

If you want to deconstruct various aspects of the process, fine; but be aware that
putting things together in the RIGHT way is what makes or breaks spot use.

I'll paraphrase a few others by saying if it's an orchestra you're recording, it
bloody damned well better sound like an orchestra when you're done.

YMMV.

Frank
Mobile Audio
--

Gary Eickmeier
August 20th 13, 03:18 AM
William Sommerwerck wrote:

> This is why you need to hear good Ambisonic reproduction. Ambisonics
> does not include all possible directional cues (delay is the
> principal missing cue), but it includes many others, of which Makita
> is one.
> It turns out that when you satisfy a sufficient number of cues, the
> "sweet spot" becomes /huge/. You can walk around the room --
> including standing directly in front of a speaker -- without major
> image shifts. Regular stereophony can't do this, because it doesn't
> correctly implement directional cues.
>
> Perhaps the strangest aspect of Ambisonics occurs when you stand
> outside the speaker array. (I did this once in a large room.) It
> sounds as if someone has removed the roof, and you are "hearing into"
> the concert hall.

I dabbled in surround Ambisonics in the early 80s with one of its
practitioners, Mike Skeet, in England. I have not heard full periphony,
which is probably what you are talking about. I will look up the Wiki
version of the Ambisonics explanation to refresh myself on the architecture
(layout) you are talking about.

Gary

Scott Dorsey
August 20th 13, 03:18 AM
William Sommerwerck > wrote:
>
>Okay. If you want the violins to sound right, then they have to be played in
>an acoustically appropriate environment, and that environment has to be
>correctly recorded and reproduced.

So far, yeah. But the ability to simulate environments is continuing to
grow by leaps and bounds and while it's far from realistic, it's closer
than I ever thought it would be.

We're not up to the point where we can recreate an orchestra entirely from
spots but I can imagine a distant era when we could.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 20th 13, 03:20 AM
Gary Eickmeier > wrote:
>
>So how do these directional cues get to us on playback? Is there some
>standard speaker separation or angle that we are supposed to adhere to so
>that the recorded angles come out right?

Yes.

> What if you sit closer or farther
>away? The angle changes, no? So how does this "cue" tell you that the
>trumpet is 23 degrees left of center?

Right. That's why you are supposed to create an equilateral triangle
with the right and left speakers and the listener. 60-degree angles.
Recordings are made with that assumption on playback.

If you sit closer or farther away, it changes.
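[Scott's equilateral-triangle geometry can be put in numbers. A small sketch for illustration; the 2.5 m speaker spacing is an arbitrary example, not anyone's recommendation.]

```python
import math

# The stereo angle subtended at the listener is 2*atan((spacing/2)/d),
# where d is the distance from the listener to the line joining the speakers.
def stereo_angle_deg(spacing_m, listening_distance_m):
    return math.degrees(2 * math.atan((spacing_m / 2) / listening_distance_m))

spacing = 2.5
equilateral_d = spacing * math.sqrt(3) / 2  # distance that gives the standard setup
print(round(stereo_angle_deg(spacing, equilateral_d)))      # 60 degrees
print(round(stereo_angle_deg(spacing, equilateral_d / 2)))  # sit twice as close: 98 degrees
```

Which is Gary's point in reverse: the recorded image is laid out assuming 60 degrees, and moving your chair rescales it.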
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 20th 13, 03:24 AM
Frank Stearns > wrote:
>Arrghhh. Spots are NOT yes/no, black/white. Using spots requires care and awareness.
>You just don't arithmetically set their delay, set their faders inline with the
>stereo pair, and then forget them. They MUST be integrated with care and craft. If
>you NOTICE the spots, you're NOT doing it right.

Thank you.

And I will say that when you are using delays, it is very interesting as you
adjust the delay because at some point you get to a position where the spot
mike seems to disappear. You only notice it when you mute it. And that is
when you know the delay is right... it is a very clear snap into place.
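[The "snap into place" Scott describes is also what automatic alignment tools hunt for. A toy sketch for illustration only, not anyone's actual workflow: estimate the spot's lead by brute-force cross-correlation against the main pair, then delay the spot by that many samples.]

```python
# Find the delay (in samples) that best lines a spot track up with the
# main pair, by trying every lag and keeping the one with the highest
# correlation. Real tools work the same way, just faster and on real audio.
def best_lag(main, spot, max_lag):
    def corr_at(lag):
        return sum(m * s for m, s in zip(main[lag:], spot))
    return max(range(max_lag + 1), key=corr_at)

# Toy signals: the spot "hears" the transient 5 samples before the main pair.
spot = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0]
main = [0] * 5 + spot[:len(spot) - 5]
print(best_lag(main, spot, 8))  # 5
```

At a 48 kHz sample rate, 5 samples is about 0.1 ms; orchestral spot delays run to hundreds or thousands of samples.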

>If you want to deconstruct various aspects of the process, fine; but be aware that
>putting things together in the RIGHT way is what makes or breaks spot use.

Amen!
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Gary Eickmeier
August 20th 13, 03:31 AM
Frank Stearns wrote:


> Wish I had more time at the moment because some of the replies on
> this thread are making me remove still more hair with my own hands.
>
> Arrghhh. Spots are NOT yes/no, black/white. Using spots requires care
> and awareness. You just don't arithmetically set their delay, set
> their faders inline with the stereo pair, and then forget them. They
> MUST be integrated with care and craft. If you NOTICE the spots,
> you're NOT doing it right.
>
> Look, I do acoustic and classical music primarily. I spot moderately
> because when done right those spots help in adverse playback
> conditions, yet will not cause harm under good conditions.
>
> With spots, delay, level, pan, EQ, very light compression, even a
> little reverb to blend in with what the main pair is getting in the
> way of room reverb, is all key.
>
> And you'd be surprised just how little spot level is required to get
> just a bit more "outline" or a little better "stage lighting"
> throughout an ensemble. And that's really a pretty good analogy. Sit
> through a rehearsal when none of the stage lights are on and the
> players are in their scruffy jeans and green sweaters. Then see the
> show when everyone is dressed up and the stage is lit (assuming the
> lighting designer is doing things right). Those improved visuals
> mitigate any drabness and add visual vitality -- just like proper
> spotting can for the audio. (Yeah, I know, this from the guy who
> closes his eyes during performances -- but you get the general idea.)
>
> In post you listen as you mix, hopefully with musical awareness of
> the piece, the performers, the hall, and even the performance that
> day. My spot levels (and often other parameters) are automated to do
> their best job, only as needed, and no more.
>
> If you want to deconstruct various aspects of the process, fine; but
> be aware that putting things together in the RIGHT way is what makes
> or breaks spot use.
>
> I'll paraphrase a few others by saying if it's an orchestra you're
> recording, it bloody damned well better sound like an orchestra when
> you're done.

Yo Frank, glad to have you back. This all started with a discussion of
whether some time delay is important with spot mikes because without it they
will be mixed in several milliseconds ahead of the main stereo pickup. I
wondered how audible this was. Arny said not very, if you do it right, and
Mike and you have now confirmed it.

The only experience I have had in my short recording career is with the solo
singer in my concert band. She has always had a mike to sing into, of
course, and that was then put thru the PA system and picked up by my stereo
mikes with all of the echo and horror of that PA system included. So I
decided that the only way was to cop her sound off that same microphone from
the sound board of the hall and mix that in to get a much clearer and up
front sound from her. It worked fairly well, but I sure wish I could shoot
the PA speakers or cut their wires secretly just before she goes on.

Gary Eickmeier

William Sommerwerck
August 20th 13, 04:05 AM
"Gary Eickmeier" wrote in message ...

> I have dabbled in surround Ambisonics in the early 80s
> with one of its practitioners, Mike Skeet, in England.
> I have not heard full periphony, which is probably what
> you are talking about.

No, these effects occur with "just" lateral surround.

Gary Eickmeier
August 20th 13, 05:10 AM
Scott Dorsey wrote:
> Gary Eickmeier > wrote:
>>
>> So how do these directional cues get to us on playback? Is there some
>> standard speaker separation or angle that we are supposed to adhere
>> to so that the recorded angles come out right?
>
> Yes.
>
>> What if you sit closer or farther
>> away? The angle changes, no? So how does this "cue" tell you that the
>> trumpet is 23 degrees left of center?
>
> Right. That's why you are supposed to create an equilateral triangle
> with the right and left speakers and the listener. 60-degree angles.
> Recordings are made with that assumption on playback.

Now why are you saying that Scott? Some textbook example of stereo?

You know that there is no such assumption when making a recording. XY
techniques could be any included angle from 60 to 120 degrees. MS has no
particular angular separation. Spaced omnis ditto. Nor does the SRA have
anything to do with the angle you get on playback.

This is very curious to me. It is obvious that neither recording techniques
nor playback setup relies on any particular angle between listener and
speakers. We set up our speakers in the front of the room, and whatever the
included angle between speakers turns out to be, that is the stereo spread.

I have talked before about how simplistic and incomplete the "two speakers
and a listener in an equilateral triangle" drawings about how stereo works
are. This is just too basic to even allow for a common starting point for
discussion. I am licked before the starting gate opens.

I will re-start all this when I finish the book and go through my notes.
Please stay with me. I respect your experience and opinions.

Gary Eickmeier

Tom McCreadie
August 20th 13, 10:49 AM
"Gary Eickmeier" > wrote:

>>> So how do these directional cues get to us on playback? Is there some
>>> standard speaker separation or angle that we are supposed to adhere
>>> to so that the recorded angles come out right?
>>
>> Yes.
>>
>>> What if you sit closer or farther
>>> away? The angle changes, no? So how does this "cue" tell you that the
>>> trumpet is 23 degrees left of center?
>>
>> Right. That's why you are supposed to create an equilateral triangle
>> with the right and left speakers and the listener. 60-degree angles.
>> Recordings are made with that assumption on playback.
>
>Now why are you saying that Scott? Some textbook example of stereo?
>
>You know that there is no such assumption when making a recording. XY
>techniques could be any included angle from 60 to 120 degrees. MS has no
>particular angular separation. Spaced omnis ditto. Nor does the SRA have
>anything to do with the angle you get on playback
>
>This is very curious to me. It is obvious that neither recording techniques
>nor playback setup relies on any particular angle between listener and
>speakers. We set up our speakers in the front of the room, and whatever the
>included angle between speakers turns out to be, that is the stereo spread.
>

It may help to consider the trumpet location in terms of "% of the total angular
travel between the center and the edge of the SRA". On playback, the SRA gets
fitted between your two speakers, and the trumpet location is perceived at this same
% displacement from center towards one speaker.

So yes, since closer-spaced or further-sited speakers subtend a smaller angle to
your listening seat, this causes the absolute angle of location of the trumpet
image to be smaller (and the entire angular width of the orchestra, also). Most
listeners can live with that.

From the foregoing, you might logically expect to get the strongest illusion of
sitting under the mic pair array in the auditorium if you discard the
equilateral triangle mantra and customize the speakers' subtended angle to match
that of the SRA...e.g. about 75° for Blumlein. But there are plenty of other
strong convictions out there on ideal speaker angling. :-)

Mike Rivers[_2_]
August 20th 13, 12:47 PM
On 8/19/2013 10:31 PM, Gary Eickmeier wrote:

> The only experience I have had in my short recording career is with the solo
> singer in my concert band. She has always had a mike to sing into, of
> course, and that was then put thru the PA system and picked up by my stereo
> mikes with all of the echo and horror of that PA system included.

This is the classic problem with recording a show that's mostly acoustic
but which needs a little reinforcement for a single element to bring it
into balance for the audience. This is why so many live recordings of
pop music are done backwards from what you're doing. They put mics on
everything and mix in a little of the room sound to augment any
artificial reverb that was added to the close mics.

The fact that we like to hear music that isn't completely balanced
acoustically is what keeps microphone, multitrack recorder, and DAW
makers (and recording engineers and producers) in business.


--
For a good time, call http://mikeriversaudio.wordpress.com

Frank Stearns
August 20th 13, 01:27 PM
"Gary Eickmeier" > writes:

>Frank Stearns wrote:

snips

>Yo Frank, glad to have you back. This all started with a discussion of
>whether some time delay is important with spot mikes because without it they
>will be mixed in several milliseconds ahead of the main stereo pickup. I
>wondered how audible this was. Arny said not very, if you do it right, and
>Mike and you have now confirmed it.

My misstatement, due to lack of time.

Of all the processing items I mentioned that can be involved with a spot channel,
//delay might be the MOST important.// I would never use spots without delay; the
sound of a spotted ensemble is broken without it.

As Scott said, at the right delay, the spot seems to disappear, but you sure notice
when it gets muted.

And the right delay might not be exactly what you'd think if you took your tape
measure (or aligned waveforms on dog-clicker pops). There are many reasons for this;
no time to go into it now. Suffice to say that I will "rough in" the delay using one
of those methods, then step the delay by single-samples up and down until it sounds
right (disappears but brings detail or clarity).

Sometimes, you can't find an ideal delay time where you think it should be, and
you'll end up hundreds of samples longer, so that you mitigate conflicts from multiple
instruments at unresolvable distances between main and spots. Obviously, while
longer, you keep that time well under Haas. Such spots are still worthwhile, but
require even more care when used.

In either case, at 44.1, your "window of ideal" is often only 3
samples wide.
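Frank's rough-in arithmetic can be sketched in a few lines. This is only an illustration of the distance-to-samples conversion, not anyone's actual workflow; the 343 m/s speed of sound and the function name are my own assumptions:

```python
# Convert the extra path length between a spot mike and the main pair
# into a delay, expressed in whole samples so it can be stepped one
# sample at a time in a DAW. 343 m/s is an assumed room-temperature
# speed of sound.

SPEED_OF_SOUND_M_S = 343.0

def rough_delay_samples(extra_distance_m, sample_rate_hz=44100):
    """Whole-sample delay that time-aligns a spot mike sitting
    extra_distance_m closer to the source than the main stereo pair."""
    delay_s = extra_distance_m / SPEED_OF_SOUND_M_S
    return round(delay_s * sample_rate_hz)

# A spot 8 m in front of the main pair arrives roughly 23 ms early:
print(rough_delay_samples(8.0))  # 1029 samples at 44.1 kHz
```

As Frank says, this only roughs it in; the final value comes from stepping the delay by single samples and listening for the spot to disappear.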

Anyway. Sorry for the misassumptions - in my experience delay is VITAL to proper
spotting. And my spotted sound got better when I moved from delaying on the digital
multitrack (which only had delay in 1 mSec jumps) to the DAW, which has delay in
single sample steps.

Frank
Mobile Audio

--

Scott Dorsey
August 20th 13, 02:11 PM
Gary Eickmeier > wrote:
>Scott Dorsey wrote:
>>
>> I suggested you get it (or read it at a library) because it is the
>> standard undergraduate textbook on the subject. It is a very good
>> introduction to stereophony in general, and you have made some
>> statements which imply that
>> you either do not understand or do not believe in conventional
>> stereophony.
>
>I well understand that my stereo theories differ from the "two ears two
>speakers" norm. But if this book represents the norm that most agree with, I
>have a longer row to hoe than I imagined. This has got to stop. But you are
>correct, if it gives a common starting point from which we can diverge and
>discuss, then it is worth the trip.

I think so. And stereo is not necessarily "two ears with two speakers."
Sometimes it is two ears with three speakers, which has considerable
advantages once you're out of the sweet spot. You could even argue that
a multi-speaker arrangement like the Carrouso system is really stereo,
since the math is all the same.

>> So for many decades, people just lived with the degradation when they
>> brought spots up, and they used the tightest spot mikes possible so
>> that that degradation was localized.
>
>Yes - I am thinking that it would have the tendency to not only make the
>spotted instruments clearer, but also move them to the front of the group,
which is not what you want. This of course would have to be submitted to some
>stringent listening tests to see exactly how audible it is and what it does,
>but that is my supposition.

To some extent, the sense of moving them to the front is a result of making
them clearer and brighter. When you spot everything, all of the instruments
sound like they are at the front and in your face (and that happens in a mono
recording as well as stereo... the stereo effects are the least important of
the things that happen with spots).
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 20th 13, 02:16 PM
William Sommerwerck > wrote:
>"Gary Eickmeier" wrote in message ...
>William Sommerwerck wrote:
>> "Gary Eickmeier" wrote in message
>> ... William Sommerwerck wrote:
>
>>>> Stereophony can't be "wrong" and there are no "correct directional
>>>> cues."
>
>>> But there are. These are well-established by binaural and Ambisonic
>>> theory. Ever heard of Makita localization? (It has nothing to do with
>>> rummaging through power tools at Home Despot.)
>
>> So how do these directional cues get to us on playback? Is there some
>> standard speaker separation or angle that we are supposed to adhere to so
>> that the recorded angles come out right? What if you sit closer or farther
>> away? The angle changes, no? So how does this "cue" tell you that the
>> trumpet is 23 degrees left of center?
>
>This is why you need to hear good Ambisonic reproduction. Ambisonics does not
>include all possible directional cues (delay is the principal missing cue),
>but it includes many others, of which Makita is one.

Are you talking about direct B-format playback or 4-channel UHJ or something
else?

I have heard B-format playback and the lack of phase imaging is a definite
thumbs down. The sense of space is good but it feels gimmicky because of
the lack of low frequency imaging. Mind you, that's just me, and it doesn't
seem to bother a lot of other people.

>It turns out that when you satisfy a sufficient number of cues, the "sweet
>spot" becomes /huge/. You can walk around the room -- including standing
>directly in front of a speaker -- without major image shifts. Regular
>stereophony can't do this, because it doesn't correctly implement directional
>cues.

Regular stereo doesn't correctly implement directional cues except at one
point in space, yes. B-format, as well as the cruder many-channel systems,
do a very good job of improving the usable space. This is important for
a lot of applications, though not all.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 20th 13, 02:22 PM
Gary Eickmeier > wrote:
>Scott Dorsey wrote:
>> Gary Eickmeier > wrote:
>>>
>>> So how do these directional cues get to us on playback? Is there some
>>> standard speaker separation or angle that we are supposed to adhere
>>> to so that the recorded angles come out right?
>>
>> Yes.
>>
>>> What if you sit closer or farther
>>> away? The angle changes, no? So how does this "cue" tell you that the
>>> trumpet is 23 degrees left of center?
>>
>> Right. That's why you are supposed to create an equilateral triangle
>> with the right and left speakers and the listener. 60-degree angles.
>> Recordings are made with that assumption on playback.
>
>Now why are you saying that Scott? Some textbook example of stereo?

Because that is the standard and that is the expectation of what the listener
will have. That is how the control booth is set up.

>You know that there is no such assumption when making a recording. XY
>techniques could be any included angle from 60 to 120 degrees. MS has no
>particular angular separation. Spaced omnis ditto. Nor does the SRA have
>anything to do with the angle you get on playback

Right. The geometry used in the recording has absolutely nothing to do
with the geometry used for playback.

We set the geometry used in the recording such that it sounds good when
played back on a system where the listener is at one vertex of an equilateral
triangle and the speakers are the others, and the reverb time of the room
is quite short. That is the expectation for playback.

The recording technique is selected so as to give good playback under those
conditions. Which technique is used, what the included angle of the mikes
are (if there is any angle between them at all), all of that is a function
of the room and the desired sound the producer wants to get. The producer
may want it to sound like you're up close, he may want it to sound like you
are in the balcony.

The engineer's job is to select whatever technique will give good results
in the room, for playback on a standardized playback system.

Let me reiterate: the geometry used for recording has absolutely nothing to
do with the geometry used for playback.

>This is very curious to me. It is obvious that neither recording techniques
>nor playback setup relies on any particular angle between listener and
>speakers. We set up our speakers in the front of the room, and whatever the
>included angle between speakers turns out to be, that is the stereo spread.

If you do this, you're doing it wrong. And the stereo spread should exceed
sixty degrees even with an equilateral triangle arrangement.

>I have talked before about how simplistic and incomplete the "two speakers
>and a listener in an equilateral triangle" drawings about how stereo works
>are. This is just too basic to even allow for a common starting point for
>discussion. I am licked before the starting gate opens.

You keep saying this, but this is the standardized playback arrangement,
it is how the listener is set up. Recordings are made with the assumption
that they will be played back on such an arrangement. The fact that you care
to ignore the standard playback configuration does not mean there is not one.
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 20th 13, 02:25 PM
Gary Eickmeier > wrote:
>Scott Dorsey wrote:
>
>> And I will say that when you are using delays, it is very interesting
>> as you adjust the delay because at some point you get to a position
>> where the spot mike seems to disappear. You only notice it when you
>> mute it. And, that is when you know the delay is right.. it is a
>> very clear snap into place.
>
>Now THAT is an interesting statement! That tells me that it IS audible (the
>time delay) and therefore should be addressed in the final mix.

Only the _lack_ of time delay is audible.

When you're setting the delays up in the mix, it can help to turn the spot
mikes up to levels much higher than you would ever use in the mix, to
exaggerate the effect. Then it is very clear, and once you have the delays
in, you can then bring the levels down to a point where the spots are making
very minor contributions to the sound.

However, the ability to do delays means it's possible to use a lot more
spotting on a large group before things start to come apart. This is not
to say it's a good idea to do so, but there are things possible that were
totally unavailable to engineers a decade ago.
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Ron C[_2_]
August 20th 13, 06:48 PM
On 8/20/2013 8:27 AM, Frank Stearns wrote:
> "Gary Eickmeier" > writes:
>
>> Frank Stearns wrote:
>
> snips
>
>> Yo Frank, glad to have you back. This all started with a discussion of
>> whether some time delay is important with spot mikes because without it they
>> will be mixed in several milliseconds ahead of the main stereo pickup. I
>> wondered how audible this was. Arny said not very, if you do it right, and
>> Mike and you have now confirmed it.
>
> My mistatement due to lack of time.
>
> Of all the processing items I mentioned that can be involved with a spot channel,
> //delay might be the MOST important.// I would never use spots without delay; the
> sound of a spotted ensemble is broken without it.
>
> As Scott said, at the right delay, the spot seems to disappear, but you sure notice
> when it gets muted.
>
> < ...snip added details... >
>
> Frank
> Mobile Audio
>

Interestingly, this sounds like a dual of setting up zone speakers.
When the delay is set right, the zone speakers also seem to disappear,
but you'll notice a reduction in clarity when they're muted.

Both effects are similarly connected to sound field coherence.

==
Later...
Ron Capik
--

Gary Eickmeier
August 21st 13, 03:11 AM
William Sommerwerck wrote:


> My point is that Ambisonics is the only speaker-based surround system
> that actually works. Anyone who publicly discusses surround sound
> /has/ to have some practical familiarity with it

In "the book" under discussion Ron Streicher makes the point that single
point miking may not be the best way to do surround sound, a point that I
have been hatching from my brief experiments. The reason seems to be that
there isn't enough difference between the front and rear channels if they
are all in the same spot. He has a 35 ft rule for rear mikes (no farther
than) and a separation for the rear mikes and some remarks about aiming the
nulls of the rear mikes toward the soundstage.

This subject of how far apart any spaced omni system should be is a whole
new area for me that I think is worth pondering. Scott, the back part of the
book makes a lot more sense than the first few chapters and seems to either
contradict or correct some of his statements from early on. More later.

Gary Eickmeier

Gary Eickmeier
August 21st 13, 03:28 AM
Scott Dorsey wrote:
> Gary Eickmeier > wrote:

>> I have talked before about how simplistic and incomplete the "two
>> speakers and a listener in an equilateral triangle" drawings about
>> how stereo works are. This is just too basic to even allow for a
>> common starting point for discussion. I am licked before the
>> starting gate opens.
>
> You keep saying this, but this is the standardized playback
> arrangement,
> it is how the listener is set up. Recordings are made with the
> assumption that they will be played back on such an arrangement. The
> fact that you care to ignore the standard playback configuration does
> not mean there is not one. --scott

So you are saying that (a) monitoring rooms, when there is one, are set up
with this 60° rule for the mixer, and (b) that the Telarc etc spaced omnis,
the Decca Tree, the Blumlein pair, the MS math, ORTF, and on and on and on,
all know this? I am not being obstinate - much - but I haven't read that
exact statement anywhere else, and of course my ideas on speaker placement,
shall we say, "differ." I want to place speakers relative to the room, not
the listener. The listener can sit wherever he wants, just as with live
sound, if it is done right. A good example is the motion picture theater.
The sound installers place the speakers w respect to the screen, not the
listener sitting in any particular seat, much less at the apex of a 60°
triangle.

Gary Eickmeier

Gary Eickmeier
August 21st 13, 03:42 AM
Mike Rivers wrote:
> On 8/19/2013 10:31 PM, Gary Eickmeier wrote:
>
>> The only experience I have had in my short recording career is with
>> the solo singer in my concert band. She has always had a mike to
>> sing into, of course, and that was then put thru the PA system and
>> picked up by my stereo mikes with all of the echo and horror of that
>> PA system included.
>
> This is the classic problem with recording a show that's mostly
> acoustic but which needs a little reinforcement for a single element
> to bring it into balance for the audience. This is why so many live
> recordings of pop music are done backwards from what you're doing.
> They put mics on everything and mix in a little of the room sound to
> augment any artificial reverb that was added to the close mics.
>
> The fact that we like to hear music that isn't completely balanced
> acoustically is what keeps microphone, multitrack recorder, and DAW
> makers (and recording engineers and producers) in business.

Wow, you have said an earful. Of course, my recordings are not dedicated
studio jobs, just an attempt to capture live events, and I don't have the option of
placing mikes all over the place - even if I knew how. But I have often
wondered how the many live recordings I have were done without the problems
I am experiencing. I know they must be close-miking most everything, then
adding ambience carefully, but they do it so well it is as if they were
doing simple stereo pairs or something.

I have said that the recording is a new work of art, based on some live
event or concocted from whole cloth. No matter, but it gets really
interesting with jazz, pop, and live stage events. Kind of like a beautiful
bouquet of plastic flowers - so fake they look real.

Gary Eickmeier

August 21st 13, 10:58 AM
Scott Dorsey wrote: "Let me reiterate: the geometry used for recording has absolutely nothing to
do with the geometry used for playback. "

Earlier on you suggest the traditional equilateral listener-speakers playback config.

So what do you mean in the text I quoted - the geometry present at the recording venue?

Scott Dorsey
August 21st 13, 09:03 PM
Gary Eickmeier > wrote:
>Scott Dorsey wrote:
>> You keep saying this, but this is the standardized playback
>> arrangement,
>> it is how the listener is set up. Recordings are made with the
>> assumption that they will be played back on such an arrangement. The
>> fact that you care to ignore the standard playback configuration does
>> not mean there is not one. --scott
>
>So you are saying that (a) monitoring rooms, when there is one, are set up
>with this 60° rule for the mixer,

Yes, absolutely. And there is a relatively small range of reverb times
to expect in the monitoring room too.

>and (b) that the Telarc etc spaced omnis,
>the Decca Tree, the Blumlein pair, the MS math, ORTF, and on and on and on,
>all know this?

No, but the engineers USING them all know this. All these are techniques
that are designed to make recordings intended for playback on a standard
stereo playback system.

If I set up an ORTF-like pair, I am setting it up in part with trial and
error to get a good image in the standard monitors. If I wanted a recording
that sounded right on headphones, I'd probably cock them in a lot more.
If I wanted a recording that sounded right on more widely spaced speakers
without a hole in the center, I would probably pull them farther back.

The placement the engineer selects will depend entirely on the monitoring
system... the engineer is comparing the monitors with what he hears in the
hall and (maybe) trying to make them the same. Therefore, the setup and
the recording will differ considerably if the monitoring system differs
from the norm.

>I am not being obstinate - much - but I haven't read that
>exact statement anywhere else, and of course my ideas on speaker placement,
>shall we say, "differ."

I guess it's one of those things that I take for granted, but it's an
important thing to point out.

> I want to place speakers relative to the room, not
>the listener. The listener can sit wherever he wants, just as with live
>sound, if it is done right. A good example is the motion picture theater.
>The sound installers place the speakers w respect to the screen, not the
>listener sitting in any particular seat, much less at the apex of a 60°
>triangle.

Unfortunately, this results in a lot of problems because of the playback
arrangement. The center channel helps a lot. But motion pictures are not
mixed to sound right on a stereo system, they are mixed to sound right in
a standard dubbing theatre with three channels and the audience somewhat
farther back from the screen. The standard dubbing theatre also has a lot
longer reverb time than your living room, so recordings mixed for theatrical
distribution tend to have much drier dialogue.

This has resulted in some cultural conflicts when the film sound mixers
and the video mixers and the music recording mixers get together.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 21st 13, 09:04 PM
In article >,
> wrote:
>Scott Dorsey wrote: "Let me reiterate: the geometry used for recording has absolutely nothing to
>do with the geometry used for playback. "
>
>Earlier on you suggest the traditional equilateral listener-speakers playback config.

Yes.

>So what do you mean in the text I quoted - the geometry present at the recording venue?

Yes. The geometry of the microphones and the orchestra are selected to
create a recording intended for playback on a system of speakers and listener
with very different geometry.

It's a trick.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Frank Stearns
August 21st 13, 09:41 PM
(Scott Dorsey) writes:

snips

>farther back from the screen. The standard dubbing theatre also has a lot
>longer reverb time than your living room, so recordings mixed for theatrical
>distribution tend to have much drier dialogue.

>This has resulted in some cultural conflicts when the film sound mixers
>and the video mixers and the music recording mixers get together.

Guffaw! What a great comment. When I worked at Quad-Eight several glacial aeons ago,
they had their fingers in each of those pies for custom consoles. I was sales
engineering on the recording side, a guy hired about the same time handled the film
side, and the video stuff just sort of happened, though their new VP was gung-ho on
the coming wave of tv audio sweetening.

Occasionally, when someone was out or had something else to attend, we'd rotate
around (that happened a lot at the trade shows). We'd come back from visiting a
client or having a conversation in those mutually unfamiliar areas. Our eyes would
be somewhat glazed over and we'd go, "Wow! Are those guys in [film/music/tv] ever
whacked!"

Of course, my position was that film and tv audio was derivative of music recording,
and that our methods were usually best (though there isn't anything much more fun
than a good film scoring session and mix). <g>

Frank
Mobile Audio
--

geoff
August 22nd 13, 08:32 AM
"Scott Dorsey" > wrote in message
...
> In article >,
> > wrote:
>>Scott Dorsey wrote: "Let me reiterate: the geometry used for recording
>>has absolutely nothing to
>>do with the geometry used for playback. "
>>
>>Earlier on you suggest the traditional equilateral listener-speakers
>>playback config.
>
> Yes.
>
>>So what do you mean in the text I quoted - the geometry present at the
>>recording venue?
>
> Yes. The geometry of the microphones and the orchestra are selected to
> create a recording intended for playback on a system of speakers and
> listener
> with very different geometry.
>
> It's a trick.
> --scott


Yes, it is a little tricky to sit in the same configuration as a complex
mic setup!

geoff

Trevor
August 22nd 13, 12:29 PM
"Scott Dorsey" > wrote in message
...
> If I set up an ORTF-like pair, I am setting it up in part with trial and
> error to get a good image in the standard monitors. If I wanted a
> recording
> that sounded right on headphones, I'd probably cock them in a lot more.

I'd simply use headphones in that case.

Trevor.

Scott Dorsey
August 22nd 13, 01:35 PM
In article >, Trevor > wrote:
>"Scott Dorsey" > wrote in message
...
>> If I set up an ORTF-like pair, I am setting it up in part with trial and
>> error to get a good image in the standard monitors. If I wanted a
>> recording
>> that sounded right on headphones, I'd probably cock them in a lot more.
>
>I'd simply use headphones in that case.

Right. You'd use headphones to monitor, and when you set the microphones up
for a good image, you'd find the positions used were very different than those
you'd used to get a good image with conventional speakers. The microphones
would be pointed more toward the center.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

William Sommerwerck
August 22nd 13, 05:17 PM
"Gary Eickmeier" wrote in message ...
William Sommerwerck wrote:

>> My point is that Ambisonics is the only speaker-based surround system
>> that actually works. Anyone who publicly discusses surround sound
>> /has/ to have some practical familiarity with it

> In "the book" under discussion Ron Streicher makes the point that single
> point miking may not be the best way to do surround sound, a point that
> I have been hatching from my brief experiments. The reason seems to be
> that there isn't enough difference between the front and rear channels
> if they are all in the same spot. He has a 35 ft rule for rear mikes (no
> farther
> than) and a separation for the rear mikes and some remarks about aiming
> the nulls of the rear mikes toward the soundstage.

I sometimes think that if Michael Gerzon had never existed, someone would have
had to have invented him.

Gary, you have made /profoundly/ erroneous remarks. You have stumbled onto
/the/ fundamental error concerning "directional" recording and playback. It is
this error that screws up virtually every system proposed (or used) in
surround recording. (The same error applies to two-channel stereo, but isn't
as obvious there.)

This error is the assumption that there necessarily /has/ to be a one-to-one
relationship between the recorded channels and the speakers. There is --
indeed, must be -- a left-front channel, a right-side channel, and so on.
This, in turn, leads to the belief that "channel separation" (at the recording
and/or playback end) is a fundamental requirement for good surround sound.

WHY?

Michael Gerzon recognized that you can't control what you don't understand. *
He (and a few others) saw that any workable surround system had to take into
account the known mechanisms of human directional hearing. For example, you
couldn't assume you could place sounds to the side simply by panning them
between the front and rear speakers.

Gerzon therefore broke the recording process into two steps -- analysis and
synthesis. The sound field is analyzed into zeroth-order (mono) and
first-order (figure-8) directional components. These are then synthesized into
speaker feeds that provide appropriate cues for /accurate/ directionality.
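As an illustrative sketch of that analysis/synthesis split (my own toy code, not taken from Gerzon's papers or the linked article; it uses the common horizontal first-order B-format convention and a naive "virtual cardioid" decode, omitting the psychoacoustic shelf filters a real Ambisonic decoder applies):

```python
import math

def encode_bformat(signal, azimuth_deg):
    """Analyze a mono source at a given azimuth into horizontal
    first-order B-format: W (zeroth-order/omni) plus X and Y
    (first-order/figure-8) components."""
    az = math.radians(azimuth_deg)
    w = signal / math.sqrt(2)   # zeroth-order (mono) component
    x = signal * math.cos(az)   # figure-8, front-back axis
    y = signal * math.sin(az)   # figure-8, left-right axis
    return w, x, y

def decode_square(w, x, y):
    """Synthesize naive feeds for four speakers at 45/135/225/315
    degrees by pointing a virtual cardioid at each speaker."""
    feeds = []
    for spk_deg in (45, 135, 225, 315):
        a = math.radians(spk_deg)
        feeds.append(0.5 * (math.sqrt(2) * w
                            + x * math.cos(a)
                            + y * math.sin(a)))
    return feeds

# A unit source due front-left (45 degrees): the front-left feed is
# strongest and the diagonally opposite speaker is (near) silent.
w, x, y = encode_bformat(1.0, 45.0)
print([round(f, 3) for f in decode_square(w, x, y)])
```

Running this, the front-left feed comes out at full level, the two adjacent speakers at roughly half, and the opposite speaker near zero.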

http://audiosignal.co.uk/Resources/Surround_sound_psychoacoustics_A4.pdf

This article is a good introduction. (Note Gerzon's warning that discrete
four-channel systems cannot be used as standards of excellence.) There are
other articles. Google "Makita localization".
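To make "Makita localization" concrete, here is a small sketch of the standard velocity- and energy-vector calculation (again my own illustration, not code from any of the cited articles). It also shows numerically why, as noted above, panning between front and rear speakers cannot place a sound at the side:

```python
import math

def gerzon_vectors(speaker_gains):
    """Given (gain, azimuth_deg) pairs for the speakers reproducing one
    source, compute Gerzon's velocity vector (the low-frequency cue whose
    direction is the Makita localization angle) and energy vector (the
    mid/high-frequency cue). A length of 1.0 means the image is as solid
    as a single real source."""
    p = sum(g for g, _ in speaker_gains)       # total pressure
    e = sum(g * g for g, _ in speaker_gains)   # total energy
    vx = sum(g * math.cos(math.radians(a)) for g, a in speaker_gains) / p
    vy = sum(g * math.sin(math.radians(a)) for g, a in speaker_gains) / p
    ex = sum(g * g * math.cos(math.radians(a)) for g, a in speaker_gains) / e
    ey = sum(g * g * math.sin(math.radians(a)) for g, a in speaker_gains) / e
    rV = math.hypot(vx, vy)
    rE = math.hypot(ex, ey)
    makita_deg = math.degrees(math.atan2(vy, vx))
    return rV, rE, makita_deg

# Equal gains on a front (0 deg) and a rear (180 deg) speaker: both
# vector lengths collapse to zero -- no coherent side image.
print(gerzon_vectors([(1.0, 0.0), (1.0, 180.0)]))
```

A single speaker carrying the whole signal gives rV = rE = 1.0 at that speaker's azimuth; the front/rear pair above gives zero for both.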

Single-point miking preserves -- at least in principle -- the "auditory
perspective" of a particular position in the recording venue.

* He wasn't the first. The guys at Western Electric who developed electrical
recording made as thorough a study as was possible at that time of
electromechanical disk cutting, and based their system on an understanding
of the physics involved. This stood in contrast to the purveyors of acoustical
recording and playback, who "tinkered" with their systems to get better (or
distinctive) sound.

Gary Eickmeier
August 23rd 13, 05:03 AM
William Sommerwerck wrote:

> I sometimes think that if Michael Gerzon had never existed, someone
> would have had to have invented him.
>
> Gary, you have made /profoundly/ erroneous remarks. You have stumbled
> onto /the/ fundamental error concerning "directional" recording and
> playback. It is this error that screws up virtually every system
> proposed (or used) in surround recording. (The same error applies to
> two-channel stereo, but isn't as obvious there.)
>
> This error is the assumption that there necessarily /has/ to be a
> one-to-one relationship between the recorded channels and the
> speakers. There is -- indeed, must be -- a left-front channel, a
> right-side channel, and so on. This, in turn, leads to the belief
> that "channel separation" (at the recording and/or playback end) is a
> fundamental requirement for good surround sound.

> WHY?

Bill, you make the same very fundamental error that Gerzon and Duane Cooper
make in the articles on Ambisonics. They think they are designing a system
that should be centered on a single listener and his human hearing
mechanism. This is the same erroneous assumption that basic stereo theory
makes. The truth is that we are not recording and reproducing signals or
cues for the ears of a single two-eared human listener; we are recording
and reproducing sound fields in rooms.

Note that the speakers for any surround sound system are placed at a
distance from the listener, at positions which will generate the correct
directions of the various sounds, no matter where you sit in the room.

The sound must be reproduced in a real room with real acoustics or it will
not externalize, unless you have very many channels with signal processing
etc. However, the article's 400,000 channels is plain nonsense.

>
> Michael Gerzon recognized that you can't control what you don't
> understand. * He (and a few others) saw that any workable surround
> system had to take into account the known mechanisms of human
> directional hearing. For example, you couldn't assume you could place
> sounds to the side simply by panning them between the front and rear
> speakers.
> Gerzon therefore broke the recording process into two steps --
> analysis and synthesis. The sound field is analyzed into zeroth-order
> (mono) and first-order (figure-8) directional components. These are
> then synthesized into speaker feeds that provide appropriate cues for
> /accurate/ directionality.
> http://audiosignal.co.uk/Resources/Surround_sound_psychoacoustics_A4.pdf
>
> This article is a good introduction. (Note Gerzon's warning that
> discrete four-channel systems cannot be used as standards of
> excellence.) There are other articles. Google "Makita localization".
>
> Single-point miking preserves -- at least in principle -- the
> "auditory perspective" of a particular position in the recording
> venue.

Think of it this way: In a simple example, if we wish to reproduce a string
quartet, we can close-mike each instrument, set up four speakers in an
arrangement geometrically similar to the positions of the
microphones/players during recording, and give the speakers radiation
patterns similar to the instruments. When we play this back in a good size
and good sounding room, we will have all of the elements of total realism
even without the help of surround speakers. We would be able to walk around,
left, right, closer to or farther away from the players, with perfect
realism. They would be playing in THIS room, but we would have all the
ingredients of perfect realism WITHOUT ANY REFERENCE TO THE HUMAN HEARING
MECHANISM during recording OR playback.

We would be able to enhance this recording with surround speakers if we
wished to portray a location much bigger than our playback space by making
some additional reverberant info come from the surround speakers. But note
that we would not want the sound that emanates from the rear and sides of
our playback space to come from microphones that were placed forward of us
in the recording venue! Sounds coming from "back there" should be recorded
by microphones that were placed "back there" and not coincident with the
front mikes.

Some of these guys like to come up with "cool" theories that use some kind
of mathematics and a principle that has nothing to do with audio, such as
Don Keele's boomerang speakers based on some theory from submarine SONAR
math. They do it just because they can, not because it is based on either
field-type theory or binaural theory.

We play back surround sound by placing speakers all around us in a real
space. It makes sense that we might want to place the microphones in the
recording venue in positions that are geometrically similar to those of the
speakers, positions where the corresponding original sounds came from.

Gary Eickmeier

Gary Eickmeier
August 23rd 13, 05:10 AM
geoff wrote:

> Yes, it is a little tricky to sit in the same configuration as a
> complex mic setup !

Excellent! Geoff refers to the audiophile notion that the goal of "accurate"
reproduction is to place the listener at the position of the microphones!

Gary Eickmeier

Scott Dorsey
August 23rd 13, 02:23 PM
William Sommerwerck > wrote:
>
>My point is that Ambisonics is the only speaker-based surround system that
>actually works. Anyone who publicly discusses surround sound /has/ to have
>some practical familiarity with it.

If by "actually works" you mean that there is a solid image that wraps
360 degrees around and so the listener can turn their head in any direction
and it sounds correct, I'd agree.

But there are a bunch of surround systems that work properly as long as you
don't turn your head.

And there are also multi-multi speaker array systems like the Carrouso or
some of the holographic imaging systems which come very close to working.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Scott Dorsey
August 23rd 13, 02:25 PM
Gary Eickmeier > wrote:
>geoff wrote:
>
>> Yes, it is a little tricky to sit in the same configuration as a
>> complex mic setup !
>
>Excellent! Geoff refers to the audiophile notion that the goal of "accurate"
>reproduction is to place the listener at the position of the microphones!

But it's not. It's to place the listener somewhere in the hall.

If I want it to sound like a listener is in the balcony, I don't necessarily
place the microphones in the balcony. I place the microphones in wherever
is appropriate with that mike configuration to get a sound as if the listener
is in the balcony.
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

William Sommerwerck
August 23rd 13, 06:23 PM
Gary, I have wasted far too much time trying to explain things that should be
obvious to anyone having done basic research into surround sound. I don't know
everything there is to know about recording and reproduction, but please do
not continue to explain to ME the philosophy, aesthetics, and practice of
surround recording -- especially when you haven't reduced your
wishful-thinking theories to math, or made strong connections with
psychoacoustics.

The idea of reproducing the original sound field in detail is a good one -- on
paper -- and some researchers have made significant progress. But it has
several problems.

1. One can assume the optimum number of channels and speakers will be
significantly larger than 10, but not more than 100. How do you propose to
record these and play them back? The amps and speakers would have to be small
and inexpensive. Perhaps too inexpensive.

2. A related problem is the tradeoff between more channels of lower sound
quality, against the improved quality obtained by closely approximating the
original sound field. I, for one, am not giving up my Apogees for a room full
of shoeboxes. (As the Bose 901 so brutally demonstrates, quantity cannot
substitute for quality.)

The idea of bringing the performers to the listening room (except for small
groups) is and always has been wrong-headed. For most music, the last thing
you want is to have the playback interact with the room. If you still don't
understand this, you need to think it through.

Oh, and there is no magic manipulation that will permit extracting ambience
from a recording in such a way that we hear the original concert-hall
acoustics accurately rendered.

You need to find something constructive to do with your research. What you are
doing is not constructive.

I will no longer respond to Mr Eickmeier's postings.

Gary Eickmeier
August 23rd 13, 09:49 PM
OK, now it's getting serious. With Sommerwerck's resignation, I guess I will
need to polish my communication skills and take another run at it. The
original intent was to critique the book, Scott says that the book would be
a good jumping off point, with which I agree.

However, I can see that most of you are on overload as well, so maybe I
should give it a rest for a while. It will take me a while to put it
together anyway. This book is kind of like the bible, with several internal
contradictions that show that it may not be The Eternal Truth.

My apologies for taking up so much of your time with more of my BS
(Brilliant Stimulus).

Let's all go out and record some more!

Gary Eickmeier

None
August 23rd 13, 11:42 PM
Willie Wiener****** whined:
> I will no longer respond to Mr Eickmeier's postings.

Putz.

None
August 23rd 13, 11:46 PM
"Gary Eickmeier" > wrote in message
...
> I guess I will need to polish my communication skills...

Now there's something Willie will never say (even though his
communications skills are abysmal).

Gary Eickmeier
August 24th 13, 07:21 AM
None wrote:
> "Gary Eickmeier" > wrote in message
> ...
>> I guess I will need to polish my communication skills...
>
> Now there's something Willie will never say (even though his
> communications skills are abysmal).

None, I don't know who you are, but please refrain from insulting Mr.
Sommerwerck.

My point is that I have not yet communicated successfully what I am trying
to relate about how stereo and surround sound work, or should I say a
different way of thinking about it that might explain the reason for all of
the disagreements among a lot of distinguished minds and practitioners. Just
within this thread there has been disagreement between some even aside from
my rantings.

To Chicken Little, the sky is falling, but if he had the right language
skills, he could tell them that there is a tornado coming.

Gary Eickmeier

hank alrich
August 25th 13, 08:18 AM
Gary Eickmeier > wrote:

>
> Bill, you make the same very fundamental error that Gerzon and Duane Cooper
> make in the articles on Ambisonics. They think they are designing a system
> that should be centered on a single listener and his human hearing
> mechanism. This is the same erroneous assumption that basic stereo theory
> makes. The truth is that we are not recording and reproducing signals or
> cues for the ears of a single two-eared human listener; we are recording
> and reproducing sound fields in rooms.

Gary, I am amazed at the patience people are showing you here. When you
start to badmouth Gerzon you better start showing your work.

Have you ever deployed or heard first person the results of using the
technology developed via Gerzon's work?

I might as well be blunt here: a dilettante with a set of Bose is gonna
tell Gerzon about stereo. Good luck with that.

Nothing so impedes learning as does arrogance.

--
shut up and play your guitar * HankAlrich.Com
HankandShaidriMusic.Com
YouTube.Com/WalkinayMusic

Tom McCreadie
August 25th 13, 02:40 PM
>.... When you start to badmouth Gerzon you better start showing your work.
>
>Have you ever deployed or heard first person the results of using the
>technology developed via Gerzon's work?

Yep, Gerzon's a bit of a hero for me. His papers top my list of desert island
nerdy-reads. From the rigour and depth of his contributions, he'd also qualify
for that epitaph from Dr. Johnson for the poet Goldsmith:
"nullum quod tetigit non ornavit" - he touched nothing that he did not adorn.
--
Tom McCreadie

Tinnitus is a pain in the neck

Gary Eickmeier
August 26th 13, 02:26 AM
Tom McCreadie wrote:

>> .... When you start to badmouth Gerzon you better start showing your
>> work.

I did not, nor do I intend to, "badmouth" anyone, but I reserve the right to
question, and I will attempt to give my work and my reasons below.

>> Have you ever deployed or heard first person the results of using the
>> technology developed via Gerzon's work?

Yes, when I was stationed in England. Had an Ambisonic decoder for surround.
It was in the form of a bare circuit board, but it worked.

> Yep, Gerzon's a bit of a hero for me. His papers top my list of
> desert island nerdy-reads. From the rigour and depth of his
> contributions, he'd also qualify for that epitaph from Dr. Johnson
> for the poet Goldsmith: "nullum quod tetigit non ornavit" - he
> touched nothing that he did not adorn.

Skip the hero worship. This goes a lot deeper than Gerzon's math or
Blumlein's patent. None of that helps if they are operating on wrong
assumptions. For a little insight, I offer the following:

There are fundamentally two ways of reproducing a sensory experience. You
can attempt to duplicate the sensory inputs (such as with binaural or
stereoscopic pictures), or you can duplicate the object itself (such as a
sound field in a room or the object of a stereoscopic image) and then just
use your normal hearing or vision mechanism to witness it.

Harry Olson defined the stereophonic system as follows:

"A stereophonic sound reproducing system is a field type sound reproducing
system in which two or more microphones, used to pick up the original sound,
are each coupled to a corresponding number of independent transducing
channels which in turn are each coupled to a corresponding number of
loudspeakers arranged in substantial geometrical correspondence to that of
the microphones."

William Snow comments that "It has been aptly said that the binaural system
transports the listener to the original scene, whereas the stereophonic
system transports the sound source to the listener's room."

Do you understand the difference between a closed circuit, direct sensory
input system like binaural and a field-type system such as stereophonic and
surround sound? I go on in my basic paper to elaborate on it.

"It would be worthwhile at this point to emphasize the difference between a
closed-circuit type system (binaural) and a field-type system (stereophonic)
as used in the above system definitions. Binaural reproduction is meant to
isolate the listener from his actual acoustic environment and to isolate the
two channels from each other by presenting the sound directly to the ears
by means of headphones. The recording was made with a binaural head placed
in the 'best seat in the house' where it could hear both the music and the
complete acoustic environment from that position. The system requires each
ear to receive exactly the same signals that impinged upon the binaural head
during recording. With the stereophonic system the microphones are placed
much closer to the orchestra, recording the sound that exists in the region
closer to the instruments rather than that which would occur near a typical
listening position. This means that the recording contains correspondingly
less of the concert hall acoustic and more of the direct and early reflected
sound from the region of the proscenium. The recording is reproduced on
loudspeakers which are placed at a distance from the listener, the entire
process taking place in an acoustic space which is different from that in
which the recording was made. The resultant sound depends on the new
acoustic surroundings to impart acoustical qualities required for good
sound."

So there is a fundamental difference between systems that are meant for a
single individual's ears at a single point in space, and one that is meant
to reproduce the entire sound field for any and all listeners to experience
from anywhere in that room, able to change position and turn their heads and
hear the reproduction in exactly the same way that they hear the live music.
I go on to explain:

"A common-sense way of stating all this is that with binaural we are
recording and reproducing ear signals, whereas with stereo we are
reproducing the orchestra itself, and the soundstage surrounding it, on a
macroscopic scale in our playback room. The channels will blend with each
other acoustically, and both ears are free to hear both (or all) speakers.
We are neither isolating the channels from each other (at the ears) nor the
listener from the playback room. Rather, we are using the acoustics of the
playback room to reconstruct the sound field around him. The listener then
experiences the total sound field, rather than having individual channels
piped to his ears. The confusion between the two systems arises when people
begin to believe that the object of 'accurate' stereophonic reproduction is
to get the sound that went into the microphones straight to the listener's
ears, or (worse) that the sound from each speaker should be heard only by
the respective ear, with all interaural crosstalk eliminated."

Ambisonics does not confuse direction of arrival with isolation at the ears,
but it is still a system which attempts to record and bring back all
acoustical sounds from the original to a single spot at the listener's head.
Stay with me until I can go through that a little more elaborately below.

I continue:

"The binaural system is straightforward and uncontroversial enough, but the
stereophonic system, even at the most fundamental theoretical level, is
still undeveloped. To be more specific, lateralization has been discussed in
great detail and related unequivocally to the intensity or time difference
of the direct sound from two loudspeakers. Localization and spatial
impression have not. Entire classes of auditory cues necessary for
localization in enclosed spaces, as discussed by Benade, Moulton and
Feralli, have been omitted from reproduction with the usual stereo
arrangement. The body of knowledge from architectural acoustics as to what
causes good sound in a concert hall is temporarily sidestepped when we
attempt to reproduce the original sound field as a high direct field from
two point sources, with no explanation as to how we are supposed to regain
those qualities. Many maverick products exist in the marketplace to address
some of these problems, but none of them is as yet a part of an overriding
stereo theory in any scientific or engineering sense."

"It is difficult, therefore, to review the present state of stereophonic
theory because there IS no single theory, written down in so many words in
textbook form. However, from the many articles that have been written and
from the practice of stereo certain trends can be inferred."

We're getting closer now, moving in for the kill. Stay with me.

"From the Bell Labs 'curtain of sound' theory stereophonic sound is seen as
a wavefront that can be approximated by two or three speakers and
corresponding microphones. From Blumlein, stereo is thought to be a
microscopic, as opposed to macroscopic, or large scale process wherein a
coincident pair of microphones encode the direction of all arriving sounds
by means of intensity, and the loudspeakers deliver those signals to the
listener's ears so that the original sound field might be perceived at the
listener's head."

"The trend to note with both of these versions is that the stereo is thought
to operate as a sort of windowing or portaling process wherein the sound
that was recorded is simply being relayed to the listener by the
reproduction chain. Stereophonic sound is thought to be a 'trick' that
attempts to fool the ears into hearing all audible spatial properties of
live sound strictly by means of lateralization - like looking trough a
portal into another acoustic space. The degree of success of the illusion is
thought to depend on the 'accuracy' of the system, and the status of stereo
theory as we know it today can be thought of as a search for greater and
greater accuracy."

"Notice also that the above descriptions are strictly two-dimensional
processes. The theories are based only on the direct sound radiated from a
pair or a line of speakers. They are 'blind' to the effects of loudspeaker
radiation pattern, positioning, and room acoustics. We started with the
system definition as a field type system, reproduced in a real acoustic
space by loudspeakers, but as far as the explanation of how it works goes,
the playback room might as well not exist, and nowhere do we find reflected
sound incorporated as part of stereo theory."

End of quotes from my paper.

I remind you that the term "stereophonic" as used in the above discussion
includes all field-type systems that attempt to reproduce the "solid" or 3D
types of the realistic reproduction of auditory perspective, including
surround sound.

Now, if I have been at all effective in communicating, I hope you take away
from all that the major difference between Olson's field-type system and the
Blumlein/Gerzon models: the former places real sound sources in a real space
in geometrical similarity to the original instruments, and uses the
relationship of those sounds to the room surfaces to finish the recreation
of the total sound field around the ROOM, whereas the latter simply attempts
to encode the DIRECTION of all incoming sounds around THE LISTENER rather
than in the room as a whole. You can see this operating in The New Stereo
Soundbook when Streicher shows the "stereo triangle" of two speakers and a
listener as the explanation of how it works.

You can see the field type system explanation in my Image Model Theory, in
which the system is drawn as an image model of direct and reflected sources
that try to mimic the positions and intensities of all real sources and
their reflections in a typical live model. The function of a loudspeaker
changes from a direct radiator feeding "cues" to the ears to being an Image
Model Projector.

So why can't it be done both ways? Remember the difference: a system that
tries to encode "ear signals," as binaural does, would be recorded from a
single point in space farther back in the concert hall, where it can hear
all of the direct and reflected sounds arriving at it, whereas the
field-type system places the microphones closer to the soundstage and the
speakers at a distance from you in your room. The reason we record stereo as
we do is for the clarity of the frontal sound imaging, because if it were
done farther back it would be too swimmy and wet and wouldn't sound as real
played back in another real room. But to get any sound to externalize, we
must play it in a real acoustic space, so recording farther back and trying
to eliminate the playback acoustic is a non-starter. A better system would
be to record frontal sounds closer to the front, and surround sounds farther
back where those sounds need to be coming from. If the playback model is an
orthographic projection from the surfaces within the room, then it makes
some sense that we might want to record those sounds from similar positions
in the live event.

Dr. Floyd Toole told me that he once experienced Ambisonics with full
periphony in an anechoic chamber, a process that Gerzon's theories and math
would dictate would sound like a perfect reproduction of all sounds that
were encoded as arriving at the Calrec Soundfield microphone. Alas, he
reports that it failed to externalize, giving an In Head Localization effect
akin to headphones.

For all of these reasons, I believe I have a right to question Gerzon,
Cooper, and Blumlein.

Gary Eickmeier

Scott Dorsey
August 26th 13, 03:22 PM
Tom McCreadie > wrote:
>>.... When you start to badmouth Gerzon you better start showing your work.
>>
>>Have you ever deployed or heard first person the results of using the
>>technology developed via Gerzon's work?
>
>Yep, Gerzon's a bit of a hero for me. His papers top my list of desert island
>nerdy-reads. From the rigour and depth of his contributions, he'd also qualify
>for that epitaph from Dr. Johnson for the poet Goldsmith:
>"nullum quod tetigit non ornavit" - he touched nothing that he did not adorn.

I can't say I particularly have been a fan of ambisonic recording for
2-speaker and 3-speaker playback, but that said I have to give an immense
amount of credit to Gerzon for making the first real attempts to put recording
methods on a sound mathematical basis, as well as for strongly supporting further
research in that direction.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

polymod
August 29th 13, 11:10 PM
"Gary Eickmeier" > wrote in message
...

> To Chicken Little, the sky is falling, but if he had the right language
> skills, he could tell them that there is a tornado coming.

Man, if that ain't a line from a song it should be ;)

Poly

polymod
August 30th 13, 06:43 PM
"Jeff Henig" > wrote in message
...
> "polymod" > wrote:
>> "Gary Eickmeier" > wrote in message
>> ...
>>
>>> To Chicken Little, the sky is falling, but if he had the right language
>>> skills, he could tell them that there is a tornado coming.
>>
>> Man, if that ain't a line from a song it should be ;)
>>
>> Poly
>
> No doubt.

Gwen Stefani?

;)

Poly

Peter Larsen[_3_]
September 28th 13, 09:53 AM
Gary Eickmeier wrote:

> You know that there is no such assumption when making a recording. XY
> techniques could be any included angle from 60 to 120 degrees. MS has
> no particular angular separation. Spaced omnis ditto. Nor does the
> SRA have anything to do with the angle you get on playback.

Search term: "The Stereophonic Zoom"

Kind regards

Peter Larsen