Why do most commercial recordings (talking Classical and Jazz, here
Audio_Empire[_2_]
April 3rd 13, 03:20 AM
I don't know if any of my fellow audio enthusiasts out there have
noticed this, but even the best recordings always seem to "lack"
something. Uncompressed digital (even Red Book), promises wide dynamic
range, excellent frequency response and low distortion. It should be
possible to make recordings so good that, given a halfway decent
playback system, the musicians are in the room with you. It is
technically possible and surprisingly easy to do this, but it rarely
happens with commercial recordings. Why is it that still, in this
digital age, audiophiles cling to performances recorded more than
fifty years ago as the pinnacle of the recording arts? Recordings made
in the late 1950's and early 1960's by such people as Mercury Records'
C. Robert Fine, or RCA Victor's Lewis Layton, in the classical
recording world, and Rudy Van Gelder, of Riverside and Impulse Records
fame, in the world of jazz are held in such high esteem that even CD
and SACD re-releases of their recordings still sell very well today.
It's as if no progress has been made in the art and science of
recording in the last 55 years or so.
I have found, in building my stereo system, that this has become a
dog-chasing-his-tail endeavor. My playback equipment gets better and
better, and yet the recordings to which I listen, ranging from terrible
to OK, seldom get any better than just OK. Even so-called audiophile
recordings from labels such as Telarc and Reference and Naxos, to name
a few, never sound quite as good as I think they should. Any ideas,
other opinions, criticisms or nasty comments? All of the above would
be welcomed.
Audio_Empire
Andrew Haley
April 3rd 13, 12:41 PM
Audio_Empire > wrote:
> I don't know if any of my fellow audio enthusiasts out there have
> noticed this, but even the best recordings always seem to "lack"
> something. Uncompressed digital (even Red Book), promises wide
> dynamic range, excellent frequency response and low distortion. It
> should be possible to make recordings so good that, given a halfway
> decent playback system, the musicians are in the room with you. It
> is technically possible and surprisingly easy to do this, but it
> rarely happens with commercial recordings. Why is it that still, in
> this digital age, audiophiles cling to performances recorded more
> than fifty years ago as the pinnacle of the recording arts?
Because that sense of palpable realism is not the goal. Nobody other
than a few hi-fi buffs cares about it, and that's not enough to create
a mass market. Everybody else just wants to listen to the music,
whether that's via headphones on the bus, over a car stereo, or
whatever. In adverse listening conditions, all that compression and
equalization helps.
> I have found, in building my stereo system, that this has become a dog
> chasing his tail endeavor. My playback equipment gets better and
> better and yet the recordings to which I listen, ranging from terrible
> to OK seldom get any better than just OK. Even so-called audiophile
> recordings from labels such as Telarc and Reference and Naxos, to name
> a few, never sound quite as good as I think they should.
I think you're jaded. I'm listening to the Diamond Dogs remaster, and
it sounds fabulous. But that's not "acoustic instruments in a real
space", so I suppose it doesn't count. :-(
Andrew.
Doug McDonald[_6_]
April 3rd 13, 06:57 PM
On 4/2/2013 9:20 PM, Audio_Empire wrote:
> Uncompressed digital (even Red Book), promises wide dynamic
> range, excellent frequency response and low distortion. It should be
> possible to make recordings so good that, given a halfway decent
> playback system, the musicians are in the room with you. It is
> technically possible and surprisingly easy to do this, but it rarely
> happens with commercial recordings. Why is it that still, in this
> digital age, audiophiles cling to performances recorded more than
> fifty years ago as the pinnacle of the recording arts?
It's because producers don't want it.
But there is a caveat: "the musicians are in the room with you"
is only a realistic goal if they will fit. Few people have rooms
big enough for an orchestra, let alone ones that produce a concert-hall-sized
ambiance. For those cases, one wants ... and I presume you certainly do
want ... to bring the sound of the recording concert hall (assuming
it's not Avery Fisher Hall) into your room. Both are possible,
as I'm sure you will agree. Most real audiophiles agree that it
is, more or less, with a good room and just the right equipment and
recordings. I've never heard a concert hall in my room, because it's too small;
a friend has a system that really does the job (on the best recordings).
In most cases the problem is not compression or equalization, at least
in most modern classical recordings. Large numbers of recordings are
not compressed. And bad equalization seems a thing of the past: even
recordings made in the 60s and pressed onto vinyl with horrendous
cuts in the bass have reasonable balance on CD reissues.
The problem, as I'm sure you as a hyper-audiophile know, is microphone
technique, pure and simple. You asked for a nasty comment, so I'll make one:
I'll bet if you look you can find nasty posts BY YOU complaining of mike
technique.
Doug McDonald
Audio_Empire[_2_]
April 4th 13, 12:13 AM
On Wednesday, April 3, 2013 4:41:19 AM UTC-7, Andrew Haley wrote:
> Audio_Empire wrote:
>
> > I don't know if any of my fellow audio enthusiasts out there have
> > noticed this, but even the best recordings always seem to "lack"
> > something. Uncompressed digital (even Red Book), promises wide
> > dynamic range, excellent frequency response and low distortion. It
> > should be possible to make recordings so good that, given a halfway
> > decent playback system, the musicians are in the room with you. It
> > is technically possible and surprisingly easy to do this, but it
> > rarely happens with commercial recordings. Why is it that still, in
> > this digital age, audiophiles cling to performances recorded more
> > than fifty years ago as the pinnacle of the recording arts?
>
> Because that sense of palpable realism is not the goal. Nobody other
> than a few hi-fi buffs cares about it, and that's not enough to create
> a mass market. Everybody else just wants to listen to the music,
> whether that's via headphones on the bus, over a car stereo, or
> whatever. In adverse listening conditions, all that compression and
> equalization helps.
That may be true with regard to the various "pop" music categories,
but as I said in my OP, for this discussion those don't count - though
I'm sure that what you say above is right for pop genres. OTOH, with
classical and jazz, it's SO SIMPLE AND EASY to do it right, and so
complex (not to mention expensive) to do it wrong, that I have to
wonder why record companies continue to do it wrong. I've never heard
a reasonable explanation of "why".
> > I have found, in building my stereo system, that this has become a dog
> > chasing his tail endeavor. My playback equipment gets better and
> > better and yet the recordings to which I listen, ranging from terrible
> > to OK seldom get any better than just OK. Even so-called audiophile
> > recordings from labels such as Telarc and Reference and Naxos, to name
> > a few, never sound quite as good as I think they should.
>
> I think you're jaded. I'm listening to the Diamond Dogs remaster, and
> it sounds fabulous. But that's not "acoustic instruments in a real
> space", so I suppose it doesn't count. :-(
That's correct. It doesn't count. In fact, pop music is such a product
of the studio, that what they do in the studio to create each group's
"unique sound" actually amounts to more than just recording. It's not
the same thing at all. Liken actual recording to the function of a
court stenographer. It takes skill to do it, and the result must be a
true record of everything said in court. On the other hand, a
studio pop recording is like a novelist or playwright who attends a
court trial and then creates a fictional play or novel based upon the
incidents of that trial. The results may or may not be truthful to
what the musicians were actually doing during their recording
session.
Audio_Empire
Audio_Empire[_2_]
April 4th 13, 12:16 AM
On Wednesday, April 3, 2013 10:57:02 AM UTC-7, Doug McDonald wrote:
> On 4/2/2013 9:20 PM, Audio_Empire wrote:
>
> > Uncompressed digital (even Red Book), promises wide dynamic
> > range, excellent frequency response and low distortion. It should be
> > possible to make recordings so good that, given a halfway decent
> > playback system, the musicians are in the room with you. It is
> > technically possible and surprisingly easy to do this, but it rarely
> > happens with commercial recordings. Why is it that still, in this
> > digital age, audiophiles cling to performances recorded more than
> > fifty years ago as the pinnacle of the recording arts?
>
> Its because producers don't want it.
>
> But there is a caveat: "the musicians are in the room with you"
> is only a realistic goal if they will fit.
Obviously I was being a bit hyperbolic, there. I was referring to the
you-are-there sense that good recordings can provide if correctly
done. With a small ensemble, like a string quartet or jazz group, it
is possible to put them in the room with you. With an orchestra, of
course, a good recording transports you to the venue where the
recording takes place.
> Few people have rooms
> big enough for an orchestra to fit, let alone produce a concert hall
> sized ambiance. For those cases, one wants ... and I presume you certainly do
> want ... to bring the sound of the recording concert hall (assuming
> its not Avery Fisher Hall) into your room.
Of course.
> Both are possible,
> as I'm sure you will agree. Most real audiophiles agree that it
> is, more or less, with a good room and just the right equipment and
> recordings. I've never heard a concert hall in my room, but its too small;
> a friend has a system that really does the job (on the best recordings.)
>
> In most cases the problem is not compression or equalization, at least
> in most modern classical recordings. Large numbers of recordings are
> not compressed. And bad equalization seems a thing of the past: even
> recordings made in the 60s and pressed onto vinyl with horrendous
> cuts in the bass have reasonable balance on CD reissues.
Of course, and I agree fully that over-compression and hard-limiting
are largely things of the past in classical recording. Most of these
excesses started in the late '60's and went through the 70's and much
of the 1980's before saner practices prevailed. It's a shame, too, as
this period was the last chance to capture some of the great
artists of the 20th century before they left us. Those who were
captured were recorded by what were (to me) incompetent knob-twiddlers
who simply had no idea (or didn't care) what they were doing to the
music. Case in point: ever heard Sir Adrian Boult's last recording
of Holst's "The Planets"? Made by EMI in either the late 70's or early
80's, it was a great performance, perhaps the best ever - ruined by
multi-miking, multi-channeling and knob-twiddling.
> The problem is, as I'm sure you as a hyper-audiophile know, is microphone
> technique, pure and simple. You ask for a nasty comment, I'll make one:
> I'll bet if you look you can find nasty posts BY YOU complaining of mike
> technique.
I don't doubt it. I welcome nasty comments. They add spice to the
proceedings.
Gary Eickmeier
April 4th 13, 10:45 AM
Believe me AE, you can use your recordings as a benchmark for quality in
comparing to both old and new commercial recordings. What I mean is, this is
sort of a chicken and egg question, similar to Floyd Toole's Circle of
Confusion. You go out and make a recording, you come home and play it back.
You make improvements to your technique for next time. You make improvements
in your system so the recordings will sound better. But what is the
reference, your system or the recordings? I suppose it will always be an
iterative process, making adjustments like this until it sounds as close to
what you heard live as possible in a smaller environment.
But the commercial recordists have to make them sound good on any system
that they may be played on, including boom boxes, iPods, and super systems.
Quite a problem, and the result is boost and compression and multi-miking.
Notice that they USED to advertise "no equalization or compression was used
at any point in this recording." But we don't see that much any more.
The biggest difference I notice since I started recording is that my stuff
is a lot lower in volume than the commercial stuff. I go out and do my
darndest, come home and master a disc, and it sounds terrific on my home
system because I am listening in a quiet environment and I can adjust the
gain to perfection and I have over 1500 watts of power and so on. THEN I
decide to play it in my car - big mistake. I need to crank the gain up a lot
more than with the other sources I have available, and it may sound OK, but I
wish I had learned a little more about compression and processing in
Audition so it would be a LITTLE louder, to level the playing field. When my disc
stops, the FM or other discs BLAST me out and sound fairly good, but leave me
scratching my head.
I had been giving my recordings a touch of bass boost and of course gain
setting in mastering, but lately I have discovered that my AT 2050s in omni
mode do not need any bass boost, and my record levels are pretty damn good
as is, at least on the screen during editing. If I adjust the gain up any,
it might clip the peaks, though it would raise the overall level nicely, so I
don't do it, and then I have the above problem.
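To put numbers on that peak-versus-loudness squeeze, here is a minimal sketch in Python with NumPy (using a made-up test signal, not any actual master): once the peaks already sit near 0 dBFS there is essentially no clean gain left, even though the average (RMS) level that governs perceived loudness stays low.

import numpy as np

def headroom_report(samples):
    """samples: float array scaled so +/-1.0 is digital full scale."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    peak_dbfs = 20 * np.log10(peak)
    rms_dbfs = 20 * np.log10(rms)
    gain_before_clip_db = -peak_dbfs   # how much louder it can go, unprocessed
    return peak_dbfs, rms_dbfs, gain_before_clip_db

# Hypothetical wide-dynamic-range material: a long quiet passage plus one loud peak
rng = np.random.default_rng(0)
quiet_passage = 0.05 * rng.standard_normal(44100)
loud_peak = 0.95 * np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)
track = np.concatenate([quiet_passage, loud_peak])

peak_db, rms_db, headroom = headroom_report(track)
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, usable gain {headroom:.1f} dB")
# The only way to raise the RMS further is compression or limiting of the peaks,
# which is exactly the trade-off being weighed here.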
So what would John Eargle do? Bob Katz? Scott Dorsey?
Gary Eickmeier
Andrew Haley
April 4th 13, 12:47 PM
Audio_Empire > wrote:
> On Wednesday, April 3, 2013 4:41:19 AM UTC-7, Andrew Haley wrote:
>> Audio_Empire wrote:
>>
>> > I don't know if any of my fellow audio enthusiasts out there have
>> > noticed this, but even the best recordings always seem to "lack"
>> > something. Uncompressed digital (even Red Book), promises wide
>> > dynamic range, excellent frequency response and low distortion. It
>> > should be possible to make recordings so good that, given a halfway
>> > decent playback system, the musicians are in the room with you. It
>> > is technically possible and surprisingly easy to do this, but it
>> > rarely happens with commercial recordings. Why is it that still, in
>> > this digital age, audiophiles cling to performances recorded more
>> > than fifty years ago as the pinnacle of the recording arts?
>>
>> Because that sense of palpable realism is not the goal. Nobody other
>> than a few hi-fi buffs cares about it, and that's not enough to create
>> a mass market. Everybody else just wants to listen to the music,
>> whether that's via headphones on the bus, over a car stereo, or
>> whatever. In adverse listening conditions, all that compression and
>> equalization helps.
>
> That may be true with regard to the various "pop" music categories,
> but as I said in my OP, for this discussion, those don't count. But,
> I'm sure that what you say, above, is right when discussing pop
> genres. OTOH, with classical and jazz, it's SO SIMPLE AND EASY to do
> it right and so complex (not to mention expensive) to do it wrong that
> I have to wonder why record companies continue to do it wrong and I've
> never heard a reasonable explanation of "why".
I think that my comment above applies to all forms of music. There is
a conflict, especially when listening in less-than-perfect conditions,
between audibility and pure realism, with the dynamic range that
implies.
For example: when listening to a concerto in a live concert, audibility
of the soloist can suffer, but to a large extent visual cues make up
for that. When recording a concerto it's usual, and IMO praiseworthy,
to separately mike the soloist, and indeed each section of the
orchestra, so that the balance can be adjusted afterwards. This isn't
purist recording, and it may not capture the sound of the hall with
absolute realism, but it is a better way to capture the music. And
it's going to be a lot easier to listen to while on a train. Everyone
but a hi-fi buff (who cares more about the ambience than the music)
wins.
Andrew.
Andrew Haley
April 4th 13, 02:45 PM
Gary Eickmeier > wrote:
> Believe me AE, you can use your recordings as a benchmark for
> quality in comparing to both old and new commercial recordings. What
> I mean is, this is sort of a chicken and egg question, similar to
> Floyd Toole's Circle of Confusion. You go out and make a recording,
> you come home and play it back. You make improvements to your
> technique for next time. You make improvements in your system so the
> recordings will sound better. But what is the reference, your system
> or the recordings? I suppose it will always be an iterative process,
> making adjustments like this until it sounds as close to what you
> heard live as possible in a smaller environment.
Including the famous "seat-dip effect", which is a broad and maybe
more than 10dB deep notch in the bass around 100Hz. I wouldn't have
thought that was at all desirable, but some might.
Andrew.
Arny Krueger[_5_]
April 4th 13, 03:02 PM
"Audio_Empire" > wrote in message
...
>I don't know if any of my fellow audio enthusiasts out there have
> noticed this, but even the best recordings always seem to "lack"
> something.
Yes, indeed.
> Uncompressed digital (even Red Book), promises wide dynamic
> range, excellent frequency response and low distortion.
Subject to the limitations of stereo recordings.
Stereo recording has pronounced inherent limitations. No way does one
capture enough information to recreate the original sound field.
> It should be
> possible to make recordings so good that, given a halfway decent
> playback system, the musicians are in the room with you.
How, given all of the audible information that is lost during recording with
microphones?
> It is
> technically possible and surprisingly easy to do this, but it rarely
> happens with commercial recordings.
I don't know about that.
> Why is it that still, in this
> digital age, audiophiles cling to performances recorded more than
> fifty years ago as the pinnacle of the recording arts? Recordings made
> in the late 1950's and early 1960's by such people as Mercury Record's
> C. Robert Fine, or RCA Victor's Lewis Leyton in the classical
> recording world, and Rudy Van Gelder of Riverside, and Impulse Records
> fame in the world of jazz are held in such high esteem, that even CD
> and SACD re-releases of their recordings still sell very well today.
Sentimentality. Musical works and performances that can't possibly be
re-recorded today.
> It's as if no progress has been made in the art and science of
> recording in the last 55 years or so.
2013 - 55 = 1958
Since 1958 there has essentially been one significant upgrade to stereo
recording, which is digital media.
> I have found, in building my stereo system, that this has become a dog
> chasing his tail endeavor. My playback equipment gets better and
> better and yet the recordings to which I listen, ranging from terrible
> to OK seldom get any better than just OK. Even so-called audiophile
> recordings from labels such as Telarc and Reference and Naxos, to name
> a few, never sound quite as good as I think they should. Any ideas,
> other opinions, criticisms or nasty comments? All of the above would
> be welcomed.
We need a better recording technology than stereo. We have a ready
alternative called multitrack, 5.1, 7.1, whatever, but while it adds
something, it still doesn't provide us with that something that only exists
at the live recording.
Gary Eickmeier
April 4th 13, 03:43 PM
Arny Krueger wrote:
> "Audio_Empire" > wrote in message
> ...
>> It should be
>> possible to make recordings so good that, given a halfway decent
>> playback system, the musicians are in the room with you.
>
> How, given all of the audible information that is lost during
> recording with microphones?
Of course we have heard this said a few times before, but how about pinning
that down a little better Arn? Exactly what "information" are you saying
gets lost during recording? Perhaps you could approach this by dividing up
the total sound "picture" that we experience into its component parts and
analyzing which parts are missing (?).
> We need a better recording technology than stereo. We have a ready
> alternative called multitrack, 5.1, 7.1, whatever, but while it adds
> something, it still doesn't provide us with that something that only
> exists at the live recording.
Multitrack is fine, and I play everything in surround, even if it is just
DPL II trying to extract whatever ambience is contained in the recording. I
also record in surround sound sometimes, but I haven't succeeded completely
in enhancing the plain stereo recordings enough to bother with it. I think my
mistake is recording from a single point in space, and it would be much
better to record the surround from further back in the hall. Lots of reasons,
but practical difficulties have prevented me from trying it yet.
But anyway, I think that the main difference between the live and the
playback is an acoustic one, not necessarily that some info gets lost during
recording. To put it simply, no matter how many channels you have you cannot
make your listening room sound like a much larger space by playing a
recording of a larger space in it. Your speakers become another real sound
source that interacts with your room and gives the game away, no matter
what has been recorded. There is no simple solution to this outside of a
technical laboratory, but I don't believe it is correct to say that the
problem is that not enough info has been recorded.
Very interesting sidebar: Floyd Toole told me once that he had heard full
periphony Ambisonics played back in an anechoic chamber, and it sounded like
IHL (Inside the Head Locatedness) rather than superb realism. I don't know
whether adding many, many more speakers would have fixed this problem,
but it is apparent to me that without a real room to support the sound
localization it just cannot place itself outside the head in a real space.
This is an acoustical problem, not one of "accuracy" or insufficient
information.
Gary Eickmeier
Arny Krueger[_5_]
April 4th 13, 10:39 PM
"Gary Eickmeier" > wrote in message
...
> Arny Krueger wrote:
>> "Audio_Empire" > wrote in message
>> ...
>
>>> It should be
>>> possible to make recordings so good that, given a halfway decent
>>> playback system, the musicians are in the room with you.
>>
>> How, given all of the audible information that is lost during
>> recording with microphones?
> Of course we have heard this said a few times before, but how about
> pinning
> that down a little better Arn? Exactly what "information" are you saying
> gets lost during recording?
Consider a microphone sitting in a horizontal plane that is marked up with
radial lines like a protractor. Approaching the microphone along each
radial line is a different sound from a different part of the room. There
are obviously dozens or hundreds of different lines depending on how
different we want them to be. Each of these lines represents a separate
channel of information. The output of the microphone is a single channel
that represents all of these sounds multiplied by the sensitivity of the
microphone in each direction. Obviously a tremendous amount of information
has been lost.
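To make that concrete, here is a rough numerical sketch (Python/NumPy, with made-up source signals and a generic cardioid-like pattern rather than any particular microphone): each arrival is weighted by the mic's sensitivity for its direction and summed into one channel, and the angles themselves never appear in the output.

import numpy as np

fs = 48000
t = np.arange(fs) / fs

# Hypothetical sources approaching the mic along different radial lines
arrival_angles = np.array([0.0, 60.0, 120.0, 200.0])            # degrees
sources = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440, 550)]

# Cardioid-like sensitivity: 0.5 * (1 + cos(theta))
sensitivity = 0.5 * (1 + np.cos(np.radians(arrival_angles)))

# The mic's output: one channel, with every direction collapsed into a weighted sum
mic_output = sum(g * s for g, s in zip(sensitivity, sources))

print(mic_output.shape)   # (48000,) -- a single channel; nothing in it says
                          # which component arrived from which angle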
Dick Pierce[_2_]
April 5th 13, 12:27 AM
Gary Eickmeier wrote:
> Arny Krueger wrote:
>
>>"Audio_Empire" > wrote in message
...
>
>
>>>It should be
>>>possible to make recordings so good that, given a halfway decent
>>>playback system, the musicians are in the room with you.
>>
>>How, given all of the audible information that is lost during
>>recording with microphones?
>
>
> Of course we have heard this said a few times before, but how about pinning
> that down a little better Arn? Exactly what "information" are you saying
> gets lost during recording? Perhaps you could approach this by dividing up
> the total sound "picture" that we experience into its component parts and
> analyzing which parts are missing (?).
You're kidding, right? This stuff is hardly new, and was known back
in the 1930's and before.
ALL directional information is lost in any single microphone.
The output of the microphone is simply a two-dimensional record
of instantaneous pressure or velocity amplitude vs time. That's
it. There is no information in that electrical signal as to where
the sound that caused it came from. None.
As I said, even in a directional microphone, that information is
irretrievably lost. Say a directional microphone is down 20 dB
at 120 degrees relative to the principal axis. There's nothing in
the resulting electrical stream that unambiguously (or even
vaguely) provides a clue as to whether that signal was due to
an 80 dB SPL sound on the principal axis or a 100 dB SPL sound
120 degrees off axis.
And when you start to talk about recording in a complex sound
field, the electrical output has NO indication AT ALL whether
a direct sound came from there, while the reverberant sound
came from over there.
Now, take a stereo pair. The situation is really not any better.
It is geometrically impossible to disambiguate, from any property
of the electrical signals, where a source of sound lies on a circle
whose center sits on the line between
the two microphones and whose plane is at right angles to that
line. Two omnis some distance apart will generate the SAME
electrical signals whether the source is 20 feet ahead, 20 feet
above, 20 feet behind or anywhere else on the circle. The same is
true of any other mike position. The only position that can be
unambiguously recorded is somewhere EXACTLY in between the two,
which is arguably not very useful.
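That geometric ambiguity is easy to check numerically. A small sketch (Python/NumPy, idealized omnis in free space, distances chosen only for illustration): every source position on a circle centered on the inter-mic axis, in a plane at right angles to that axis, is the same distance from each mic, so it produces the same delays and levels at both.

import numpy as np

mic_l = np.array([-0.3, 0.0, 0.0])   # two omnis 0.6 m apart on the x axis
mic_r = np.array([+0.3, 0.0, 0.0])

radius, x0 = 6.0, 2.0                # circle of candidate source positions
for theta in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    src = np.array([x0, radius * np.cos(theta), radius * np.sin(theta)])
    d_l = np.linalg.norm(src - mic_l)
    d_r = np.linalg.norm(src - mic_r)
    print(f"theta={np.degrees(theta):5.1f}  d_left={d_l:.4f} m  d_right={d_r:.4f} m")
# Every row prints the same pair of distances: ahead, above, behind and below
# are indistinguishable from the electrical signals alone.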
Consider also the reciprocity principle as a gedanken (and, as a
real-world exercise, if you want). Record something from a
complex sound field with a microphone of your choosing.
Now, play it back through the same microphone. While you're
thinking about it, go study up on the reciprocity principle.
Now, if your assertion were correct, recording several sound
sources in different directions with a single mike should result,
if you insist there is no loss of information, in the sounds that
emanate from that same microphone finding their way back to the
original locations they were emitted from.
Now, does that happen?
Does even the simplest version of that happen?
> But anyway, I think that the main difference between the live and the
> playback is an acoustic one, not necessarily that some info gets lost during
> recording.
Uh, sorry, but it is the 3-dimensional aspect of the original
acoustical field that is provably lost.
The fact that the HRTF of the original sound field is
eliminated from the listening chain is precisely the problem.
> To put it simply, no matter how many channels you have you cannot
> make your listening room sound like a much larger space by playing a
> recording of a larger space in it. Your speakers become another real sound
> source that interracts with your room and gives the game away, no matter
> what has been recorded. There is no simple solution to this outside of a
> technical laboratory, but I don't believe it is correct to say that the
> problem is that not enough info has been recorded.
No, sorry, this might not be the only problem, but it is the FIRST
problem, and unless you solve it, everything else is a parlor trick.
The reason carefully done (and VERY inconvenient) binaural works
is because it works VERY hard to try to preserve as much of the
utility of the listener's HRTF as possible.
But to do it right is VERY hard and only works, when done extremely
well, for one specific listener.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Gary Eickmeier
April 5th 13, 12:54 PM
Dick Pierce wrote:
> ALL directional information is lost in any single microphone.
> The output of the microphone is simply a two-dimensional record
> of instantaneous pressure or velocity amplitude vs time. That's
> it. There is no information in that electrical signal as to where
> the sound that caused it came from. None.
>
>
> Now, take a stereo pair. The situation is really not any better.
> It is geometrically impossible to disambiguate, from any property
> of the electrical signals, where a source of sound lies on a circle
> whose center sits on the line between
> the two microphones and whose plane is at right angles to that
> line. Two omnis some distance apart will generate the SAME
> electrical signals whether the source is 20 feet ahead, 20 feet
> above, 20 feet behind or anywhere else on the circle. The same is
> true of any other mike position. The only position that can be
> unambiguously recorded is somewhere EXACTLY in between the two,
> which is arguably not very useful.
Mr. Pierce,
I can appreciate your confusion on this, and Arnie's statement that we can't
record every direction from the microphones. But summing localization works,
and works well to enable us to encode direction along a line between the
microphones. We can tell where every instrument is along that line, and we
can even get a good sense of depth and spaciousness if the recording was
made right and the playback technique is good.
The fact that we can't encode all directions of all sounds emitted during a
performance is not a real problem if we don't NEED to record all that. Stay
with me.
> Now, if your assertion were correct, recording several sound
> sources in different directions with a single mike should result,
> if you insist there is no loss of information, in the sounds that
> emanate from that same microphone finding their way back to the
> original locations they were emitted from.
>
What assertion? You may have misread something I said.
> Now, does that happen?
>
> Does even the simplest version of that happen?
YES! Of course it does!
> The fact is that the HRTF of the original sound field is
> eliminate from the listening chain is precisely the problem.
Uh-oh - the binaural confusion rears its ugly "head" - so to speak.
I would ask you (both) to try and flip your thinking around for a minute.
What you are saying in essence is that it is impossible to take a 3D picture
of a snowfall because you cannot see 4π steradians all around you. Well,
that is not a problem if you don't need to see every direction around you to
accomplish your intent.
Consider the live vs recorded demos that were so successful. The reason for
their success was that the acoustical problems were overcome by recording
the instruments anechoically (outdoors) and then relying on the same
acoustic space as the live instruments to make them sound alike.
The change in thinking that this example illustrates is that there was no
attempt or need to encode all directions of each instrument in another
acoustic space, nor does HRTF have anything to do with stereophonic
(field-type) reproduction. It would be possible, for example, to record each
instrument individually and reproduce it by means of its own dedicated
speaker, placing those speakers similarly to the original locations, in a
room that is similar in size to the original. In this way you didn't need to
encode every direction of arriving or acoustically resultant sound because
you are going to be playing it back in a real acoustic space, which will in
turn cause all of the effects of the complex sound field to occur in that
space. You will use your own natural hearing to listen to all of those
complex sounds, and apply your own exact HRTF to the sound in the process.
We can then simplify the system down to as few as two speakers without
losing all that much, because summing localization can place the whole
frontal soundstage along a line between the speakers and the support of the
playback room acoustic is still active.
The fact that we have had a two channel system for so long and we happen to
have two ears has screwed us up and caused so much confusion it may be
impossible to overcome.
Anyway, it's late, I'm tired and got another big day ahead of me.
Gary Eickmeier
Dick Pierce[_2_]
April 5th 13, 02:36 PM
Gary Eickmeier wrote:
> Dick Pierce wrote:
>
>
>>ALL directional information is lost in any single microphone.
>>The output of the microphone is simply a two-dimensional record
>>of instantaneous pressure or velocity amplitude vs time. That's
>>it. There is no information in that electrical signal as to where
>>the sound that caused it came from. None.
>>
>>
>>Now, take a stereo pair. The situation is really not any better.
>>It is geometrically impossible to disambiguate, from any property
>>of the electrical signals, where a source of sound lies on a circle
>>whose center sits on the line between
>>the two microphones and whose plane is at right angles to that
>>line. Two omnis some distance apart will generate the SAME
>>electrical signals whether the source is 20 feet ahead, 20 feet
>>above, 20 feet behind or anywhere else on the circle. The same is
>>true of any other mike position. The only position that can be
>>unambiguously recorded is somewhere EXACTLY in between the two,
>>which is arguably not very useful.
>
>
> Mr. Pierce,
>
> I can appreciate your confusion on this,
No, you can't, because you're viewing the world through the
fog of your own self-created confusion.
>>Now, if your assertion were correct, recording several sound
>>sources in different directions with a single mike should result,
>>if you insist there is no loss of information, in the sounds that
>>emanate from that same microphone finding their way back to the
>>original locations they were emitted from.
>>
> What assertion? You may have misread something I said.
No, you seem to dismiss that recording loses information.
>>Now, does that happen?
>>Does even the simplest version of that happen?
>
> YES! Of course it does!
Extraordinary assertion: prove it.
>>The fact is that the HRTF of the original sound field is
>>eliminate from the listening chain is precisely the problem.
>
> Uh-oh - the binaural confusion rears its ugly "head" - so to speak.
>
> I would ask you (both) to try and flip your thinking around for a minute.
> What you are saying in essence is that it is impossible to take a 3D picture
> of a snowfall because you cannot see 4π steradians all around you.
No, that's your confused interpretation: that's not
what I said.
If you want to play the optical analogy game, fine. Consider
that a single microphone is NOT anything remotely like a
single camera, rather it's FAR closer to a single photocell.
The electrical output of the photocell is simply a record of
the instantaneous light intensity vs time. Okay, so it's
snowing. What information emitted from the photocell
places each snowflake in 3-D space?
Assume you replace the photocell with a lightbulb. Now, play
the photocell's electrical output back through the light bulb.
According to YOUR statement:
>> Now, if your assertion were correct, recording several sound
>> sources in different directions with a single mike should result,
>> if you insist there is no loss of information, in the sounds that
>> emanate from that same microphone finding their way back to the
>> original locations they were emitted from.
>>
>> Now, does that happen?
>>
>> Does even the simplest version of that happen?
>
> YES! Of course it does!
that flickering light bulb should be painting a pretty scene of our
little snowstorm.
Does it?
Please, if you're going to answer, "of course it does," spare us
your embarrassment, if you will.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Audio_Empire[_2_]
April 5th 13, 11:36 PM
On Thursday, April 4, 2013 7:02:11 AM UTC-7, Arny Krueger wrote:
> "Audio_Empire" wrote in message
> >I don't know if any of my fellow audio enthusiasts out there have
> > noticed this, but even the best recordings always seem to "lack"
> > something.
>
> Yes, indeed.
>
> > Uncompressed digital (even Red Book), promises wide dynamic
> > range, excellent frequency response and low distortion.
>
> Subject to the limitations of stereo recordings.
>
> Stereo recording has pronounced inherent limitiations. No way does one
> capture enough information to recreate the original sound field.
I disagree here, somewhat. We all know that absolute perfection, in
either recording or playback is impossible. However the detours from
good recording practices to which I'm referring are not due to
limitations in either the recording or the playback technology. These
less than satisfying recordings are the result of deliberate choices
made at the time the recordings are captured.
> > It should be
> > possible to make recordings so good that, given a halfway decent
> > playback system, the musicians are in the room with you.
> How, given all of the audible information that is lost during recording with
> microphones?
Obviously, I'm talking about an illusion here. Of course an absolute
virtual presence is impossible. But your comment seems to indicate
that you don't really understand what I'm talking about. As a
sometimes "recordist" yourself, if you've never made a recording that
produced the illusion that, on playback, the musicians are there in
the room with you (obviously we're talking small ensembles here - you
aren't going to get an entire symphony orchestra in the room with you,
virtually, or otherwise), then your recording practices are part of
the problem, not part of the solution.
> > It is
> > technically possible and surprisingly easy to do this, but it rarely
> > happens with commercial recordings.
> I don't know about that.
I could play for you recordings that I have made, using simple (but
high-quality) equipment which you would find very convincing.
> > Why is it that still, in this
> > digital age, audiophiles cling to performances recorded more than
> > fifty years ago as the pinnacle of the recording arts? Recordings made
> > in the late 1950's and early 1960's by such people as Mercury Record's
> > C. Robert Fine, or RCA Victor's Lewis Leyton in the classical
> > recording world, and Rudy Van Gelder of Riverside, and Impulse Records
> > fame in the world of jazz are held in such high esteem, that even CD
> > and SACD re-releases of their recordings still sell very well today.
> Sentimentality. Musical works and performances that can't possibly be
> re-recorded today.
EHHHHH! Thanks for playing, Arnie. But you are wrong. Nostalgia and
sentimentality are probably the LEAST significant factors here. The
reason why audiophiles still purchase these 50-year-old-plus
recordings is that they were simply recorded with straightforward
gear, directly to two or at most three channels, with as simple a signal
path as possible, and they were properly miked using some minimalist
technique - and, as a result, they still SOUND GREAT.
> > It's as if no progress has been made in the art and science of
> > recording in the last 55 years or so.
> 2013 - 55 = 1958
> Since 1958 there has essentially been one significant upgrade to stereo
> recording, which is digital media.
Nothing wrong with digital. Properly applied, even old Red Book CD
format can yield jaw-dropping results. The media's not the problem.
Magnetic tape could do that too, even 55 years ago!
> > I have found, in building my stereo system, that this has become a dog
> > chasing his tail endeavor. My playback equipment gets better and
> > better and yet the recordings to which I listen, ranging from terrible
> > to OK seldom get any better than just OK. Even so-called audiophile
> > recordings from labels such as Telarc and Reference and Naxos, to name
> > a few, never sound quite as good as I think they should. Any ideas,
> > other opinions, criticisms or nasty comments? All of the above would
> > be welcomed.
> We need a better recording technology than stereo.
I think that we are talking at cross purposes, here. My point is that
we already have adequate tools to produce stupendous recordings of
lasting audio merit. They just aren't being employed to give us what
they are capable of. These tools have been perverted to other tasks,
such as making recordings as loud as possible, all the time, or to
make the recordist's job "easier" by allowing him or her to capture a
performance of each instrument, separate from the others, so that the
musicians can be dismissed and the producers and engineers can
vacillate over the balances and effects until the cows come home.
> We have a ready
> alternative called multitrack, 5.1, 7.1, whatever, but while it adds
> something, it still doesn't provide us with that something that only exists
> at the live recording.
So-called surround systems might have something to add to pure music
recordings, but if so, I've yet to hear it. The way I see it, most
recording engineers working today haven't even mastered simple stereo,
so I don't see how complicating the situation with more channels of
information is going to improve that situation any.
Audio_Empire[_2_]
April 5th 13, 11:41 PM
On Thursday, April 4, 2013 7:14:12 PM UTC-7, Barkingspyder wrote:
> On Tuesday, April 2, 2013 7:20:55 PM UTC-7, Audio_Empire wrote:
> > I don't know if any of my fellow audio enthusiasts out there have
> > noticed this, but even the best recordings always seem to "lack"
> > something. Uncompressed digital (even Red Book), promises wide dynamic
> > range, excellent frequency response and low distortion. It should be
> > possible to make recordings so good that, given a halfway decent
> > playback system, the musicians are in the room with you. It is
> > technically possible and surprisingly easy to do this, but it rarely
> > happens with commercial recordings. Why is it that still, in this
> > digital age, audiophiles cling to performances recorded more than
> > fifty years ago as the pinnacle of the recording arts? Recordings made
> > in the late 1950's and early 1960's by such people as Mercury Record's
> > C. Robert Fine, or RCA Victor's Lewis Leyton in the classical
> > recording world, and Rudy Van Gelder of Riverside, and Impulse Records
> > fame in the world of jazz are held in such high esteem, that even CD
> > and SACD re-releases of their recordings still sell very well today.
> > It's as if no progress has been made in the art and science of
> > recording in the last 55 years or so.
> >
> > I have found, in building my stereo system, that this has become a dog
> > chasing his tail endeavor. My playback equipment gets better and
> > better and yet the recordings to which I listen, ranging from terrible
> > to OK seldom get any better than just OK. Even so-called audiophile
> > recordings from labels such as Telarc and Reference and Naxos, to name
> > a few, never sound quite as good as I think they should. Any ideas,
> > other opinions, criticisms or nasty comments? All of the above would
> > be welcomed.
> >
> > Audio_Empire
>
> My experience is that the you-are-there feeling only comes from recordings
> of a very few instruments playing, the fewer the better. Orchestral
> works are just impossible IME. There are a few audiophile recording
> companies that use purist techniques that give very good results, but
> it is always with things like quartets at most. I've had good luck with
> Sheffield Labs, Telarc, and Bainbridge. Your mileage may vary.
I have recordings made by me and others of large ensembles which will
challenge your assertion to the point of proving it wrong. I have
recordings where one can, while listening to them, point to the
location of every instrument in the group and even tell whether some of the
players are behind others or whether some are on risers or not! Image
specificity, depth and height are all parameters that exist in the
sound field and can be accurately picked up and recorded through
correct microphone choice and placement.
Audio_Empire[_2_]
April 6th 13, 02:26 PM
On Thursday, April 4, 2013 4:27:06 PM UTC-7, Dick Pierce wrote:
> Gary Eickmeier wrote:
> > Arny Krueger wrote:
> >
> >>"Audio_Empire" wrote in message
> ...
> >
> >>>It should be
> >>>possible to make recordings so good that, given a halfway decent
> >>>playback system, the musicians are in the room with you.
> >>
> >>How, given all of the audible information that is lost during
> >>recording with microphones?
> >
> > Of course we have heard this said a few times before, but how about pinning
> > that down a little better Arn? Exactly what "information" are you saying
> > gets lost during recording? Perhaps you could approach this by dividing up
> > the total sound "picture" that we experience into its component parts and
> > analyzing which parts are missing (?).
>
> You're kidding, right? This stuff is hardly new, and was known back
> in the 1930's and before.
>
> ALL directional information is lost in any single microphone.
> The output of the microphone is simply a two-dimensional record
> of instantaneous pressure or velocity amplitude vs time. That's
> it. There is no information in that electrical signal as to where
> the sound that caused it came from. None.
>
> As I said, even in a directional microphone, that information is
> irretrievably lost. Say a directional microphone is down 20 dB
> at 120 degrees relative to the principal axis. There's nothing in
> the resulting electrical stream that unambiguously (or even
> vaguely) provides a clue as to whether that signal was due to
> an 80 dB SPL sound on the principal axis or a 100 dB SPL sound
> 120 degrees off axis.
>
> And when you start to talk about recording in a complex sound
> field, the electrical output has NO indication AT ALL whether
> a direct sound came from there, while the reverberant sound
> came from over there.
>
> Now, take a stereo pair. The situation is really not any better.
> It is geometrically impossible to disambiguate, from any property
> of the electrical signals, where a source of sound lies on a circle
> whose center sits on the line between
> the two microphones and whose plane is at right angles to that
> line. Two omnis some distance apart will generate the SAME
> electrical signals whether the source is 20 feet ahead, 20 feet
> above, 20 feet behind or anywhere else on the circle. The same is
> true of any other mike position. The only position that can be
> unambiguously recorded is somewhere EXACTLY in between the two,
> which is arguably not very useful.
Are you talking about omnidirectional microphones here? Because they
don't work as a stereo pair unless you take extraordinary precautions,
such as placing a big sound baffle between them as Ray Kimber does for
his IsoMike recordings.
> Consider also the reciprocity principle as a gedanken (and, as a
> real-world exercise, if you want). Record something from a
> complex sound field with a microphone of your choosing.
> Now, play it back through the same microphone. While you're
> thinking about it, go study up on the reciprocity principle.
If you did do that, say, through a magnetic microphone, it wouldn't
sound very good I'm afraid. It would likely sound much worse, even,
than a telephone. And I don't see what this has to do with the subject
at hand. A microphone is designed to capture sound and turn it into
an electronic analog of that sound; it is not designed to be a
reproducer.
> Now, if your assertion were correct, recording several sound
> sources in different directions with a single mike should result,
> if you insist there is no loss of information, in the sounds that
> emanate from that same microphone finding their way back to the
> original locations they were emitted from.
That's not possible. The microphone is not designed to reproduce
anything; it would make a worse than lousy speaker.
Also, recording with a single mike will result in NO spatial information
being captured (it's called "monaural sound"). One needs two mikes, and
the spatial information results from the difference between the two
mike signals, and THAT takes place in the listeners' ears. We hear in
stereo due to differences in phase, time delay, and spatial separation of
the signals reaching our ears. If done right, those cues can provide a
very satisfactory soundstage on a good stereo system.
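For a sense of scale on the time-delay cue mentioned above, here is a back-of-the-envelope sketch using the classic Woodworth spherical-head approximation (Python/NumPy; the head radius and angles are nominal textbook values, not measurements of any particular listener):

import numpy as np

HEAD_RADIUS = 0.0875     # metres, nominal spherical-head radius
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg):
    """Woodworth interaural time difference for a source at the given azimuth
    (0 degrees = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(theta) + theta)

for az in (0, 15, 30, 45, 60, 90):
    print(f"{az:3d} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")
# The cues a mic pair and the playback speakers have to preserve are on the
# order of a few hundred microseconds, which is why timing errors in miking
# and setup are so audible.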
Gary Eickmeier
April 7th 13, 01:47 PM
Audio_Empire wrote:
> That's not possible. The microphone is not designed to reproduce
> anything; it would make a worse than lousy speaker.
>
> Also, recording with a single mike will result in NO spatial information
> being captured (it's called "monaural sound"). One needs two mikes, and
> the spatial information results from the difference between the two
> mike signals, and THAT takes place in the listeners' ears. We hear in
> stereo due to differences in phase, time delay, and spatial separation of
> the signals reaching our ears. If done right, those cues can provide a
> very satisfactory soundstage on a good stereo system.
OK, let's take another run at this. I'm sure Mr. Pierce didn't mean to imply
that microphones can be reproducers; he was making a philosophical point.
Nor did he catch my meaning in my prickly post.
Audio Empire's last paragraph above gets a little off the path. Most
textbooks describe how stereo works much like he did, with what is happening
at the ears. I say that is a red herring, a misdirection that confuses stereo
with binaural. Stereo is not a head-related, ear-input system; it is a
field-type system in which two or more transducers make sound in a room. We
then experience that sound with our natural hearing mechanism, our own HRTF,
frequency response anomalies, everything the same way we hear live sound at a
concert. The key to improvement of reproduction systems is how closely those
sound fields that are made by the speakers and room come to a typical live
sound in a good hall.
Binaural, on the other hand, if recorded with the classic binaural head
placed in a good seat, requires only those two microphones, experiences the
whole original sound field, and no information is lost, any more than if
you were sitting there. Stereo is just a different system, but with both
systems all of the sounds arrive at the microphones and no "information" is
lost. It's all there, waiting to be reproduced.
But once you take the headphones off and work in the stereophonic system,
the sound will be on speakers in a room, and those speakers become another
sound source, and your ears are free to hear them and their interaction
with the room. What you hear about that source is mainly its (their)
frequency response and radiation pattern - or not the radiation pattern per se,
but rather how those patterns interact with the surfaces around them.
Doesn't matter whether you are playing pink noise, test clicks, or Pink
Floyd, your ears can hear those characteristics of your speakers and room,
and ain't nuthin you can do but try and make it sound as much like a live
field as possible by studying this model of reproduction and working with
it.
The main difference between the live sound and the stereo repro is this
summing localization that can place an auditory event anywhere along the
line between the speakers (and beyond if you take advantage of the
reflecting surfaces of the right and left side walls). However, as I said,
this effect works, and works well, to place instruments on a soundstage where
they belong in your room; even with only two mikes and two speakers
it can work quite amazingly. Surround and center are definite enhancements
that I support fully as correct techniques in reconstructing a realistic
sounding field in your room.
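As a concrete illustration of that summing localization, here is a minimal constant-power pan-law sketch (Python/NumPy; the standard sine/cosine law, with arbitrary positions and a single test tone): a level difference between the two speakers is enough to place a phantom image along the line between them.

import numpy as np

def pan_constant_power(mono, position):
    """position: 0.0 = hard left, 0.5 = centre, 1.0 = hard right."""
    angle = position * np.pi / 2
    return np.cos(angle) * mono, np.sin(angle) * mono

fs = 48000
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # test tone standing in for one instrument

for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = pan_constant_power(mono, pos)
    total_power = np.mean(left ** 2) + np.mean(right ** 2)
    print(f"pan {pos:.2f}: L gain {np.cos(pos * np.pi / 2):.3f}, "
          f"R gain {np.sin(pos * np.pi / 2):.3f}, summed power {total_power:.3f}")
# The summed power stays constant while the interchannel level difference
# steers the phantom image between the speakers.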
Summary, so I stop rambling: stereo recording can be described as "close
miking the soundstage" in a way that will result in a realistic sound field
in your room when played on speakers placed in positions
geometrically similar to the positions of the microphones that
captured the sound - in front of you, at a distance from you, making sound
patterns that hopefully mimic those that were recorded. You and your
natural hearing can then experience sound that has most of the spatial
patterns and frequencies and timings, except for the simple fact that the
time between reflections in your smaller space will be superimposed on those
timings. In this way, the real sound sources within your room are once again
creating all of those "lost" pieces of information that Arnie and Dick are
talking about. But you should not think of this recreation as something
"fake" or artificial in any way, it is just an acoustical fact of life in
the field-type game called stereophonic - or multichannel, 5.1, 7.1, or
whatever clever commercial name will be attached next. What we are doing with
all of those channels is reconstructing sound fields within a room. If done
right, all of the sounds that were there during recording will still be
there, and will come from all of those complex angles and locations that you
thought were lost, because this sound in your room is REAL and not a trick
of the two ears being assaulted by those two "lost" signals.
This way of thinking about the reproduction problem is very, very, very
different from what most of us would pick up in the magazines or even the
classic texts on stereo. Study not ear input signals and lost information,
but rather making sound in rooms.
Gary Eickmeier
On 4/7/2013 5:47 AM, Gary Eickmeier wrote:
> Audio_Empire wrote:
>
>> That's not possible. The microphone is not designed to reproduce
>> anything; it would make a worse than lousy speaker.
>>
>> Also, recording with a single mike will result in NO spatial information
>> being captured (it's called "monaural sound"). One needs two mikes, and
>> the spatial information results from the difference between the two
>> mike signals, and THAT takes place in the listeners' ears. We hear in
>> stereo due to differences in phase, time delay, and spatial separation of
>> the signals reaching our ears. If done right, those cues can provide a
>> very satisfactory soundstage on a good stereo system.
>
> OK, let's take another run at this. I'm sure Mr. Pierce didn't mean to imply
> that microphones can be reproducers; he was making a philosophical point.
> Nor did he catch my meaning in my prickly post.
Seems to me he caught the meaning perfectly well. As I read it, the
point was not philosophical at all, and the assumption in his gedanken
was clearly that the microphones are as accurate as radiators as they
are as microphones. The point is that, even in that situation, where
you're in the venue, you record and replay from the exact same points,
and the mikes are perfectly omnidirectional, all directional information
is lost. The signal is two-dimensional, so the reverberant field input
from, say, 120 degrees will be replayed omnidirectionally. The
directional, spatial, information is simply lost forever.
Take the thought one step further. Suppose you could play back and
record coincidentally, with the same assumptions as above. What would you
hear when you played back that second recording? A second reverberant
field that included reflections of the original reverberant field, as
well as the direct sound. Would this sound *more* accurate than the
first playback? Of course not, it would compound the problem. This, in
essence, is the method you're espousing, except you're not holding the
venue constant.
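That gedanken can be sketched as a pair of convolutions (Python/NumPy, with crude synthetic impulse responses standing in for the venue and the listening room): the playback chain adds the room's response on top of the venue's, rather than recovering anything the mics failed to capture.

import numpy as np

rng = np.random.default_rng(1)

def toy_reverb(length, decay):
    """Crude exponentially decaying noise tail standing in for a real impulse response."""
    ir = rng.standard_normal(length) * np.exp(-decay * np.arange(length))
    ir[0] = 1.0                      # direct sound
    return ir

dry = np.zeros(2000)
dry[0] = 1.0                         # an impulsive "dry" source

h_venue = toy_reverb(4000, 0.002)    # concert hall: long tail
h_room = toy_reverb(800, 0.01)       # listening room: short tail

recording = np.convolve(dry, h_venue)            # what the mics capture
in_your_room = np.convolve(recording, h_room)    # what reaches your ears at home

print(len(recording), len(in_your_room))
# The result is venue reverberation re-reverberated by the room: the room's
# reflections are superimposed on top, not substituted for the directional
# information that was lost at the microphones.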
<snip>
> In this way, the real sound sources within your room are once again
> creating all of those "lost" pieces of information that Arnie and Dick are
> talking about. But you should not think of this recreation as something
> "fake" or artificial in any way,
Leaving aside arguments hashed out previously, this statement is what
IMO most people would disagree with. It is artificial, and those lost
pieces of information will NOT be recovered by anything you can do
during reproduction. No matter what you do in an effort to retrieve it,
it will be a simulation only - i.e. fake.
> it is just an acoustical fact of life in
> the field-type game called stereophonic - or multichannel, 5.1, 7.1, or
> whateve clever commercial name will be attached next. What we are doing with
> all of those channels is reconstructing sound fields within a room. If done
> right, all of the sounds that were there during recording will still be
> there, and will come from all of those complex angles and locations that you
> thought were lost, because this sound in your room is REAL and not a trick
> of the two ears being assaulted by those two "lost" signals.
*And* it is not the sound heard at the recording venue. All the sound in
your room is "real", but in no wise does that imply accuracy to the
original event.
> This way of thinking about the reproduction problem is very, very, very
> different from what most of us would pick up in the magazines or even the
> classic texts on stereo. Study not ear input signals and lost information,
> but rather making sound in rooms.
That's the way to make an illusion that *you* like, or believe is more
realistic. It's not the path to the most accurate reproduction of the
live event IME.
Keith
Audio_Empire[_2_]
April 8th 13, 03:58 AM
On Sunday, April 7, 2013 5:47:12 AM UTC-7, Gary Eickmeier wrote:
> Audio_Empire wrote:
>
> > That's not possible. The microphone is not designed to reproduce
> > anything. It would make a more than lousy speaker.
> >
> > Also recording with a single mike will result in NO spatial information
> > being captured (it's called "monaural sound"). One needs two mikes and
> > the spatial information results from the difference between the two
> > mike signals and THAT takes place in the listeners' ears. We hear in
> > stereo due to differences in phase, time delay, and spatial separation of
> > signals reaching our ears. If done right, those cues can provide a
> > very satisfactory soundstage on a good stereo system.
>
> OK, let's take another run at this. I'm sure Mr. Pierce didn't mean to imply
> that microphones can be reproducers; he was making a philosophical point.
> Nor did he catch my meaning in my prickly post.
>
> Audio Empire's last paragraph above gets a little off the path. Most
> textbooks describe how stereo works much like he did, with what is happening
> at the ears. I say that is a red herring, a mislead that confuses stereo
> with binaural. Stereo is not a head-related, ear input system, it is a
> field-type system in which two or more transducers make sound in a room. We
> then experience that sound with our natural hearing mechanism, our own HRTF,
> freq response anomalies, everything the same way we hear live sound at a
> concert. The key to improvement of reproduction systems is how closely those
> sound fields that are made by the speakers and room come to a typical live
> sound in a good hall.
Oh, I disagree with that. All due respect, Gary. The stereo effect very much
involves the head and the ears. The mechanism formed by our heads and our
ears (down to the shape of the latter) is very much responsible for how we
perceive the space around us, aurally. This includes directionality of sound
sources as well as the sense of whether we're enclosed by a large space or
a small one.
> Binaural, on the other hand, if recorded with the classic binaural head
> placed in a good seat, requires only those two microphones, experiences the
> whole original sound field, and no information is lost, any more than if
> you were sitting there. Stereo is just a different system, but with both
> systems all of the sounds arrive at the microphones and no "information" is
> lost. It's all there, waiting to be reproduced.
That's something else entirely. Binaural uses surrogate ears that are
replaced on playback by two ear-phones. What the mikes are doing there is
intercepting the sound at the point where it interacts with our heads and
recording it. When played back, it merely re-inserts the sound into the
ear-head interface at the point it was intercepted upon recording. It can
give a very convincing "you-are-there" illusion, but because everyone's head
and ears are a bit different, it isn't perfect. For instance, binaural sound
cannot produce an image that the brain can interpret as a sound coming from
behind the listener. Also, binaural doesn't work very well as a stereo
source to be listened to on speakers. That's the difference between
stereophonic sound and binaural sound.
> But once you take the headphones off and work in the stereophonic system,
> the sound will be on speakers in a room, and those speakers become another
> sound source, and your ears are free to hear them and their interaction
> with the room. What you hear about that source is mainly its (their)
> frequency response and radiation pattern - but not radiation pattern per se,
> but rather how those patterns interact with the surfaces around them.
> Doesn't matter whether you are playing pink noise, test clicks, or Pink
> Floyd, your ears can hear those characteristics of your speakers and room,
> and ain't nuthin you can do but try and make it sound as much like a live
> field as possible by studying this model of reproduction and working with
> it.
That's true but it still doesn't conflate Binaural with stereo.
> The main difference between the live sound and the stereo repro is this
> summing localization that can place an auditory event anywhere along the
> line between the speakers (and beyond if you take advantage of the
> reflecting surfaces of the right and left side walls). However, as I said,
> this effect works and works well to place instruments on a soundstage where
> they belong in your room, even if there are only two mikes and two speakers,
> it can work quite amazingly. Surround and center are definite enhancements
> that I support fully as correct techniques in reconstructing a realistic
> sounding field in your room.
>
> Summary so I stop rambling, stereo recording can be described as "close
> miking the soundstage"
Again I disagree with your wording. Stereo is the capture of the soundfield
associated with an acoustic event. Correctly done, it consists of two
microphones "viewing" that event from two different perspectives. Reproduced,
those two soundfields interact with one another (and the space around them)
in such a way as to fool the ear into reconstructing a stereo image from
those two perspectives. Since the normally functioning human ear EXPECTS to
hear two perspectives of a sonic event happening at a distance, it hears
stereo. It even hears directionality when the source is a multi-miked and
multichannel recording played back through two speakers and tries to make
stereo from that recording.
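To hang some rough numbers on those two perspectives - a minimal sketch in
Python, assuming ideal cardioid capsules crossed at +/- 45 degrees, so
illustrative only, not a claim about any particular recording - a coincident
pair encodes the direction of a source purely as an interchannel level
difference, which is exactly the cue two loudspeakers turn back into a
phantom image:

    import math

    def cardioid(theta_deg):
        # Ideal cardioid sensitivity at angle theta from the capsule's axis.
        return 0.5 + 0.5 * math.cos(math.radians(theta_deg))

    def xy_level_difference_db(source_az_deg, capsule_angle_deg=45.0):
        # Coincident X/Y pair: left capsule aimed capsule_angle_deg to the
        # left, right capsule the same amount to the right. Positive azimuth
        # means the source is toward the left. Returns L-minus-R level in dB.
        left = cardioid(source_az_deg - capsule_angle_deg)
        right = cardioid(source_az_deg + capsule_angle_deg)
        return 20.0 * math.log10(left / right)

    for az in (0, 15, 30, 45):
        print("source %2d deg left -> %+.1f dB toward the left channel"
              % (az, xy_level_difference_db(az)))

Somewhere around 15 dB or more of interchannel difference pulls the phantom
image all the way into one speaker; everything in between lands somewhere
along the line between the two.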
An interesting digression illustrates the point that even accidental stereo
recording can provide a satisfactory illusion.
In 1947, motion picture composer Alfred Newman ('The Robe', 'How The West
Was Won', 'Airport', etc.) was scoring the Fox film 'Captain From Castile'
in the Fox scoring studio. He told the engineers that he wanted the
orchestra recorded from the two sides of the room so that he could choose
which perspective suited the action best. The perspective favoring the
strings would emphasize the romance aspect of the film; the perspective
favoring the brass would emphasize the militaristic aspect of the action.
Newman had (we assume) NO knowledge of stereo and never played the two
optical sound tracks back simultaneously. Anyway, the film finished and the
score laid in, the optical two-track sound track was forgotten.
Then about seven years ago, a small "record" company specializing in film
soundtracks stumbled upon the original session film for 'Captain From
Castile'. Looking at the optical tracks, the engineer, expecting to see
music on one of the tracks and dialog or sound effects on the other, noticed
that both tracks seemed to have music on them - not just music, but what
looked to him like slightly different versions of the same performance. He
hooked the multitrack optical reader up to amplifiers and speakers and
VOILA!, a stereo performance of the music. After using a computer to clean
it up some and do some digital EQ, the performance was released on CD. It
has to be the earliest real stereo performance ever released as a commercial
recording! The stereo, BTW, is quite good, and the two-disc CD sounds a lot
better than one might expect, given that it was recorded optically, not
magnetically, on equipment probably pre-war in origin. BUT THE KICKER IS
THAT THE STEREO IS TOTALLY ACCIDENTAL!
Audio_Empire[_2_]
April 8th 13, 04:00 AM
On Sunday, April 7, 2013 10:04:56 AM UTC-7, KH wrote:
>
> *And* it is not the sound heard at the recording venue. All the sound in
> your room is "real", but in no wise does that imply accuracy to the
> original event.
You're right, and for a number of reasons, not all of which are
readily apparent to the casual observer. First, of course, is that
microphones are far from perfect. Not only are they not perfectly flat
in frequency response from DC to daylight, but they aren't even all
that accurate to their advertised patterns. For instance, omnis are
not really omnidirectional. Cardioids do not fully suppress sounds from
the sides and rear as they are said to do. Figure-of-eight
microphones do not attenuate sounds from the side by very much at all.
The experienced and savvy recordist understands this and uses this
knowledge to their advantage. But it makes recording a far different
proposition than it looks like on the face of it.
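For anyone who wants to put numbers on the gap between the advertised
patterns and reality, the textbook first-order patterns are all one formula
apart. A minimal Python sketch of the ideal responses only (real capsules
drift from these, and drift differently at different frequencies, which is
the whole point above):

    import math

    def first_order_response(theta_deg, a):
        # Ideal first-order polar response: a + (1 - a) * cos(theta).
        # a = 1.0 -> omni, a = 0.5 -> cardioid, a = 0.0 -> figure-of-eight.
        theta = math.radians(theta_deg)
        return a + (1.0 - a) * math.cos(theta)

    for name, a in [("omni", 1.0), ("cardioid", 0.5), ("figure-8", 0.0)]:
        print(name,
              "front %+.2f" % first_order_response(0, a),
              "side %+.2f" % first_order_response(90, a),
              "rear %+.2f" % first_order_response(180, a))

On paper the cardioid goes to zero at the rear and the figure-of-eight nulls
at the sides; real capsules only approximate those zeros, and only over part
of the band.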
The fact that omnis are only more or less omni-directional accounts
somewhat for the way that Mercury Living Presence recordings image. It
also explains why Bob Woods of Telarc Records was unsuccessful at
copying Mercury's technique. Mercury used three Telefunken
omnidirectional microphones that were only "semi-omnidirectional". It
was the best that they could do in the early-to-middle fifties when
these Mercuries were made. OTOH, Woods used Schoeps calibration mikes
which were not only true omnis, they were also the flattest mikes,
frequency response-wise, available at the time. They were nothing like
the mikes C.R. Fine at Mercury used, and thus Woods did not get the
same results.
Gary Eickmeier
April 8th 13, 02:24 PM
Audio_Empire wrote:
> On Sunday, April 7, 2013 5:47:12 AM UTC-7, Gary Eickmeier wrote:
> Oh, I disagree with that. All due respect, Gary. The stereo effect
> very much involves the head and the ears. The mechanism formed by our
> heads and our ears (down to the shape of the latter) is very much
> responsible for how we perceive the space around us, aurally. This
> includes directionality of sound
> sources as well as the sense of whether we're enclosed by a large
> space or a small one.
That says nothing about the system itself. In both ideas about how stereo
works, we are listening with our ears. But if you think that stereo is a
system of creating, or recording, ear input signals, then tell me what you
are doing in your recording to put the HRTF and appropriate ear spacing and
head attenuation into your signals?
> That's something else entirely. Binaural uses surrogate ears that are
> replaced on playback
> by two ear-phones. What the mikes are doing there is intercepting the
> sound at the point where it interacts with our heads and recording
> it. when played back, it merely re-inserts the sound
> into the ear-head interface at the point it was intercepted upon
> recording. I can give a very
> convincing "you-are-there" illusion, but because everyone's head and
> ears are a bit different, it
> isn't perfect. For instance, binaural sound cannot produce an image
> that the brain can interpret as
> a sound coming from behind the listener. Also Binaural doesn't work
> very well as a stereo source to be listened to on speakers. That's
> the difference between stereophonic sound and binaural sound.
WHAT is the difference? What I said? That binaural is an ear input system
and stereo is a field-type system?
>> and ain't nuthin you can do but try and make it sound as much like a
>> live
>>
>> field as possible by studying this model of reproduction and working
>> with
>>
>> it.
> That's true but it still doesn't conflate Binaural with stereo.
It does if you are holding up the Mercury recordings as a great example of
the art. If you think they were recording signals that were intended to sub
for your ears if you had been there, then let me know who here has ears that
stretch what - some 12 to 18 feet across the front of the orchestra and are
suspended above the conductor's head.
> Again I disagree with your wording. Stereo is the capture of the
> soundfield associated with an acoustic event.
Very good!
> Correctly done, it
> consists of two microphones "viewing" that event from two different
> perspectives. Reproduced, those two soundfields interact with one
> another (and the space around them) in such a way as to fool the ear
> into reconstructing a stereo image from those two perspectives. Since
> the normally functioning human ear EXPECTS to hear two perspectives
> of a sonic event happening at a distance, it hears stereo. It even
> hears directionality when the source is a multi-miked and
> multichannel recording played back through two speakers and tries to
> make stereo from that recording.
Did you say TWO microphones? And TWO perspectives? Then we all have a
serious problem! Those terrific three spaced omni recordings, first of all,
anything done without a proper dummy head second of all, many jazz and
classical recordings made with more than two mikes, such as highlight mikes,
drum kit mikes, piano mikes, and vocal mikes for the soloist. Now we are
told that our ears will be stretched across the orchestra, stuffed into a
drum kit, placed under the lid of a piano, and shoved into the face of the
singer or suspended above the chorus. I just have a problem with that theory
of what we are doing with recording.
Now let's have some fun with Alfred Newman....
> An interesting digression illustrates the point that even accidental
> stereo recording can provide a
> satisfactory illusion.
>
> In 1947, Motion Picture composer Alfred Newman ('The Robe', 'How The
> West Was Won', 'Airport', etc.)
> was scoring the Fox film 'Captain From Castile' in the Fox scoring
> studio. He told the engineers that he wanted the orchestra recorded
> from the two sides of the room so that he could choose which
> perspective suited the action best. The perspective favoring the
> strings would emphasize the romance aspect of the film, that
> perspective favoring the brass would emphasize the militaristic
> aspect of the action. Newman had (we assume) NO knowledge of stereo
> and never played the two optical sound tracks back simultaneously.
> Anyway, the film finished and the score laid-in, the optical two
> track
> sound track was forgotten.
> Then about seven years ago, a small "record" company specializing in
> film soundtracks stumbled upon the original session film for 'Captain
> From Castile'. looking at the optical tracks, the engineer, expecting
> to see music on one of the tracks and dialog or sound effects on the
> other, noticed that both tracks seemed to have music on them. Not
> just music but what looked to him like slightly different versions of
> the same performance. He hooked the multitrack optical reader up to
> amplifiers and speakers and VOILA!, a stereo performance of the
> music. After using a computer to clean it up some and do some digital
> EQ, the performance was released on CD. It has to be the earliest
> real stereo performance ever
> released as a commercial recording! The stereo, BTW, is quite good
> and the two-disc CD sounds a lot
> better than one might expect given that it was recorded optically,
> not magnetically on equipment probably pre-war in origin. BUT THE
> KICKER IS THAT THE STEREO IS TOTALLY ACCIDENTAL!
My first thought on this was that if these were two separate optical tracks,
it would be impossible to synchronize them in order to play them together in
stereo. But if they were recorded on the same piece of film, of the same
performance, then they already had a stereo recording sound head which, if
what you say about their ignorance of stereo is true, would be unlikely. Nor
would they record the music on one track and the sound effects or narration
on another, so that proposed guess would be silly. Anyway, I would be curious
about the full story on this, being a film maker myself. All I can think is
that these two tracks must have been recorded on separate optical heads of
the same performance, which were running in sync with each other by means of
whatever technology was available at that time. I know they used optical
film recorders the same way I used fullcoat to sync up with the camera. I
just can't imagine anyone having a multichannel optical recorder for
original field work.
Gary Eickmeier
Gary Eickmeier
April 8th 13, 02:25 PM
KH wrote:
> On 4/7/2013 5:47 AM, Gary Eickmeier wrote:
>> OK, let's take another run at this. I'm sure Mr. Pierce didn't mean
>> to imply that microphones can be reproducers; he was making a
>> philosophical point. Nor did he catch my meaning in my prickly post.
>
> Seems to me he caught the meaning perfectly well. As I read it, the
> point was not philosophical at all, and the assumption in his gedanken
> was clearly that the microphones were as accurate as radiators as they
> are microphones. The point is that, even in that situation, where
> you're in the venue, you record and replay from the exact same points,
> and the mikes are perfectly omnidirectional, all directional
> information is lost. The signal is two dimensional, so the
> reverberant field input from, say, 120 degrees, will be replayed
> omnidirectionally. The directional, spatial, information is simply
> lost forever.
Curious misunderstanding of stereo. If there are two or more mikes recording
phase locked signals of the same performance, then the two signals are what
gives us stereo perspective. Not each mike. Both acting together.
NOW - where did the reverberant field go? It's true that stereo has a
front/back problem, like when the audience applause is folded back behind
the performance in live recordings, but the reverberant field is still
recorded, just not as much as if it were a binaural recording with the head
placed further back into the audience. With stereo recordings, we depend on
the playback room to support the reverberant field spatially. Most people's
music rooms do not have much of a reverberant field of their own, so the
resultant playback takes on most of the "flavor" of the recorded field. But
it would be a mistake to use all kinds of absorbent materials in the room to
kill the reflected sound.
>
> Take the thought one step further. Suppose you could playback and
> record coincidentally, with the same assumptions above. What would
> you hear when you played back that second recording? A second
> reverberant field that included reflections of the original
> reverberant field, as well as the direct sound. Would this sound
> more accurate than the first playback? Of course not, it would
> compound the problem. This, in essence, is the method you're
> espousing, except you're not holding the venue constant.
Not sure I get your example, what you are doing there. But we all know about
the "central recording problem," that in order to do stereo we are running
the sound through two acoustic spaces. I am just saying that if you
understand the system, the two spaces will complement each other rather than
compete with each other.
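The "two acoustic spaces" point can be sketched in a few lines - toy impulse
responses below, nothing measured, just to show the mechanism: whatever
reverberation the mikes picked up gets convolved again with the playback
room, so the combined tail is always longer than either room alone.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_room_ir(length, decay):
        # Toy room impulse response: a direct spike plus exponentially
        # decaying random reflections. Purely illustrative.
        ir = rng.standard_normal(length) * np.exp(-decay * np.arange(length))
        ir[0] = 1.0
        return ir

    dry = rng.standard_normal(2000)          # stand-in for the dry source
    hall = toy_room_ir(4000, 0.002)          # "recording venue"
    living_room = toy_room_ir(1500, 0.01)    # "playback room"

    recording = np.convolve(dry, hall)            # what the mikes captured
    heard = np.convolve(recording, living_room)   # what reaches the listener
    print(len(dry), len(recording), len(heard))   # the tail only grows

Whether the two rooms complement or compete is then a question of how their
two sets of reflections combine, which is the argument here.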
>
> <snip>
>
>> In this way, the real sound sources within your room are once again
>> creating all of those "lost" pieces of information that Arnie and
>> Dick are talking about. But you should not think of this recreation
>> as something "fake" or artificial in any way,
>
> Leaving aside arguments hashed out previously, this statement is what
> IMO most people would disagree with. It is artificial, and those lost
> pieces of information will Not be recovered by anything you can do
> during reproduction. No matter what you do in an effort to retrieve
> it, it will be a simulation only - i.e. fake.
Oh dear! Recordings fake? Say it ain't so Joe!
>> it is just an acoustical fact of life in
>> the field-type game called stereophonic - or multichannel, 5.1, 7.1,
>> or whateve clever commercial name will be attached next. What we are
>> doing with all of those channels is reconstructing sound fields
>> within a room. If done right, all of the sounds that were there
>> during recording will still be there, and will come from all of
>> those complex angles and locations that you thought were lost,
>> because this sound in your room is REAL and not a trick of the two
>> ears being assaulted by those two "lost" signals.
>
> And it is not the sound heard at the recording venue. All the sound
> in your room is "real", but in no wise does that imply accuracy to the
> original event.
We are not doing accuracy. You cannot have accuracy, because of the central
recording problem.
>
>> This way of thinking about the reproduction problem is very, very,
>> very different from what most of us would pick up in the magazines
>> or even the classic texts on stereo. Study not ear input signals and
>> lost information, but rather making sound in rooms.
>
> That's the way to make an illusion that you like, or believe is more
> realistic. It's not the path to the most accurate reproduction of the
> live event IME.
> Keith
Let me propose a little thought experiment riddle to you, Keith.
You want to do a modern live vs recorded demo for some commercial purpose,
maybe to sell speakers. You will use a saxophone, drum, and maybe trumpet.
Doesn't matter. So you close mike the sax and trumpet, and you use two or
three mikes on the drum kit if it has some spatial extent in the room. You
do this in a very anechoic space, maybe outdoors like the original
experiment. You then play back these tracks on speakers that have
substantially the same radiation patterns as the instruments - the sax and
drums mainly omni with the trumpet more directional. You find that if you
place the live instrument side by side with those speakers, their sound is
indistinguishable. Success, so you take your act on the road and amaze all
and sundry.
So first, would you agree that this would work? Would be a terrific
experiment and very realistic?
If so then note that the recording technique had nothing whatsoever to do
with the human hearing mechanism, the spacing between the ears, the HRTF,
none of it. You are using the acoustics of the playback space in the same
way as the live instruments, so they sound the same and very realistic. You
have recorded not ear input signals but the object itself, the sound of the
instruments and their radiation pattern, to be played back in a real room to
make sound fields in that room, not to cast ear input signals toward you.
Furthermore, everyone in the room will hear the same sound, each with his
own HRTF and total hearing mechanism.
And the goal is not "accuracy" but realism!
So there you have an example that gives a more useful understanding of the
nature of the system. Actual realistic stereo recording (the subject of this
thread, after all) is a point on a continuum between this example and a
dummy head recording placed well out into the audience and reverberant field
and reproduced with crosstalk elimination etc etc. A "dry" recording is more
like the example, a "wet" recording more toward the farther back limit. I
sometimes think of Ralph Glasgal's system up in New York, which uses
speakers in a surround sound arrangement with crosstalk elimination but in a
real room. That might be a very interesting cross between the "you are
there" binaural technique and the "they are here" stereo technique. Anyway,
I would love to get the opportunity some fine day.
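For anyone wondering what "crosstalk elimination" amounts to in practice,
here is a minimal per-frequency sketch - the speaker-to-ear transfer values
are invented stand-ins, not measurements of Glasgal's or anyone else's room.
The canceller is just a regularized inverse of the 2x2 matrix of same-side
and opposite-side paths, so each ear ends up hearing mostly its own channel:

    import numpy as np

    def crosstalk_canceller(h_same, h_cross, reg=1e-3):
        # Per-frequency 2x2 crosstalk-cancellation matrix. h_same / h_cross
        # are the complex speaker-to-ear responses for the same-side and
        # opposite-side paths at one frequency (stand-in values below).
        # Returns C such that H @ C is close to the identity matrix.
        H = np.array([[h_same, h_cross],
                      [h_cross, h_same]], dtype=complex)
        # Regularized inverse keeps the filters sane where H is near-singular.
        return np.linalg.inv(H.conj().T @ H + reg * np.eye(2)) @ H.conj().T

    h_same, h_cross = 1.0 + 0j, 0.6 * np.exp(-1j * 0.8)
    C = crosstalk_canceller(h_same, h_cross)
    H = np.array([[h_same, h_cross], [h_cross, h_same]])
    print(np.round(H @ C, 3))   # close to the 2x2 identity matrix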
Gary Eickmeier
Audio_Empire[_2_]
April 8th 13, 11:37 PM
On Monday, April 8, 2013 6:24:44 AM UTC-7, Gary Eickmeier wrote:
> Audio_Empire wrote:
> > On Sunday, April 7, 2013 5:47:12 AM UTC-7, Gary Eickmeier wrote:
>
> > Oh, I disagree with that. All due respect, Gary. The stereo effect
> > very much involves the head and the ears. The mechanism formed by our
> > heads and our ears (down to the shape of the latter) is very much
> > responsible for how we perceive the space around us, aurally. This
> > includes directionality of sound
> > sources as well as the sense of whether we're enclosed by a large
> > space or a small one.
>
> That says nothing about the system itself. In both ideas about how stereo
> works, we are listening with our ears. But if you think that stereo is a
> system of creating, or recording, ear input signals, then tell me what you
> are doing in your recording to put the HRTF and appropriate ear spacing and
> head attenuation into your signals?
Again, you seem to be conflating binaural sound with stereo. They
aren't the same thing. Binaural sound is about capturing and playing
back the sound field as the EAR receives it, while stereo is about
capturing and transmitting (or playing back in the case of a
recording) the sound field as the musical ensemble MAKES it. Totally
different concept. In one, the spatial characteristics and the
interplay of the listener's head are already part of the signal,
having been created at the binaural microphones and surrogate head.
The only thing the listener does is interpret sound shaped by someone
else's (or, more likely, someTHING else's) head. In stereo, the
listener uses his own head and ear shape to interpret the sound.
That's one reason why binaural can't parse the difference between
sounds occurring behind the listener and sounds that are supposed to
be in front. They all seem to come from in front of the listener, even
when they are clearly supposed to be in back. In contrast, so-called
"surround" stereo has no problem placing images anywhere in the sound
field that it cares to, and the listener can correctly perceive it as
being in its correct spot.
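A crude way to see why front/back localization is the weak spot when the
recording head's ears are not your own - a sketch using the simplest
possible two-point-ear model, an assumed 17.5 cm effective ear spacing, and
no pinna filtering at all: the interaural time difference is identical for a
source in front and for its mirror position behind you, so it is the pinna
and head shape, the part that differs from listener to listener, that has to
break the tie.

    import math

    EAR_SPACING_M = 0.175   # assumed effective ear spacing, metres
    C = 343.0               # speed of sound, m/s

    def itd_microseconds(azimuth_deg):
        # Interaural time difference for a distant source, crudest
        # two-point-ear model: ITD = d * sin(azimuth) / c. No head shadow,
        # no pinna - exactly the cues this model throws away.
        return EAR_SPACING_M * math.sin(math.radians(azimuth_deg)) / C * 1e6

    print(round(itd_microseconds(30)))    # ~255 us, source 30 deg to one side
    print(round(itd_microseconds(150)))   # same ITD for the mirror point behind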
> > That's something else entirely. Binaural uses surrogate ears that are
> > replaced on playback
> > by two ear-phones. What the mikes are doing there is intercepting the
> > sound at the point where it interacts with our heads and recording
> > it. when played back, it merely re-inserts the sound
> > into the ear-head interface at the point it was intercepted upon
> > recording. I can give a very
> > convincing "you-are-there" illusion, but because everyone's head and
> > ears are a bit different, it
> > isn't perfect. For instance, binaural sound cannot produce an image
> > that the brain can interpret as
> > a sound coming from behind the listener. Also Binaural doesn't work
> > very well as a stereo source to be listened to on speakers. That's
> > the difference between stereophonic sound and binaural sound.
>
> WHAT is the difference? What I said? That binaural is an ear input system
> and stereo is a field-type system?
>
> >> and ain't nuthin you can do but try and make it sound as much
> >> like a live field as possible by studying this model of
> >> reproduction and working with it.
>
> > That's true but it still doesn't conflate Binaural with stereo.
>
> It does if you are holding up the Mercury recordings as a great example of
> the art. If you think they were recording signals that were intended to sub
> for your ears if you had been there, then let me know who here has ears that
> stretch what - some 12 to 18 feet across the front of the orchestra and are
> suspended above the conductor's head.
No one is saying that, least of all me. I am an adherent of closely
placed cardioid or crossed figure-of-eight microphones and do not
believe that one generally gets very good stereo from spaced omnis.
Bob Fine of Mercury is an exception. The mikes (Telefunken U-47s) he
used were advertised as omnidirectional, but they really weren't. The
long lobe in the front of the mike was very wide, but frequency
response fell off very quickly on the side lobes and was somewhat less
attenuated at the back. Yes, it picked up sound from those lobes, but
the mikes were more of a semi-cardioid than they were
omnidirectional. That's why the Mercury system worked, and why, when
Bob Woods of Telarc tried the same three-mike spaced omni arrangement
in the late '70s, it didn't really work. Woods was using modern
omnidirectional mikes which were REAL omnis, and the result was that
most Telarcs image very poorly.
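One back-of-the-envelope way to see why widely spaced true omnis image
differently - a sketch with made-up spacings, not a reconstruction of either
the Mercury or the Telarc setups: spaced omnis encode direction mostly as an
arrival-time difference between the channels, and with several feet of
spacing that difference quickly exceeds the roughly one millisecond at which
summing localization over loudspeakers shoves the image hard into the
earlier-arriving speaker.

    import math

    C = 343.0   # speed of sound, m/s

    def interchannel_delay_ms(spacing_m, source_az_deg):
        # Arrival-time difference between two spaced omni mikes for a
        # distant source at the given azimuth (plane-wave approximation).
        return spacing_m * math.sin(math.radians(source_az_deg)) / C * 1000.0

    for spacing in (0.17, 3.0):   # near-coincident-ish vs. widely spaced
        for az in (10, 30, 60):
            print("spacing %.2f m, source %2d deg off-axis: %5.2f ms"
                  % (spacing, az, interchannel_delay_ms(spacing, az)))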
> > Again I disagree with your wording. Stereo is the capture of the
> > soundfield associated with an acoustic event.
>
> Very good!
>
> > Correctly done, it
> > consists of two microphones "viewing" that event from two different
> > perspectives. Reproduced, those two soundfields interact with one
> > another (and the space around them) in such a way as to fool the ear
> > into reconstructing a stereo image from those two perspectives. Since
> > the normally functioning human ear EXPECTS to hear two perspectives
> > of a sonic event happening at a distance, it hears stereo. It even
> > hears directionality when the source is a multi-miked and
> > multichannel recording played back through two speakers and tries to
> > make stereo from that recording.
>
> Did you say TWO microphones? And TWO perspectives? Then we all have a
> serious problem! Those terrific three spaced omni recordings, first of all,
> anything done without a proper dummy head second of all, many jazz and
> classical recordings made with more than two mikes, such as highlight mikes,
> drum kit mikes, piano mikes, and vocal mikes for the soloist. Now we are
> told that our ears will be stretched across the orchestra, stuffed into a
> drum kit, placed under the lid of a piano, and shoved into the face of the
> singer or suspended above the chorus. I just have a problem with that theory
> of what we are doing with recording.
Why do you keep bringing up dummy heads, Gary? They have nothing to do
with stereo. Have you ever tried to listen to a binaural recording
through speakers? It sounds awful (from a soundstage point of view,
anyway), pretty much like mono.
> Now lets have some fun with Alfred Newman....
>
> > An interesting digression illustrates the point that even accidental
> > stereo recording can provide a
> > satisfactory illusion.
> >
> > In 1947, Motion Picture composer Alfred Newman ('The Robe', 'How The
> > West Was Won', 'Airport', etc.)
> > was scoring the Fox film 'Captain From Castile' in the Fox scoring
> > studio. He told the engineers that he wanted the orchestra recorded
> > from the two sides of the room so that he could choose which
> > perspective suited the action best. The perspective favoring the
> > strings would emphasize the romance aspect of the film, that
> > perspective favoring the brass would emphasize the militaristic
> > aspect of the action. Newman had (we assume) NO knowledge of stereo
> > and never played the two optical sound tracks back simultaneously.
> > Anyway, the film finished and the score laid-in, the optical two
> > track
> > sound track was forgotten.
> > Then about seven years ago, a small "record" company specializing in
> > film soundtracks stumbled upon the original session film for 'Captain
> > From Castile'. looking at the optical tracks, the engineer, expecting
> > to see music on one of the tracks and dialog or sound effects on the
> > other, noticed that both tracks seemed to have music on them. Not
> > just music but what looked to him like slightly different versions of
> > the same performance. He hooked the multitrack optical reader up to
> > amplifiers and speakers and VOILA!, a stereo performance of the
> > music. After using a computer to clean it up some and do some digital
> > EQ, the performance was released on CD. It has to be the earliest
> > real stereo performance ever
> > released as a commercial recording! The stereo, BTW, is quite good
> > and the two-disc CD sounds a lot
> > better than one might expect given that it was recorded optically,
> > not magnetically on equipment probably pre-war in origin. BUT THE
> > KICKER IS THAT THE STEREO IS TOTALLY ACCIDENTAL!
>
> My first thought on this was that if these were two separate optical tracks,
> it would be impossible to sychronize them in order to play them together in
> stereo.
Again you're letting your unfamiliarity with this stuff write checks
that your keyboard can't cash. There were several types of optical
film sound recorders developed for the motion picture industry. There
was a 16mm, single track recorder developed in the early thirties,
then there was a two track 16mm recorder that had two parallel optical
heads on it. Then there was a 35mm recorder that had FOUR parallel
optical tracks on it. For Gone With The Wind, for instance, the music
was on track one, the dialog on track two and the sound effects were
on track three. Track four had a time code signal on it. From those
three sound tracks, the final mono track (on the edge of the release
print) was mixed. Post-production mixing was invented by Hollywood. So
you see, the two tracks didn't need to be 'synchronized' as they were
both recorded to the same length of film.
> But if they wre recorded on the same piece of film, of the same
> performance, then they already had a stereo recording sound head which, if
> what you say about their ignorance of stereo, would be unlikely. Nor would
> they record the music on one track and the sound effects or narration on
> another, so that proposed guess would be silly. Anyway, I would be curious
> about the full story on this, being a film maker myself. All I can think is
> that these two tracks must have been recorded on separate optical heads of
> the same performance, which were running in sync with each other by means of
> whatever technology was available at that time. I know they used optical
> film recorders the same way I used fullcoat to sync up with the camera. I
> just can't imagine anyone having a multichannel optical recorder for
> original field work.
I suggest that you read up on Hollywood production methods in the
'30s and '40s before making wild assumptions. For instance, when you
said that the optical recorders in question were used "in the field",
I'm wondering from what statement of mine you gleaned that, because I
don't remember mentioning field recording in any way, shape, or form!
The Fox Scoring stage in Culver City is not in any way "in the field".
You are right. They did use two optical recording heads, but rather
than use two separate machines, the two heads were recording to the
same 16mm piece of film simultaneously on the SAME machine, producing
two parallel tracks.
And there's nothing "silly" about recording all of the sound elements
of a film (dialog, music, sound effects and Foley) separately to a
single piece of sound film. It allows them to be mixed and edited
TOGETHER before being transferred to the final cut of the film.
Audio_Empire[_2_]
April 8th 13, 11:40 PM
On Monday, April 8, 2013 6:25:34 AM UTC-7, Gary Eickmeier wrote:
> KH wrote:
> > On 4/7/2013 5:47 AM, Gary Eickmeier wrote:
<SNIP>
> We are not doing accuracy. You cannot have accuracy, because of the central
> recording problem.
What central recording problem?
> >> This way of thinking about the reproduction problem is very, very,
> >> very different from what most of us would pick up in the magazines
> >> or even the classic texts on stereo. Study not ear input signals and
> >> lost information, but rather making sound in rooms.
> >
> > That's the way to make an illusion that you like, or believe is more
> > realistic. It's not the path to the most accurate reproduction of the
> > live event IME.
>
> > Keith
>
> Let me propose a little thought experiment riddle to you, Keith.
>
> You want to do a modern live vs recorded demo for some commercial purpose,
> maybe to sell speakers. You will use a saxophone, drum, and maybe trumpet.
> Doesn't matter. So you close mike the sax and trumpet, and you use two or
> three mikes on the drum kit if it has some spatial extent in the room. You
> do this in a very anechoic space, maybe outdoors like the original
> experiment. You then play back these tracks on speakers that have
> substantially the same radiation patterns as the instruments - the sax and
> drums mainly omni with the trumpet more directional. You find that if you
> place the live instrument side by side with those speakers, their sound is
> indistinguishable. Success, so you take your act on the road and amaze all
> and sundry.
>
> So first, would you agree that this would work? Would be a terrific
> experiment and very realistic?
No, it would not, because the capture of the live instruments was done
stereophonically. Now, if you took those musicians outdoors (on a
quiet day) or into an anechoic chamber and recorded them with an MS or
coincident pair (like the original), then played back the recording
through the speakers being demonstrated WITH the ensemble on stage in
the exact formation that they occupied when the recording was made,
THEN it would work. It would also work, I hasten to add, using a
slight modification of your proposal, if each instrument were miked
separately (not to mention anechoically) and recorded to a multitrack
recorder of some type, and then each track were played back through a
separate amp and its own speaker (in the case of your example with a
sax, drums and trumpet, that would be THREE speakers of the type being
auditioned), with the musician being either beside or behind his
instrument's speaker. It would also work, but in that case, it would
do nothing to show off the imaging and sound-staging characteristics
of the speakers being demonstrated. That's why Edgar Vilchur had the
string quartet recorded with a coincident mike technique for his
original Acoustic Research "live vs recorded" demonstrations of his
AR3 speakers.
On 4/8/2013 6:25 AM, Gary Eickmeier wrote:
> KH wrote:
>> On 4/7/2013 5:47 AM, Gary Eickmeier wrote:
>
>>> OK, let's take another run at this. I'm sure Mr. Pierce didn't mean
>>> to imply that microphones can be reproducers; he was making a
>>> philosophical point. Nor did he catch my meaning in my prickly post.
>>
>> Seems to me he caught the meaning perfectly well. As I read it, the
>> point was not philosophical at all, and the assumption in his gedanken
>> was clearly that the microphones were as accurate as radiators as they
>> are microphones. The point is that, even in that situation, where
>> you're in the venue, you record and replay from the exact same points,
>> and the mikes are perfectly omnidirectional, all directional
>> information is lost. The signal is two dimensional, so the
>> reverberant field input from, say, 120 degrees, will be replayed
>> omnidirectionally. The directional, spatial, information is simply
>> lost forever.
>
> Curious misunderstanding of stereo.
Really? How so? I would say, rather, that you fail to grasp the basic
point of the thought experiment.
> If there are two or more mikes recording
> phase locked signals of the same performance, then the two signals are what
> gives us stereo perspective. Not each mike. Both acting together.
Clearly you did miss the point. It is entirely irrelevant, for the
purposes of the 'experiment' whether there are two microphones, or just
one (or X #). Let's look at the situation with one microphone; assuming
the microphone is 100% accurate and 100% omnidirectional, further
assuming the microphone is a perfectly accurate and perfectly
omnidirectional speaker. Will the playback sound like the performance
did, in that precise venue? No, it won't, because the directional
information is lost, and all sounds, irrespective of what direction the
sound originally came from, will be radiated in ALL directions.
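A tiny numerical illustration of that point - a pure sketch with an ideal
point receiver, one test tone, and made-up distances: two sources at the
same distance but opposite azimuths produce literally the same samples at an
omni mike, so no later processing can tell the directions apart.

    import numpy as np

    fs = 48000                        # sample rate, Hz
    c = 343.0                         # speed of sound, m/s
    t = np.arange(0, 0.01, 1.0 / fs)
    s = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

    def omni_capture(source, distance_m):
        # Pressure at an ideal point (omni) microphone: a delayed,
        # 1/r-scaled copy of the source. Azimuth never appears in the
        # expression, so the captured signal cannot encode it.
        delay = int(round(distance_m / c * fs))
        out = np.zeros_like(source)
        out[delay:] = source[:len(source) - delay] / distance_m
        return out

    from_far_left = omni_capture(s, 3.0)    # source 3 m away, 90 deg left
    from_far_right = omni_capture(s, 3.0)   # source 3 m away, 90 deg right
    print(np.array_equal(from_far_left, from_far_right))   # True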
>
> NOW - where did the reverberant field go?
It was transformed from a 3-D sound pressure field to a 2-D electrical
signal. I.e., it is gone.
> It's true that stereo has a
> front/back problem, like when the audience applause is folded back behind
> the performance in live recordings, but the reverberant field is still
> recorded, just not as much as if it were a binaural recording with the head
> placed further back into the audience.
Well, no, there's not "less of it" in our thought experiment. It's just
a different perspective, with varying levels based on distance. At the
back wall, the reflections are higher in level, relative to the direct
component, than in the front row. But since our microphone is perfectly
omnidirectional it is still recording an accurate translation (from 3 to
2 dimensions) of the reverberant field, wherever we locate it.
<snip>
>> Take the thought one step further. Suppose you could playback and
>> record coincidentally, with the same assumptions above. What would
>> you hear when you played back that second recording? A second
>> reverberant field that included reflections of the original
>> reverberant field, as well as the direct sound. Would this sound
>> more accurate than the first playback? Of course not, it would
>> compound the problem. This, in essence, is the method you're
>> espousing, except you're not holding the venue constant.
>
> Not sure I get your example, what you are doing there. But we all know about
> the "central recording problem," that in order to do stereo we are running
> the sound through two acoustic spaces. I am just saying that if you
> understand the system, the two spaces will complement each other rather than
> compete with each other.
And I'm disagreeing with your assertion. Take the one microphone
instance discussed above. It has recorded the reverberant field during
the performance, it then replays that information in a 2-D rendition
(time/pressure) broadcasting in all directions. Clearly, the
directional clues that a listener, in any location in the venue, heard
during the performance are gone. The playback will also generate a
reverberant field that consists of all of the direct radiated sound from
the performance, and all of the reflected sound from the performance,
and the reflections of all of these sounds from the venue. The
resulting sound field will not resemble the original soundfield because
the directional information is not in the recording, and you're simply
adding more reflections to an already inaccurate recording.
>>> In this way, the real sound sources within your room are once again
>>> creating all of those "lost" pieces of information that Arnie and
>>> Dick are talking about. But you should not think of this recreation
>>> as something "fake" or artificial in any way,
>>
>> Leaving aside arguments hashed out previously, this statement is what
>> IMO most people would disagree with. It is artificial, and those lost
>> pieces of information will Not be recovered by anything you can do
>> during reproduction. No matter what you do in an effort to retrieve
>> it, it will be a simulation only - i.e. fake.
>
> Oh dear! Recordings fake? Say it ain't so Joe!
You're now arguing with yourself Gary. Do you think it's fake or not?
Above you say not fake "in any way".
<snip>
>> And it is not the sound heard at the recording venue. All the sound
>> in your room is "real", but in no wise does that imply accuracy to the
>> original event.
>
> We are not doing accuracy. You cannot have accuracy, because of the central
> recording problem.
You might want to define "central recording problem". In my lexicon, it
means the loss of the ability to record the directional information in a
stereo recording. The kind of recordings we have to live with.
And "we" appear to be trying for the best accuracy to the original event
as possible. You may be trying for something else.
<snip>
> Let me propose a little thought experiment riddle to you, Keith.
>
> You want to do a modern live vs recorded demo for some commercial purpose,
> maybe to sell speakers. You will use a saxophone, drum, and maybe trumpet.
> Doesn't matter. So you close mike the sax and trumpet, and you use two or
> three mikes on the drum kit if it has some spatial extent in the room. You
> do this in a very anechoic space, maybe outdoors like the original
> experiment. You then play back these tracks on speakers that have
> substantially the same radiation patterns as the instruments - the sax and
> drums mainly omni with the trumpet more directional. You find that if you
> place the live instrument side by side with those speakers, their sound is
> indistinguishable. Success, so you take your act on the road and amaze all
> and sundry.
>
> So first, would you agree that this would work? Would be a terrific
> experiment and very realistic?
No, I don't agree. If each instrument were recorded separately on its
own channel, then the three instruments were played back on their own
dedicated speakers, in the same spatial orientation as the original,
with speakers with the same FR and radiation pattern, it could work.
>
> If so then note that the recording technique had nothing whatsoever to do
> with the human hearing mechanism, the spacing between the ears, the HRTF,
> none of it.
In *your* version of the experiment, making a stereo recording, no, the
HRTF has nothing to do with it. Which is why it wouldn't work. As I
suggested, using individual recordings, on individual channels, recorded
anechoically, it very much recognizes the HRTF in the recording process.
It's a recognition that reverberant field information has to be
excluded from the recording for the listeners' HRTF to process the
information the same as it would the live instrument. Any reverberant
field information from, e.g. the "sax speaker" that is not the sax, will
make the speaker easily distinguishable from the sax on replay. That's
pretty straightforward.
> You are usning the acoustics of the playback space in the same
> way as the live instruments, so they sound the same and very realistic. You
> have recorded not ear input signals but the object itself, the sound of the
> instruments and their radiation pattern, to be played back in a real room to
> make sound fields in that room, not to cast ear input signals toward you.
> Furthermore, everyone in the room will hear the same sound,
No they won't hear the same. That's like saying every seat in the hall
is the same.
> each with his
> own HRTF and total hearing mechanism.
Yep, so they'll know whether the sax is on the left or right side of the
"stage", and which instrument is closer/farther.
>
> And the goal is not "accuracy" but realism!
If the playback sounds the same as the real instrument, that *is*
accuracy. This is not, however, possible in the real world, with real
world recordings. Try this with an orchestra - gets out of control real
quick. And if you could do it, it would sound like an orchestra stuffed
into your living room, and sound nothing like a concert. That's not the
kind of "realism" I'm looking for.
>
> So there you have an example that gives a more useful understanding of the
> nature of the system.
Well no, not really, IMO.
Keith
Gary Eickmeier
April 9th 13, 12:44 PM
Audio_Empire wrote:
> On Monday, April 8, 2013 6:25:34 AM UTC-7, Gary Eickmeier wrote:
[ Extraneous attributions snipped. -- dsr ]
>> We are not doing accuracy. You cannot have accuracy, because of the
>> central recording problem.
>
> What central recording problem?
The one I discussed above.
>> Let me propose a little thought experiment riddle to you, Keith.
>>
>> You want to do a modern live vs recorded demo for some commercial
>> purpose, maybe to sell speakers. You will use a saxophone, drum, and
>> maybe trumpet. Doesn't matter. So you close mike the sax and
>> trumpet, and you use two or three mikes on the drum kit if it has
>> some spatial extent in the room. You do this in a very anechoic
>> space, maybe outdoors like the original experiment. You then play
>> back these tracks on speakers that have substantially the same
>> radiation patterns as the instruments - the sax and drums mainly
>> omni with the trumpet more directional. You find that if you place
>> the live instrument side by side with those speakers, their sound is
>> indistinguishable. Success, so you take your act on the road and
>> amaze all and sundry.
>>
>> So first, would you agree that this would work? Would be a terrific
>> experiment and very realistic?
>
> No it would not because the capture of the live instruments was done
> stereophonically.
No, it was not. Each instrument was close-miked separately.
> Now, if you took those musicians outdoors (on a
> quiet day) or into an anechoic chamber and recorded them with an MS or
> coincident pair (like the original) then played back the recording
> through the speakers being demonstrated WITH the ensemble on stage in
> the exact formation that they occupied when the recording was made,
> THEN it would work. It would also work, I hasten to add, using a
> slight modification of your proposal, if each instrument were miked
> separately (not to mention anechoically) and recorded to a mutitrack
> recorder of some type and then each track were played back through a
> separate amp it's own speaker (in the case of your example with a sax,
> drums and trumpet, that would be THREE speakers of the type being
> auditioned) with the musician being either beside or behind his
> instrument's speaker. it would also work, but in that case, it would
> do nothing to show off the imaging and sound-staging characteristics
> of the speakers being demonstrated. That's why Edgar Vilchur had the
> string quartet recorded with a coincident mike technique for his
> original Acoustic Research "live vs recorded" demonstrations of his
> AR3 speakers.
Very good AE - but I think you are reading my stuff and then thinking that
they are your own thoughts. No complaints here, as long as we are getting
closer. Now let's press on to Alfred Newman again...
Gary Eickmeier
Gary Eickmeier
April 9th 13, 12:45 PM
Audio_Empire wrote:
> On Monday, April 8, 2013 6:24:44 AM UTC-7, Gary Eickmeier wrote:
>> That says nothing about the system itself. In both ideas about how
>> stereo works, we are listening with our ears. But if you think that
>> stereo is a system of creating, or recording, ear input signals,
>> then tell me what you are doing in your recording to put the HRTF
>> and appropriate ear spacing and head attenuation into your signals?
>
> Again, you seem to be conflating binaural sound with stereo. They
> aren't the same thing. Binaural sound is about capturing and playing
> back the sound field as the EAR receives it, while stereo is about
> capturing and transmitting (or playing back in the case of a
> recording) the sound field as the musical ensemble MAKES it. Totally
> different concept. In one, the spatial characteristics and the
> interplay of the listener's head are already part of the signal,
> having been created at the binaural microphones and surrogate head.
> The only thing that the listener does into interpret the sound of
> someone (or more likely, someTHING else's head. In Stereo the user
> uses his head and his ear shape to interpret sound. That's one reason
> why binaural can't parse the difference between sounds occurring
> behind the listener and sounds that are supposed be in front. They all
> seem to come from in front of the listener, even when they are clearly
> supposed to be in back. In contrast, so-called "surround" stereo has
> no problem placing images anywhere in the sound field that it cares
> to, and the listener can correctly perceive it as being in its
> correct spot.
Again, very good, but still reading my stuff and claiming it as your own
thoughts. Welcome aboard! Now where is Pierce?
>> Did you say TWO microphones? And TWO perspectives? Then we all have a
>> serious problem! Those terrific three spaced omni recordings, first
>> of all, anything done without a proper dummy head second of all,
>> many jazz and classical recordings made with more than two mikes,
>> such as highlight mikes, drum kit mikes, piano mikes, and vocal
>> mikes for the soloist. Now we are told that our ears will be
>> stretched across the orchestra, stuffed into a drum kit, placed
>> under the lid of a piano, and shoved into the face of the singer or
>> suspended above the chorus. I just have a problem with that theory
>> of what we are doing with recording.
>
> Why do you keep bringing up dummy heads, Gary? They have nothing to do
> with stereo. Have you ever tried to listen to a binaural recording
> through speakers? It sounds awful (from a soundstage point of view,
> anyway), pretty much like mono
Because you keep saying two perspectives with two microphones. But I think
we are in agreement now that stereo and binaural are two totally separate
systems. I point out that we must not confuse the two, because recording and
playback techniques are very different for each, and audiophiles keep trying
to play their LP collections on two speakers because they have two ears and
no brain. (That's a joke, moderators).
>
>> Now lets have some fun with Alfred Newman....
>> My first thought on this was that if these were two separate optical
>> tracks, it would be impossible to sychronize them in order to play
>> them together in stereo.
>
> Again you're letting your unfamiliarity with this stuff write checks
> that your keyboard can't cash. There were several types of optical
> film sound recorders developed for the motion picture industry. There
> was a 16mm, single track recorder developed in the early thirties,
> then there was a two track 16mm recorder that had two parallel optical
> heads on it. Then there was a 35mm recorder that had FOUR parallel
> optical tracks on it. For Gone With The Wind, for instance, the music
> was on track one, the dialog on track two and the sound effects were
> on track three. Track four had a time code signal on it. From those
> three sound tracks, the final mono track (on the edge of the release
> print) was mixed. Post-production mixing was invented by Hollywood. So
> you see, the two tracks didn't need to be 'synchronized' as they were
> both recorded to the same length of film.
That's terrific knowledge, but you don't record original sound that way.
Dialog would be recorded during filming, and even possibly looped afterward
and mixed in later. Music would be recorded in a sound stage while the
conductor watched a work print for timing. Foley effects would be each
recorded separately and mixed in later. All of these tracks are then mixed
down to a master and printed on an optical track to make the final release
prints. They did not and could not record all of the sounds for Gone With
the Wind simultaneously during filming. I know you realize that, but if you
are saying that they put music, effects, and dialog on the same piece of
35mm for the mixdown, I hope you also realize that those tracks would not be
recorded live, possibly not even simultaneously, but one at a time, running
the film thru three or four times in sync with the editor's tracks.
>
>> But if they wre recorded on the same piece of film, of the same
>> performance, then they already had a stereo recording sound head
>> which, if what you say about their ignorance of stereo, would be
>> unlikely. Nor would they record the music on one track and the sound
>> effects or narration on another, so that proposed guess would be
>> silly. Anyway, I would be curious about the full story on this,
>> being a film maker myself. All I can think is that these two tracks
>> must have been recorded on separate optical heads of the same
>> performance, which were running in sync with each other by means of
>> whatever technology was available at that time. I know they used
>> optical film recorders the same way I used fullcoat to sync up with
>> the camera. I just can't imagine anyone having a multichannel
>> optical recorder for original field work.
>
> I suggest that you read-up on Hollywood production methods in the
> '30's and '40's before making wild assumptions. For instance, when you
> said that the optical recorders in question were used "in the field",
> I'm wondering from what statement of mine did he glean that
> statement? Because I don't remember mentioning field recording in any
> way, shape, or form! The Fox Scoring stage in Culver City is not in
> any way "in the field".
"The field" doesn't mean out in the country somewhere. It just means
original recording. When they record the music, that is original recording. I
used cassette tape in sync with my camera, or sometimes fullcoat. I would
then come home and sync up the fullcoat with the movie film, frame for frame
on a vertical editing bench. Hollywood used Moviolas for a long time, then
probably flatbeds, but the idea is always the same - each track is built
separately and edited in by a film editor, laying in music, effects, and
narration where he wants them and choosing sync sound and shots from the
many takes they shoot in the field.
Sync stereo recording was a quest of mine, and I finally produced several
surround sound sync films running a cassette recorder in sync with the
projector which had its own mag tracks on it.
>
> You are right. They did use two optical recording heads,
> But rather than use two separate machines, the two heads were
> recording to the same 16mm piece of film simultaneously on the SAME
> machine producing two parallel tracks.
>
> And there's nothing "silly" about recording all of the sound elements
> of a film (dialog, music, sound effects and foley) separately to a
> single piece of sound film. It allows them to be mixed and edited
> TOGETHER before being transferred to the final cut of the film.
No, it's just not done that way. They may have used such a scheme for the
final mixdown in the dubbing stage, but if they had multi track recorders
for live recording, then they already had stereo recorders. There is no
conceivable reason to have more than one track in a live sound recorder
except to be able to record in stereo.
I will try and Google up that story and see what was really happening and
how they managed to record two sync tracks of the same performance before
stereo. My guess is two separate recorders running in sync.
Gary Eickmeier
Gary Eickmeier
April 9th 13, 04:58 PM
Keith,
To be brief and not agonize this a lot more, let me just summarize your and
my points and then I must get to work!
In my thought experiment with the individually miked instruments, played
back on speakers placed the same and with similar radiation patterns to the
instruments, you first told me that I was wrong and then you repeated my
version as how you would do it and agreed that it would work.
I set this as an example of a very realistic reproduction of the
instruments' sound. It would not be "accurate" to the sound of those
instruments played in any particular hall or stage, because of the central
recording problem, that you have to run the sound thru two acoustic spaces
before it reaches you. Then I noted that we aren't worried about that,
because we are "doing" accuracy in this example, we are doing realism. Those
"lost" spatial patterns of the reverberant field would be present in their
entirety in your playback room, but they are not the patterns of any
recording room, they are those of your playback room. Agreed? Not fair,
because your goal is to be able to record some original hall and play it so
that you think you are there. Right?
OK, NOW - all I am saying is you can imagine a continuum with the above
example at one extreme, one in which there is NO acoustic recorded and 100%
of what you hear for an acoustic is that of your room, and is real and
taking on 100% of the duties of supporting the good sound that you are
getting, that very realistic playback of someone's performance. That is one
end of the continuum. The other end is a very wet recording with whatever
recording mikes you want to use placed back in the audience where the good
seats are, in a mistaken attempt to record 100% of the space of the live sound,
and then you will try to play it back in a nearly anechoic very dry room, as
some poor dumb *******s attempt all the time.
We all know (heh) that that doesn't work. You will say that it is because
that spatial info will be lost because we don't have enough microphones and
speakers to make all of those patterns come back. OK, fine, great dodge, but
in reality WE DON'T DO IT THAT WAY. We place the mikes much closer to the
instruments than you would sit to listen in that hall, then we play it back
in another room at a distance from you so that the two spaces complement
each other and lead to greater realism even tho we cannot have "accuracy" of
what was heard if you were there. Accuracy would mean you would hear a
concert from 9 ft above the conductor's head, or from ears that are 18 ft
apart, or some such reductio ad absurdum. I have described the idea as
close-miking the soundstage, because we are recording not just the actual
instruments but also the early reflected sound from the sidewalls, the most
important reflections, and also a hint of the reverberant field. All of this
info mixes with the playback acoustic to give you the realism of sitting
there with them in front of you, and also the "flavor" of the live acoustic
space.
OK, so it wasn't very brief, but there is a lot more to it than even that.
Bottom line, AE is correct that it is possible to record for greater
realism, but the techniques and reasons may be surprising to both of you,
and progress toward a goal of greater realism may take a path a little -
well, a lot - different from what most of us assume. That path is NOT a
search for greater and greater "accuracy," but rather trying to work with
and understand what we are actually doing with a field-type system, which is
making sound in rooms, not making sound in your ears.
Gary Eickmeier
Audio_Empire[_2_]
April 10th 13, 03:34 AM
On Tuesday, April 9, 2013 4:45:05 AM UTC-7, Gary Eickmeier wrote:
> Audio_Empire wrote:
> > On Monday, April 8, 2013 6:24:44 AM UTC-7, Gary Eickmeier wrote:
>
> >> Now lets have some fun with Alfred Newman....
>
> >> My first thought on this was that if these were two separate optical
> >> tracks, it would be impossible to sychronize them in order to play
> >> them together in stereo.
> >
> > Again you're letting your unfamiliarity with this stuff write checks
> > that your keyboard can't cash. There were several types of optical
> > film sound recorders developed for the motion picture industry. There
> > was a 16mm, single track recorder developed in the early thirties,
> > then there was a two track 16mm recorder that had two parallel optical
> > heads on it. Then there was a 35mm recorder that had FOUR parallel
> > optical tracks on it. For Gone With The Wind, for instance, the music
> > was on track one, the dialog on track two and the sound effects were
> > on track three. Track four had a time code signal on it. From those
> > three sound tracks, the final mono track (on the edge of the release
> > print) was mixed. Post-production mixing was invented by Hollywood. So
> > you see, the two tracks didn't need to be 'synchronized' as they were
> > both recorded to the same length of film.
>
> That's terrific knowledge, but you don't record original sound that way.
Not any more, no. But if the original soundtrack, performed in a
cinema sound recording studio, was not recorded on film, how was it
recorded? Before answering, keep in mind that the music, along with
the dialog and all sound effects, MUST be editable. Remember, audio
tape, even though it existed by 1947, was not yet being used in
movie studios.
> Dialog would be recorded during filming, and even possibly looped afterward
Very true, but that has nothing whatsoever to do with what I'm talking
about.
> and mixed in later. Music would be recorded in a sound stage while the
> conductor watched a work print for timing.
Absolutely, and THIS is the two-track optical recording that I'm
talking about.
> Foley effects would be each
> recorded separately and mixed in later. All of these tracks are then mixed
> down to a master and printed on an optical track to make the final release
> prints.
But they were consolidated FIRST onto a single, four-track piece of
35mm film into what is called a conformance print.
> They did not and could not record all of the sounds for Gone With
> the Wind simultaneously during filming.
Who said that they did?
> I know you realize that, but if you
> are saying that they put music, effects, and dialog on the same piece of
> 35mm for the mixdown, I hope you also realize that those tracks would not be
> recorded live, possibly not even simultaneously, but one at a time, running
> the film thru 3 or four times in sync with the editor's tracks.
Of course I know that. Why even bring it up?
>>> But if they were recorded on the same piece of film, of the same
> >> performance, then they already had a stereo recording sound head
It wasn't designed as a stereo record head. It was designed as a
two-track sound head. Stereo in film hadn't been invented yet and
wouldn't be until "This Is Cinerama" in 1952.
> >> which, if what you say about their ignorance of stereo, would be
> >> unlikely. Nor would they record the music on one track and the sound
> >> effects or narration on another, so that proposed guess would be
> >> silly. Anyway, I would be curious about the full story on this,
> >> being a film maker myself. All I can think is that these two tracks
> >> must have been recorded on separate optical heads of the same
> >> performance, which were running in sync with each other by means of
> >> whatever technology was available at that time. I know they used
> >> optical film recorders the same way I used fullcoat to sync up with
> >> the camera. I just can't imagine anyone having a multichannel
> >> optical recorder for original field work.
> >
> > I suggest that you read-up on Hollywood production methods in the
> > '30's and '40's before making wild assumptions. For instance, when you
> > said that the optical recorders in question were used "in the field",
> > I'm wondering from what statement of mine you gleaned that
> > statement? Because I don't remember mentioning field recording in any
> > way, shape, or form! The Fox Scoring stage in Culver City is not in
> > any way "in the field".
>
> "The field" doesn't mean out in the country somewhere. It just means
> original recording. When they record the music that is original recording. I
> used cassette tape in sync with my camera, or sometimes fullcoat. I would
> then come home and sync up the fullcoat with the movie film, frame for frame
> on a vertical editing bench. Hollywood used Moviolas for a long time, then
> probably flatbeds, but the idea is always the same - each track is built
> separately and edited in by a film editor, laying in music, effects, and
> narration where he wants them and choosing sync sound and shots from the
> many takes they shoot in the field.
Yes, of course. For years (before portable digital) Hollywood used a
specially modified Sony WM-D6 servo-capstan cassette recorder (with
SMPTE time code added) to grab sound on location. But the
equipment used to record a musical score doesn't have to move. Like
any recording studio, it's fixed in the control room.
> Sync stereo recording was a quest of mine, and I finally produced several
> surround sound sync films running a cassette recorder in sync with the
> projector which had its own mag tracks on it.
Not necessary when all the tracks are recorded simultaneously on the same media.
> > You are right. They did use two optical recording heads,
> > But rather than use two separate machines, the two heads were
> > recording to the same 16mm piece of film simultaneously on the SAME
> > machine producing two parallel tracks.
> >
> > And there's nothing "silly" about recording all of the sound elements
> > of a film (dialog, music, sound effects and foley) separately to a
> > single piece of sound film. It allows them to be mixed and edited
> > TOGETHER before being transferred to the final cut of the film.
>
> No, it's just not done that way.
You were there in 1939, perhaps? You've seen the sound master? Held it
in your hands, perhaps? I have. It is BECAUSE this master existed
that it was possible to replace the music track with a newly recorded
stereo track for a 1970's re-release. It was easy to separate the
music, dialog, and sound effects so that a stereo print could be made.
They "panned" the dialog and the sound effects to follow the action.
They also played with the aspect ratio of the picture to make it
widescreen, but that's another story.
> They may have used such a scheme for the
> final mixdown in the dubbing stage,
Isn't THAT what I said???!!!
> but if they had multi track recorders
> for live recording, then they already had stereo recorders.
To use for WHAT? There was no stereo in film. Disney did play with
multi-channel sound for Fantasia in 1941, but it wasn't stereo.
You seem not to believe me on this. Why don't you look up a Westrex
1581A Photographic Film Recorder and get back to me. While you are at
it, check SAE-Records Catalogue Number CRS-0007, the complete ORIGINAL
STEREO film soundtrack from the 1947 20th Century Fox production of
"Captain From Castile".
http://www.discogs.com/Alfred-Newman-Captain-From-Castile/release/3594394
They had multitrack recorders; what they used them for likely changed
with the film being produced.
> There is no
> conceivable reason to have more than one track in a live sound recorder
> except to be able to record in stereo.
Stereo hadn't been "invented" yet.
> I will try and Google up that story and see what was really happening and
> how they managed to record two sync tracks of the same performance before
> stereo. My guess is two separate recorders running in sync.
There were many multitrack photographic recorders made in those
days: Westrex, Maurer, RCA, etc. None had much response above about
7 kHz. The producers of the "Captain from Castile" two-CD set used
some digital "enhancement" to autocorrelate the noise and to boost
what high frequencies were present on the film, but the results aren't
all that bad, especially for "accidental" stereo.
KH
April 10th 13, 02:04 PM
On 4/9/2013 8:58 AM, Gary Eickmeier wrote:
> Keith,
>
> To be brief and not agonize this a lot more, let me just summarize your and
> my points and then I must get to work!
If only you would "summarize" accurately, that might help. Alas...
> In my thought experiment with the individually miked instruments, played
> back on speakers placed the same and with similar radiation patterns to the
> instruments, you first told me that I was wrong
Uhmm, no. You talked about close-miking the soundfield, as you do
below. You used multiple mikes on the drum kit, for example, to get the
"space". Basically, you described a partially close-mike stereo recording.
> and then you repeated my
> version as how you would do it and agreed that it would work.
Only when recorded anechoically, which you certainly did not describe.
They are two separate scenarios whether you realize it or not.
>
> I set this as an example of a very realistic reproduction of the
> instruments' sound. It would not be "accurate" to the sound of those
> instruments played in any particular hall or stage, because of the central
> recording problem, that you have to run the sound thru two acoustic spaces
> before it reaches you.
OK, fine, at least you defined your term.
> Then I noted that we aren't worried about that,
> because we are "doing" accuracy in this example
I assume you mean "...not doing..."
>, we are doing realism. Those
> "lost" spatial patterns of the reverberant field would be present in their
> entirety in your playback room,
No, *different* patterns will be *generated* in the playback room.
Again, two very disparate things indeed.
> but they are not the patterns of any
> recording room, they are those of your playback room. Agreed? Not fair,
> because your goal is to be able to record some original hall and play it so
> that you think you are there. Right?
Yes. And it's clear that is NOT your goal, yet you seem to think any
goal not your own is misguided.
> OK, NOW - all I am saying is you can imagine a continuum with the above
> example at one extreme, one in which there is NO acoustic recorded and 100%
> of what you hear for an acoustic is that of your room, and is real and
> taking on 100% of the duties of supporting the good sound that you are
> getting, that very realistic playback of someone's performance. That is one
> end of the continuum. The other end is a very wet recording with whatever
> recording mikes you want to use placed back in the audience where the good
> seats are in a mistaken attempt to record 100% the space of the live sound,
> and then you will try to play it back in a nearly anechoic very dry room, as
> some poor dumb *******s attempt all the time.
Inability to converse without gratuitous ad hominem noted. Do you
really wonder why you're not successful in furthering your arguments?
> We all know (heh) that that doesn't work. You will say that it is because
> that spatial info will be lost because we don't have enough microphones and
> speakers to make all of those patterns come back.
No I won't. I'll repeat the focus of this entire discussion - the
spatial information is LOST. Period. Even with myriad microphones and
playback speakers, you would have to be able to control the directivity
precisely, recording and playback, on a very small scale, which isn't
possible. You can devise schemes along those lines that can get you
closer, for sure, but the electrical signal from each and every one of
those mikes would be bereft of any directional information, being only
ameliorated by simulating that information through placement and
orientation of the playback devices.
> OK, fine, great dodge, but
> in reality WE DON'T DO IT THAT WAY.
No, typically we record a great deal of the reverberant field as well.
You feel adding more reflections on playback enhances realism. Most do not.
> We place the mikes much closer to the
> instruments than you would sit so listen in that hall, then we play it back
> in another room at a distance from you so that the two spaces complement
> each other and lead to greater realism even tho we cannot have "accuracy" of
> what was heard if you were there. Accuracy would mean you would hear a
> concert from 9 ft above the conductor's head, or from ears that are 18 ft
> apart, or some such reductio ad absurdum.
A nonsensical argument. Accuracy would mean you'd hear, accurately,
what an average listener heard at a predetermined point in the audience.
The placement of the microphones is irrelevant - they do not define
the point of accuracy, they are positioned as needed to provide the
greatest accuracy, relative to the actual performance, at the
predetermined location. Only in the thought experiment Dick provided,
that I replied to, would the mike location and the "accuracy point"
coincide, and then only if the method *worked*, the fallacy of which was
the point of the example.
> I have described the idea as
> close-miking the soundstage, because we are recording not just the actual
> instruments but also the early reflected sound from the sidewalls, the most
> important reflections, and also a hint of the reverberant field. All of this
> info mixes with the playback acoustic to give you the realism of sitting
> there with them in front of you, and also the "flavor" of the live acoustic
> space.
>
> OK, so it wasn't very brief, but there is a lot more to it than even that.
> Bottom line, AE is correct that it is possible to record for greater
> realism, but the techniques and reasons may be surprising to both of you,
> and progress toward a goal of greater realism may take a path a little -
> well, a lot - different from what most of us assume. That path is NOT a
> search for greater and greater "accuracy," but rather trying to work with
> and understand what we are actually doing with a field-type system, which is
> making sound in rooms, not making sound in your ears.
>
OK, I can summarize our arguments much more succinctly: You want
playback that sounds "real" to you, irrespective of whether it resembles
the actual recorded event or not. You don't think directional
information is lost in recording, because you create your own, in your
room, in a manner that pleases you, and sounds real - to you - and then
say "see, it's there!", which it demonstrably is not. Clearly, once the
playback is untethered from the actual event, then "realism" is strictly
a matter of your preference. There is no reference, let alone definable
quantitative or qualitative evaluation parameters.
I, on the other hand, want to hear, to the extent possible, a playback
that sounds more like the event, knowing that such accuracy is not truly
attainable, but choosing equipment and setup parameters that get me as
close as possible. I'm stuck with the same evaluation parameters
challenge as you, but I at least have a reference that exists outside of
my own very individual head.
Keith
Gary Eickmeier
April 10th 13, 04:03 PM
AE -
I don't doubt your knowledge and research on multitrack optical recorders, I
am just pointing out that you do not record original sound for music,
effects, and dialog or anything else simultaneously on one machine. Except
maybe in post production, as you say, for ease in doing the mix. I don't
know what all machines they had for doing the mix and don't really care that
much - it changes too fast. I sat in on a mix one fine time at a big
production house. The dubbing stage was like a little movie theater, only
with an engineer at a huge board rather than an audience. Behind him there
was a glass wall with a room behind it that had all of the sync film
players. Each reel held 10 minutes, I think, and that is about all a mixer
could concentrate on at one time anyway. Controls may have been programmable as
well, for when to come up and down. I did much the same thing in my amateur
filmmaking, using a couple of fullcoat recorders and the sync cassette
player running together. Except I had to do the whole film at one sitting,
because I had no way of inserting cleanly after a take.
The biggest change that came along for me was crystal sync, in which all of
the recorders and cameras ran at a crystal controlled constant speed, and so
didn't need a sync cord running between them. Today, with video and digital
recording, all recorders automatically run at the same precise speed, so no
worries about sync ever. I can shoot an entire wedding for over an hour and
sync up the sound with the picture later at home and it stays in sync the
whole time. And when you think about the digital projection revolution and
what it means for distribution and costs and storage and back-breaking
labor - whew! They used to deliver a movie - to every theater for every
title - several heavy cans of 35 or 70 mm film that had to then be toted up
to the projection room and set to go at showtime. I think they sometimes
spliced it together for projection, but the main routine was to wait for the
marks to come up and hit play on the next machine. Today they deliver the
whole movie on a hard drive. No changing reels and no focus problems and no
threading a projector.
And footnote added - stereo was invented and well understood long before
1947. It wasn't done by the film people this time, but the film people are
usually at the forefront of new technologies well before pure audio people.
Example, stereo and surround sound.
Got any more jazz recordings? I just finished my Concert Band recording
yesterday. I got the singer's voice off the board, so I could use a clean
feed. It still has all of the echo from the hall sound, but my dry track of
the announcers and singers really adds a touch of clarity.
Gary
Gary Eickmeier
April 10th 13, 04:04 PM
KH wrote:
> OK, I can summarize our arguments much more succinctly: You want
> playback that sounds "real" to you, irrespective of whether it
> resembles the actual recorded event or not. You don't think
> directional information is lost in recording, because you create your own,
> in your
> room, in a manner that pleases you, and sounds real - to you - and
> then say "see, it's there!", which it demonstrably is not. Clearly, once
> the playback is untethered from the actual event, then "realism" is
> strictly a matter of your preference. There is no reference, let alone
> definable quantitative or qualitative evaluation parameters.
>
> I, on the other hand, want to hear, to the extent possible, a playback
> that sounds more like the event, knowing that such accuracy is not
> truly attainable, but choosing equipment and setup parameters that
> get me as close as possible. I'm stuck with the same evaluation
> parameters challenge as you, but I at least have a reference that exists
> outside
> of my own very individual head.
>
> Keith
Thanks for that, and may I respond with my argument against? This will be
both theoretical and subjective results oriented, which is the best I can
do.
THEORETICAL: Assuming that you are saying that you want to record a sort of
sound "picture" of a live performance, as if the playback will then cast
that picture back to your ears and let you hear "into" another acoustic
space - how am I doing - you will then play it back on these really accurate
speakers which have been placed at an angle that will complement the
recorded angles, in a space that is deadened down to some practical extent
so that it doesn't dilute the recorded acoustic too much with its own sound.
OK?
So my objection on theoretical grounds is that this playback model with two
point sources (or three, doesn't matter) will have a sound of its own
despite your attempts to eliminate your room from the nuisance variables,
and will therefore CHANGE the spatial characteristics of the original
model - think of direct-to-reflected ratios, early reflected sound coming
from different points in space than the direct sound, and the full
reverberant field - to those of your playback system - two highly
directional points in space surrounded by a void. Your problem on the
theoretical level is how - by what theorem or scheme or mechanism - do you
expect to get those spatial patterns of the original back again? If you say
you want the sound from each side to enter your ears and fool you into
hearing the much larger space, then you are confusing this system with
binaural, which IS an ear input system. Complaining that the stereophonic
system has "lost" those spatial characteristics is either a dodge or an
admission that your theory, or paradigm does not work. Using the crosstalk
cancellation idea is yet another confusion with binaural. It gets the sound
out of the speakers as obvious sources, but it also spreads the sound
artificially in a horseshoe pattern around you, where it doesn't belong.
SUBJECTIVE OBSERVATION: I have observed over many years of evaluating
imaging and speaker directivity that if we do it your way, with your theory
of stereo, what results is that ALL of the recorded direct, early reflected,
and reverberant sound are heard to come from those two points in space that
are your speakers, plus of course the instruments in between the speakers,
and any sounds that are extreme left or right on the stage are heard as
coming right from those speakers, causing you to hear them as real sources,
artificial sources in your room. If you would agree that a desirable goal
would be to have the speakers disappear from the soundscape, then you are
purposely running counter to that goal, for mistaken theoretical reasons, or
mistaken ideas on how the system works.
In my Image Model Theory with the more omnidirectional speakers in a large,
good sounding room, with center and surround speakers to support the
reverberant field reconstruction, what I get is the speakers completely
disappearing - NO sound is heard as coming directly from any speaker - and a
set of aerial images of instruments coming from a region behind and beside
the actual speakers, magically placing them at acoustic points in my room
and sounding very much as if they are right there in front of me. I also get
the early reflected sound that was recorded sounding from the front and side
walls, just like live, giving the impression that there is a decoding effect
going on in which my model is placing all of the elements of the recorded
sounds and acoustics coming once again from appropriate locations spatially
within my room. This happens for reasons that are the same as why they
happen live, because delayed extreme right or left sounds are actually
bounced from similar angles on playback to those that were recorded.
Obviously, this is talking about the frontal soundstage only, and not the
full reverberant field, which condition is the same for both of us. We can
both support all that with surround sound speakers and either discrete
recording or processing. But the idea is the same - the object on playback
is the reconstruction of a realistic model of the live fields, not casting
the recorded sound back toward your ears!
My playback will sound like a model of the live sound fields, the size and
shape of my own room superimposed upon the recorded acoustic, the resultant
sound being like a 30-70 mix of the live vs the local, leaning toward the
live - as if you were sticking your head into a three dimensional model, or
diorama of the concert hall. If it is a smaller group like a jazz trio, it
will sound like they are right there in the room with you. Your playback (in
my experience with similar) will sound like a rectangular hole cut in a wall
separating you from the concert hall, but placed maybe halfway up, kind of
like a widescreen movie that is limited right and left and is not 3D. You
will be hearing more of the recorded acoustic space than I, but coming from
this distorted set of incident angles represented by the separation of the
speakers.
Theoretical bull**** aside now, to get back to AE's OP, we can "retrieve" a
very realistic sound on playback if the recording contains some decent
proportions of the important images that we need for reconstruction, and we
can reconstruct those images if we study the problem as a field-type system
rather than an ear input system.
It is not a system of "accuracy" because the recording does not contain this
imagined "picture" of another acoustic space as if from a position that you
would want to listen from in the concert hall. So if we continue to try to
retrieve this non-existent accuracy by using directional speakers in a dead
room the sound will all collapse to the speaker boxes in front of us,
destroying the suspension of disbelief, and we will never reach the kind of
realism that I am enjoying every day with my mistaken ideas of how stereo
works.
Ya pays your money and takes your choice.
Gary Eickmeier
KH
April 11th 13, 12:07 PM
On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
> KH wrote:
>
>> OK, I can summarize our arguments much more succinctly: You want
>> playback that sounds "real" to you, irrespective of whether it
>> resembles the actual recorded event or not. You don't think
>> directional information is lost in recording, because you create your own,
>> in your
>> room, in a manner that pleases you, and sounds real - to you - and
>> then say "see, it's there!", which it demonstrably is not. Clearly, once
>> the playback is untethered from the actual event, then "realism" is
>> strictly a matter of your preference. There is no reference, let alone
>> definable quantitative or qualitative evaluation parameters.
>>
>> I, on the other hand, want to hear, to the extent possible, a playback
>> that sounds more like the event, knowing that such accuracy is not
>> truly attainable, but choosing equipment and setup parameters that
>> get me as close as possible. I'm stuck with the same evaluation
>> parameters challenge as you, but I at least have a reference that exists
>> outside
>> of my own very individual head.
>>
>> Keith
>
> Thanks for that, and may I respond with my argument against? This will be
> both theoretical and subjective results oriented, which is the best I can
> do.
>
> THEORETICAL: Assuming that you are saying that you want to record a sort of
> sound "picture" of a live performance, as if the playback will then cast
> that picture back to your ears and let you hear "into" another acoustic
> space - how am I doing - you will then play it back on these really accurate
> speakers which have been placed at an angle that will complement the
> recorded angles, in a space that is deadened down to some practical extent
> so that it doesn't dilute the recorded acoustic too much with its own sound.
> OK?
Somewhat the general gist, dismissive phrasing aside.
> So my objection on theoretical grounds is that this playback model with two
> point sources (or three, doesn't matter) will have a sound of its own
> despite your attempts to eliminate your room from the nuisance variables,
> and will therefore CHANGE the spatial characteristics of the original
> model -
Yes, and...
> think of direct to reflected ratios, early reflected sound coming
> form different points in space than the direct sound, and the full
> reverberant field - to those of your playback system - two highly
> directional points in space surrounded by a void.
Well, no. They are not "highly" directional, nor are they surrounded by
"a void". They have a great deal of information between them, in
addition to the unavoidable reverberant field around them. So...
> Your problem on the
> theoretical level is how - by what theorem or scheme or mechanism - do you
> expect to get those spatial patterns of the original back again?
Uhmm, you don't. Have you not paid any attention to what I'm saying?
That directional information is GONE. By taking speakers that have a
*specific*, not as you would claim, "highly", directional radiation
pattern, and placing them in locations, at angles, to provide an
illusion of the acoustic in the live event, one can achieve a pretty
fair recreation.
> If you say
> you want the sound from each side to enter your ears and fool you into
> hearing the much larger space, then you are confusing this system with
> binaural, which IS an ear input system. Complaining that the stereophonic
> system has "lost" those spatial characteristics is either a dodge or an
> admission that your theory, or paradigm does not work. Using the crosstalk
> cancellation idea is yet another confusion with binaural. It gets the sound
> out of the speakers as obvious sources, but it also spreads the sound
> artificially in a horseshoe pattern around you, where it doesn't belong.
Good grief! When will you get off the "binaural" dodge? No one here is
conflating stereo and binaural except you. *NO* method, theory, or
floobie dust will "work" to bring back information that is NOT in the
recording.
> SUBJECTIVE OBSERVATION: I have observed over many years of evaluating
> imaging and speaker directivity that if we do it your way, with your theory
> of stereo, what results is that ALL of the recorded direct, early reflected,
> and reverberant sound are heard to come from those two points in space that
> are your speakers, plus of course the instruments in between the speakers,
> and any sounds that are extreme left or right on the stage are heard as
> coming right from those speakers, causing you to hear them as real sources,
> artificial sources in your room. If you would agree that a desirable goal
> would be to have the speakers disappear from the soundscape, then you are
> purposely running counter to that goal, for mistaken theoretical reasons, or
> mistaken ideas on how the system works.
No, you are mistakenly assuming that your experience, and your
interpretation of that experience is universal. It is not. You think
no one in the history of audio, save for you, has experienced a setup
where the speakers - on a good recording - disappear in space? Allow me
to disabuse you of that misconception.
> In my Image Model Theory with the more omnidirectional speakers in a large,
> good sounding room, with center and surround speakers to support the
> reverberant field reconstruction,
No - and this is where you are simply, and demonstrably WRONG. You are
*constructing* a reverberant field, you are NOT REconstructing any 3-d
field from a 2-d recording.
Please explain the physics that would allow that. Any ideas at all?
Tell me how the 3-d spatial information is encoded into a 2-d signal?
Yes, with 2 or more channels, you can simulate 3-d to an extent, but it
isn't accurate. Doesn't mean it isn't good, or realistic - it most
definitely can be both.
> what I get is the speakers completely
> disappearing - NO sound is heard as coming directly from any speaker - and a
> set of aerial images of instruments coming from a region behind and beside
> the actual speakers, magically placing them at acoustic points in my room
> and sounding very much as if they are right there in front of me. I also get
> the early reflected sound that was recorded sounding from the front and side
> walls, just like live, giving the impression that there is a decoding effect
> going on in which my model is placing all of the elements of the recorded
> sounds and acoustics coming once again from appropriate locations spatially
> within my room. This happens for reasons that are the same as why they
> happen live, because delayed extreme right or left sounds are actually
> bounced from similar angles on playback to those that were recorded.
That is simply impossible. The reverberant field in the recording has
no directional information, and although you can bounce, or reflect,
signals from all over the room, you are, for example, bouncing input
that originally came from rear left in all directions - not from "the
appropriate directions". All directions.
>
> Obviously, this is talking about the frontal soundstage only, and not the
> full reverberant field, which condition is the same for both of us. We can
> both support all that with surround sound speakers and either discrete
> recording or processing. But the idea is the same - the object on playback
> is the reconstruction of a realistic model of the live fields, not casting
> the recorded sound back toward your ears!
In your opinion.
>
> My playback will sound like a model of the live sound fields, the size and
> shape of my own room superimposed upon the recorded acoustic, the resultant
> sound being like a 30-70 mix of the live vs the local, leaning toward the
> live - as if you were sticking your head into a three dimensional model, or
> diorama of the concert hall. If it is a smaller group like a jazz trio, it
> will sound like they are right there in the room with you. Your playback (in
> my experience with similar) will sound like a rectangular hole cut in a wall
> separating you from the concert hall, but placed maybe halfway up, kind of
> like a widescreen movie that is limited right and left and is not 3D. You
> will be hearing more of the recorded acoustic space than I, but coming from
> this distorted set of incident angles represented by the separation of the
> speakers.
You are simply wrong. That is not what I hear, as you've been told many
times. You assume, based on your tastes, and your guess as to what my
system sounds like, that you know what *I* hear.
> Theoretical bull**** aside now, to get back to AE's OP, we can "retrieve" a
> very realistic sound on playback if the recording contains some decent
> proportions of the important images that we need for reconstruction, and we
> can reconstruct those images if we study the problem as a field-type system
> rather than an ear input system.
Yes, although your method for creation is certainly not the only, nor,
IMO, the better of the approaches that can be taken. And it is
simulation, not reconstruction.
> It is not a system of "accuracy"
Amen
> because the recording does not contain this
> imagined "picture" of another acoustic space as if from a position that you
> would want to listen from in the concert hall. So if we continue to try to
> retrieve this non-existent accuracy by using directional speakers in a dead
> room the sound will all collapse to the speaker boxes in front of us,
No it doesn't. You either don't like the sound that most find realistic,
or you haven't heard it. You likely are so used to the overblown fake
"acoustic" you create by myriad reflections that you simply think
anything else is just "dead". AE's original "sound splashed all over"
is a very apt description of what many, if not most of us *HEAR* in the
type of system you find to be the hallmark of realism. My soundfield
sounds pretty good to me, and it certainly does not "collapse" to the
speaker boxes. I'd be pretty unhappy if that were the case, and I'm
actually quite satisfied.
> destroying the suspension of disbelief, and we will never reach the kind of
> realism that I am enjoying every day with my mistaken ideas of how stereo
> works.
If you're enjoying it, more power to you. There may be many more just
like you that would enjoy your system. But continuing to derogate all
who disagree, or have other tastes, to the status of "poor dumb
*******s", as you just did, will ensure the continuing irrelevance of
your "theory", and ensure it's place in the dustbin of history.
Keith
Gary Eickmeier
April 12th 13, 02:05 PM
KH wrote:
> On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
>> THEORETICAL: Assuming that you are saying that you want to record a
>> sort of sound "picture" of a live performance, as if the playback
>> will then cast that picture back to your ears and let you hear
>> "into" another acoustic space - how am I doing - you will then play
>> it back on these really accurate speakers which have been placed at
>> an angle that will complement the recorded angles, in a space that
>> is deadened down to some practical extent so that it doesn't dilute
>> the recorded acoustic too much with its own sound. OK?
>
> Somewhat the general gist, dismissive phrasing aside.
You've got me curious - what dismissive phrasing?
> Uhmm, you don't. Have you not paid any attention to what I'm saying?
> That directional information is GONE. By taking speakers that have a
> *specific*, not as you would claim, "highly", directional radiation
> pattern, and placing them in locations, at angles, to provide an
> illusion of the acoustic in the live event, one can achieve a pretty
> fair recreation.
Curiouser and curiouser - I think I know what you are getting at Keith, but
saying that directional information is lost in a stereo recording hits me
the wrong way. It would be idiotic to state that a single microphone gives
no directional information, so I'm sure that is not what you mean. Perhaps
it has something to do with the well-known phenom that different miking
techniques give different apparent perspectives on playback, and also that
different speaker setups do somewhat the same thing. But all of that is why
I say that we must reconstruct the stereo image on playback. It has to be
contained in the recording and it has to be reconstructed on playback, by
placing speakers in rooms and sending channels to them and experiencing the
total result of speaker and room interface.
> No - and this is where you are simply, and demonstrably WRONG. You
> are *constructing* a reverberant field, you are NOT REconstructing
> any 3-d field from a 2-d recording.
>
> Please explain the physics that would allow that. Any ideas at all?
> Tell me how the 3-d spatial information is encoded into a 2-d signal?
> Yes, with 2 or more channels, you can simulate 3-d to an extent, but
> it isn't accurate. Doesn't mean it isn't good, or realistic - it most
> definitely can be both.
Let's talk about that for a moment. Just indulge me. As you have so
bravely - and courteously - up to this point.
The three dimensions of which you speak are height, width, and depth. The
width dimension I think we all agree is encoded in the stereo information
from the summing localization from the multiple microphones used. We can
perceive left, center, right, and anything in between - but only if we
reconstruct that on playback by placing the speakers left and right with
some separation. That used to go without saying, but now is worth pointing
out. If your wife tries to make you stick one speaker behind the sofa and
the other on the bookshelf above, you point out that you must reconstruct
the width information contained in the recording by placing the speakers in
a certain way, or it won't work.
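If you want rough numbers for how that width gets reconstructed, the
textbook "stereophonic law of sines" is a handy approximation. Here is a
little Python sketch of it (my own toy setup: speakers at +/-30 degrees,
level differences only, ignoring timing, frequency and the room, so
treat it as an idealization rather than gospel):

import math

SPEAKER_HALF_ANGLE = 30.0   # degrees - a conventional equilateral setup

def phantom_angle(level_diff_db):
    # law of sines: sin(image) = (gL - gR)/(gL + gR) * sin(speaker angle)
    gl = 10 ** (level_diff_db / 20.0)   # left gain relative to right (= 1.0)
    gr = 1.0
    s = (gl - gr) / (gl + gr) * math.sin(math.radians(SPEAKER_HALF_ANGLE))
    return math.degrees(math.asin(s))

for db in (0, 3, 6, 10, 20):
    print("L louder by %2d dB -> image about %4.1f deg toward the left speaker"
          % (db, phantom_angle(db)))

Crude as it is, it shows why a pure level difference between the channels
can steer a phantom image anywhere between the speakers - and why the two
speakers have to be physically spread apart before any of that summing
localization can happen.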
The height we aren't real concerned with, because the sources are all at the
same level in front of us. There was a little interest in full periphony
with Ambisonics, but it didn't really take off. So we place the speakers
at about the same level as the instruments would be and press on -
except for an interesting psychoacoustic effect wherein certain frequencies
can seem to be higher up than others, like the horns seem to come from a
little higher than the drums or others. We can also be fooled into hearing
level sound from speakers that are hung from the ceiling. I guess this is
mainly because we have no terrific height perception except from moving our
heads, and if the installation is not done so poorly that the reflecting
surfaces nearby call attention to the localization of the speakers
themselves, our brain is satisfied that the orchestra must be level with us,
where it belongs. Still, we usually just place the speakers there for
obvious reasons.
Now that bugaboo depth. We have no mechanism for detecting depth, either,
except by moving our heads a little or moving around a little in a closed
environment - sound in rooms gives more localization information than
outdoors or anechoically. In experiments, if two sounds are played at
different depths, but the farther one is compensated with increased gain,
you can't tell the depth. Within a room, however, we can hear to a great
extent a source's position with respect to the walls around it by the
reflection pattern. Moving around even a little helps with that, but
basically we can sense if something is right up against the wall or spaced
out from it. Audiophile experience over a long period has led us to placing
the speakers well out from the walls to give that impression of depth due to
the simple observations above - that you have placed the reconstructed sound
sources in your three dimensional space so that they have some degree of
depth to the sound! Then if the recording contains some near and some far
instruments, a learned response tells us the difference because "far" sounds
different than "near" from loudness and reverberance etc etc.
Oh, there he goes again, fakey fakey fakey! He wants to build a little model
of the live soundscape by placing speakers like a pop-up book. Sorry, but
ya, that's pretty much the way it is.
> That is simply impossible. The reverberant field in the recording has
> no directional information, and although you can bounce, or reflect,
> signals from all over the room, you are, for example, bouncing input
> that originally came from rear left in all directions - not from "the
> appropriate directions". All directions.
NO - if the recording properly contains sound that was bouncing off the
left side wall of the concert hall from those instruments on the left side
of the orchestra, and if those sounds are played on a left channel speaker
that has some output that bounces off the left wall of your room, you get
that sound coming from the appropriate direction - from points in space that
are different from the primary sound, which is coming from the speaker first.
Precedence effect, but that is getting too involved for now. Right now just
imagine a light bulb on the left side of the room. It shines (bounces) more
of its output from the left side wall than any others. NOT everywhere.
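You can put numbers on the light bulb picture with the usual
image-source construction. A rough sketch (room and speaker coordinates
invented for the example, one first-order bounce off the left wall only):

import math

SPEED_OF_SOUND = 343.0  # m/s

listener = (2.5, 4.0)       # metres; the left wall is the plane x = 0
left_speaker = (1.0, 1.0)

# mirror the speaker about the left wall to get the image source
image_source = (-left_speaker[0], left_speaker[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

direct = dist(left_speaker, listener)
reflected = dist(image_source, listener)     # path length via the wall bounce
delay_ms = (reflected - direct) / SPEED_OF_SOUND * 1000.0

print("direct path    : %.2f m" % direct)
print("reflected path : %.2f m, apparently from %s" % (reflected, image_source))
print("extra delay    : %.1f ms" % delay_ms)

The bounce arrives a few milliseconds after the direct sound and from a
different point in space - exactly the precedence-effect situation I
mean, not sound "splashed everywhere."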
Footnote, there is a lot more to speaker positioning than I can relate in
this short essay.
I understand your statement that info from the rear left (like, 120°) cannot
be made to come from the rear left on playback, but that is not real
important for reconstructing the frontal soundstage, and we can easily turn
to surround sound if you think it is.
> Yes, although your method for creation is certainly not the only, nor,
> IMO, the better of the approaches that can be taken. And it is
> simulation, not reconstruction.
> No it doesn't. You either don't like the sound that most find
> realistic, or you haven't heard it. You likely are so used to the
> overblown fake "acoustic" you create by myriad reflections that you
> simply think anything else is just "dead". AE's original "sound
> splashed all over" is a very apt description of what many, if not
> most of us *HEAR* in the type of system you find to be the hallmark
> of realism. My soundfield sounds pretty good to me, and it certainly
> does not "collapse" to the speaker boxes. I'd be pretty unhappy if
> that were the case, and I'm actually quite satisfied.
>
> If you're enjoying it, more power to you. There may be many more just
> like you that would enjoy your system. But continuing to derogate all
> who disagree, or have other tastes, to the status of "poor dumb
> *******s", as you just did, will ensure the continuing irrelevance of
> your "theory", and ensure it's place in the dustbin of history.
>
> Keith
Fair enough. Results and perception are what is important. We keep on
truckin and make changes here and there as we go and try to figure out what
causes either the improvements or unimprovements. One thing is certain: the
Big Three, as I call them, of radiation pattern, speaker positioning, and
room acoustics that Siegfried Linkwitz asked about in the Challenge to the
AES, are definitely audible, and are the main variables in the making of the
sound that we both - all - perceive in our playback. It is those variables
that must be studied to find out which ideas sound better than others in the
playback of the field-type system called stereophonic sound.
Gary Eickmeier
Dick Pierce[_2_]
April 12th 13, 04:56 PM
Gary Eickmeier wrote:
> Audio_Empire wrote:
>>That's not possible. The microphone is not designed to reproduce
>>anything. it would make a more than lousy speaker.
>>
>>Also recording with a single mike will result in NO spatial information
>>being captured (it's called "monaural sound"). One needs two mikes and
>>the spatial information results from the difference between the two
>>mike signals and THAT takes place in the listeners' ears. We hear in
>>stereo due to differences in phase, time delay, and spatial separation of
>>signals reaching our ears. If done right, those cues can provide a
>>very satisfactory soundstage on a good stereo system.
>
>
> OK, let's take another run at this. I'm sure Mr. Pierce didn't mean to imply
> that microphones can be reproducers; he was making a philosophical point.
No, I implied no such thing, Mr. Eickmeier, I stated
it explicitly. Go do some research on the reciprocity
principle and then come back and try to argue your point.
--
+--------------------------------+
| Dick Pierce |
| Professional Audio Development |
+--------------------------------+
Dick Pierce[_2_]
April 12th 13, 05:42 PM
Audio_Empire wrote:
> On Thursday, April 4, 2013 4:27:06 PM UTC-7, Dick Pierce wrote:
>>Now, take a stereo pair. The situation is really not any better
>>It is geometrically impossible to disambiguate, for example, by
>>any property in the elctrical signals, whether a source of a sound
>>is anywhere on a circle whose center is defined by the line between
>>the two microphones and whose plane is at right angles to that
>>circle. Two omnis some distance apart will generate the SAME
>>electrical signals whether the source is 20 feet ahead, 20 feet
>>above, 20 feet behind or anywhere else on the circle. The same is
>>true of any other mike position. The only position that can be
>>unambiguously recorded is somewhere EXACTLY in between the two,
>>which is arguably not very useful.
>
> Are you talking about omnidirectional microphones here? Because they
> don't work as a stereo pair unless you take extraordinary precautions,
> such as placing a big sound baffle between them as Ray Kimber does for
> his IsoMike recordings.
Take ANY microphone you choose, ANY pattern, next to ANY other
contrivance you want: the output of the microphone is
an electrical signal which is simply the instantaneous magnitude
of the sound pressure at the diaphragm surface. NO directionality
information CAN be encoded: it's a single dimension vs. time.
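If you want to convince yourself, here is a toy simulation - numpy, one
ideal omni, ideal point sources in a free field, with scenes invented
purely for the demonstration, nothing more:

import numpy as np

C = 343.0                      # speed of sound, m/s
fs = 48000
t = np.arange(fs) / fs         # one second of samples
mic = np.array([0.0, 0.0])     # one ideal omni at the origin

def capture(sources):
    # Sum of delayed, 1/r-scaled sinusoids at the mic.  Direction never
    # enters the formula: a pressure mic only sees a scalar vs. time.
    sig = np.zeros_like(t)
    for (x, y), freq, amp in sources:
        r = np.hypot(x - mic[0], y - mic[1])
        sig += (amp / r) * np.sin(2 * np.pi * freq * (t - r / C))
    return sig

scene_a = [(( 3.0,  0.0), 1000.0, 3.0),   # 1 kHz, 3 m dead ahead
           (( 0.0,  3.0), 2000.0, 3.0)]   # 2 kHz, 3 m to the left
scene_b = [(( 0.0, -3.0), 1000.0, 3.0),   # same 1 kHz, 3 m to the right
           ((-3.0,  0.0), 2000.0, 3.0)]   # same 2 kHz, 3 m behind

print(np.allclose(capture(scene_a), capture(scene_b)))   # -> True

Two completely different source layouts, one and the same electrical
signal. That is what a single dimension vs. time means.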
>>Consider also the reciprocity principle as a gedanken (and, as a
>>real-world excercise, if your want). Record something from a
>>complex sound field with a microphone of your choosing.
>>Now, play it back through the same microphone. While you're
>>thinking about it, go study up on the reciprocity principle.
>
> If you did do that, say, through a magnetic microphone, it wouldn't
> sound very good I'm afraid. It would likely sound much worse, even,
> than a telephone. And I don't see what this has to do with the subject
> at hand. Microphones are designed to capture sound and turn it into
> an electronic analog of that sound; they are not designed to be
> reproducers.
Please, go read up on the reciprocity principle. It's
a very well established acoustical principle with
solid, hard science behind it. Try Balnkenstock,
try Beranek, try Kinsler and Frey. If you succeed in
convincing THEM they're wrong, then you get to come back
here and continue your argument. Until then, with all due
respect, you're arguing from a point of technical ignorance.
>>Now, if your assertion were correct, recording several sound
>>sources from different directions with a single mike should result,
>>if you insist there is no loss of information, in the sounds that
>>emanate from that same microphone finding their way back to the
>>original locations they were emitted from.
>
> That's not possible. The microphone is not designed to reproduce
> anything. it would make a more than lousy speaker.
B*ll****! Within linear limits of the device, the reciprocity
principle states quite clearly that an acoustical transducer
will make just as good a speaker as it does a microphone.
Again, go research the topic, because your argument simply is not
based on solid, well-established physics. This same principle
is the basis behind high-accuracy independent calibration of
microphones. In one example, you take two microphones: drive
one as a speaker, and measure with the other. The resulting
response of both measured together will have twice the
aberrations of one. It's the basis behind precision independent
microphone calibration systems by the likes of IET, Bruel & Kjaer,
General Radio and others.
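The arithmetic behind "twice the aberrations" takes a few lines to
sketch (a made-up response curve of my own, not any actual B&K or GR
procedure): the source response and the receiver response multiply, so
their dB deviations add.

import numpy as np

freqs = np.array([100.0, 1000.0, 5000.0, 10000.0, 15000.0])
single_dev_db = np.array([-1.0, 0.0, 0.5, -2.0, -4.0])   # toy deviation from flat

H_single = 10 ** (single_dev_db / 20.0)   # linear response of one transducer
H_pair = H_single ** 2                    # one unit driven as source, an identical unit as receiver
pair_dev_db = 20.0 * np.log10(H_pair)     # = 2 * single_dev_db

for f, s, p in zip(freqs, single_dev_db, pair_dev_db):
    print("%7.0f Hz   one unit: %+5.1f dB   pair measured: %+5.1f dB" % (f, s, p))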
Let's take a very simple experiment, practically realizable.
Take a directional microphone of your choosing. Let's just
say it's a cardioid (pick whatever you like). This is an experiment
you can do yourself.
Now, place a speaker 10 ft away on its principal axis playing a 1 kHz
tone sufficient to produce 80 dB at the position of the microphone.
At the same time, place a speaker 120 degrees off axis and play
a 2 kHz tone through it, also producing an SPL of 80 dB at the mike
position. Record the signal.
Do the same, only in the on-axis speaker, play 2 kHz at 72 dB SPL
and in the off-axis speaker, play 1 kHz at 86 dB. Record this signal.
Now, use ANY means you want (other than your memory of the experiment)
and tell me, unambiguously, which recording is which.
If you cannot tell, then this tells me that all information about
the sound field at THAT position was lost by the microphone.
Okay, do the same thing with TWO microphones of your choosing.
Without getting into the details, there are an infinite number
of arrangements of the sources which CANNOT be unambiguously
determined by the resulting pair of electrical signals.
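A back-of-the-envelope version of the same experiment, using an ideal
cardioid pattern and numbers of my own choosing rather than any real
microphone: for any level that ends up on the recording, there is a
whole family of (angle, source level) pairs that would have produced it.

import math

def cardioid_gain_db(angle_deg):
    # ideal cardioid sensitivity 0.5*(1 + cos(theta)), expressed in dB
    g = 0.5 * (1.0 + math.cos(math.radians(angle_deg)))
    return 20.0 * math.log10(g)

recorded_level = 74.0   # dB as it appears in the recorded channel
for angle in (0, 60, 90, 120, 150):
    source_level = recorded_level - cardioid_gain_db(angle)
    print("source at %3d deg off axis at %5.1f dB SPL -> %.1f dB on the recording"
          % (angle, source_level, recorded_level))

Nothing in the recorded signal tells you which of those it actually was.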
> Also recording with a single mike will result in NO spatial information
> being captured (it's called "monaural sound"). One needs two mikes and
> the spatial information results from the difference between the two
> mike signals and THAT takes place in the listeners' ears.
But two mikes are utterly incapable of capturing that information
unambiguously. Take a sound source that is placed such that
it's 60 degrees off of both microphones. Okay, WHICH 60 degrees?
There's an infinite number of positions where the sound source
geometrically satisfies that condition. Which one is it? What in
the electrical signal disambiguates which one it is?
> We hear in
> stereo due to differences in phase, time delay, and spatial separation of
> signals reaching our ears. If done right, those cues can provide a
> very satisfactory soundstage on a good stereo system.
No, the ears do not depend on that. A VERY IMPORTANT factor
you have left out is the HRTF, the Head-Related Transfer
Function: an aberration in the sound field created
by our heads and our outer ear structures (and the rest of
our bodies, for that matter) that is crucial in disambiguating
the problem I cited above.
ANY stereo pair of microphones cannot tell the difference between
a sound source directly in front, directly overhead, directly
behind, or directly below. Blindfolded, I'd bet you could tell
such pretty quickly and you'd get it right the vast majority of
the time (and, yet, I can still find some cases where you might
get fooled). If your ancestors could NOT do this, well, you'd
not be around to have this discussion. The shading effects,
along with the complex path differences and phase scrambling and all
that, are what give you the ability to further encode the phase and
amplitude differences and turn them into real directional information.
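Here are the actual numbers for the front/overhead/behind case - two
ideal omnis 30 cm apart in a free field, a toy geometry chosen only for
the illustration, not any particular recording rig:

import numpy as np

SPEED_OF_SOUND = 343.0
mics = np.array([[0.0, -0.15, 0.0],     # capsule 1
                 [0.0,  0.15, 0.0]])    # capsule 2, 30 cm away along y

sources = {"6 m dead ahead": np.array([ 6.0, 0.0, 0.0]),
           "6 m overhead":   np.array([ 0.0, 0.0, 6.0]),
           "6 m behind":     np.array([-6.0, 0.0, 0.0])}

for name, s in sources.items():
    d = np.linalg.norm(mics - s, axis=1)              # distance to each capsule
    tdiff_us = (d[1] - d[0]) / SPEED_OF_SOUND * 1e6   # inter-capsule time difference
    ldiff_db = 20 * np.log10(d[0] / d[1])             # 1/r level difference
    print("%15s: time diff %+6.1f us, level diff %+5.2f dB" % (name, tdiff_us, ldiff_db))

Identical time and level differences - hence identical electrical
signals - for all three positions. It is your HRTF, not anything in the
pair of channels, that breaks the tie when you listen.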
Extend the experiment one step further. Do the same thing
blindfolded and with one ear plugged. According to your thesis,
you'd lose ALL ability to sense direction, yet there is well
over a century of hearing research that simply contradicts you.
The ability to decode direction is less certain, to be sure,
but it does not vanish like you would assert. And it's
all because of YOUR HRTF, which YOU, inadvertently, started training
yourself to use from the moment you were born.
Now, to your assertions about the reciprocity principle: by YOUR
argument, if I then take the SAME signals recorded from the two
microphones (two microphones of YOUR choosing) and feed them back
through those two microphones (maintaining them in linear operation),
then I should be able to recreate the same original sound field they
captured. I should be able to trace the sound back to the original
sources from which it originally emanated. That would mean that
I should be able to go back and sample the resulting sound field and
find that the 2 kHz tone is being projected in the direction from
which it originally came, and the 1 kHz tone should be projected in
the unique direction it came from. Will this happen?
THAT'S what it means to "recreate the original sound field."
If you want to argue whether that's the point of music
reproduction, or the degree of accuracy that is
sufficient to satisfy the listener, or whether it's all
a parlor trick, that's fine: that's one discussion.
But if you want to make technical claims about the suitability
of an acoustic transducer for such an experiment under the
reciprocity principle, thus essentially denying the validity
of the reciprocity principle, I might caution you that you're
skating on VERY thin ice, technically, and you might not want
to go there.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Audio_Empire[_2_]
April 12th 13, 11:28 PM
On Friday, April 12, 2013 9:42:08 AM UTC-7, Dick Pierce wrote:
> Audio_Empire wrote:
> > On Thursday, April 4, 2013 4:27:06 PM UTC-7, Dick Pierce wrote:
> >>Now, take a stereo pair. The situation is really not any better
> >>It is geometrically impossible to disambiguate, for example, by
> >>any property in the electrical signals, whether a source of a sound
> >>is anywhere on a circle whose center is defined by the line between
> >>the two microphones and whose plane is at right angles to that
> >>circle. Two omnis some distance apart will generate the SAME
> >>electrical signals whether the source is 20 feet ahead, 20 feet
> >>above, 20 feet behind or anywhere else on the circle. The same is
> >>true of any other mike position. The only position that can be
> >>unambiguously recorded is somewhere EXACTLY in between the two,
> >>which is arguably not very useful.
> >
> > Are you talking about omnidirectional microphones here? Because they
> > don't work as a stereo pair unless you take extraordinary precautions,
> > such as placing a big sound baffle between them as Ray Kimber does for
> > his IsoMike recordings.
>
> Take ANY microphone you choose, ANY pattern, next to ANY other
> contrivance you want: the output of the microphone is
> an electrical signal which is simply the instantaneous magnitude
> at the diaphragm surface. NO directionality information CAN
> be encoded: it's a single dimension vs. time.
Well, of course it can't. That can be easily demonstrated by playing
only one channel of a true stereo recording through both stereo
speakers. If you are familiar enough with the recording you will
notice that nothing is missing. The entire ensemble is there. Some
instruments might be slightly attenuated (and depending upon how the
recording was made, they might not), but they are present. There is no
directional information without two microphones, each "viewing" the
performance from a different perspective.
> >>Consider also the reciprocity principle as a gedanken (and, as a
> >>real-world exercise, if you want). Record something from a
> >>complex sound field with a microphone of your choosing.
> >>Now, play it back through the same microphone. While you're
> >>thinking about it, go study up on the reciprocity principle.
> >
> > If you did do that, say, through a magnetic microphone, it wouldn't
> > sound very good I'm afraid. It would likely sound much worse, even,
> > than a telephone. And I don't see what this has to do with the subject
> > at hand. Microphones are designed to capture sound and turn it into
> > an electronic analog of that sound; they are not designed to be
> > reproducers.
>
> Please, go read up on the reciprocity principle. It's
> a very well established acoustical principle with
> solid, hard science behind it. Try Balnkenstock,
> try Beranek, try Kinsler and Frey. If you succeed in
> convincing THEM they're wrong, then you get to come back
> here and continue your argument. Until then, with all due
> respect, you're arguing from a point of technical ignorance.
I understand the reciprocity principle perfectly well, but the way you
described it sounded like you were talking literally. On the other hand,
it's a moot point, because I never said that one microphone picks up any
actual directional information. In fact, in one post I stated that one
microphone (I used an omni for clarity, but any mike will do) cannot
pick up any spatial information. That's why it's called monaural
sound.
> >>Now, if your assertion were correct, recording with a single mike
> >>of several sound sources in different directions should result,
> >>if you insist there is no loss of information, in the sounds that
> >>emanate from that same microphone finding their way back to the
> >>original locations they were emitted from.
I never made such an assertion. If you think I did, then you either
misunderstood my point, or I didn't state it very well, which is
possible.
> > That's not possible. The microphone is not designed to reproduce
> > anything. It would make a worse than lousy speaker.
>
> B*ll****! Within linear limits of the device, the reciprocity
> principle states quite clearly that an acoustical transducer
> will make just as good a speaker as it does a microphone.
No, because within the limits of physics, a microphone diaphragm
cannot move enough air to make even a poor speaker, and the reciprocity
principle never says that it should. Most studio microphones can
accept SPLs approaching 140 dB before distorting, but under no
circumstances could such a microphone reproduce anywhere near 140 dB.
What it can do is this: the diaphragm can MOVE (i.e., be displaced) as
much in response to a driving signal as it would when intercepting a
sound wave of 140 dB in intensity. Whether or not it produces any actual
sound in the room in which it is energized is another story. The
microphone diaphragm should move as much in response to the reciprocal
of an electrical signal it generated as it did when converting the
original acoustical signal to that electrical signal. Ideally, the two
motions would be identical.
> Again, go research the topic, because your argument simply is not
> based on solid, well-established physics. This same principle
> is the basis behind high-accuracy independent calibration of
> microphones. In one example, you take two microphones: drive
> one as a speaker, and measure with the other. The resulting
> response of both measured together will have twice the
> aberrations of one. It's the basis behind precision independent
> microphone calibration systems by the likes of IET, Bruel & Kjaer,
> General Radio and others.
I don't know what argument you are lumbering me with, but I think you
are confusing me with Mr. Eickmeier or perhaps someone else.
> Let's take a very simple experiment, practically realizable.
> Take a directional microphone of your choosing. Let's just
> say it's a cardioid (pick whatever you like). This is an experiment
> you can do yourself.
>
> Now, place a speaker 10' away on its principal axis playing a 1 kHz
> tone sufficient to produce 80 dB at the position of the microphone.
> At the same time, place a speaker 120 degrees off axis and play
> a 2 kHz tone through it, also producing an SPL of 80 dB at the mike
> position. Record the signal.
>
> Do the same, only through the on-axis speaker play 2 kHz at 72 dB SPL
> and through the off-axis speaker play 1 kHz at 86 dB. Record this signal.
>
> Now, use ANY means you want (other than your memory of the experiment)
> to tell me, unambiguously, which recording is which.
>
> If you cannot tell, then this tells me that all information about
> the sound field at THAT position was lost by the microphone.
I agree and have never asserted anything else. The only thing that one
MIGHT be able to tell is when the speaker has been placed in the
microphone's pattern shadow, and even that would be largely
frequency and distance dependent.
> Okay, do the same thing with TWO microphones of your choosing.
> Without getting into the details, there are an infinite number
> of arrangements of the sources which CANNOT be unambiguously
> determined by the resulting pair of electrical signals.
Of course there are.
> > Also recording with a single mike will result in NO spatial information
> > being captured (it's called "monaural sound"). One needs two mikes and
> > the spatial information results from the difference between the two
> > mike signals and THAT takes place in the listeners' ears.
> But two mikes are utterly incapable of capturing that information
> unambiguously. Take a sound source that is placed such that
> it's 60 degrees off of both microphones. Okay, WHICH 60 degrees?
> There's an infinite number of positions where the sound source
> geometrically satisfies that condition. Which one is it? What in
> the electrical signal disambiguates which one it is?
Nothing is perfect, that's a given. But "listen" to yourself. You took
a simple statement by me, where I said that one microphone captures no
spatial information, that you need two mikes for real stereo, and that
all the pair does is pick up the differences in phase or intensity that
result from those two different perspectives. And you argue that I'm
wrong simply because the process is far from perfect. I think we all
know that, Dick.
> > We hear in
> > stereo due to differences in phase, time delay, and spatial separation of
> > signals reaching our ears. if done right, those cues can provide a
> > very satisfactory soundstage on a good stereo system.
>
> No, the ears do not depend on that. A VERY IMPORTANT factor
> you have left out is the HRTF, the Head-Related Transfer
> Function: which is an aberration in the sound field created
> by our heads and our outer ear structures (and the rest of
> our body, for that matter) which is crucial in the disambiguation
> of the problem I cited above.
I left out nothing. The Head-Related Transfer Function is implied by my
statement that the cues we use to pick up directionality occur in the
air, as those cues reach our ears. My statement, though simplified
from your lofty, technical standpoint, is correct as far as it
goes.
> ANY stereo pair of microphones cannot tell the difference between
> a sound source directly in front, directly overhead, directly
> behind, or directly below. Blindfolded, I'd bet you could tell
> such pretty quickly and you'd get it right the vast majority of
> the time (and, yet, I can still find some cases where you might
> get fooled). If your ancestors could NOT do this, well, you'd
> not be around to have this discussion. The shading effects,
> along with the complex path differences and phase scrambling and all
> that, are what give you the ability to further encode the phase and
> amplitude differences and turn them into real directional information.
I have thirty years of location recording experience. I know exactly
what microphones are capable of doing. I've probably miked more
symphony concerts and jazz ensembles than most people have ever even
heard.
> Extend the experiment one step further. DO the same thing
> blindfolded and with one ear plugged. According to your thesis,
> you'd lose ALL ability to sense direction, yet there is well
> over a century of hearing research that simply contradicts you.
> The ability to decode direction is less certain, to be sure,
> but it does not vanish as you would assert. And it's
> all because of YOUR HRTF, which YOU, inadvertently, started training
> yourself to use from the moment you were born.
Again. I don't recall ever saying anything that should cause you or
anyone else to attribute the above attitude to me.
There must be other windmills for you to joust at, Don Quixote.
Gary Eickmeier
April 13th 13, 02:49 PM
Okay, so Mr. Pierce, Keith Howard, and Scott W are surprised that a single
microphone can't record directional information, Pierce thinks that
recordings need to carry HRTF, and microphones can make great speakers.
I think I am done here.
Gary Eickmeier
Dick Pierce[_2_]
April 13th 13, 08:01 PM
Gary Eickmeier wrote:
> Okay, so Mr. Pierce, Keith Howard, and Scott W are surprised that a single
> microphone can't record directional information, Pierce thinks that
> recordings need to carry HRTF, and microphones can make great speakers.
I never said any such thing, Mr. Eickmeier.
> I think I am done here.
You are done if you insist on putting your fantastic misconceptions
into someone else's mouth.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
KH
April 14th 13, 04:47 PM
On 4/12/2013 6:05 AM, Gary Eickmeier wrote:
> KH wrote:
>> On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
>
>>> THEORETICAL: Assuming that you are saying that you want to record a
>>> sort of sound "picture" of a live performance, as if the playback
>>> will then cast that picture back to your ears and let you hear
>>> "into" another acoustic space - how am I doing - you will then play
>>> it back on these really accurate speakers which have been placed at
>>> an angle that will complement the recorded angles, in a space that
>>> is deadened down to some practical extent so that it doesn't dilute
>>> the recorded acoustic too much with its own sound. OK?
>>
>> Somewhat the general gist, dismissive phrasing aside.
>
> You've got me curious - what dismissive phrasing?
Ok, let's start with "...as if the playback will then cast that picture
back to your ears...". You don't see that even while you are 'just'
describing what *you* think is my position, you invariably use language
such as this which both describes, and simultaneously dismisses, the
position? Does this truly escape you?
>
>> Uhmm, you don't. Have you not paid any attention to what I'm saying?
>> That directional information is GONE. By taking speakers that have a
>> *specific*, not as you would claim, "highly", directional radiation
>> pattern, and placing them in locations, at angles, to provide an
>> illusion of the acoustic in the live event, one can achieve a pretty
>> fair recreation.
>
> Curiouser and curiouser - I think I know what you are getting at Keith, but
> saying that directional information is lost in a stereo recording hits me
> the wrong way.
How it "hits you" is irrelevant to the fact that it is, indeed, lost.
> It would be idiotic to state that a single microphone gives
> no directional information, so I'm sure that is not what you mean.
OK, let's end with this statement in our discussion of your "dismissive"
tone. You can restate this sentence more concisely as "If you believe
this you're an idiot". Given that I do, and have, stated this belief
(fact actually, but...) your statement reduces further to a simple
"You're an idiot".
You don't see this?
<snip>
>
>> No - and this is where you are simply, and demonstrably WRONG. You
>> are *constructing* a reverberant field, you are NOT REconstructing
>> any 3-d field from a 2-d recording.
>>
>> Please explain the physics that would allow that. Any ideas at all?
>> Tell me how the 3-d spatial information is encoded into a 2-d signal?
>> Yes, with 2 or more channels, you can simulate 3-d to an extent, but
>> it isn't accurate. Doesn't mean it isn't good, or realistic - it most
>> definitely can be both.
>
> Let's talk about that for a moment. Just indulge me. As you have so
> bravely - and courteously - up to this point.
>
> The three dimensions of which you speak are height, width, and depth. The
> width dimension I think we all agree is encoded in the stereo information
> from the summing localization from the multiple microphones used.
Well, no, I don't think we all agree on that. There is no "width"
information in either channel's data. We agree that on playback of the
two channels, the differential between channels will be perceived as
spatial information. You can pan-pot instruments during mixing to
achieve the same thing, where "width" never existed, so physical "width"
is clearly not a parameter that is encoded in the recording.
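A small Python sketch of that point (a constant-power pan law is assumed;
the test tone and pan position are arbitrary): the "position" of a
pan-potted instrument exists only as an inter-channel level difference, not
as anything captured from a physical layout.

    import numpy as np

    def pan(mono, position):
        """Constant-power pan: position runs from -1 (hard left) to +1 (hard right)."""
        angle = (position + 1) * np.pi / 4        # maps to 0 .. pi/2
        return np.cos(angle) * mono, np.sin(angle) * mono

    tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)   # short 440 Hz burst
    left, right = pan(tone, -0.66)                             # "place" it left of centre

    level_diff_db = 20 * np.log10(np.max(np.abs(left)) / np.max(np.abs(right)))
    print(round(level_diff_db, 1))   # a plain level difference is all the "width" there is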
<snip>
> Oh, there he goes again, fakey fakey fakey! He wants to build a little model
> of the live soundscape by placing speakers like a pop-up book. Sorry, but
> ya, that's pretty much the way it is.
Yes, here we agree.
>> That is simply impossible. The reverberant field in the recording has
>> no directional information, and although you can bounce, or reflect,
>> signals from all over the room, you are, for example, bouncing input
>> that originally came from rear left in all directions - not from "the
>> appropriate directions". All directions.
>
> NO - if the recording properly contains sound that was bouncing off the
> left side wall of the concert hall from those instruments on the left side
> of the orchestra, and if those sounds are played on a left channel speaker
They are. They are played from the right speaker as well. As are all of
the sounds captured from all directions. Please explain how you play
that information only from the left speaker.
> that has some output that bounces off the left wall of your room,
Please explain how you direct only the reflected sound (from the
recording) that bounced off the Left wall of the venue, to bounce off
only the Left wall of your room.
> you get
> > that sound coming from the appropriate direction - from points in space that
> > are different from the primary sound, which is coming from the speaker first.
Please explain how the information is encoded in the electrical
signal to allow the sound reflected off of the Left wall, in the
venue, to be removed from the "primary" sound coming directly from the
speaker.
If you can provide these explanations, please do so. Please refrain
from embellishment, as these three basics comprise the basis for my
disagreement with your theory. The electrical signals from the
microphone, or microphones, do not contain the requisite information to
"direct" the acoustic signals as you imply.
What you describe above does happen, but you fail to address the
concomitant constraints - ALL the reflected sound and ALL of the direct
sound will be reflected off of that left wall of the room as well (not
just the desired reverberant information), so that reflection will
contain far more acoustic information than it did in the venue. The
ratio of direct to reflected sound will be completely different as well.
> Precedence effect, but that is getting too involved for now.
Right.
> Right now just
> imagine a light bulb on the left side of the room. It shines (bounces) more
> of its output from the left side wall than any others. NOT everywhere.
> Footnote, there is a lot more to speaker positioning than I can relate in
> this short essay.
And when sound waves travel at the speed of light, this is a very
different discussion indeed. The analogy, however, is irrelevant to the
discussion. The answers to my questions above are quite relevant.
<snip>
>
> Fair enough. Results and perception are what is important. We keep on
> truckin and make changes here and there as we go and try to figure out what
> causes either the improvements or unimprovements. One thing is certain: the
> Big Three, as I call them, of radiation pattern, speaker positioning, and
> room acoustics that Siegfried Linkwitz asked about in the Challenge to the
> AES, are definitely audible, and are the main variables in the making of the
> sound that we both - all - perceive in our playback. It is those variables
> that must be studied to find out which ideas sound better than others in the
> playback of the field-type system called stereophonic sound.
And no one is arguing that those properties are not important. But there
are an infinite number of configurations of those variables. And
precisely zero of them will provide accurate reproduction of vector
information from a scalar signal.
Keith
Audio_Empire[_2_]
April 14th 13, 04:48 PM
On Saturday, April 13, 2013 12:01:30 PM UTC-7, Dick Pierce wrote:
> Gary Eickmeier wrote:
> > Okay, so Mr. Pierce, Keith Howard, and Scott W are surprised that a single
> > microphone can't record directional information, Pierce thinks that
> > recordings need to carry HRTF, and microphones can make great speakers.
>
> I never said any such thing, Mr. Eickmeier.
>
> > I think I am done here.
>
> You are done if you insist on putting your fantastic misconceptions
> into someone else's mouth.
You gotta admit that the way you asserted reciprocity, it COULD be
easily construed as an assertion that microphones could be used as
speakers and that those speakers would have the exact same properties
as the microphones would have when capturing the sound. I know that
you were talking about the diaphragm displacement when a microphone is
fed an audio signal. Ideally that diaphragm's characteristics should
mimic, exactly, the characteristics it demonstrates when intercepting
a sound field and converting that sound field into an electrical
signal. I knew what you must have been trying to say, but your wording
even threw me, at first.
Dick Pierce[_2_]
April 14th 13, 10:51 PM
Audio_Empire wrote:
> On Saturday, April 13, 2013 12:01:30 PM UTC-7, Dick Pierce wrote:
>>Gary Eickmeier wrote:
>>>I think I am done here.
>>
>>You are done if you insist in putting your fanatstic misconceptions
>>into someone else's mouth.
>
> You gotta admit that the way you asserted reciprocity, it COULD be
> easily construed as an assertion that microphones could be used as
> speakers and that those speakers would have the exact same properties
> as the microphones would have when capturing the sound. I know that
> you were talking about the diaphragm displacement when a microphone is
> fed an audio signal. Ideally that diaphragm's characteristics should
> mimic, exactly, the characteristics it demonstrates when intercepting
> a sound field and converting that sound field into an electrical
> signal. I knew what you must have been trying to say, but your wording
> even threw me, at first.
You mean, like the part where I said:
"Within linear limits of the device, the reciprocity
principle states quite clearly that an acoustical
transducer will make just as good a speaker as it
does a microphone."
And you said:
"No, because within the limits of physics, a microphone
diaphragm cannot move enough air to make even a poor
speaker and the reciprocity principle never says that
it should."
And, you seemed to have COMPLETELY ignored that I said:
"Within linear limits of the device..."
So, I guess what you're saying is that when I said "within
the linear limits of the device," I must have REALLY said
"within the limits of physics" instead. I can see now when
I said "within the linear limits of the device" and you
substituted "within the limits of physics", how my wording
threw you. I think.
Now both you AND Mr. Eickmeier seem to have fantasized that
I claimed that a microphone is capable of producing high sound
pressure levels. I can see now that since I never, ever said
that, how one could be confused into thinking that I did.
One of us is missing something here. Maybe it's me, missing
the words I NEVER said.
I can now see that the wording I never used could confuse anyone!
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Gary Eickmeier
April 15th 13, 11:29 AM
Dick Pierce wrote:
> Now both you AND Mr. Eickmeier seem to have fantasized that
> I claimed that a microphone is capable of producing high sound
> pressure levels. I can see now that since I never, ever said
> that, how one could be confused into thinking that I did.
>
> One of us is missing something here. Maybe it's me, missing
> the words I NEVER said.
>
> I can now see that the wording I never used could confuse anyone!
It's not just the sound pressure level difference that threw us, and that is
not the only factor that makes microphones and speakers different, and not
reciprocal. The acceptance patterns of microphones and their use and
positioning in recording a stereo performance have nothing to do with the
radiation patterns of speakers and their positioning and use on playback of
those signals.
If you think otherwise, perhaps you could explain how a coincident pair or
single point stereo mike could be used as a stereo speaker, if only it were
loud enough.
Gary Eickmeier
Gary Eickmeier
April 15th 13, 12:52 PM
KH wrote:
> On 4/12/2013 6:05 AM, Gary Eickmeier wrote:
>> KH wrote:
>>> On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
Keith -
This is getting a little out of control. You are taking an attitude about me
that puts a negative whammy on every syllable I say. I did not mean anything
dismissive or ugly toward you. Sometimes I speak more conversationally than
engineeringly, but I am thinking my audience is regular folk like me who are
interested in a subject.
>>>> THEORETICAL: Assuming that you are saying that you want to record a
>>>> sort of sound "picture" of a live performance, as if the playback
>>>> will then cast that picture back to your ears and let you hear
>>>> "into" another acoustic space - how am I doing - you will then play
>>>> it back on these really accurate speakers which have been placed at
>>>> an angle that will complement the recorded angles, in a space that
>>>> is deadened down to some practical extent so that it doesn't dilute
>>>> the recorded acoustic too much with its own sound. OK?
>>>
>>> Somewhat the general gist, dismissive phrasing aside.
>>
>> You've got me curious - what dismissive phrasing?
>
> Ok, let's start with "...as if the playback will then cast that
> picture back to your ears...". You don't see that even while you are
> 'just' describing what you think is my position, you invariably use
> language such as this which both describes, and simultaneously
> dismisses, the position? Does this truly escape you?
Yes. I didn't read it, or intend it, as anything dismissive or critical.
How would you state it? What hits you wrong about "casts that picture back
to your ears"?
>> Curiouser and curiouser - I think I know what you are getting at
>> Keith, but saying that directional information is lost in a stereo
>> recording hits me the wrong way.
>
> How it "hits you" is irrelevant to the fact that it is, indeed, lost.
>
>> It would be idiotic to state that a single microphone gives
>> no directional information, so I'm sure that is not what you mean.
>
> OK, let's end with this statement in our discussion of your
> "dismissive" tone. You can restate this sentence more concisely as
> "If you believe this you're an idiot". Given that I do, and have,
> stated this belief (fact actually, but...) your statement reduces
> further to a simple "You're an idiot".
>
> You don't see this?
No. Permit me to interpret a paragraph that has been a major embarrassment
to me.
Everyone knows that the earth is round. It would be idiotic to state that
the earth is not flat to an audience of peers, so if you said that I would
have to assume you were getting at something deeper. AE and I have already
said that we wonder why you and Pierce keep going on about how a single
microphone can't record direction. We tell you that everyone knows that, so
what's your point? It takes two microphones to record stereo, no?
So when I say it would be idiotic to bring up that a single microphone can't
record stereo, I mean that everyone knows that, so why say it? I do not mean
that I think that it does record direction. Is that clear enough?
Now let me help both of you with this direction dilemma. Pierce says that
even with the stereo pair, we cannot decipher whether the sounds are coming
from 90 degrees up, straight behind, or 90 degrees down, or in front of us -
this information is all lost from the recording. I point out that we solve
this mystery by physical reconstruction of the recorded sound fields. WE
PLACE THE SOURCES straight ahead in front of us, in positions which are
geometrically similar to those of the instruments. We have all the
information we NEED to reconstruct a realistic image of the performance as
it existed. Can you see that my way just a little?
>> Let's talk about that for a moment. Just indulge me. As you have so
>> bravely - and courteously - up to this point.
>>
>> The three dimensions of which you speak are height, width, and
>> depth. The width dimension I think we all agree is encoded in the
>> stereo information from the summing localization from the multiple
>> microphones used.
>
> Well, no, I don't think we all agree on that. There is no "width"
> information in either channel's data. We agree that on playback of the
> two channels, the differential between channels will be perceived as
> spatial information. You can pan-pot instruments during mixing to
> achieve the same thing, where "width" never existed, so physical
> "width" is clearly not a parameter that is encoded in the recording.
This is getting silly again. Width means the lateral positioning of the
instruments. Width of the soundstage. That is what stereo is all about. Now
what are you talking about, there is no width in stereo?
>
>>> That is simply impossible. The reverberant field in the recording
>>> has no directional information, and although you can bounce, or
>>> reflect, signals from all over the room, you are, for example,
>>> bouncing input that originally came from rear left in all
>>> directions - not from "the appropriate directions". All directions.
>>
>> NO - if the recording properly contains sound that was bouncing off
>> the left side wall of the concert hall from those instruments on the
>> left side of the orchestra, and if those sounds are played on a left
>> channel speaker
>
> They are. They are played from the right speaker as well. As are all
> of the sounds captured from all directions. Please explain how you
> play that information only from the left speaker.
Again, this is really simple. Left sidewall sounds are recorded more
strongly in the left microphone, and are therefore stronger from the left
speaker than the right. Similar to the instruments on the left side of the
band, in intensity stereo. We do indeed play the left channel information
only from the left speaker.
>
>> that has some output that bounces off the left wall of your room,
>
> Please explain how you direct only the reflected sound (from the
> recording) that bounced off the Left wall of the venue, to bounce off
> only the Left wall of your room.
Stronger from the left wall, due to the stronger recorded signal from that
side.
>
>> you get
>> that sound coming from the appropriate direction - from points in
>> space that are different from the primary sound, which is coming from
>> the speaker first.
>
> Please explain how the information is encoded in the
> electrical signal to allow for the sound reflected off of the Left
> wall, in the venue, to be removed from the "primary" sound coming
> directly from the speaker.
It is not removed. But if we permit some of the output of the left speaker
to bounce off the left wall, there is what they call a "spatial broadening"
effect from recorded sounds that have some reverberance in them (from the
venue). This is easily audible if you have heard directional systems vs omni
speaker systems. Have you not ever noticed that?
> If you can provide these explanations, please do so. Please refrain
> from embellishment, as these three basics comprise the basis for my
> disagreement with your theory. The electrical signals from the
> microphone, or microphones, do not contain the requisite information
> to "direct" the acoustic signals as you imply.
Well, in a way they do. The recording is in stereo, and by the time we are
finished placing the channel sounds where they belong in playback, directing
those acoustic signals where they belong - i.e. from locations similar to
the live model - then we have a very realistic playback.
> What you describe above does happen, but you fail to address the
> concomitant constraints - ALL the reflected sound and ALL of the
> direct sound will be reflected off of that left wall of the room as
> well (not just the desired reverberant information), so that
> reflection will contain far more acoustic information than it did in
> the venue. The ratio of direct to reflected sound will be completely
> different as well.
No, there is a difference between channels. That is part of what stereo is
all about. If playback has been properly reconstructed the range of ratios
of direct to reflected will also be similar to the live sound. I think you
may be on the verge of discovering Image Model Theory. Just keep thinking
about all this stuff and try to stay with me, rather than resisting all the
time.
>> Right now just
>> imagine a light bulb on the left side of the room. It shines
>> (bounces) more of its output from the left side wall than any
>> others. NOT everywhere. Footnote, there is a lot more to speaker
>> positioning than I can relate in this short essay.
>
> And when sound waves travel at the speed of light, this is a very
> different discussion indeed. The analogy, however, is irrelevant to
> the discussion. The answers to my questions above are quite relevant.
And I sincerely hope I have answered most of it!
>> Fair enough. Results and perception are what is important. We keep on
>> truckin and make changes here and there as we go and try to figure
>> out what causes either the improvements or unimprovements. One thing
>> is certain: the Big Three, as I call them, of radiation pattern,
>> speaker positioning, and room acoustics that Siegfried Linkwitz
>> asked about in the Challenge to the AES, are definitely audible, and
>> are the main variables in the making of the sound that we both - all
>> - perceive in our playback. It is those variables that must be
>> studied to find out which ideas sound better than others in the
>> playback of the field-type system called stereophonic sound.
>
> And no one is arguing that those properties are not important. But
> there are an infinite number of configurations of those variables. And
> precisely zero of them will provide accurate reproduction of vector
> information from a scalar signal.
OK, this is where you have to formulate a theory of reproduction that
explains just how this SHOULD be done. How does stereo work? There are many
ideas in the literature, such as the Blumlein patent, in which direction is
encoded by means of intensity, and the angles are retrieved on playback by
positioning the speakers at approx 60 degree angles such that the image will
form at the head of the listener. There is the Bell Labs Curtain of Sound in
which two or three speakers are just simulating the ideal system, which
would be an infinite number of speakers and microphones positioned along an
imaginary curtain in front of the performance. There is Sonic Holography, in
which crosstalk must be cancelled to separate the channels from each other
at the ears. There is Ambisonics, recorded with a single point Soundfield
microphone so that sounds from all directions might be encoded to come from
the correct direction at the head of the listener.
Nowadays these guys (in Europe I think) are trying for "Wave Field
Synthesis," which is a complex attempt to encode precise directionality from
the soundstage. Maybe you would be attracted to that. I wouldn't because it
says nothing about the reverberant field.
No matter which of these you tend toward, all engineers and theorists who
wish to tell us how to set up a playback system need to use something for a
model of what is happening with the recorded signals and their relationship
to the playback. Otherwise we are just flailing about, poking around with
The Big Three variables in a haphazard hope that we will stumble upon
something that works.
That sure seems to be a good description of the industry today, if you go to
a CES or an audio show and take a look at all of the completely different
designs and try to find a common thread.
So OK, if my description of how you believe it works was all wrong, I beg
you to tell me now.
Gary Eickmeier
KH
April 16th 13, 01:46 PM
On 4/15/2013 4:52 AM, Gary Eickmeier wrote:
> KH wrote:
>> On 4/12/2013 6:05 AM, Gary Eickmeier wrote:
>>> KH wrote:
>>>> On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
>
> Keith -
>
> This is getting a little out of control. You are taking an attitude about me
> that puts a negative whammy on every syllable I say.
I am reading what you write. To clarify the point:
> I did not mean anything
> dismissive or ugly toward you. Sometimes I speak more conversationally than
> engineeringly, but I am thinking my audience is regular folk like me who are
> interested in a subject.
As if you could talk in engineering terms. Now, was that
conversational, or dismissive? See the point?
<snip>
>> You don't see this?
>
> No. Permit me to interpret a paragraph that has been a major embarrassment
> to me.
>
> Everyone knows that the earth is round. It would be idiotic to state that
> the earth is not flat to an audience of peers, so if you said that I would
> have to assume you were getting at something deeper. AE and I have already
> said that we wonder why you and Pierce keep going on about how a single
> microphone can't record direction. we tell you that everyone knows that, so
> what's your point. It takes two microphones to record stereo, no?
>
> So when I say it would be idiotic to bring up that a single microphone can't
> record stereo, I mean that everyone knows that, so why say it? I do not mean
> that I think that it does record direction. Is that clear enough?
Frankly, it seems a rather facile attempt at rationalizing a clearly
inaccurate statement. It may be pedantic to state a single microphone
doesn't record directional information, but to describe a true statement
as "idiotic" is, shall we say, unusual.
Now, to deal with the "stereo" reference, neither microphone records
*any* directional information. Zero.
>
> Now let me help both of you with this direction dilemma. Pierce says that
> even with the stereo pair, we cannot decipher whether the sounds are coming
> from 90 degrees up, straight behind, or 90 degrees down, or in front of us -
> this information is all lost from the recording.
A clearly accurate statement you refuse to deal with directly.
> I point out that we solve
> this mystery by physical reconstruction of the recorded sound fields
And you *never*, *ever*, even attempt to explain how this happens. You
dodge *every* attempt at eliciting a clear explanation.
>. WE
> PLACE THE SOURCES straight ahead in front of us, in positions which are
> geometrically similar to those of the instruments. We have all the
> information we NEED to reconstruct a realistic image of the performance as
> it existed. Can you see that my way just a little?
IF, and ONLY If, you do not add an attempt to create "another"
performance by reflecting sound all over the place.
<snip>
>> Well, no, I don't think we all agree on that. There is no "width"
> >> information in either channel's data. We agree that on playback of the
> >> two channels, the differential between channels will be perceived as
> >> spatial information. You can pan-pot instruments during mixing to
>> achieve the same thing, where "width" never existed, so physical
>> "width" is clearly not a parameter that is encoded in the recording.
>
> This is getting silly again. Width means the lateral positioning of the
> instruments. Width of the soundstage. That is what stereo is all about. Now
> what are you talking about, there is no width in stereo?
No, you are getting evasive, yet again. I clearly said that in the case
of pan-potting of instruments into a recording *there is* NO PHYSICAL
width. It simply has no existence in the real world. Hence, the
"width" that exists in a stereophonic recording is not indicative of any
physical reality. If that "width" cannot be unambiguously associated
with physical reality, then it is not encoded in the signal as such. It
is an artifact of differential recording levels (which still excludes
front, rear, top, etc.).
<snip>
>> Please explain how you
>> play that information only from the left speaker.
>
> Again, this is really simple. Left sidewall sounds are recorded more
> strongly in the left microphone, and are therefore stronger from the left
> speaker than the right. Similar to the instruments on the left side of the
> band, in intensity stereo. We do indeed play the left channel information
> only from the left speaker.
That is simply nonsense. So only the left microphone picks up
reverberant information from the left half of the hall? Really. How
does that happen? Is the level higher on the left? Yes. But that is not
the same thing.
>>
>>> that has some output that bounces off the left wall of your room,
>>
>> Please explain how you direct only the reflected sound (from the
>> recording) that bounced off the Left wall of the venue, to bounce off
>> only the Left wall of your room.
>
> Stronger from the left wall, due to the stronger recorded signal from that
> side.
But also from the back wall, and the ceiling, and the right wall, and
from direct radiation. These are *all equal* - that is the point. ALL
of the sound originally bouncing off the left wall will be radiated
equally throughout the entire radiation cone of the speaker. In the
case of omni's or dipoles, that is basically all over the place.
>>
>>> you get
>>> that sound coming from the appropriate direction - from points in
>>> space that are different from the primary sound, which is coming from
>>> the speaker first.
>>
>> Please explain how the information is encoded in the
>> electrical signal to allow for the sound reflected off of the Left
>> wall, in the venue, to be removed from the "primary" sound coming
>> directly from the speaker.
>
> It is not removed. But if we permit some of the output of the left speaker
> to bounce off the left wall, there is what they call a "spatial broadening"
> effect from recorded sounds that have some reverberance in them (from the
> venue). This is easily audible if you have heard directional systems vs omni
> speaker systems. Have you not ever noticed that?
Yes. This is also what "they" call smearing. Once again, it is not
just the reverberant information that you "bounce" off the wall, it is
ALL of the information. This is completely different than the live
performance, in every directional respect.
>
>> If you can provide these explanations, please do so. Please refrain
>> from embellishment, as these three basics comprise the basis for my
>> disagreement with your theory. The electrical signals from the
>> microphone, or microphones, do not contain the requisite information
>> to "direct" the acoustic signals as you imply.
>
> Well, in a way they do. The recording is in stereo,
Uhmm, sometimes.
> and by the time we are
> finished placing the channel sounds where they belong in playback, directing
> those acoustic signals where they belong - i.e. from locations similar to
> the live model - then we have a very realistic playback.
Ok, so now you're saying your model only "works" when a studio recording
is designed to be replayed according to your model? OK, I'll buy that.
So how many recordings are going to be recorded and mastered to be
tailor made for your model?
>> What you describe above does happen, but you fail to address the
>> concomitant constraints - ALL the reflected sound and ALL of the
>> direct sound will be reflected off of that left wall of the room as
>> well (not just the desired reverberant information), so that
>> reflection will contain far more acoustic information than it did in
>> the venue. The ratio of direct to reflected sound will be completely
>> different as well.
>
> No, there is a difference between channels.
The statement has nothing to do with "channels". Each channel has
reverberant + direct information reflecting off the walls, and also has
reverberant + direct information coming directly at the listener. This
in no way mimics the live performance, where direct and reverberant do
not commingle. Once recorded, they are inextricably commingled, and
cannot be separated during replay. Additional reflection of this
commingled signal can only result in more smearing.
> That is part of what stereo is
> all about. If playback has been properly reconstructed the range of ratios
> of direct to reflected will also be similar to the live sound.
Only if acoustic ratios were linear, which they are not. Once you take
the original ratio, record it, and then play it back, even with the same
ratio of direct to reverberant (e.g. same venue), the resulting ratio of
direct *information* versus reverberant *information* will be changed.
You will obviously hear the same ratio, in the same venue, but you have
to understand that on replay, both the direct and the reverberant fields
contain *both* direct and reverberant information when they initially
did not, and thus the effective ratio is altered.
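A toy arithmetic version of the same point in Python (all four fractions
below are invented for illustration): live, the direct field carried only
direct sound and the reverberant field only reverberation, but on replay
every arrival carries both kinds of recorded information.

    # Invented energy fractions, for illustration only.
    recorded_direct, recorded_reverb = 0.8, 0.2    # commingled in the recording
    room_direct, room_reflected = 0.7, 0.3         # how the playback room delivers it

    direct_arrival = {
        "direct info": room_direct * recorded_direct,       # 0.56
        "reverb info": room_direct * recorded_reverb,       # 0.14
    }
    reflected_arrival = {
        "direct info": room_reflected * recorded_direct,    # 0.24
        "reverb info": room_reflected * recorded_reverb,    # 0.06
    }
    print(direct_arrival)
    print(reflected_arrival)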
> I think you
> may be on the verge of discovering Image Model Theory. Just keep thinking
> about all this stuff and try to stay with me, rather than resisting all the
> time.
How about you trying to stay with me? How about answering some direct
questions once in a while?
>
>>> Right now just
>>> imagine a light bulb on the left side of the room. It shines
>>> (bounces) more of its output from the left side wall than any
>>> others. NOT everywhere. Footnote, there is a lot more to speaker
>>> positioning than I can relate in this short essay.
>>
>> And when sound waves travel at the speed of light, this is a very
>> different discussion indeed. The analogy, however, is irrelevant to
>> the discussion. The answers to my questions above are quite relevant.
>
> And I sincerely hope I have answered most of it!
Well, no. You've repeated the same, IMO flawed, understanding. You
continue to ignore the question of how adding both direct and
reverberant information to the "new" reverberant field, and adding both
direct and reverberant information to the direct sonic field bears any
resemblance to the original acoustic.
<snip>
> OK, this is where you have to formulate a theory of reproduction that
> explains just how this SHOULD be done.
No, it isn't. Once again, you revert to the assumption that there is
only ONE way that "works" and either you are right, or not. You assume
a binary where none does, nor can, exist.
> How does stereo work? There are many
> ideas in the literature, such as the Blumlein patent, in which direction is
> encoded by means of intensity, and the angles are retrieved on playback by
> positioning the speakers at approx 60 degree angles such that the image will
> form at the head of the listener. There is the Bell Labs Curtain of Sound in
> which two or three speakers are just simulating the ideal system, which
> would be an infinite number of speakers and microphones positioned along an
> imaginary curtain in front of the performance. There is Sonic Holography, in
> which crosstalk must be cancelled to separate the channels from each other
> at the ears. There is Ambisonics, recorded with a single point Soundfield
> microphone so that sounds from all directions might be encoded to come from
> the correct direction at the head of the listener.
Yep. None of these are accurate. None work for everyone, or every
equipment type. Your theory is no different.
> Nowadays these guys (in Europe I think) are trying for "Wave Field
> Synthesis," which is a complex attempt to encode precise directionality from
> the soundstage. Maybe you would be attracted to that. I wouldn't because it
> says nothing about the reverberant field.
>
> No matter which of these you tend toward, all engineers and theorists who
> wish to tell us how to set up a playback system need to use something for a
> model of what is happening with the recorded signals and their relationship
> to the playback. Otherwise we are just flailing about, poking around with
> The Big Three variables in a haphazard hope that we will stumble upon
> something that works.
I seriously doubt any of the names you dropped consider themselves to be
"flailing about" as you routinely describe it. People do not perceive
things in a uniform fashion. That is obvious. Why then do you expect
that there is a single "unified theory" of stereo that everyone will
agree is the best? Is there a single painting style that is "correct"?
Or a "correct" musical genera, or style?
It also makes no sense to assume, especially within the context of your
"new acoustical event" paradigm, that any one method or theory is
"correct". This makes no more sense that to state a particular
performance is more or less "accurate". It's a false distinction.
>
> That sure seems to be a good description of the industry today, if you go to
> a CES or an audio show and take a look at all of the completely different
> designs and try to find a common thread.
>
> So OK, if my description of how you believe it works was all wrong, I beg
> you to tell me now.
Well, let's start with your assumption that I like "a window into a
performance", I don't. Your assumption that I like "boxy sound" - zippo
again. How about your assumption that I like a soundfield that
"collapses" into two boxes - again, no cigar. These are all flaws that
you take for granted, and so you assume, a priori, that they exist
equally for all and sundry. But these are simply your opinions about
how certain things sound, and you project them, as truth, upon others as
a fait accompli.
I think we can tailor our playback systems to sound, within the
limitations of the commercial recordings available, as much like real
life as possible. For some that's dipoles, for some conventional
mini-monitors, for some it's the Bose 901 approach, for some - like me -
the Wilson approach to controlling reflections to create a realistic
sweet spot. That sweet spot, in my case, is relatively small. You
consider that a flaw, I don't care as it suits my listening style.
Hence the resistance to your "listening" test with the Orions, among
others, where one criterion is how stable the image is when moving
around the room. Who cares? Apparently you do, but many, like me,
couldn't care less. And it goes without saying that the more diffuse
the image you create, the more stable it will be as you move around the
room. In that case, stable image equates to pitiful imaging, as I
define it.
Clearly you believe that you are right, and the world is wrong. You
like what you like, I don't. Let's just leave it at that.
Keith
Dick Pierce[_2_]
April 16th 13, 03:44 PM
KH wrote:
> for some it's the Bose 901 approach, ...
>
> Clearly you believe that you are right, and the world is wrong.
On the theory that context is everything, perhaps a
revelation of context might be useful.
Unless my memory is faulty, and that is certainly a
possibility, I believe that Mr. Eickmeier has revealed
in the past that he is an advocate of the Bose 901, and
has further stated words to the effect that Bose doesn't
know how to set them up, and (at the risk of exaggeration
to the point of hyperbole) it is Mr. Eickmeier's claim
that he has the "secret" of setting Bose 901s up "properly."
Now, if my memory of these points is correct, people should
now have more information they can accept or reject Mr.
Eickmeier's claims, using a more complete knowledge of the
context surrounding his, well, theory.
If my memory is not correct, then I apologize for contributing
to the indistinct and muddy picture already in place.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Audio_Empire[_2_]
April 16th 13, 08:59 PM
On Tuesday, April 16, 2013 5:46:30 AM UTC-7, KH wrote:
> On 4/15/2013 4:52 AM, Gary Eickmeier wrote:
> > KH wrote:
> >> On 4/12/2013 6:05 AM, Gary Eickmeier wrote:
> >>> KH wrote:
> >>>> On 4/10/2013 8:04 AM, Gary Eickmeier wrote:
> >
> > Keith -
> >
> > This is getting a little out of control. You are taking an attitude about me
> > that puts a negative whammy on every syllable I say.
>
> I am reading what you write. To clarify the point:
>
> > I did not mean anything
> > dismissive or ugly toward you. Sometimes I speak more conversationally than
> > engineeringly, but I am thinking my audience is regular folk like me who are
> > interested in a subject.
>
> As if you could talk in engineering terms. Now, was that
> conversational, or dismissive? See the point?
>
> <snip>
>
> >> You don't see this?
> >
> > No. Permit me to interpret a paragraph that has been a major embarrassment
> > to me.
> >
> > Everyone knows that the earth is round. It would be idiotic to state that
> > the earth is not flat to an audience of peers, so if you said that I would
> > have to assume you were getting at something deeper. AE and I have already
> > said that we wonder why you and Pierce keep going on about how a single
> > microphone can't record direction. We tell you that everyone knows that, so
> > what's your point? It takes two microphones to record stereo, no?
> >
> > So when I say it would be idiotic to bring up that a single microphone can't
> > record stereo, I mean that everyone knows that, so why say it? I do not mean
> > that I think that it does record direction. Is that clear enough?
>
> Frankly, it seems a rather facile attempt at rationalizing a clearly
> inaccurate statement. It may be pedantic to state a single microphone
> doesn't record directional information, but to describe a true statement
> as "idiotic" is, shall we say, unusual.
>
> Now, to deal with the "stereo" reference, neither microphone records
> *any* directional information. Zero.
I think that depends upon how you define "directional information".
Hyper-cardioid, cardioid, and figure-of-eight microphones are more-or-less
DEFINED by their directionality. I'd say that's "directional" information,
by definition. OTOH, I understand what you are saying. A microphone picks
up a sound field, and within that sound field the microphone has no way of
discriminating which sounds it picks up, nor the direction from which those
sounds come.
For instance, take a pair of cardioid mikes on a 7-inch T-bar, placed in
front of an orchestra or band and "aimed" 90 degrees apart from each other,
so that one mike faces 45 degrees to the left of center and the other faces
45 degrees to the right of center. Both pick up the whole orchestra: right,
left, center, and behind the mike. It's just that their sensitivity to
sounds that occur off the microphone's axis is attenuated compared to the
mike's sensitivity on-axis. This might manifest itself as a wide-band, deep
attenuation off-axis, resulting in a hyper-cardioid or super-cardioid
pattern, or it may be merely a few dB and be frequency dependent.
When used as a stereo pair, the directionality occurs because the left
microphone picks up the left side of the band or orchestra better than the
right side, because it's aimed at the left side, and right-side information,
though "heard" by the left mike, is attenuated by some amount. That amount
depends upon how well the microphone manufacturer has designed the
microphone to attenuate off-axis sounds.
Because omnidirectional mikes are designed to pick up sounds with no
attenuation through 360 degrees, two such mikes on the same T-bar should
yield mono, as 7 inches is too close together to be considered anything but
the exact same location to an omnidirectional pickup. In other words, there
would be no discernible difference between the left track and the right
track due to the close proximity of the two microphones. (For this example
we're assuming that the two omnis used are an identical make and model.)
So, some microphones do pick up directional information; it's just
indiscriminate of source location. A cardioid might pick up a loud noise in
the back of the hall, where the mike, being pointed at the stage, is least
sensitive, as louder than the instrument or instruments the mike is pointing
directly at. On playback, no one listening (who wasn't there at the time)
will be able to tell from that noise where it came from. Back stage? Up in
the theater fly-area? On stage? No clue at all, because the microphone is
incapable of distinguishing location. Only a microphone's pick-up pattern
determines whether a sound is on or off axis, and only a difference in
intensity, caused by the microphone's pick-up pattern and its distance from
the sound source, can be discerned.
If that's what you mean by microphones picking-up NO directional information, then I
think that we should all be in agreement and can move on.
Is that not so?
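To put rough numbers on that intensity-difference idea, here is a small Python sketch
(purely illustrative; the first-order cardioid formula and the 90-degree crossed-pair
geometry are assumptions on my part, not measurements of any particular microphone). It
shows that a crossed cardioid pair turns source direction into nothing more than a level
difference between the two tracks, and that a sound from directly behind the pair lands
at equal level in both channels, carrying no location cue at all.

import numpy as np

def cardioid_gain(theta_deg):
    # First-order cardioid sensitivity: 1.0 on-axis, 0.0 directly behind.
    theta = np.radians(theta_deg)
    return 0.5 * (1.0 + np.cos(theta))

# Crossed pair aimed 45 degrees left and right of center (90 degrees between axes).
left_axis, right_axis = -45.0, 45.0

print(" source   L gain   R gain   L-R difference (dB)")
for source in (-90, -45, 0, 45, 90, 180):    # source angle in degrees, 0 = straight ahead
    gl = cardioid_gain(source - left_axis)   # angle of the source off the left mike's axis
    gr = cardioid_gain(source - right_axis)  # angle off the right mike's axis
    diff_db = 20 * np.log10(gl / gr)
    print(f"{source:7d}  {gl:7.3f}  {gr:7.3f}  {diff_db:+10.1f}")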
Audio_Empire[_2_]
April 16th 13, 11:46 PM
On Sunday, April 14, 2013 8:45:06 AM UTC-7, ScottW wrote:
> On Apr 13, 6:49 am, "Gary Eickmeier" > wrote:
>
> Gary,
> It's more than a bit frustrating when after the following exchange:
>
> <start quote>
> "On Apr 12, 6:05 am, "Gary Eickmeier" >
> wrote:
>
> > It would be idiotic to state that a single microphone gives
> > no directional information, so I'm sure that is not what you mean.
>
> I've been wondering why this conversation seems so confused but I
> think I found the crux of the biscuit.
> You need to accept that this idiocy is in fact true....and start
> over.
> <end quote>
>
> Then you apparently want to exchange our positions with this
> statement.
>
> > Okay, so Mr. Pierce, Keith Howard, and Scott W are
> > surprised that a single microphone can't record directional
> > information,
> >
> > I think I am done here.
>
> I can't tell if this is a few inadvertent typos or if you are really
> this inconsistent in your position (and I think that is a very
> generous and gentle way to put it).
> FWIW, I think that some recordings (close mic'd and pan potted of the
> kind AE deplores) create a superior illusion of instruments in the
> room with me. Recordings of live performances that capture a sound
> field complete with reverberant info etc. lose so much in replay from
> 2 speakers in a small room that the "you are there" illusion falls well
> short of convincing IME.
>
> ScottW
Yeah, Scott, I do "deplore" them. They aren't real, as in they don't
exist outside of the studio where they were made. They also aren't
"stereo". Stereo means, literally, "solid" as in having three
dimensions. Individually miked instruments have only ONE dimension,
width. The individual instruments can be placed anywhere on a straight
line, running between the extreme left of the soundstage to the
extreme right by the use of the mixing engineer's pan-pots. That's it;
there is no image depth, and no image height. Even any "ambience" is
artificially provided by use of a digital reverb device. This is done
for two reasons: first, studios go to great lengths not to have any of their
own, and second, even if such a recording were made in Carnegie Hall, the
mikes used for multi-miking are so close to the individual instruments,
and the difference in volume between the instrument they are
capturing and the hall sound is so great, that any ambience will be so
far down in the mud that it is simply not "heard" by the recording
device. Of course, one could put separate "ambience" mikes at
strategic points in the hall, and yes, some notorious multi-miking
producers have done this (notably J. David Saks, who used to "produce"
those awful-sounding Philadelphia Orchestra recordings in the heyday
of multi-miking, the late 1960's through much of the 1970's). Even
then the great Philadelphians still sounded ridiculous, all standing
in a "chorus line" stretched between the left and right speakers.....
Audio_Empire[_2_]
April 17th 13, 11:38 AM
On Tuesday, April 16, 2013 5:28:55 PM UTC-7, ScottW wrote:
> On Apr 16, 3:46 pm, Audio_Empire > wrote:
<snip>
> > Yeah, Scott, I do "deplore" them. They aren't real, as in they don't
> > exist outside of the studio where they were made. They also aren't
> > "stereo". Stereo means, literally, "solid" as in having three
> > dimensions. Individually miked instruments have only ONE dimension,
> > width. The individual instruments can be placed anywhere on a straight
> > line, running between the extreme left of the soundstage to the
> > extreme right by the use of the mixing engineer's pan-pots. That's it;
> > there is no image depth, and no image height. Even any "ambience" is
> > artificially provided by use of a digital reverb device.
>
> There appears to be lots of literature available on how to add 3D to
> dry studio tracks with the use of reverb and delay.
Don't even want to go there. The concept is ridiculous - from my point
of view.
> Yes it's all artificial, but so is any reproduction...even if the
> origin was a live event.
Some more so than others....
> I've often wondered if a live event recording using mic's spaced ~10
> ft. apart and 10 ft in front of the listeners position wouldn't yield
> a reasonable recreation.
How many mikes? Three? It's been done: Mercury Living Presence in the 50's and 60's,
and Telarc from the late 70's to about 2010. The Mercurys were more successful, IMNSHO.
> The concept being the mics/speakers acting as repeaters of the
> original event and therefore being similarly placed.
> Seems odd to use mics spaced only 7" apart to be replayed on speakers
> positioned quite further apart.
Think of it like a pair of "surrogate ears" (or eyes) looking at the orchestra from
where the mikes are hung. Look down the axis of the left mike; what do you see? The
left side of the orchestra and much of the middle. Now look down the axis of the right
microphone (remember, these should preferably be cardioid mikes, though it works with
crossed figure-of-eights as well, except that with the latter you also have to deal
with the back side of the mike: great for picking up hall ambience in an empty hall,
not at all acceptable if there's an audience). What do you see? The right side of the
orchestra and much of the middle. The mikes "see" the same thing: left sees the left
side of the stage, right sees the right side of the stage, and both see the middle.
Essentially what you get is left mike = (L+M) and right mike = (R+M).

Now this looks to be self-evident, but it really isn't, because you have to factor into
the "equation" the fact that as the pickup from both of these mikes gets further and
further off axis, their sensitivity falls off as well, starting with the high
frequencies. Where the two pickup patterns cross in the middle, they ADD one to the
other, and it turns out that this fills the center with full pickup, where with one or
the other mike alone the center would be attenuated and lacking in highs. You do have
to watch the outside pickup lobes of the mike pattern, though. Those can catch you out.
This is learned by experience. I can tell you about it, but until you've experimented
yourself you probably won't get it right. Since you can think of each mike's point of
view as a cone, you can see that the closer the players are to the mike, the smaller
the on-axis "sweet spot"; the further away, the larger it will be. Eventually you will
learn to look at how wide the ensemble seating is and be able to judge the angle
between the two mikes and how far back from the conductor you must place them in order
to get the L and R edges of the ensemble "in the picture". Of course, we don't care how
much the mikes roll off beyond the sight-lines of the leftmost and rightmost musicians.

I hope that clears it up for you. An easier way to get stereo just as good as XY, A-B,
ORTF or other similar microphone schemes is to use MS miking. In this scheme, you mount
two microphone heads in the same exact horizontal position, one mike atop the other
(this works best with a multi-pattern stereo mike such as an Avantone CK-40). You set
one mike to omnidirectional (as if you were making a monaural recording) and place it
stage center, behind the conductor. Just below (or above, it makes no difference) the
first mike you mount a figure-of-eight mike with the two lobes pointing sideways (i.e.,
parallel to the stage). You feed the omni into one mike input on your mixer and set the
pan-pot on that channel to center (mono). You then split the figure-of-eight mike feed
into two feeds: one in-phase and one out-of-phase (some mixers have switches for phase;
otherwise you have to hardwire one leg of the splitter out of phase, or use a
phase-swapping in-line transformer). Plug these two figure-of-eight feeds into the next
two mike inputs on the mixer, set the pan-pot of the first of the two channels all the
way to the left, and set the pan-pot of the other (the inverted-phase feed) all the way
to the right. That means you need three mike inputs for this. With the level controls
you can vary the pickup pattern of this MS mike from dead mono (only the omni being
used) to full 120-degree stereo (full figure-of-eight and full omni) or, by
manipulating the amount of "S" in the mix, anything in between. If you're recording
live, you can use a cardioid for the "M" mike; it will pick up less audience. But the
omni "M" mike is best.

Hope this is all clear.
A_E
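For anyone who wants to see the arithmetic the three-input mixer trick is actually
performing, here is a minimal Python sketch (my own illustration; the sine-wave tracks
and the 48 kHz and 440 Hz numbers are stand-ins, not anything from a real session). The
decode is simply L = M + g*S and R = M - g*S, where g is the level of the "S" feed:
g = 0 collapses to mono, larger g widens the image, and the mono sum L+R is always just
the "M" mike, whatever g you choose.

import numpy as np

def ms_to_lr(mid, side, width=1.0):
    # Decode a Mid/Side pair: the in-phase S feed adds to M on the left,
    # the polarity-inverted S feed adds to M on the right.
    left = mid + width * side
    right = mid - width * side
    return left, right

# Toy one-second, 48 kHz tracks standing in for real M and S recordings.
fs = 48000
t = np.arange(fs) / fs
mid = np.sin(2 * np.pi * 440 * t)           # pretend "M" (omni) track
side = 0.3 * np.sin(2 * np.pi * 440 * t)    # pretend "S" (figure-of-eight) track

for w in (0.0, 0.5, 1.0):
    L, R = ms_to_lr(mid, side, width=w)
    # However wide you make it, the mono sum L+R is always 2 times the M track.
    print(f"width={w:3.1f}  max|L-R| = {np.max(np.abs(L - R)):.3f}  "
          f"L+R == 2*M: {np.allclose(L + R, 2 * mid)}")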
Gary Eickmeier
April 17th 13, 12:56 PM
ScottW wrote:
> I've often wondered if a live event recording using mic's spaced ~10
> ft. apart and 10 ft in front of the listeners position wouldn't yield
> a reasonable recreation.
> The concept being the mics/speakers acting as repeaters of the
> original event and therefore being similarly placed.
> Seems odd to use mics spaced only 7" apart to be replayed on speakers
> positioned quite further apart.
>
> ScottW
Quite an innocent question from one so knowledgeable! No, the one has
nothing to do with the other except maybe in the case of three spaced omnis.
For example, you can record an amazing perspective with the coincident
techniques of MS and XY microphones that have no spacing at all. You can do
almost as well with ORTF, fairly closely spaced cardioids angled at about
110°.
One of my favorite techniques is 3 spaced omnis, positioned, as you suggest,
geometrically similar (that is, similar, not identical, spacing) to the
positioning and spacing of the speakers at home. The main difference would
be positioning the mikes at a certain distance in front of the orchestra so
that their summing would image the instruments in between correctly and
there would be no hole in the middle. Care must be taken that you are not so
close to the front players that the rear of the orchestra sounds like they
are a mile away. Some elevation of the mikes can help even out these
loudnesses so that it sounds more natural when played back.
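To get a rough numerical feel for that positioning trade-off, here is a small Python
sketch (the 3 m mike spacing, 4 m setback and 14 m stage width are invented numbers for
illustration, not a recommendation). It prints the level and arrival time at each of
three spaced omnis for sources across the stage front; editing the 4 m setback shows
how moving the array closer exaggerates the front-to-edge differences and moving it
back evens them out.

import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second

def arrival(mic_xy, src_xy):
    # Distance-based delay (ms) and level (dB relative to 1 meter) for a point source.
    r = np.hypot(src_xy[0] - mic_xy[0], src_xy[1] - mic_xy[1])
    return 1000.0 * r / SPEED_OF_SOUND, -20.0 * np.log10(max(r, 0.1))

# Three omnis on a line, 3 m apart, 4 m in front of the players.
mics = {"L": (-3.0, 4.0), "C": (0.0, 4.0), "R": (3.0, 4.0)}

# Sources spread across a 14 m wide stage front (y = 0).
for x in (-7.0, -3.5, 0.0, 3.5, 7.0):
    cells = []
    for name, m in mics.items():
        delay_ms, level_db = arrival(m, (x, 0.0))
        cells.append(f"{name} {level_db:+5.1f} dB / {delay_ms:5.1f} ms")
    print(f"source at x = {x:+5.1f} m :  " + ",  ".join(cells))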
In general there is no direct relationship between the spacing of
microphones and speakers like you ask about. Recording is an art aimed at
the final result as heard on most playback systems. Nothing is off limits,
such as highlight mikes for the quieter instruments or the soloists, or
accent mikes such as the outboard omnis used by John Eargle. Several of my
friends get amazing results with Mid Side, including Audio Empire. I have
been trying two spaced omnis on a high bar, but I am not satisfied with the
separation at the distances that I am forced to record live performances, so
I will change something next season.
Gary Eickmeier
Gary Eickmeier
April 17th 13, 12:57 PM
Dick Pierce wrote:
> KH wrote:
>> for some it's the Bose 901 approach, ...
>>
>> Clearly you believe that you are right, and the world is wrong.
>
> On the theory that context is everything, perhaps a
> revelation of context might be useful.
>
> Unless my memory is faulty, and that is certainly a
> possibility, I believe that Mr. Eickmeier has revealed
> in the past that he is an advocate of the Bose 901, and
> has further stated words to the effect that Bose doesn't
> know how to set them up, and (to possibly risk exaggeration
> to the point of possible hyperbole), it's Mr. Eickmeier's
> claim that has the "secret" of setting Bose 901s up "properly."
>
> Now, if my memory of these points is correct, people should
> now have more information they can accept or reject Mr.
> Eickmeier's claims, using a more complete knowledge of the
> context surrounding his, well, theory.
>
> If my memory is not correct, then I apologize for contributing
> to the indistinct and muddy picture already in place.
No apology necessary - your memory is correct sir! But no, as usual, you
haven't explained anything about my theory, so let me give you the Readers
Digest version.
First, "the answer" is that I am using four 901s plus center channel of my
own making and a Velodyne F1800 that I purchased from Howard Ferstler, all
in my 21 x 31 foot dedicated home theater listening room. Howard came down
and reviewed my system in The Sensible Sound a while back. The front 901s
are positioned in an unusual way, 5 ft out from the front wall and 5 ft in
from the side walls, in accordance with my positioning theory. Such
positioning is more in line with audiophile practice than Bose owners manual
suggestions.
Do you remember my Big Three of speaker positioning, frequency response, and
room acoustics? Siegfried Linkwitz asked a question of the AES about these
factors, which factors are the main determinants of the sound of a speaker
system in a room. The question was how to optimize them for the realistic
reproduction of auditory perspective - realism. He called it the Auditory
Scene, or AS. From my Image Model Theory, I already knew the answer to those
questions, so I entered a home made speaker in the listening test series
that my audio club volunteered to run to answer The Challenge, as it was
called. I made some cheap speakers whose main design feature was a greater
output to the rear than to the front of the box, and told them how to
position them. After the first round of blind testing everyone was very
surprised to learn that my design won over the Orions and the Behringers.
My Damascus moment came one fine day when I was stationed overseas in
England and I decided to experiment with my 901s and position them more like
standard audiophile practice. I moved them out a lot farther from all walls
and beheld a focusing of the imaging that stunned me to the extent that I
wrote to Bose to ask them why they didn't tell us about this in the owners
manual. My letter struck Dr. Bose to the extent that he phoned me and talked
about acoustics, D/R ratios, and speaker design for an hour and a quarter. I
visited the factory on redeployment as I passed through Boston and spoke
with Dr. Bose and his chief engineer, Joe Veranth. Joe told me about a
technique from architectural acoustics called image modeling, in which you
can draw the effects of speaker radiation pattern and positioning by drawing
the virtual images on the other side of the reflecting surfaces rather than
ray tracing. This opened up a very visual way of looking at the problem and
comparing the live model with the reproduction model, which insight led me
to my paper to explain it all that I presented at the AES in 1989. It is
called An Image Model Theory for Stereophonic Sound. I would be glad to send
you a PDF of it in Email.
Since that time I have continued to research all aspects of it, looking for
anything that would disprove it, but everything I read and hear only
confirms and supports it.
The reason for my zeal, beyond my enthusiasm for the kind of sound I am
getting at home, is that the interrelationships of the possible variations
of The Big Three are complex enough that others are not likely to stumble
upon the correct combination as I did except by chance, which chance is so
low because it would never occur to anyone other than Bose to design a
negative directivity index (more sound to the rear than the front) into a
speaker. It goes against what most engineers are taught about what causes
stereo perception - as you probably are familiar with!
The concept is so different from what you would read in the textbooks that I
am concerned that the industry is operating on an entirely wrong theory
about how stereo works, one that is counter productive to the supposed goal
of the listening experience, which is precisely what AE has asked about in
the OP for this thread - why don't recordings sound more real, more like
they are right there in front of us in our room?
Curiously, upon doing some research into the history of stereo systems, I
discovered that the pioneers knew the answer long before these questions
resurfaced about what we are doing with the process (of recording and
reproduction). Both Harry Olson and William B. Snow warned us that the terms
stereophonic and binaural were continually confused with one another, and
Snow defined the stereophonic as a field-type system in which speakers were
placed in a playback room in positions that were geometrically similar to
the positions of the microphones in order to produce a sound field in the
room that is similar to that which was recorded. Binaural is a head-related
system in which we record ear input signals for direct presentation to the
ears. It is this system in which HRTF, head shadowing, and elimination of
the playback environment are important.
In all texts on how stereo imaging works you will see two speakers and a
listener in an equilateral triangle, with nothing about The Big Three
anywhere to be found. We started with the definition as a field type system,
but as far as the explanation of it goes, the room might as well not exist.
Enter Image Model Theory. We must study recording and reproducing sound
fields in rooms, not ear input signals.
That is my, well, theory, if you care to learn more about it.
Gary Eickmeier.
Dick Pierce[_2_]
April 17th 13, 02:35 PM
Gary Eickmeier wrote:
> Dick Pierce wrote:
<snip>
> No apology necessary - your memory is correct sir! But no, as usual, you
> haven't explained anything about my theory,
And precisely why is it MY job to "explain" YOUR theory?
You've done a miserable job of explaining it, it would seem,
to anyone but yourself.
And your theory is not a theory, not by any accepted
scientific and technical usage of the term. It's your
story, you're sticking to it. But it's not a theory.
Where are the testable, falsifiable predictions of your
theory, to just give one of many examples of the properties
and attributes that might qualify it as a candidate for
a theory?
When I last pressed you on some of your assertions over
a year ago, you defended your story, for example, by saying
that you got a phone call from Amar Bose, like that means
ANYTHING.
> so let me give you the Readers Digest version.
And Readers Digest is the LAST place I would look for
any coherent explanation of any technical topic. But,
thus far, that's all you've provided.
My take is that you like your Bose 901s. You really,
REALLY like them. You drank the Bose kool-aid. Now you
construct an entire world view whose purpose is to
make that fact self-consistent. And you manage the
cognitive dissonance created by the surrounding physical
reality and its contradiction to the world you created
by being dismissive of other views, by name dropping, by
refusing to address technical challenges using reasoned
technical, fact-based arguments, outright denial of
physical contradictions in your story, fabrication of
positions and viewpoints of others not sharing your kool-
aid, and by blaming others for not "explaining anything
about your theory."
And that's my story, and I'm sticking to it until I am
presented with compelling evidence that might suggest
that story needs modification.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Gary Eickmeier
April 17th 13, 02:36 PM
KH wrote:
> Clearly you believe that you are right, and the world is wrong. You
> like what you like, I don't. Let's just leave it at that.
>
> Keith
I wrote an Email to Keith to offer him a paper I wrote to explain all of
this systemic theory in great detail, but he has not responded so I don't
know if he got it or not. So I throw it open to all. The paper was written
after my basic Image Model Theory paper was rejected by the people at the
AES who do the peer reviewing for publication. I asked them for comments and
it showed that they didn't understand a word I said about the difference
between a field-type system and a head-related system. So I wrote a second,
more basic paper that painstakingly shows the difference by using an
imaginary trip to Mars a hundred years from now to explain the field-type
system to the Martians, who knew only trinaural as their system for their 3
eared heads. They needed a recording and reproduction system that has
nothing to do with the number of ears or the shape of the head for all of
the beings who wanted to share music. The AES team introduces them to the
field-type system known as stereophonic, and in the process goes over the
entire process in great detail and shows how it works and how we can reduce
the number of channels without losing too much information.
The paper was never submitted because it was somewhat tongue in cheek and
was written more like a magazine article than a technical paper. It was
written before the Archimedes project was finished, and it assumed great
hope that they would discover some of the answers to questions that I had
about speakers and rooms. Alas, they never did much with that project and I
couldn't contribute to it. I talked to Soren Bech one fine day at a
convention, but no communication was established or attempted. The paper was
published in the BAS Speaker, I forget which issue, but I would be glad to
send the PDF to anyone interested in an unusual way of talking about
stereo - analyzing it brick by brick and then putting it back together again
in modern form. If you are curious at all about all this field-type system
talk, this article will hit you over the head with it - so to speak.
Gary Eickmeier
Gary Eickmeier
April 17th 13, 03:06 PM
I would just add that I have a few of AE's most excellent recordings and he
knows what he is talking about. They stretch from wall to wall in my system,
with terrific ambience and localization of individual images. His success in
recording is one reason for this thread - he is asking why the commercial
recordings can't seem to get it right if he can do it so easily. I agree.
Also, I have experimented with MS by manually positioning my AT 2050s one
atop the other and using them in figure of eight pattern. Then, when I get
home with the two tracks, I can combine them in the correct matrix to turn
them into stereo again and manipulate the degree of separation in Adobe
Audition 2.0. This involves splitting the Side mike's track into two copies,
inverting one of them, and adding each to the Mid mike's track to get L and R. This can be
a very flexible system for adjusting apparent width of image in post, but is
a lot harder than just using an XY pair and not worrying about converting to
R and L. I'm just too anxious to hear the result when I get home!
Gary Eickmeier
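The same matrix can also be run in the other direction on an ordinary left/right pair,
which is one way to do that "adjust the width in post" step even when separate M and S
tracks were never recorded. A minimal Python sketch, with made-up arrays standing in
for real tracks:

import numpy as np

def adjust_width(left, right, width=1.0):
    # Shuffle an existing L/R pair through M/S and back.
    # width = 0.0 collapses to mono, 1.0 leaves it unchanged, > 1.0 widens.
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side

# Sanity check with arbitrary arrays standing in for decoded tracks.
rng = np.random.default_rng(1)
L = rng.standard_normal(1000)
R = rng.standard_normal(1000)

L1, R1 = adjust_width(L, R, width=1.0)
print("width=1.0 leaves the tracks untouched:", np.allclose(L1, L) and np.allclose(R1, R))
L0, R0 = adjust_width(L, R, width=0.0)
print("width=0.0 gives two identical (mono) channels:", np.allclose(L0, R0))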
Dick Pierce[_2_]
April 17th 13, 03:25 PM
Gary Eickmeier wrote:
> KH wrote:
>>Clearly you believe that you are right, and the world is wrong. You
>>like what you like, I don't. Let's just leave it at that.
>>
> I wrote an Email to Keith to offer him a paper I wrote to explain all of
> this systemic theory in great detail, but he has not responded so I don't
> know if he got it or not. So I throw it open to all. The paper was written
> after my basic Image Model Theory paper was rejected by the people at the
> AES who do the peer reviewing for publication.
Q.E.D.
> I asked them for comments and it showed that they didn't
> understand a word I said
That's what YOU think it showed YOU. Since you have not shared
ANY of the comments with us, we are presented not with the factual
content of the comments, but only with your interpretation of
them.
Your once again dismissive, belittling tone suggests to this
person that you didn't understand their comments. That their
comments were not supportive of your position is plausible.
That you took that lack of support as "proof" that they
"didn't understand a word you said," given your history,
is not only plausible, but expected.
So pretend someone comes up with a "theory" that relates
Euler's identity to the aggregated hyperfine structure
constant of the contact area between the landing pads
of the Apollo 11 lander and then "demonstrates" that that
proves beyond a shadow of a doubt that it was undetectable
aliens in miniature invisible asteroids that fired the
fatal shot from the grassy knoll under the orders of
the Trilateral Commission, and the paper is rejected by
the peer review committee, who didn't understand a word
that person said; therefore, it's the review committee
that's wrong?
A hyperbole, to be sure, but I suspect someone will get
the point.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Gary Eickmeier
April 17th 13, 05:00 PM
Dick Pierce wrote:
> Gary Eickmeier wrote:
> And precisely why is it MY job to "explain" YOUR theory?
> You've done a miserable job of explaining it, it would seem,
> to anyone but yourself.
<snip>
> And that's my story, and I'm sticking to it until I am
> presented with compelling evidence that might suggest
> that story needs modification.
As I said, I would be glad to send you the PDF of my basic paper, plus the
Mars paper that goes a little deeper into the field-type vs the head-related
type systems, as defined by Snow and Olson.
Gary Eickmeier
Gary Eickmeier
April 17th 13, 05:00 PM
Dick Pierce wrote:
> So pretend someone comes up with a "theory" that relates
> Euler's identity to the aggregated hyperfine structure
> constant of the contact area between the landing pads
> of the Apollo 11 lander and then "demonstrates" that that
> proves beyond a shadow of a doubt that it was undetectable
> aliens in miniature invisible asteroids that fired the
> fatal shot from the grassy knoll under the orders of
> the Trilateral Commission, and the paper is rejected by
> the peer review committee, who didn't understand a word
> that person said; therefore, it's the review committee
> that's wrong?
>
> A hyperbole, to be sure, but I suspect someone will get
> the point.
Wow - congratulations! Unassailable!
But I would rather imagine a lone scientist coming up with a theory that it
is not the sun revolving around the earth, but the earth and all of the
planets revolving around the sun. The church immediately labels him a kook
and has him locked up in a tower and observed for sanity.
Could never happen, but I am sure "someone" might get the point.
Gary Eickmeier
Audio_Empire[_2_]
April 17th 13, 09:44 PM
On Wednesday, April 17, 2013 7:56:34 AM UTC-7, ScottW wrote:
> On Apr 17, 3:38 am, Audio_Empire > wrote:
> > On Tuesday, April 16, 2013 5:28:55 PM UTC-7, ScottW wrote:
<snip>
> > > I've often wondered if a live event recording using mic's spaced ~10
> > > ft. apart and 10 ft in front of the listeners position wouldn't yield
> > > a reasonable recreation.
> >
> > How many mikes? Three?
>
> Two should suffice.
>
> > It's been done: Mercury Living Presence in the 50's and 60's, and Telarc
> > from the late 70's to about 2010. The Mercurys were more successful, IMNSHO.
>
> I am under the impression they were spaced quite a bit further
> apart...needing a 3rd mic to capture center. Correct me if I'm wrong.

Well, the right and left mikes were indeed about 20 feet apart, but the center
mike was 10 ft from either. Actually, according to C.R. Fine, whom I had a
chance to talk to over lunch at an AES convention at the Waldorf-Astoria back
in the 1970's, his rule-of-thumb was to place a microphone in the middle,
equidistant from the left and right boundaries of the ensemble being recorded.
Then he would bisect the line between that first microphone and the edge of the
ensemble, on the left and on the right, with the two side mikes. He said that
they usually ended up about 10 ft from mike to mike (give or take five). Robert
Woods aped Fine for his Telarc miking technique (only he got his info from
Robert Eberenz, who was Fine's "assistant"; actually, Bob Fine, Bob Eberenz and
Wilma Cozart Fine WERE Mercury Records in the 50's and 60's).

> > > The concept being the mics/speakers acting as repeaters of the
> > > original event and therefore being similarly placed.
> > > Seems odd to use mics spaced only 7" apart to be replayed on speakers
> > > positioned quite further apart.
> >
> > Think of it like a pair of "surrogate ears" (or eyes) looking at the
> > orchestra from where the mikes are hung. Look down the axis of the left
> > mike; what do you see?
<snip>
>
> I am thinking of the reciprocity principle discussed earlier where
> those ears are now speakers some 10 feet or so apart.
> The phase relationship between the two signals relative to the
> original event would be disturbed.

Actually, so-called "coincident" miking methods (X-Y, M-S, and, to a lesser
extent, ORTF, because it uses 110 degrees between capsules instead of 90) are
the only stereo microphone arrangements that are phase coherent. You can take a
stereo recording made with any of them, sum the two channels together on
playback, and get perfect mono with absolutely no loss. You cannot do that with
spaced omnis. They are NOT phase coherent and will not blend to mono without
loss.

> > I hope that clears it up for you. An easier way to get stereo just as good
> > as XY, A-B, ORTF or other similar microphone schemes is to use MS miking.
<snip>
> > Hope this is all clear.
>
> Yes, I appreciate the detailed discussion of amplitude capture.....but
> I still think keeping the spacing close to the speakers may help
> maintain the phase relationships between the channels of the original
> event and enhance the sense of the original space in recreation.

Actually, just the opposite is true. While the fact that coincident miking
preserves perfect phase coherence within the recording, allowing one to get
mono without loss, is not really important from a practical point of view any
more (we don't need to make both stereo and mono records anymore), it is still
an important test of the phase coherence of a system, in order to ensure
stable imaging and soundstage.
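The mono fold-down test is easy to see numerically. Here is a small Python sketch
(illustrative assumptions only: a 6 dB inter-channel level difference stands in for a
coincident pair, and a 0.5 ms inter-channel delay stands in for a spaced pair). It
computes the frequency response of the L+R sum in each case: the level-difference pair
folds down flat, while the delayed pair becomes a comb filter with deep notches, the
first of them near 1 kHz.

import numpy as np

freqs = np.linspace(20.0, 20000.0, 2000)    # audio band, Hz

# Coincident pair: identical arrival time, the channels differ only in level.
level_ratio = 0.5                           # second channel 6 dB down
H_coincident = np.full_like(freqs, (1.0 + level_ratio) / 2.0)

# Spaced pair: equal levels, but one channel arrives 0.5 ms later
# (roughly what a pair several feet apart gives for an off-center source).
tau = 0.0005                                # seconds of inter-channel delay
H_spaced = np.abs((1.0 + np.exp(-2j * np.pi * freqs * tau)) / 2.0)

def describe(name, H):
    h_db = 20.0 * np.log10(np.maximum(H, 1e-9))
    print(f"{name:10s}: mono fold-down spans {h_db.min():+7.1f} dB to {h_db.max():+5.1f} dB")

describe("coincident", H_coincident)    # flat: the sum to mono loses nothing
describe("spaced", H_spaced)            # comb filter: deep, frequency-dependent notches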
KH
April 19th 13, 02:05 AM
On 4/16/2013 12:59 PM, Audio_Empire wrote:
> On Tuesday, April 16, 2013 5:46:30 AM UTC-7, KH wrote:
>> On 4/15/2013 4:52 AM, Gary Eickmeier wrote:
>>
>>> KH wrote:
<snip>
> If that's what you mean by microphones picking-up NO directional information, then I
> think that we should all be in agreement and can move on.
>
> Is that not so?
>
Yes, basically. The signal recorded by any microphone is, of course,
dictated by the placement and response of the microphone. But, and it's
a huge "but", there is no information on the recording that *identifies*
how the microphone altered the signal, and thus no information available
on playback that would allow a signal, from a cardioid for example, to
be adjusted such that the levels would be equalized for the
directionality and sensitivity of the microphone.
So the signal is definitely affected, but the information to "undo" that
effect simply isn't in the 2-d signal.
Keith
KH
April 19th 13, 03:44 AM
On 4/17/2013 6:36 AM, Gary Eickmeier wrote:
> KH wrote:
>
>> Clearly you believe that you are right, and the world is wrong. You
>> like what you like, I don't. Let's just leave it at that.
>>
>> Keith
>
> I wrote an Email to Keith to offer him a paper I wrote to explain all of
> this systemic theory in great detail, but he has not responded so
<snip>
Yes, he did get it, but he's been in the air more than on the ground
this week, and rather busy. I would be happy to read your paper;
however, if your response, should I disagree with your 'clarified'
theory, is that I'm simply another one of the great unwashed incapable
of your depth of understanding, then please save us both some time and
refrain from responding.
If you want to *DISCUSS* your theory, or suppositions, then fine. If,
however, the likely outcome is that you will think me a fool for
disagreeing with you, then I'm loath to 'prove' you right by foolishly
indulging you. To quote Michael Rennie, "the choice is yours".
Keith
Dick Pierce[_2_]
April 19th 13, 06:47 PM
I am willing to deal with Mr. Eickmeier only if he is willing to address
the specific objections I and others have brought forth. If he
wants to answer the points I have brought up below and previously,
fine, we can have a discussion.
I have written several specific questions, and I would like
Mr. Eickmeier to attempt to answer them with appropriate,
relevant answers. I don't necessarily care what the answers
are, just that he address them.
KH wrote:
> On 4/19/2013 4:47 AM, Gary Eickmeier wrote:
>> KH wrote:
>>> Yes, he did get it, but he's been in the air more than on the ground
>>> this week, and rather busy. I would be happy to read your paper;
>>> however, if your response, should I disagree with your 'clarified'
>>> theory, is that I'm simply another one of the great unwashed incapable
>>> of your depth of understanding, then please save us both some time and
>>> refrain from responding.
>>>
>>> If you want to *DISCUSS* your theory, or suppositions, then fine. If,
>>> however, the likely outcome is that you will think me a fool for
>>> disagreeing with you, then I'm loathe to 'prove' you right by
>>> foolishly indulging you. To quote Michael Rennie, "the choice is
>>> yours".
>>> Keith
>>
>> Lets handle it this way. I wrote a nice response to Dick Pierce last
>> night,
>> but it didn't get posted or responded to - must not have gone through. I
>> will just re-post it here for you and whoever else is interested. I
>> realize
>> I may have seemed a little mysterious and combative because I think I
>> have
>> explained myself when I haven't. So here is a taste of it, which of
>> course
>> is not the full version but if you still think I am a nutcase then you
>> probably wouldn't want to read the rest. Please let me know.
>
> I never said, or intended to imply, that you were a nutcase. Merely that
> you have consistently, and persistently, dismissed all dissenters of
> your theory as idiots, or "poor dumb *******s". You will find few
> people who are interested in discussions with such a person. One man's
> "mysterious and combative" is another man's "arrogant and condescending".
I will at once provide both an explicit example and an
opportunity for Mr. Eickmeier to follow a different
path.
Take his statement:
"The paper was written after my basic Image Model Theory
paper was rejected by the people at the AES who do the
peer reviewing for publication. I asked them for comments
and it showed that they didn't understand a word I said."
I would be willing to bet good hard cash that the comments did
not say "we don't understand a word you said," or anything like
that. Mr. Eickmeier, do you want to take the bet?
Instead, why don't you actually publish their comments? Leave it
to the collective to decide what they said and what they meant.
Publish them right here.
Specific question: Mr. Eickmeier: what EXACTLY were the objections
raised by the AES review committee?
You keep talking about your "theory," yet, no insult intended,
your approach is about as antithetical to the scientific process
as one might imagine.
Part of the scientific process, when you proffer something that
is new, is that it has to withstand a rather brutal gauntlet of
valid, skeptical examination and criticism. And real scientists
have to be willing to realize that they might be wrong. I think
it's clear, simply by the record in front of us, you have been
anything but: you have dismissed any contrary view of your theory
out of hand, in one case, stating that a group of acknowledged
experts simply "didn't understand a word [you] said."
> Perhaps you've heard the aphorism "If the pupil hasn't learned, the
> teacher hasn't taught"?
Precisely.
First, Mr. Eickmeier, how about explicitly addressing the objections?
Instead, you have been evasive and diversionary.
Now, let's look at what YOU need to do to have your hypothesis
taken seriously. And that is, if you want to call it a theory,
you have to follow the rules.
A hypothesis, to be taken seriously, must make predictions
that are testable. This is the principle of "falsifiability."
The tests must be able to be performed by any reasonably
competent party and generate clear outcomes which can then
be interpreted by reasonable independent parties as to whether
they support or refute your hypothesis.
Specific question:
Do you understand the scientific model?
Do you understand the principles of testability
and falsifiability?
Let's take a real example: Einstein's General Theory of Relativity
makes very specific predictions that are testable. For example,
it says that the sun's gravity causes a warping of space time,
and that, as a result, the apparent positions of stars close to
the sun should be changed by a certain amount. This is a testable
prediction, and makes this particular element of the theory
falsifiable: measure the positions of stars close to the sun: if
they change by the predicted amount, that supports that aspect of
the theory. If they are not, that refutes that aspect of the theory.
Specific question:
What testable predictions does your model make that an
independent party can test and then examine the outcomes,
determining if that aspect of your theory making the
prediction is supported or refuted by the outcomes?
I see none. Therefore, based on the widely accepted usage of the
term "theory" in the scientific realm, I assert you have no theory.
Again, this is not meant as an insult, but a statement of fact as
I interpret it.
Let me provide a counterexample: a "theory" which states that
a person can levitate themselves only when no one is looking.
That theory is not testable and therefore not falsifiable. There
is no observation, no test that can be performed by an independent
party that results in an outcome that either supports or refutes
the theory. How does one, for example, observe the levitation
when observation itself prevents levitation?
Specific question:
Do you understand why this example does not qualify
as a valid theory?
Specific question:
Under the principles of the scientific method, does your
"model" qualify as a valid theory or not?
And because your claims do not have the foundations of a
falsifiable theory, allowing others to independently and
objectively test its predictions, statements YOU have made
like:
"But no, as usual, you haven't explained anything about
my theory,"
incline me to simply dismiss your claims out of hand. You seem
to be utterly unwilling to meet the burden of proof, the burden
that is ENTIRELY yours and no one else's, necessary for your
"thoery" to be taken seriously.
And, I would expect, that is really the reason why the AES
review committee rejected your paper..
Specific question:
Is it POSSIBLE that the reason your paper was rejected is
because it failed to meet the criteria of a valid theory,
and NOT because "they didn't understand a word [you] said?"
Specific question:
Is it possible that if they really didn't understand a word
you said, it might be because of either what you said or how
you said it?
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Audio_Empire
April 19th 13, 08:23 PM
In article >,
ScottW > wrote:
> On Apr 17, 1:44 pm, Audio_Empire > wrote:
> >
> > > Yes, I appreciate the detailed discussion of amplitude capture.....but
> > > I still think keeping the spacing close to the speakers may help
> > > maintain the phase relationships between the channels of the original
> > > event and enhance the sense of the original space in recreation.
> >
> > Actually, just the opposite is true. While the fact that coincident miking
> > preserves perfect phase coherence within the recording, allowing one to get
> > mono without loss, is not really important from a practical point of view
> > any more (we don't need to make both stereo and mono records anymore), it
> > is still an important test of the phase coherence of a system, in order to
> > ensure stable imaging and soundstage.
>
> The objective is not being able to sum to mono without loss.
> Loss (phase cancellation) exists within the original venue and it
> would seem necessary
> to tolerate it if the objective is to recreate the original
> perspective of a seat in the venue.
>
> It would appear that recording engineers have long taken liberty to
> provide a better than the
> best seat in the house perspective.
> Perhaps it should not be so surprising that I find my home listening
> generally
> more satisfying than most live events.
>
> ScottW
Well, it certainly can be. When I listen to the Boston Symphony live
broadcasts on Saturday nights, I get a much better "seat" than do the
patrons at Symphony Hall.
Likewise, many recordings in my library offer the same advantage, and
some image almost as well as "live" too. While it might be nice to "see"
the performance as well (by way of a Blu-ray disc), I wouldn't want it done
the way PBS does concerts. I'd want a fixed camera, set in the hall so that
the whole orchestra fits nicely onto a 16 x 9 hi-def screen, and then
HANDS-OFF. I hate watching a video production of a symphony. The
musical perspective is fixed while the cameras are constantly moving; it
makes me sick (not literally, you understand). I say make the
combination of the microphones (two only, thank you - oh, and
perhaps some accent or soloist mikes if needed) and the camera give the
viewer the best seat in the house and then leave well enough alone. They
can't, of course. They're "video artistes".
A-E
Gary Eickmeier
April 20th 13, 02:54 AM
KH wrote:
> On 4/19/2013 4:47 AM, Gary Eickmeier wrote:
> I never said, or intended to imply, that you were a nutcase. Merely
> that you have consistently, and persistently, dismissed all
> dissenters of your theory as idiots, or "poor dumb *******s". You
> will find few people who are interested in discussions with such a
> person. One man's "mysterious and combative" is another man's
> "arrogant and condescending".
I guess I'm not going to outlive that one, so let me 'splain. Have you ever
seen "Patton" the movie? In it, George C. Scott makes a pep talk to the
troops before going into battle. He says something like "the goal is not to
die for your country - it is to make the other poor dumb ******* die for HIS
country." I always thought that was funny, and my statement is nothing more
than that - a joke.
> Perhaps you've heard the aphorism "If the pupil hasn't learned, the
> teacher hasn't taught"?
I am an Air Force instructor, and I know how to teach. I hope I have made up
for past losses.
> The answer to this, as I see it, is that Dick and I both know "what
> you've done" with your system, and the mental/visual model you have
> constructed that you feel explains how it works to create "realism".
> The crux is that there isn't a "theory" behind that visual model that
> describes how it can actually work. You've also said a number of
> times that you want others to get interested in your "model" so
> *they* can help delineate a mathematical/physical theoretical basis
> for what you've intuited. Well, that does not a "theory", in any
> classical sense, make. That's kind of the starting point, that leads
> to:
I have been told that it is more of a hypothesis, but I disagree. I am
doing it in my home and listening to the result every day.
>> I would say that the main theoretical basis of the whole concept is
>> to get the SPATIAL characteristic correct within your room by means
>> of physically reconstructing the important aspects of the whole
>> acoustical situation.
>
> And yet, you cannot, or will not, accept that you do not have, within
> the recorded signal, the information required to do that accurately. So
> you construct, not reconstruct, a reflected field that sounds to
> you like what you feel are the "important aspects" of the venue.
OK I think this is the place where I can go into my next speech (lesson).
You say there is a problem that the recording doesn't contain enough
information to determine all of the spatial aspects of the recorded venue.
So therefore we cannot reconstruct it at home. So my question to you would
be, what are you doing about it? Do you just give up on the concept of
stereo?
No, you don't. You handle the problem in much the same way as I do. You
place two (at least two) speakers in front of you in a room, so that the
lateral localization might be brought out on playback. You decide where the
speakers will go, how to treat the room, and where to sit. This is an
attempt to reconstruct the spatial characteristic contained in the
recording. You cannot, for example, put one speaker on top of the other, or
even on opposite sides of you, you must construct a soundstage in front of
you with a reasonable resemblance to the geometry of the original, which is
almost always a presentation in front of you with a certain lateral spread
that we - and the Acoustical Society of America, and most band leaders and
producers, have come to know and love.
The main difference between us is that you don't take it as far as I do.
Remember The Big Three that Linkwitz asked about? We both need to decide on
those factors in our reproduction, and to do that we need some sort of
paradigm, or model, of what it is that we are doing with the system in order
for it to work. If you study the live model, you can see all of the spatial,
spectral, and temporal aspects of it. No matter who you are or what your
paradigm or ideas are, you must somehow account for a translation of those
characteristics from the live model to the reproduction. If you don't, it
will sound DIFFERENT. You simply cannot put this immense, complex sound
field through two points in space in front of you and expect it to sound the
same.
OK, pause for now.
>> The
>> direct, early reflected, and reverberant sounds will be made to come
>> from the appropriate directions if the playback model mimics a
>> typical live model as closely as possible.
>
> No, they will not, because the information required to do so is not in
> the recorded signal. You state this over and over but never provide
> any mechanism how this can actually happen. The direct, early
> reflected, and reverberant sounds *on the recording* cannot be
> disambiguated from the time variant scalar signal, and thus all of
> these pieces of the original acoustic will be reflected equally from
> all directions, not just the appropriate directions. The effect may
> well sound "realistic" to you, fine, but what you claim happens
> simply defies physics.
Yes, they can. I will only say this once, and hope that you latch onto it.
The Image Model is a spatially arrayed, temporally delayed, spectrally
shaped sound field synthesizer that attempts to decode the direct and early
reflected sound contained in the recording in much the same way as a Dolby
Pro Logic delay system can bring out the ambience contained in the recording
without destroying the soundstage that belongs in front of the room.
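If a rough illustration helps, the old passive difference-signal trick (a
Hafler-style L minus R feed, delayed) captures the general idea. This is not
Dolby's actual steering logic, and the delay and gain figures below are just
made-up examples, but in Python it might look like:

import numpy as np

def extract_ambience(left, right, fs, delay_ms=15.0, gain=0.5):
    # Passive difference-signal ambience extraction (Hafler-style L minus R).
    # Correlated center material largely cancels; mostly the decorrelated
    # reverberant content survives. The delay keeps first arrivals anchored
    # at the front speakers (precedence effect). Not Dolby Pro Logic
    # steering - just the general idea, with invented delay and gain values.
    ambience = left - right
    delay = int(round(delay_ms * 1e-3 * fs))
    surround = np.zeros_like(ambience)
    surround[delay:] = gain * ambience[:-delay]
    return surround

# Toy test: a correlated "center" tone plus uncorrelated "hall" noise.
fs = 48000
t = np.arange(fs) / fs
center = np.sin(2 * np.pi * 440 * t)
left = center + 0.1 * np.random.randn(fs)
right = center + 0.1 * np.random.randn(fs)
surround = extract_ambience(left, right, fs)
print("RMS of extracted ambience:", round(float(np.sqrt(np.mean(surround ** 2))), 4))

The point is only that the correlated, front-and-center material cancels in
the difference signal while the decorrelated reverberant content survives,
and the added delay keeps the first arrival anchored up front.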
The recorded signal contains a stream, or train, of pulses from the first
arrival transients to the recorded reverberation from spatially separate
areas around the instruments. The Image Model features two real speakers
that just happen to be closest to you of the 8 in the model. THEREFORE,
first arrival transients will be heard from the actual speakers first, and
this precedence effect is a very strong one, psychoacoustically speaking,
and results in a separation between first arrival and later reverberation
contained in the recording. There can be only one first arrival, and it has
to come from the actual speakers and nowhere else.
So the mechanism that you ask for is the precedence effect, and it works the
same in the Model as it does in a delay-based surround ambience extraction
system. If the recording contains no ambience, all that happens is a
harmless image shift toward the reflecting surfaces, resulting in
localization of the auditory event a little behind the plane of the
speakers. If there is ambience, there is a spatial broadening effect as per
well known principles.
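To put rough numbers on that precedence argument - the speaker and wall
distances here are hypothetical examples, not measurements of my room - the
extra path length of a wall bounce translates into a delay like this:

C = 343.0                    # speed of sound, m/s
direct_path = 3.0            # listener-to-speaker distance, metres (assumed)
wall_behind_speaker = 1.5    # speaker-to-front-wall distance, metres (assumed)
bounce_path = direct_path + 2 * wall_behind_speaker  # rear output, out and back

delay_ms = (bounce_path - direct_path) / C * 1000.0
print(f"reflection arrives {delay_ms:.1f} ms after the direct sound")
print("within the roughly 1-30 ms precedence window:", 1.0 <= delay_ms <= 30.0)

A few milliseconds of extra path keeps the reflection inside the window where
the ear fuses it with the first arrival instead of hearing a separate source.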
>
> And the primary question that you've refused, many times, to answer
> is: If I, and Dick Pierce, and AE all sat down in your listening
> room, with your IMT optimized system, and we all felt that it did indeed
> sound contrived, unrealistic, with sound "splashed all over the
> walls" as AE describes it, *are we wrong*? This is a simple binary
> that you never answer - and won't now unless I miss my guess -
> because to answer, either way, would be to recognize that "realism"
> is in the ear of the beholder. There are many avenues to get near
> that goal, but you seem to want your method to be recognized as the
> Grail, and thus never answer the simple question regarding preference.
>
> Keith
That is one of the most fascinating aspects of this hobby. Seems like you
can't sit two of us down and have us agree on which is the best system.
Seems like we should have all gravitated to a system with certain common
characteristics by this late stage, but it may never happen. Perceptual
abilities and experience vary. I think that very few people pay that much
attention to these spatial factors in listening to music. John Atkinson has
a theory that some people can't hear stereo. I'm not sure. All I do know is
that I listen intently for these imaging factors every time I listen to any
system. I wish you could hear my system so that this would not be so
theoretical.
Gary Eickmeier
Gary Eickmeier
April 20th 13, 02:25 PM
Dick Pierce wrote:
> I am willing to deal with Mr. Eickmeier only if he is willing to address
> the specific objections I and others have brought forth. If he
> wants to answer the points I have brought up below and previously,
> fine, we can have a discussion.
>
> I have written several specific questions, and I would like
> Mr. Eickmeier to attempt to answer them with appropriate,
> relevant answers. I don't necessarily care what the answers
> are, just that he address them.
> Take his statement:
>
> "The paper was written after my basic Image Model Theory
> paper was rejected by the people at the AES who do the
> peer reviewing for publication. I asked them for comments
> and it showed that they didn't understand a word I said."
>
> I would be willing to bet good hard cash that the comments did
> not say "we don't understand a word you said," or anything like
> that. Mr. Eickmeier, do you want to take the bet?
No. Silly question.
> Instead, why don't you actually publish their comments? Leave it
> to the collective to decide what they said and what they meant.
> Publish them right here.
If I can find them - and I think I can - I could extract from them. Would
that be OK? But you would have to read my first paper before you could tell.
Again, to be perfectly clear, what I said was that I wrote the second paper,
the Mars paper, as a result of the comments I got from the reviewers.
> Specific question: Mr. Eickmeier: what EXACTLY were the objections
> raised by the AES review committee?
I will take a look. So long ago! Might be fun.
>
> You keep talking about your "theory" yet, no intent to insult,
> your approach is about as antithetical to the scientific process
> as one might imagine.
>
> Part of the scientific process when you proffer something that
> is new is that it has to withstand a rather brutal gauntlet of
> valid, skeptical examination and criticism. And real scientists
> have to be willing to realize that they might be wrong. I think
> it's clear, simply by the record in front of us, you have been
> anything but: you have dismissed any contrary view of your theory
> out of hand, in one case, stating that a group of acknowledged
> experts simply "didn't understand a word [you] said."
Didn't understand a word I said about the difference between a field-type
system and a head-related system.
> First, Mr. Eickmeier, how about explicitly addressing the objections?
> Instead, you have been evasive and diversionary.
>
> Now, let's look at what YOU need to do to have your hypothesis
> taken seriously. And that is, if you want to call it a theory,
> you have to follow the rules.
>
> A hypothesis, to be taken seriously, must make predictions
> that are testable. This is the principle of "falsifiability."
> The tests must be able to be performed by any reasonably
> competent party and generate clear outcomes which can then
> be interpreted by reasonable independent parties as to whether
> they support or refute your hypothesis.
>
> Specific question:
> Do you understand the scientific model?
> Do you understand the principles of testability
> and falsifiability?
Yes.
> Let's take a real example: Einstein's General Theory of Relativity
> makes very specific predictions that are testable. For example,
> it says that the sun's gravity causes a warping of space time,
> and that, as a result, the apparent positions of stars close to
> the sun should be changed by a certain amount. This is a testable
> prediction, and makes this particular element of the theory
> falsifiable: measure the positions of stars close to the sun: if
> they change by the predicted amount, that supports that aspect of
> the theory. If they do not, that refutes that aspect of the theory.
>
> Specific question:
> What testable predictions does your model make that an
> independent party can test and then examine the outcomes,
> determining if that aspect of your theory making the
> prediction is supported or refuted by the outcomes?
Simply that my system sounds different from a direct firing speaker system,
and the difference is an improvement. That was tested somewhat in The
Challenge experiment. Most engineers and experimenters know that radiation
pattern and room positioning are easily audible. Dr. Mark Davis has said
that the frequency response and radiation pattern are the major determinants
of the sound of a speaker. I have added room positioning. So the question is
how to optimize those and why. I have proposed a comprehensive model and the
rationale for it, and I listen to the result every day. This is not a
pipedream or hypothesis on whether the idea might work and be an
improvement. The 901s are a pretty good substitute for my ideal speaker
until I can build one that suits me, but they are not precisely what I am
talking about, and the Bose products have nothing to do with this
discussion.
>
> I see none. Therefore, based on the widely accepted usage of the
> term "theory" in the scientific realm, I assert you have no theory.
> Again, this is not meant as an insult, but a statement of fact as
> I interpret it.
Address the ideas, not the definition of a theory vs a hypothesis. I really
don't care what you call it, just please begin talking to me about it.
> Let me provide a counterexample: a "theory" which states that
> a person can levitate themselves only when no one is looking.
> That theory is not testable and therefore not falsifiable. There
> is no observation, no test that can be performed by an independent
> party that results in an outcome that either supports or refutes
> the theory. How does one, for example, observe the levitation
> when observation itself prevents levitation?
>
> Specific question:
> Do you understand why this example does not qualify
> as a valid theory?
Yes. Because it is silly.
>
> Specific question:
> Under the principles of the scientific method, does your
> "model" qualify as a valid theory or not?
Yes.
> And because your claims do not have the foundations of a
> falsifiable theory, allowing others to independently and
> objectively test its predictions, statements YOU have made
> like:
>
> "But no, as usual, you haven't explained anything about
> my theory,"
>
> incline me to simply dismiss your claims out of hand. You seem
> to be utterly unwilling to meet the burden of proof, the burden
> that is ENTIRELY yours and no one else's, necessary for your
> "thoery" to be taken seriously.
>
> And, I would expect, that is really the reason why the AES
> review committee rejected your paper.
Again, and finally, a series of double blind listening tests could refute my
claims.
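Scoring such a test is not hard, by the way. Here is a minimal sketch in
Python - the trial count and the results are invented purely for
illustration - of how a blind A/B preference run could be checked against
chance:

from math import comb

def p_at_least(successes, trials, p=0.5):
    # One-sided tail probability under the null hypothesis of guessing /
    # no real preference: P(X >= successes) for X ~ Binomial(trials, p).
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))

# Invented example: 16 blind trials, the reflected-field setup preferred in 13.
trials, hits = 16, 13
print(f"{hits}/{trials} preferences, one-sided p = {p_at_least(hits, trials):.4f}")

A result hovering around 50/50 would refute the claim of a reliable
preference; a lopsided result would support it.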
> Specific question:
> Is it POSSIBLE that the reason your paper was rejected is
> because it failed to meet the criteria of a valid theory,
> and NOT because "they didn't understand a word [you] said?"
Sure, it is possible. But I don't care about that. All I care about is
putting the concept out there for others to ponder, possibly experiment
with. There is a lot about it that I have not nailed down yet, and how
"hard" or "soft" the actual precision of the radiation pattern needs to be
is one of them. Seems to me from my brief experience with prototypes that it
isn't all that critical. I hope not.
Again, I didn't really expect it to get published without a series of
listening tests and a lot of experimentation to report. But what I did not
expect was that they wouldn't even know the difference between binaural and
stereophonic. I must go find those critiques. It was incredible enough that
I had them run it past a second reader just to see if it was a fluke.
>
> Specific question:
> Is it possible that if they really didn't understand a word
> you said, it might be because of either what you said or how
> you said it?
A word I said about the difference between.... never mind, I will go find
them.
Thanks for the response, sorry I have been so long in answering specifics -
just a communication thing. Apologies also to AE for hijacking his thread.
But we are still on topic, so hope he doesn't mind.
Gary Eickmeier
Gary Eickmeier
April 20th 13, 02:25 PM
Dick Pierce wrote:
> Specific question: Mr. Eickmeier: what EXACTLY were the objections
> raised by the AES review committee?
Found it!
The paragraph that showed that the first reviewer didn't understand the
distinction and importance of binaural vs. stereophonic was this:
"This manuscript is also marred by the use of comparisons that make no
sense; for example, the author compares 'reproducing ear signals' with
'reproducing the orchestra itself,' etc."
So the whole discussion of binaural vs stereophonic, which I put in there to
emphasize the importance of understanding that stereo is a field-type system
and not "two ears, two speakers" went right over his head.
So the Mars paper illustrated the difference in a unique and clever way, by
showing that a field-type system has nothing to do with the number of ears
on your head, the spacing between them, or the HRTF.
Well, I'm nodding off now, but you would have to read the basic paper to
understand why this distinction is so important, then the Mars paper for my
elaboration of same so that no one would ever again be confused by it.
Gary Eickmeier
Dick Pierce[_2_]
April 20th 13, 05:07 PM
Gary Eickmeier wrote:
> Dick Pierce wrote:
>>Specific question:
>> What testable predictions does your model make that an
>> independent party can test and then examine the outcomes,
>> determining if that aspect of your theory making the
>> prediction is supported or refuted by the outcomes?
>
> Simply that my system sounds different from a direct firing speaker system,
I think you will be granted that a priori. That's not a prediction,
that's a fact.
> and the difference is an improvement.
Define "improvement." You can't, it's a personal preference.
Are you saying that YOUR personal preference trumps the
personal preference of those that do not agree with you?
> That was tested somewhat in The
> Challenge experiment. Most engineers and experimenters know that radiation
> pattern and room positioning are easily audible. Dr. Mark Davis has said
> that the frequency response and radiation pattern are the major determinants
> of the sound of a speaker. I have added room positioning. So the question is
> how to optimize those and why. I have proposed a comprehensive model and the
> rationale for it, and I listen to the result every day. This is not a
> pipedream or hypothesis on whether the idea might work and be an
> improvement.
NONE of this is testable. These are personal opinions and
preferences disguised as a "theory."
>>I see none. Therefore, based on the widely accepted usage of the
>>term "theory" in the scientific realm, I assert you have no theory.
>>Again, this is not meant as an insult, but a statement of fact as
>>I interpret it.
>
> Address the ideas, not the definition of a theory vs a hypothesis. I really
> don't care what you call it, just please begin talking to me about it.
No, because you abjectly refuse to treat contrary views to
your opinion as valid criticism; instead you degenerate to
essentially ad hominem attacks.
>>Specific question:
>> Under the principles of the scientific method, does your
>> "model" qualify as a valid theory or not?
>
> Yes.
No. You have failed to provide any objective predictions
that can unambiguously provide results that are independently
testable.
>>And because your claims do not have the foundations of a
>>falsifiable theory, allowing others to independently and
>>objectively test its predictions, statements YOU have made
>>like:
>>
>> "But no, as usual, you haven't explained anything about
>> my theory,"
>>
>>incline me to simply dismiss your claims out of hand. You seem
>>to be utterly unwilling to meet the burden of proof, the burden
>>that is ENTIRELY yours and no one else's, necessary for your
>>"thoery" to be taken seriously.
>>
>>And, I would expect, that is really the reason why the AES
>>review committee rejected your paper.
>
>
> Again, and finally, a series of double blind listening tests could refute my
> claims.
Refute how? What are you comparing your results to? How would you
construct such a double-blind experiment? What is being tested?
As you mentioned elsewhere, your "test" would be nothing more than
a preference test, and the way you have acted heretofore, anyone
expressing a different preference is, in your book, "wrong,"
and someone "who doesn't understand a word you're saying."
In short, Mr. Eickmeier, you've come up with an arrangement that, for
whatever personal reasons, you like a lot. That's fine. No one
is objecting to that. Your opinion is that it works well for YOU.
No disagreement from ANYONE is forthcoming on that.
But that's YOUR opinion, YOUR preference. That's IT. That's as far
as you get to go without meeting a MUCH higher standard of proof
than you've been able to muster thus far.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
Dick Pierce[_2_]
April 20th 13, 06:09 PM
Gary Eickmeier wrote:
> Dick Pierce wrote:
>>Let me provide a counterexample: a "theory" which states that
>>a person can levitate themselves only when no one is looking.
>>That theory is not testable and therefore not falsifiable. There
>>is no observation, no test that can be performed by an independent
>>party that results in an outcome that either supports or refutes
>>the theory. How does one, for example, observe the levitation
>>when observation itself prevents levitation?
>>
>>Specific question:
>> Do you understand why this example does not qualify
>> as a valid theory?
>
> Yes. Because it is silly.
Wrong, 100% wrong.
It is not a valid theory because it is not falsifiable. There
does not exist a test whose outcome can clearly demonstrate
whether the theory is supported or refuted.
Whether it's "silly" or not is a subjective evaluation,
not a testable prediction.
Sorry, Mr. Eickmeier, I don't have much faith that you
really do understand the underlying principles.
--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
KH
April 21st 13, 03:34 AM
On 4/19/2013 6:54 PM, Gary Eickmeier wrote:
> KH wrote:
>> On 4/19/2013 4:47 AM, Gary Eickmeier wrote:
>
> I never said or intended to imply that you were a nutcase. Merely
>> that you have consistently, and persistently, dismissed all
>> dissenters of your theory as idiots, or "poor dumb *******s". You
>> will find few people who are interested in discussions with such a
>> person. One mans "mysterious and combative" is another mans
>> "arrogant and condescending".
>
> I guess I'm not going to outlive that one, so let me 'splain. Have you ever
> seen "Patton" the movie? In it, George C. Scott makes a pep talk to the
> troops before going into battle. He says something like "the goal is not to
> die for your country - it is to make the other poor dumb ******* die for HIS
> country." I always thought that was funny, and my statement is nothing more
> than that - a joke.
Were this an isolated incident, your explanation would be far more
believable.
>
>> Perhaps you've heard the aphorism "If the pupil hasn't learned, the
>> teacher hasn't taught"?
>
> I am an Air Force instructor, and I know how to teach. I hope I have made up
> for past losses.
IMO, no. I'm quite sure I have, and have had, a pretty clear picture of
what you've done, how you see your 'model', and why you believe it
works. Your evidence is clear - it works for you. It's your
understanding of the mechanism behind what you're doing, and the
applicability of that method outside your own preference, that I question.
<snip>
>
> I have been told that it is more of a hypothesis, but I disagree. I am
> doing it in my home and listening to the result every day.
Which means nothing about your model other than it creates what you like.
>
>>> I would say that the main theoretical basis of the whole concept is
>>> to get the SPATIAL characteristic correct within your room by means
>>> of physically reconstructing the important aspects of the whole
>>> acoustical situation.
I would agree with you. Unfortunately, reconstruction of the "whole
acoustical situation" is not possible, for reasons provided many, many
times. That you can create a soundfield in your room that sounds real
*to you* doesn't make it a theory. Realism is a matter of preference.
Pure and simple.
>>
>> And yet, you cannot, or will not, accept that you do not have, within
>> the recorded signal, the information required to do that accurately. So
>> you construct, not reconstruct, a reflected field that sounds to
>> you like what you feel are the "important aspects" of the venue.
>
> OK I think this is the place where I can go into my next speech (lesson).
As a means of, yet again, failing to answer simple questions, or address
observations directly.
> You say there is a problem that the recording doesn't contain enough
> information to determine all of the spatial aspects of the recorded venue.
I say that it contains NO directional information. Obviously it
contains spatial clues in the form of delayed and attenuated information
from the reverberant field. These effects clearly can be interpreted as
a sense of spaciousness. Spaciousness is an attribute unrelated to
direction, and directional information is what you need for your model
to work the way you seem to think that it works.
> So therefore we cannot reconstruct it at home. So my question to you would
> be, what are you doing about it? Do you just give up on the concept of
> stereo?
I'm quite happy with my 'concept' and implementation of stereo. Given
the limits of commercially available recorded music, my stereo is not
"broken", and is not in need of some novel replay concept to "fix" it.
You are alone, as far as I can tell, in your perception that some
"stereo crisis" exists.
>
> No, you don't. You handle the problem in much the same way as I do. You
> place two (at least two) speakers in front of you in a room, so that the
> lateral localization might be brought out on playback. You decide where the
> speakers will go and how to treat the room and where to sit. This is an
> attempt to reconstruct the spatial characteristic contained in the
> recording.
So far, so good.
> You cannot, for example, put one speaker on top of the other, or
> even on opposite sides of you, you must construct a soundstage in front of
> you with a reasonable resemblance to the geometry of the original, which is
> almost always a presentation in front of you with a certain lateral spread
> that we - and the Acoustical Society of America, and most band leaders and
> producers, have come to know and love.
OK.
>
> The main difference between us is that you don't take it as far as I do.
No, you go in an altogether different direction; not further along the
same continuum.
> Remember The Big Three that Linkwitz asked about? We both need to decide on
> those factors in our reproduction, and to do that we need some sort of
> paradigm, or model, of what it is that we are doing with the system in order
> for it to work.
Throughout history most things have had models or theories generated to
explain how things worked, not the other way around.
That said, are you really implying that no one designs speakers against
physical models of how acoustics and reproduction work?
> If you study the live model, you can see all of the spatial,
> spectral, and temporal aspects of it. No matter who you are or what your
> paradigm or ideas are, you must somehow account for a translation of those
> characteristics from the live model to the reproduction. If you don't, it
> will sound DIFFERENT. You simply cannot put this immense, complex sound
> field through two points in space in front of you and expect it to sound the
> same.
No matter WHAT you do, it will sound different. That is a simple fact
of physics.
> OK, pause for now.
>
>>> The
>>> direct, early reflected, and reverberant sounds will be made to come
>>> from the appropriate directions if the playback model mimics a
>>> typical live model as closely as possible.
>>
>> No, they will not, because the information required to do so is not in
>> the recorded signal.
<snip>
>
> Yes, they can. I will only say this once, and hope that you latch onto it.
>
> The Image Model is a spatially arrayed, temporally delayed, spectrally
> shaped sound field synthesizer that attempts to decode the direct and early
> reflected sound contained in the recording in much the same way as a Dolby
> Pro Logic delay system can bring out the ambience contained in the recording
> without destroying the soundstage that belongs in front of the room.
Thank you. You're finally dealing with the electrical and acoustic
reality. You are "synthesizing" something that YOU find to sound
"real". You are absolutely not "decoding" anything, simply because no
directional information was "encoded" in the signal to start with.
>
> The recorded signal contains a stream, or train, of pulses from the first
> arrival transients to the recorded reverberation from spatially separate
> areas around the instruments. The Image Model features two real speakers
> that just happen to be closest to you of the 8 in the model. THEREFORE,
> first arrival transients will be heard from the actual speakers first, and
> this precedence effect is a very strong one, psychoacoustically speaking,
> and results in a separation between first arrival and later reverberation
> contained in the recording. There can be only one first arrival, and it has
> to come from the actual speakers and nowhere else.
>
> So the mechanism that you ask for is the precedence effect,
Nope, no cigar on that one. I'm quite certain I understand your
"theory", and I'm also quite certain you don't understand my critique,
so let me see if I can make it clearer:
As discussed earlier, there clearly are spatial clues available in the
recording, in the form of the delayed and attenuated reverberant field.
OK so far?
Ok, so now suppose we each sit down in front of our systems. Let's also
suppose that you limit your system to the front two speakers, and
further, that you eliminate the rear-firing drivers. Now, assuming we
play the exact same (good) recording, we will be hearing basically the
same sounds (for discussion, let's stipulate similar forward radiation
patterns, and similar room interactions). Still with me?
Now, in my room, I hear the direct sound, and the room effects, and I
get a sense of spaciousness from the reverb in the recording, and the
stereo effect, and to a small degree from the room interactions. I have
a well defined soundstage, with a proper localization of instruments and
vocals.
Now, in *your* room, one of two situations obtains; you hear *basically*
what I hear, or you hear something significantly different. From how
you frequently describe box speakers, you'll apparently hear no
soundstage, no spaciousness, a "window into another room", or a "hole in
the middle", and/or a flat presentation that is lifeless.
Alright, let's look at these two possible scenarios. In the first, we
hear *basically* the same thing. We both hear the spacial cues in the
recording in the direct sound from the speakers. We are, at this point,
both hearing ALL of the spacial information available on the recording -
we're hearing the entire unprocessed signal, within the limits of our
equipment. Nothing is hiding, nothing awaiting "decoding", we have the
whole tamale. Now, you then take all of this information and direct it
rearward to create a second, wholly synthesized, delayed soundfield
comprising all of the information - including the spacial cues - of the
recording. Every reflection comprising this synthesized field will
contain the entire signal, delayed (to a much lesser degree, and in
different ratios, than in the venue), attenuated (including the already
delayed and attenuated information in the recording), and coming from
directions different than the original. To you, this creates realism.
To me, this creates a sense of smearing that is incompatible with my
sense of realism.
In the case of the second scenario, there's no use for discussion, and
any hypothesis, model, or theory you come up with won't work for me; we
simply don't interpret audio signals in the same manner.
<snip>
> That is one of the most fascinating aspects of this hobby. Seems like you
> can't sit two of us down and have us agree on which is the best system.
Uhmm, yes. That is the point.
> Seems like we should have all gravitated to a system with certain common
> characteristics by this late stage, but it may never happen.
While you really seem to believe this, I am baffled as to why. This
seems the genesis of your mistaken belief that there is A paradigm, or
some fundamental TRVTH that would be universally applicable. There is
no realm of human perception, that I'm aware of, where such coalescing
has taken place. Not in food, art, music, literature, sport, or even in
the evaluation of human beauty. I fail to understand why you believe
interpretation of sound should buck the evolutionary tide where no other
class of perception has.
> Perceptual
> abilities and experience vary.
Why not just admit that perceptions and preferences vary, instead of
insinuating that dissenters are perceptually challenged, and thus *wrong*?
> I think that very few people pay that much
> attention to these spatial factors in listening to music.
You're free to think that, but don't expect anyone here to agree that
they are in that "group" however small or large it may be.
Keith
Audio_Empire
April 21st 13, 10:19 PM
In article >, KH >
wrote:
<snip>
> I say that it contains NO directional information. Obviously it
> contains spatial clues in the form of delayed and attenuated information
> from the reverberant field. These effects clearly can be interpreted as
> a sense of spaciousness. Spaciousness is an attribute unrelated to
> direction, and directional information is what you need for your model
> to work the way you seem to think that it works.
It's more than a "sense of spaciousness" as you so blithely put it. Done
correctly, it can provide an accurate audio snapshot of the musical
event. One which can show, with amazingly pin-point accuracy, the
location of every instrument in the sound field. And I don't just mean
right to left either. I mean front to back, and top to bottom. You can
tell, for instance, if certain instruments are in front of, or behind
others, and whether or not some instruments (or voices) are on risers.
That's a lot of information from "delayed and attenuated" information.
Shows how remarkable the human ear/brain interface is at deciphering
clues about directionality.
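As a small illustration of how much localization can ride on nothing more
than inter-channel time and level differences, here is a toy Python sketch.
The sample rate, offset, and gain are invented, and a real recording is far
richer than this spaced-pair caricature, but even so the cue is trivially
recoverable:

import numpy as np

fs = 48000                                  # sample rate (assumed)
n = 4096
rng = np.random.default_rng(0)
source = rng.standard_normal(n)             # stand-in for an instrument

# Toy "spaced pair": the right channel gets the source about 0.4 ms later
# and slightly quieter than the left (values invented for illustration).
offset = int(0.0004 * fs)                   # 19 samples
left = source
right = np.zeros(n)
right[offset:] = 0.8 * source[:-offset]

# Cross-correlate the channels to recover the inter-channel time difference.
lags = np.arange(-n + 1, n)
xc = np.correlate(right, left, mode="full")
best_lag = lags[np.argmax(xc)]
print(f"recovered inter-channel delay: {best_lag / fs * 1e3:.2f} ms "
      f"(true: {offset / fs * 1e3:.2f} ms)")

A real recording layers many such time and level cues per instrument, plus
the hall's reflections, and that is what the ear/brain untangles.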
> > So therefore we cannot reconstruct it at home. So my question to you would
> > be, what are you doing about it? Do you just give up on the concept of
> > stereo?
>
> I'm quite happy with my 'concept' and implementation of stereo. Given
> the limits of commercially available recorded music, my stereo is not
> "broken", and is not in need of some novel replay concept to "fix" it.
> You are alone, as far as I can tell, in your perception that some
> "stereo crisis" exists.
The only "stereo crises" that exists as far as I can see is the fact
that so few record company producers and engineers properly exploit the
tools and techniques available to them and don't give music lovers
enough proper "real" stereo product. Many seem to share the general
public's misconception that "stereo" only means "two channels" and so
that's all they care about. Make sure that release has a left and a
right channel. No matter how that's done or what's in them.
Gary Eickmeier
April 22nd 13, 03:33 PM
KH wrote:
> I say that it contains NO directional information. Obviously it
> contains spatial clues in the form of delayed and attenuated
> information from the reverberant field. These effects clearly can be
> interpreted as a sense of spaciousness. Spaciousness is an attribute
> unrelated to direction, and directional information is what you need
> for your model to work the way you seem to think that it works.
This is a most extraordinary statement. Spaciousness is unrelated to
direction? Was that a typo?
Let me relate an allegory that tries to show the difference between the
spatial and the temporal.
A novice goes to Best Buy and purchases a surround sound home theater in a
box. It has the 5.1 speakers in it, but he loses the instruction sheet. So
he places all of the speakers up front around the TV and the subwoofer
underneath on the floor. His left, right, and center speakers are placed OK, on the
left and right side of the big TV, but his surround speakers have been
placed on top of them. He calls you up and complains that he is not
satisfied, just not hearing all of those fancy spatial effects that are
supposed to be in the movie. So you go over and behold what he has done, and
instruct him that he has gotten some of it right, with the left, center, and
right speakers, but he has not got the spatial aspect correct. His perfectly
accurate speakers are playing all of the sounds contained in the recording;
he can hear the temporal effects of the reverberation and reverb time of
the hall and discrete effects, but these spatial effects must come from
different incident angles than the direct sound in order to work.
In acoustics, many sources note that in order for the early reflections to
work, they must come from a different set of incident angles than the direct
sound, or else most of them will be masked. So you cannot simply "play" the
stereo recording and have all of the recorded ambience come from the same
point sources as the direct sound, or it will not be heard as ambience -
just smear, really.
I have found that it is difficult for most hobbyists (at first) to
distinguish in their minds between the spatial and the temporal. The
temporal from the live venue is contained in the recording, but the spatial
effects must come from a different set of incident angles than the direct
sound. Some people incorporate side speakers with time delay for this
reason. I use reflection in a way that mimics a typical live sound field.
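If it helps to see what "a different set of incident angles" means in plain
numbers, here is a small Python sketch of the textbook first-order
image-source calculation for a rectangular room. The room size and the
source and listener positions are invented for illustration, not my actual
room:

import math

# Hypothetical rectangular room (metres): walls at x = 0, 6 and y = 0, 8.
ROOM_X, ROOM_Y = 6.0, 8.0
C = 343.0                          # speed of sound, m/s
source = (2.0, 6.0)                # a loudspeaker (assumed position)
listener = (3.0, 2.0)              # the listening seat (assumed position)

def mirror(p, axis, wall):
    # First-order image source: mirror the real source across one wall.
    x, y = p
    return (2 * wall - x, y) if axis == "x" else (x, 2 * wall - y)

def arrival(p):
    # Distance, time of flight (ms), and bearing (deg, 0 = straight ahead,
    # here taken as the +y direction) as heard at the listening seat.
    dx, dy = p[0] - listener[0], p[1] - listener[1]
    dist = math.hypot(dx, dy)
    return dist, dist / C * 1000.0, math.degrees(math.atan2(dx, dy))

images = {
    "left wall":  mirror(source, "x", 0.0),
    "right wall": mirror(source, "x", ROOM_X),
    "front wall": mirror(source, "y", ROOM_Y),
    "rear wall":  mirror(source, "y", 0.0),
}

d0, t0, az0 = arrival(source)
print(f"direct       {t0:5.1f} ms   from {az0:+6.1f} deg")
for name, img in images.items():
    d, t, az = arrival(img)
    print(f"{name:11s}  {t:5.1f} ms   from {az:+6.1f} deg   "
          f"(+{t - t0:4.1f} ms, {20 * math.log10(d0 / d):+.1f} dB by 1/r)")

The reflections arrive several milliseconds after the direct sound, a few dB
down, and from bearings well off the direct path - which is exactly the
separation of incident angles the masking literature is talking about.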
> Throughout history most things have had models or theories generated
> to explain how things worked, not the other way around.
Yes, the image model explains how it works to model the spatial aspects of
the live sound.
>
> That said, are you really implying that no one designs speakers
> against physical models of how acoustics and reproduction work?
YES! Most of them learned audio in the mono era, when the charge was just
"let's make sound." When stereo came along, they just assumed that the same
speakers could be used, just that you need two of them.
>
> As discussed earlier, there clearly are spatial clues available in the
> recording, in the form of the delayed and attenuated reverberant
> field. OK so far?
>
> Ok, so now suppose we each sit down in front of our systems. Let's
> also suppose that you limit your system to the front two speakers, and
> further, that you eliminate the rear-firing drivers. Now, assuming we
> play the exact same (good) recording, we will be hearing basically the
> same sounds (for discussion, let's stipulate similar forward radiation
> patterns, and similar room interactions). Still with me?
>
> Now, in my room, I hear the direct sound, and the room effects, and I
> get a sense of spaciousness from the reverb in the recording, and the
> stereo effect, and to a small degree from the room interactions. I
> have a well defined soundstage, with a proper localization of
> instruments and vocals.
Not so fast. I hope from the discussion above you might be able to see that
you don't just "hear" the reverb in the recording, like in the mono days. In
order to be heard properly as reverberation (early reflections, actually),
it must come from different incident angles than the direct sound.
>
> Now, in *your* room, one of two situations obtains; you hear
> *basically* what I hear, or you hear something significantly
> different. From how you frequently describe box speakers, you'll
> apparently hear no
> soundstage, no spaciousness, a "window into another room", or a "hole
> in the middle", and/or a flat presentation that is lifeless.
>
> Alright, let's look at these two possible scenarios. In the first, we
> hear *basically* the same thing. We both hear the spacial cues in the
> recording in the direct sound from the speakers. We are, at this
> point, both hearing ALL of the spacial information available on the
> recording - we're hearing the entire unprocessed signal, within the
> limits of our equipment. Nothing is hiding, nothing awaiting
> "decoding", we have the whole tamale. Now, you then take all of this
> information and direct it rearward to create a second, wholly
> synthesized, delayed soundfield comprising all of the information -
> including the spacial cues - of the recording. Every reflection
> comprising this synthesized field will contain the entire signal,
> delayed (to a much lesser degree, and in different ratios, than in
> the venue), attenuated (including the already delayed and attenuated
> information in the recording), and coming from directions different
> than the original. To you, this creates realism. To me, this creates a
> sense of smearing that is incompatible with my
> sense of realism.
Not quite. Take a couple of examples from the real world of audio: the
Wilson WAMM vs. the MBL omni. The WAMM might provide your kind of sound,
giant direct-firing boxes that aim their output at your head. The MBLs, on
the other hand, are totally different in the spatial department, being
omnis. Their sound has been described as huge, spacious, and deep, with a
sense of a soundstage floating, surrounded by the ambience in the recording.
Two very different designs that sound different, because the radiation
pattern is very audible. Neither speaker is more "accurate" than the other,
but the MBL gets the spatial characteristic better arrayed on playback.
> In the case of the second scenario, there's no use for discussion, and
> any hypothesis, model, or theory you come up with won't work for me;
> we simply don't interpret audio signals in the same manner.
I am sure that we do; it's just that you don't as yet understand why
different systems sound the way they do, some better than others, some
worse. I say that whenever you hear superior spaciousness and
depth in a recording, it is not due to the accuracy of the direct sound, but
rather to the different spatial characteristics of the speakers.
I have started another thread I hope you will find interesting. It
simplifies the larger example of typical rooms and sources down to a single
instrument.
Gary Eickmeier
KH
April 22nd 13, 03:33 PM
On 4/21/2013 2:19 PM, Audio_Empire wrote:
> In article >, KH >
> wrote:
> <snip>
>
>> I say that it contains NO directional information. Obviously it
>> contains spatial clues in the form of delayed and attenuated information
>> from the reverberant field. These effects clearly can be interpreted as
>> a sense of spaciousness. Spaciousness is an attribute unrelated to
>> direction, and directional information is what you need for your model
>> to work the way you seem to think that it works.
>
> It's more than a "sense of spaciousness" as you so blithely put it. Done
> correctly, it can provide an accurate audio snapshot of the musical
> event.
Really? How exactly is the directional information encoded in the
recorded signal?
> One which can show, with amazingly pin-point accuracy, the
> location of every instrument in the sound field. And I don't just mean
> right to left either. I mean front to back, and top to bottom. You can
> tell, for instance, if certain instruments are in front of, or behind
> others, and whether or not some instruments (or voices) are on risers.
> That's a lot of information from "delayed and attenuated" information.
And that differs from what I said...how?
> Shows how remarkable the human ear/brain interface is at deciphering
> clues about directionality.
>
>>> So therefore we cannot reconstruct it at home. So my question to you would
>>> be, what are you doing about it? Do you just give up on the concept of
>>> stereo?
>>
>> I'm quite happy with my 'concept' and implementation of stereo. Given
>> the limits of commercially available recorded music, my stereo is not
>> "broken", and is not in need of some novel replay concept to "fix" it.
>> You are alone, as far as I can tell, in your perception that some
>> "stereo crisis" exists.
>
> The only "stereo crises" that exists as far as I can see is the fact
> that so few record company producers and engineers properly exploit the
> tools and techniques available to them and don't give music lovers
> enough proper "real" stereo product.
I believe I've said that a number of times. And?
> Many seem to share the general
> public's misconception that "stereo" only means "two channels" and so
> that's all they care about. Make sure that release has a left and a
> right channel. No matter how that's done or what's in them.
"Stereo" typically does mean 2-channel. The general public doesn't have
a misconception in this regard. They need only look at the VAST
majority of "stereo" recordings to see what "stereo" is typically
construed to mean. *Can* it be different? Yes. Is it typically
different? No.
Keith
Audio_Empire
April 22nd 13, 08:23 PM
In article >, KH >
wrote:
> On 4/21/2013 2:19 PM, Audio_Empire wrote:
> > In article >, KH >
> > wrote:
> > <snip>
> >
> >> I say that it contains NO directional information. Obviously it
> >> contains spatial clues in the form of delayed and attenuated information
> >> from the reverberant field. These effects clearly can be interpreted as
> >> a sense of spaciousness. Spaciousness is an attribute unrelated to
> >> direction, and directional information is what you need for your model
> >> to work the way you seem to think that it works.
> >
> > It's more than a "sense of spaciousness" as you so blithely put it. Done
> > correctly, it can provide an accurate audio snapshot of the musical
> > event.
>
> Really? How exactly is the directional information encoded in the
> recorded signal?
>
> > One which can show, with amazingly pin-point accuracy, the
> > location of every instrument in the sound field. And I don't just mean
> > right to left either. I mean front to back, and top to bottom. You can
> > tell, for instance, if certain instruments are in front of, or behind
> > others, and whether or not some instruments (or voices) are on risers.
> > That's a lot of information from "delayed and attenuated" information.
>
> And that differs from what I said...how?
Who said I was disagreeing with you? I'm merely adding to your
statement.
>
> > Shows how remarkable the human ear/brain interface is at deciphering
> > clues about directionality.
> >
> >>> So therefore we cannot reconstruct it at home. So my question to you would
> >>> be, what are you doing about it? Do you just give up on the concept of
> >>> stereo?
> >>
> >> I'm quite happy with my 'concept' and implementation of stereo. Given
> >> the limits of commercially available recorded music, my stereo is not
> >> "broken", and is not in need of some novel replay concept to "fix" it.
> >> You are alone, as far as I can tell, in your perception that some
> >> "stereo crisis" exists.
> >
> > The only "stereo crises" that exists as far as I can see is the fact
> > that so few record company producers and engineers properly exploit the
> > tools and techniques available to them and don't give music lovers
> > enough proper "real" stereo product.
>
> I believe I've said that a number of times. And?
>
> > Many seem to share the general
> > public's misconception that "stereo" only means "two channels" and so
> > that's all they care about. Make sure that release has a left and a
> > right channel. No matter how that's done or what's in them.
>
> "Stereo" typically does mean 2-channel.
No it doesn't. "Stereo" means solid, or three-dimensional. Most people's
ignorance of the term's provenance doesn't change either the provenance
or the meaning.
>The general public doesn't have
> a misconception in this regard. They need only look at the VAST
> majority of "stereo" recordings to see what "stereo" is typically
> construed to mean. *Can* it be different? Yes. Is it typically
> different? No.
Again, ignorance does not change truth. If the vast hoi polloi think
that a recording only needs to be two-channel to be stereo, that's their
problem, not mine. Those who care, know. Those who don't care, by
definition, don't need to know what stereo's all about.