  #1   Posted to rec.audio.high-end
Gary Eickmeier
Posts: 1,449
Mind Stretchers

To the group:

Three of my posts, which I can still find, have either not shown up in this
thread or not been acknowledged, so I would like to re-send them one at a
time and complete the thought, as it were. Here is the first one:

"KH" wrote in message
...
On 5/28/2012 9:37 AM, Gary Eickmeier wrote:
Hello again Keith -

Thanks for the frank and honest reply. This is what I thought was going on,
but I didn't think anyone else realized it.


Gary, please re-read your last sentence in the context of "us" as readers,
not you as author. Do you not see how implicitly dismissive it is? That
is why I "beat you up" about being condescending previously. I've been
around long enough to remember your earlier forays here, and as I recall,
they devolved similarly. You would do yourself a service if you would
take more care in tone.


Sorry Keith but I really do have something to say. Please stay with me for
this one last post - we are 99% of the way there.

snip

That's not idealism, Gary, that's simply ignoring that there is no "right"
when it comes to listener preference. There are a hundred grape flavors.
Do you believe there should be some unifying theory that would result in
the one, true, grape flavor and everyone would then agree with that
selection?

If you and I disagree about which grape is the most realistic, is one of
us wrong? If you and I disagree about whether a specific stereo
implementation is realistic or not, is one of us wrong? If your answer to
either question is "yes", further discussion is pointless as you're now in
the realm of ideology not acoustical theory.


OK, you are saying designing or selecting speakers is like selecting ice
cream flavors at Baskin-Robbins. Well, I have another analogy for you, and I
think it is quite apt.

I have said that the stereo signal is a concentrate, to be mixed with the
playback room acoustic in a certain way, a way that models itself after the
real thing. Imagine two orangeophiles who are unfamiliar with frozen orange
juice. They love their pure, rich Florida orange juice and they are yearning
to duplicate it. So they select some Sunkist frozen juice and take it home
to their tasting room.

The first one takes his can, opens it, and starts eating the frozen slush
straight out of the can. He figures that if this is made from the real
orange juice he wants to take it straight, for the most accurate experience
of the product. The second orangeophile says no no, you've got entirely the
wrong idea. Watch me. We take the can of frozen concentrate, dump it into
this pitcher, then add a quart of water and mix it in. The first man is
horrified at the inaccuracy of consuming the pure juice that way. The second
one explains why it works that way: He says it may not be as accurate to
mix it with all this water first, but by doing so it is more realistic, much
more like the original orange juice that it was made from and that we are
trying to duplicate. Its temperature is more like the original, and that is
feelable. Its texture is more like the real thing, and that is visible. Its
flavor is more like the real thing, and that is very tasteable. You may
prefer a California product, but we must all understand the basic principle
of mixing it with water in this certain way before we consume it, no matter
who made it.


(where did the recorded ambience info go)

It got converted into two dimensions. Basically, level and arrival time.
How do you think incident angle information is coded into the signal?
That's the HRTF information that is lost in the process. Not because it
wasn't in the venue, and not because the microphone didn't pick it up, but
because it was transduced using a very different instrument than WE use to
hear.
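The reduction Keith describes can be sketched in a few lines. The microphone
spacing, polar patterns, and angles below are illustrative assumptions, not
anything specified in the thread; the point is only that an incident angle at
the recording position survives on tape solely as an inter-channel time
and/or level difference:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def spaced_pair_cues(angle_deg, spacing_m=0.17):
    """For a spaced pair of omni microphones, a source's incident angle
    survives in the recording only as an inter-channel arrival-time
    difference. Returns that difference in milliseconds (the sign tells
    which channel leads)."""
    dt_s = spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    return dt_s * 1000.0

def coincident_cardioid_cues(angle_deg, mic_axis_deg=45.0):
    """For a coincident cardioid pair aimed at +/- mic_axis_deg, the same
    angle survives only as an inter-channel level difference, in dB."""
    def cardioid(theta_deg):  # cardioid polar pattern: 0.5 * (1 + cos(theta))
        return 0.5 * (1.0 + math.cos(math.radians(theta_deg)))
    left = cardioid(angle_deg + mic_axis_deg)   # left mic points left
    right = cardioid(angle_deg - mic_axis_deg)  # right mic points right
    return 20.0 * math.log10(left / right)

# A source 30 degrees left of center, under each recording scheme:
time_cue_ms = spaced_pair_cues(-30.0)
level_cue_db = coincident_cardioid_cues(-30.0)
```

Either way, what reaches the two channels is one time offset and one level
ratio; the full angular field the listener's HRTF would have sampled is gone.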


Here you have a technical misconception. Stereo has nothing to do with HRTF.
That is a binaural, or head-related, process. With stereo, a field-type
system, we are reproducing the object itself in front of us and using our
own natural hearing mechanism and HRTF to listen to it.


Why do you keep conflating stereo and binaural, and assuming everyone but
you confuses the two? If you believe all the spatial cues are in a
stereo recording I would submit that the confusion is yours.


snip

Then I would say that your whole approach is one of redefining what you
think live music *should* sound like. How can one consider the real event
to be anything other than the intended *end* point, not a "stepping off
point"?


Slight miscommunication. Although the reproduction is a new work of art,
some works are made with the goal of the realistic reproduction of the
original, some are a pure construct, such as a synthesizer composition or a
multimiked and highly produced pop or jazz piece. It's all fair game.

The point you repeatedly overlook - audiophiles, IME, all have the exact
same design goal: faithful reproduction of the recorded event. They have
different *preferences* that impact how they perceive the various
implementations designed to realize that goal.


*Most* are right - "trick", "illusion", call it what you will - that is
the goal of stereo. You seem to want to disconnect the reproduction from
the event, preferring to consider the reproduction paramount, and
massaging it to meet your interpretation of realism; an interpretation
untethered from the seminal event, and unconstrained by the desire to
faithfully recreate it. This concept is at odds with the goals of every
audiophile I know, or have conversed with. If this is, indeed, your view,
then I hope you like the role of Sisyphus.


OK, here comes my main point of this whole discussion, my "closer."

We have discussed all of the audible parts of the listening experience in
the EEFs ("What Can We Hear"). We said that the spatial part is the main
stumbling block, the main difference between the reproduction and the real
thing. Think of it as pure physics. If the spatial qualities I discussed are
audible, then we must make some attempt to reproduce them.

The "real thing" comes to us as a primarily reverberant field from a
multiplicity of incident angles.

The reproduction comes to us from just those two points in space.

That difference is seriously audible; they CANNOT sound the same. This is
not a matter of taste, it is a fundamental error in the theory of
reproduction.

What to do, what to do? Look at the spatial problem from the standpoint of
the image model of the real thing and the reproduction. If you can separate
out in your mind the spatial from the temporal for a moment, and if the
recording really does contain some of the early reflected sound from the
venue, then it is more correct to reproduce that part of the sound by
reflecting it from the similar surfaces in your listening room. Also, due to
the closeness of the speakers to you, it is more correct to diminish the
direct to reflected ratio emanating from the speakers. Design a certain
radiation pattern according to Mark Davis that helps the time/intensity
trading and image stability as you go across the room, pull the speakers out
from the walls to make the soundstage three dimensional with similar depth
and spaciousness to the real thing, and you are almost all the way to Image
Model Theory, or IMT.
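The "image model" of a wall reflection can be sketched as the classic
image-source construction: a reflected ray behaves like a delayed, attenuated
copy of the dry signal. The sample rate, extra path length, and wall
absorption below are made-up illustrative values, not parameters from Gary's
theory:

```python
def add_image_source(dry, fs, extra_path_m, wall_absorption=0.3, c=343.0):
    """Mix one mirror-image reflection into a dry signal: the reflected
    ray is the same signal, delayed by its extra path length and
    attenuated by the wall. extra_path_m = reflected path minus direct path."""
    delay = round(fs * extra_path_m / c)  # extra travel time, in samples
    gain = 1.0 - wall_absorption          # crude broadband wall loss
    out = [0.0] * (len(dry) + delay)
    for i, x in enumerate(dry):
        out[i] += x                       # direct sound
        out[i + delay] += gain * x        # reflected "image" copy
    return out

fs = 48000
impulse = [1.0] + [0.0] * 999
wet = add_image_source(impulse, fs, extra_path_m=2.0)  # reflection ~5.8 ms later
```

Whether producing such a copy acoustically (off the listening-room walls) gets
closer to the original field is exactly what the two posters dispute.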

There are many, many more aspects of this that are worth discussing, but I
must leave it there for now. Thanks for listening.


Ah, but taking "the huge, wide set of fields that were recorded
and pipe them all through just two points in space", split in some ratio
between sound radiated directly at the listener and sound directed toward
the front wall, and then listening to the reflected simulation of the
reverberant field, is more accurate? Really? This approach adds spatial
cues that are NOT in the recording, and thus cannot be accurate. There
is no information in the recording that can be used to correctly
"calibrate" some split of direct versus reflected sound to equal the
may prefer the result, great, it's right for you, but you have no reason
to assume that it is universal for other listeners. Looking across current
speaker designs, it would seem quite the opposite in fact.


Yes, I know.

Keith


Gary



  #2   Posted to rec.audio.high-end
KH

On 6/9/2012 2:09 PM, Gary Eickmeier wrote:
To the group:


snip

If you and I disagree about which grape is the most realistic, is one of
us wrong? If you and I disagree about whether a specific stereo
implementation is realistic or not, is one of us wrong? If your answer to
either question is "yes", further discussion is pointless as you're now in
the realm of ideology not acoustical theory.


OK, you are saying designing or selecting speakers is like selecting ice
cream flavors at Baskin-Robbins. Well, I have another analogy for you, and I
think it is quite apt.


No, I did not present an analogy. I asked a very simple, probative
question and your analogy does not address it. Why continue to duck the
question?

I have said that the stereo signal is a concentrate, to be mixed with the
playback room acoustic in a certain way, a way that models itself after the
real thing.


That you have *said* it in no way makes it valid.


snip
(where did the recorded ambience info go)

It got converted into two dimensions. Basically, level and arrival time.
How do you think incident angle information is coded into the signal?
That's the HRTF information that is lost in the process. Not because it
wasn't in the venue, and not because the microphone didn't pick it up, but
because it was transduced using a very different instrument than WE use to
hear.


Here you have a technical misconception.


No, here you have a communication issue.

Stereo has nothing to do with HRTF.


As has been stated repeatedly. You asked "where did the recorded
ambience go?". The answer is that the "ambience" is related directly to
the HRTF of the listener in the venue. Sans listener, there is *ONLY*
temporal and level data available for recording, and when recorded as
such, this data cannot subsequently be used to accurately reproduce
incident angle information. That information is lost in the translation.

That is a binaural, or head-related, process. With stereo, a field-type
system, we are reproducing the object itself in front of us and using our
own natural hearing mechanism and HRTF to listen to it.


Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level cues that, in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information. No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue. Reflecting
the sound cannot, except in the context of listener preference,
ameliorate this constraint.

snip

OK, here comes my main point of this whole discussion, my "closer."

We have discussed all of the audible parts of the listening experience in
the EEFs, What Can We Hear. We said that the spatial part is the main
stumbling block, the main difference between the reproduction and the real
thing. Think of it as pure physics.


Feel free to present some, please.

If the spatial qualities I discussed are
audible, then we must make some attempt to reproduce them.


We can't reproduce them; they are not on the recording. What we can do
is produce an illusion, the quality of which is clearly a function
of both engineering efficacy and listener preference.

The "real thing" comes to us as a primarily reverberant field from a
multiplicity of incident angles.


No, it does not, except in a narrow subset of live events. Many times
the direct component (let's think of outdoor live events, for example,
shall we?) is the dominant component, and sometimes by wide margins.

The reproduction comes to us from just those two points in space.

That difference is seriously audible; they CANNOT sound the same. This is
not a matter of taste, it is a fundamental error in the theory of
reproduction.


It is not an *error* in theory, it is a physical constraint. You need to
understand the difference. The fact that the reproduction cannot be the
same as the original necessitates that listener preference play a
pivotal role in assessment of the realism of the reproduction.


What to do, what to do? Look at the spatial problem from the standpoint of
the image model of the real thing and the reproduction. If you can separate
out in your mind the spatial from the temporal for a moment,


The distinction is clear in my mind, but that differential information
is not present on stereo recordings. You need to clarify, in your mind,
that there is no spatial information in a stereo recording. All spatial
cues are translated to temporal and level information. If you disagree
with me, then pray tell me, what portion of the recorded electrical
signal - which is all we have, after all, come reproduction - represents
spatial information, *separate and unique* from temporal or level
information?

and if the
recording really does contain some of the early reflected sound from the
venue,


It does contain it; translated into TEMPORAL and LEVEL information.

then it is more correct to reproduce that part of the sound by
reflecting it from the similar surfaces in your listening room.


You are not reflecting *that part of the sound*, you are reflecting ALL
of the sound, and in doing so, you reflect the DIRECT portion of the
signal as well. That is clearly inappropriate - it doesn't come to you
that way in the venue, does it? If the spatial component is as important
as you maintain, then reflecting the direct portion of the signal is at
least as egregious an error as ignoring the reverberant part of the signal.

Keith

  #3   Posted to rec.audio.high-end
Gary Eickmeier

"KH" wrote in message
...
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:


Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level cues that, in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information. No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue. Reflecting the
sound cannot, except in the context of listener preference, ameliorate
this constraint.


Keith, I'm not sure what exactly your conceptual problem is, but everyone
knows that stereo operates on temporal and level differences between
channels. You have noticed how those differences can cause the perception of
phantom images between the speakers, right? That spatial information is
encoded into the channels by means of temporal and level differences in the
signals.
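The level-difference mechanism Gary describes here is the basis of ordinary
amplitude panning. A minimal sketch using the standard constant-power pan law
(an illustration of the general principle, not either poster's formulation):

```python
import math

def constant_power_pan(position):
    """position in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain). The squared gains always sum to 1,
    so overall loudness stays roughly constant as the phantom image moves."""
    angle = (position + 1.0) * math.pi / 4.0   # maps [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# An image halfway between center and the right speaker is produced purely
# by a level difference between the two channels:
left, right = constant_power_pan(0.5)
level_diff_db = 20.0 * math.log10(right / left)
```

At position 0 the gains are equal and the listener hears the familiar
"phantom center"; shifting the ratio slides the image along the baseline.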

Now, I have observed that reflecting a part of the sound from room surfaces
can cause an image shift toward the reflecting surfaces. This has a twofold
perceptual impact. One, it causes the sound to go outside the speaker boxes
and appear as an aerial image somewhat behind the plane of the speakers,
seeming like the instruments are right there in the room with you, rather
than coming from speakers. Secondly, it causes an impression of spaciousness
in recordings that contain such information, such as correctly miked
symphonies in a good hall. Most of us have experienced this very audible
difference between directional speakers and more omni type speakers.

OK, fine, now between those two types of sound, one is likely to sound
closer to live than the other. If you think that is just a preference and
worth no further study, then that is the bed you shall lie in. If I think
this is a significant point and worth further study, and try to get others
to notice these effects and help me out, then please don't tell me it is all
pointless because you are not interested. Audiophiles have been trying to
figure out what causes these effects for decades. They have complained about
boxy sounding speakers and the hole in the middle effect and wondered what
makes some systems sound more realistic than others. My theories answer some
very basic questions about very audible effects, and should be studied
further.

Yes, more psychoacoustic investigation is called for, to test these effects
of reflected sound with respect to the playback situation. No, I have not and
cannot do it all on my own. But I need some of those who can do it to pay
attention and see if some of my suggestions on speaker placement and
radiation patterns and room treatment could be true, so that it might help
engineer the installation of stereo systems and the development of new
speakers and maybe recording techniques.

It's a whole deal.

Gary Eickmeier




  #5   Posted to rec.audio.high-end
Audio Empire

On Mon, 11 Jun 2012 06:07:29 -0700, Gary Eickmeier wrote
(in article ):

"KH" wrote in message
...
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:


Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level cues that, in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information. No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue. Reflecting the
sound cannot, except in the context of listener preference, ameliorate
this constraint.


Keith, I'm not sure what exactly your conceptual problem is, but everyone
knows that stereo operates on temporal and level differences between
channels. You have noticed how those differences can cause the perception of
phantom images between the speakers, right? That spatial information is
encoded into the channels by means of temporal and level differences in the
signals.


You both forgot phase differences.



  #6   Posted to rec.audio.high-end
KH

On 6/11/2012 6:07 AM, Gary Eickmeier wrote:
wrote in message
...
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:


Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level cues that, in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information. No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue. Reflecting the
sound cannot, except in the context of listener preference, ameliorate
this constraint.


Keith, I'm not sure what exactly your conceptual problem is,


I'm tempted to believe you. Not convinced, but tempted. I would posit,
however, that the conceptual difficulty appears to be yours.

but everyone
knows that stereo operates on temporal and level differences between
channels. You have noticed how those differences can cause the perception of
phantom images between the speakers, right? That spatial information is
encoded into the channels by means of temporal and level differences in the
signals.


Then quit asking questions like "where did the information go?". The
spatial information you are describing is left/right - that's it. That
information *is* encoded in the signal. Up/down, front/back, that
information is not present in two channel recordings. You can create an
illusion of depth and height - not the same thing.


Now, I have observed that reflecting a part of the sound from room surfaces
can cause an image shift toward the reflecting surfaces. This has a twofold
perceptual impact. One, it causes the sound to go outside the speaker boxes
and appear as an aerial image somewhat behind the plane of the speakers,
seeming like the instruments are right there in the room with you, rather
than coming from speakers.


And I have noticed that this sounds contrived, oversized, diffuse, and
not at all realistic. It is, inarguably, inaccurate since the new
spatial distribution of the reproduction cannot possibly be anywhere
close to the actual event.

Secondly, it causes an impression of spaciousness
in recordings that contain such information, such as correctly miked
symphonies in a good hall. Most of us have experienced this very audible
difference between directional speakers and more omni type speakers.


Yes, we have. Some of us think that's realism, some of us don't.


OK, fine, now between those two types of sound, one is likely to sound
closer to live than the other. If you think that is just a preference and
worth no further study, then that is the bed you shall lie in.


Taking umbrage at a strawman of your construction is hardly helpful.

If I think
this is a significant point and worth further study, and try to get others
to notice these effects and help me out, then please don't tell me it is all
pointless because you are not interested.


Yet another strawman. Please provide a quote that even intimates any
such thought. What I've said, ad nauseam, and what you've ignored rather
perniciously, is that you are ignoring the role of preference, and want
to divorce it from the process. Ignoring preference is as egregious an
error as ignoring the physics or engineering involved.

Audiophiles have been trying to
figure out what causes these effects for decades. They have complained about
boxy sounding speakers and the hole in the middle effect and wondered what
makes some systems sound more realistic than others.


And where are all these audiophiles complaining about "hole in the
middle"? I've counted exactly one...you.

My theories answer some
very basic questions about very audible effects, and should be studied
further.

Yes, more psychoacoustic investigation is called for, to test these effects
of reflected sound with respect to the playback situation. No, I have not and
cannot do it all on my own. But I need some of those who can do it to pay
attention and see if some of my suggestions on speaker placement and
radiation patterns and room treatment could be true, so that it might help
engineer the installation of stereo systems and the development of new
speakers and maybe recording techniques.


So, you have an epiphany that tells you those of us who prefer direct
radiating speakers are nuts, and reflected sound is the only way to have
realism, BUT you need other people to invest time, money, and energy
into exploring *if* you *may* be right about said epiphany?

And, please, explain why you continue to avoid my simple direct question
about whether if you and I disagree about a system being realistic, is
one of us wrong? Could it be that you know full well that either way
you answer requires that you accept that individual preference is a
prime factor? And that, further, there can be no "paradigm" that
explains the whole theory of "correct" stereo reproduction unless you
exclude preference as a variable? Not *the* variable as you attempt to
claim, but a major variable.

Keith

  #7   Posted to rec.audio.high-end
KH

On 6/11/2012 4:52 PM, Audio Empire wrote:
On Mon, 11 Jun 2012 06:07:29 -0700, Gary Eickmeier wrote
(in ):

wrote in message
...
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:


snip

You both forgot phase differences.


What feature of phase difference cannot be characterized as "temporal"
difference?

Keith
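Keith's rhetorical question has a concrete answer: at any single frequency,
an inter-channel phase shift and an inter-channel time delay are
interchangeable descriptions. A one-line conversion (illustrative only):

```python
def phase_to_delay_ms(phase_deg, freq_hz):
    """A fixed inter-channel phase shift at one frequency is
    indistinguishable from a time offset of phase / (360 * f).
    A phase shift that varies with frequency is likewise just a
    frequency-dependent (i.e., still temporal) delay."""
    return (phase_deg / 360.0) / freq_hz * 1000.0

# 90 degrees of inter-channel phase at 1 kHz is the same signal as a
# quarter-millisecond inter-channel delay:
delay_ms = phase_to_delay_ms(90.0, 1000.0)
```

So "phase differences" are not a third cue alongside time and level; they are
one way of describing the temporal cue.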

  #8   Posted to rec.audio.high-end
Gary Eickmeier

"KH" wrote in message
...

So, you have an epiphany that tells you those of us who prefer direct
radiating speakers are nuts, and reflected sound is the only way to have
realism, BUT you need other people to invest time, money, and energy into
exploring *if* you *may* be right about said epiphany?


I used no such terms. I explained the perceptual effects and correlated them
to radiation pattern etc.

And, please, explain why you continue to avoid my simple direct question
about whether if you and I disagree about a system being realistic, is one
of us wrong? Could it be that you know full well that either way you
answer requires that you accept that individual preference is a prime
factor? And that, further, there can be no "paradigm" that explains the
whole theory of "correct" stereo reproduction unless you exclude
preference as a variable? Not *the* variable as you attempt to claim, but
a major variable.


Keith, Floyd Toole's entire career has been devoted to a series of studies
on listener preferences in various loudspeaker and room tests. The
assumption is that the more preferred speakers have some qualities that are
more correct with respect to reproduction. His big story is that a smooth, wide
radiation pattern is preferred to a more directional speaker. I had a few
arguments with him (as have others) that he hasn't gone far enough into the
world of various speaker types, testing mainly direct firing speakers such
as those that Harman made. Nevertheless, the principle is the same - the
only way we can test for sound reproduction is what we call preference
testing, comparing two examples that vary by just one factor, then asking a
lot of people which they prefer, and then inferring something about speaker
design from that.

This is essentially what I have been doing on an anecdotal basis for the
last 30 years, after I discovered something very significant about speaker
positioning.

You say you have never heard of the hole in the middle effect, except from
me. That doesn't put you in a very good light, knowledge-wise. You say maybe
you prefer a boxy sound, or that some people may prefer directional
speakers, and that it sounds more like live to you. OK, fine. Dave Moran calls
it the "honking" effect, where the high frequencies narrow in their
radiation pattern as frequency goes up.

Siegfried Linkwitz asked the question straight out: which radiation
pattern, speaker positioning, and room treatments lead to greater realism in
the reproduction? I have been studying these factors for a long time, and
have given my answers and a theory on why it works that way. If you say it is
no more than a preference one way or another, then fine, but you would have
to try it first to see just what that preference might be, wouldn't you?

That is how it is done. Suggest a variable, put it to a listening test, see
if there is a definite preference, and figure out what is going on with the
physics and press on, hopefully arriving at some asymptotic curve that tells
us something about stereo theory.
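The "definite preference" step Gary describes is typically checked with an
exact binomial (sign) test against the chance hypothesis. A minimal sketch,
with made-up listener counts purely for illustration:

```python
from math import comb

def sign_test_p(prefer_a, total):
    """Exact two-sided binomial (sign) test against the 50/50 null:
    the probability that a preference split at least this lopsided
    would arise by chance alone."""
    k = max(prefer_a, total - prefer_a)           # the larger side of the split
    tail = sum(comb(total, i) for i in range(k, total + 1)) / 2 ** total
    return min(1.0, 2.0 * tail)                   # two-sided p-value

# Hypothetical single-variable comparison: 14 of 16 listeners prefer
# presentation A. A small p-value suggests a genuine preference rather
# than coin-flipping:
p = sign_test_p(14, 16)
```

Only after such a test shows a non-chance preference does it make sense to go
looking for the physics behind it.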

I am bellowing from my soapbox because very few researchers would think of
trying a negative directivity speaker, nor would they know just how to
position them in the room, and even fewer would think of trying specular
reflectivity at the front of that room. All of "The Big Three" must be
correct in order to perceive the improvement and discover what I have about
these factors. They have been doing it by cut and try and happy accident all
these years and still not stumbled upon IMT, so here I am. Bose tried the
negative directivity index speaker, but screwed up speaker positioning and
got in a lawsuit with Consumers Union over the hole in the middle effect.
Made the situation even worse, so people wrote all that research off. Mark
Davis did an amazing experiment on time/intensity trading to develop the
Soundfield One speaker, but failed to try the rear and side reflected
portion to complete the picture. Magneplanar developed a great ribbon
tweeter that is very omnidirectional, but has equal output front and rear.
Still too hot on the direct sound. Same for MBL - they got the equi-omni
frequency response really good, but don't say much about positioning or room
treatment. A man named Jeffrey Borish invented a system and wrote a paper
about deploying additional speakers up front, to the sides of the main
speakers, on time delay, to simulate the early reflected sound from the
concert hall. He used a description of the image model of live sound to
explain why he was doing that. I had lunch with him at one fine AES, and
explained that simply reflecting part of the speaker output from your room's
walls accomplished the same thing, only more naturally, but he didn't buy
it.

So I get my big chance and enter the Linkwitz Challenge with my cheap mockup
speakers and win, because my speakers are "on theory." Yes, the win was
based on a preference, a preference that my speakers sounded more like live
sound.

It's a whole deal.

Gary Eickmeier


  #9   Report Post  
Posted to rec.audio.high-end
Sebastian Kaliszewski Sebastian Kaliszewski is offline
external usenet poster
 
Posts: 82
Default Mind Stretchers

KH wrote:
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:

[...]
Stereo has nothing to do with HRTF.


As has been stated repeatedly. You asked "where did the recorded
ambiance go?". The answer is that the "ambience" is related directly to
the HRTF of the listener in the venue. Sans listener, there is *ONLY*
temporal and level data available for recording, and when recorded as
such, this data cannot subsequently be used to accurately reproduce
incident angle information. That information is lost in the translation.

That is a binaural, or head-related, process. With stereo, a field-type
system, we are reproducing the object itself in front of us and using our
own natural hearing mechanism and HRTF to listen to it.


Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level clues that,


And phase as Audio Empire points out.

in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information.


And possibly phase as well.

No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue.


It will never be the same, but that's not the point. The point is to be similar enough.


Reflecting
the sound cannot, except in the context of listener preference,
ameliorate this constraint.


It's not straight-out proven either way. But I'd say it's rather improbable.
But I'm open to being shown otherwise. That's why I wanted to see a theory, not a
nice trick - a theory which would explain that the needed clues are in the
reproduced signal and that distractions are either masked or attenuated enough.


snip

OK, here comes my main point of this whole discussion, my "closer."

We have discussed all of the audible parts of the listening experience in
the EEFs, What Can We Hear. We said that the spatial part is the main
stumbling block, the main difference between the reproduction and the
real
thing. Think of it as pure physics.


Feel free to present some, please.

If the spatial qualities I discussed are
audible, then we must make some attempt to reproduce them.


We can't reproduce them, they are not on the recording. What we can do
is to produce an illusion, the efficacy of which is clearly a function
of both engineering efficacy and listener preference.


Yes and no. It could be like IMAX 3D -- it's an illusion, and in fact a simplistic
one -- but the majority of people, those with proper binocular vision, perceive the
effect.


The "real thing" comes to us as a primarily reverberant field from a
multiplicity of incident angles.


No, it does not, except in a narrow subset of live events.


Oh, in fact it does. In the majority of live events it does. You got it wrong. In
your typical concert hall the critical distance is about 4-5 m. In clubs and similar
small venues it's even closer. That means that even while one is sitting in the
first row, the sound of the further-away instruments is dominated by reverberant sound.
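For what it's worth, that 4-5 m figure is easy to sanity-check with the standard critical-distance formula derived from Sabine's equation. A minimal Python sketch (the hall volume, RT60, and club figures below are assumed round numbers for illustration, not measurements):

```python
import math

def critical_distance(volume_m3, rt60_s, directivity_q=1.0):
    """Distance at which direct and reverberant levels are equal:
    d_c ~= 0.057 * sqrt(Q * V / RT60), with V in m^3 and RT60 in seconds.
    Follows from Sabine's equation; Q = 1 is an omnidirectional source."""
    return 0.057 * math.sqrt(directivity_q * volume_m3 / rt60_s)

# A mid-sized concert hall: ~15,000 m^3, RT60 ~ 2.0 s
print(round(critical_distance(15000, 2.0), 1))  # ~4.9 m

# A small club: ~900 m^3, RT60 ~ 0.8 s
print(round(critical_distance(900, 0.8), 1))    # ~1.9 m
```

Beyond d_c the reverberant field dominates, which is the point being made: typical concert-hall seats sit well past it.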

Many times
the direct component (let's think of outside live events for example,
shall we?)


Outside events are almost always reinforced. So there goes that 'natural'
soundstage.

is the dominant component, and sometimes by wide margins.


It's a very rare situation where it's the dominant component, and virtually never by a wide
margin.


[...]

and if the
recording really does contain some of the early reflected sound from the
venue,


It does contain it; translated into TEMPORAL and LEVEL information.

then it is more correct to reproduce that part of the sound by
reflecting it from the similar surfaces in your listening room.


You are not reflecting *that part of the sound*, you are reflecting ALL
of the sound, and in doing so, you reflect the DIRECT portion of the
signal as well. That is clearly inappropriate - it doesn't come to you
that way in the venue does it?


In fact it does. It comes to you dominated by reverberation.

If the spatial is as important as you
maintain, then reflecting the direct portion of the signal is at least
as egregious an error as ignoring the reverberant part of the signal.


This is too simplistic.

In fact, real, properly[*] recorded events are miked at a distance closer than a
typical listener is. Moreover, mikes are typically high in the air, so they get
early reflections primarily just from the floor and not from all the close
surroundings of a typical listener (as there aren't any up there). Stereo
recordings recorded from a typical listener position do not sound too
spectacular. This is (partly) because that sound is then replayed in the listener's
venue, where there are additional reflections (nobody listens in an anechoic
chamber). So good recordings already take those additional reflections into
account. Thus the additional reflections are often 'unnatural' -- they contain
peaks due to room shape and dimensions (they incorporate replay-room info); in
the case of box speakers they are much damped in the highs, etc...

Gary's technique aims at getting those reflections right, as I understand it. But
what I miss is a physical and psychoacoustical model of things, not an analogy
to mirrors. So this is not a theory, it's just a trick. A theory should point out
which additions to the sound (due to the whole playback chain -- a chain starting
at the recording) are benign and which are not, which help recreate the
illusion and which stand in the good illusion's way. If, for example, some class
of reflections is benign, then we need not care whether they are present or not. If
some are troublesome, then we know what should be dealt with.


[*] I'm speaking in the context of this thread -- properly here means rendering
a nice deep audio scene in the listener's room.


rgds
\SK
--
"Never underestimate the power of human stupidity" -- L. Lang
--
http://www.tajga.org -- (some photos from my travels)
  #10   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default Mind Stretchers

On Tue, 12 Jun 2012 04:30:37 -0700, KH wrote
(in article ):

On 6/11/2012 4:52 PM, Audio Empire wrote:

snip

You both forgot phase differences.


What feature of phase difference cannot be characterized as "temporal"
difference?

Keith


Not the same thing. Two sounds can arrive at the microphone (or ears) at
exactly the same time, and yet one sound can have a different phase
relationship to the overall sound field than the other. Temporally they're
alike, but the varying phase relationships will add to or subtract from the
overall "sound picture". In fact, if you are miking an ensemble with two
spaced omnidirectional mikes, the phase difference between the left and right
channels will actually cause some instruments to "disappear" when you blend
the right and left mike feeds to achieve mono. Phase differences are one of
the main clues our ears use to determine directionality (the others being
temporal - the time delay between sounds reaching the right and the
left ears - and level differences - the difference in volume between a sound
reaching the left and right ear).
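That mono-blend cancellation with spaced omnis can be shown with one line of complex arithmetic: summing two equal-level channels that differ only by a delay is a comb filter, with nulls wherever the delay equals an odd number of half-periods. A minimal sketch (the 1 m mike spacing and side-incidence geometry are assumed for illustration):

```python
import cmath
import math

def mono_fold_gain(freq_hz, delay_s):
    """Linear gain of a sine after summing two equal channels, one
    delayed by delay_s, then halving: |1 + exp(-j*2*pi*f*delay)| / 2."""
    return abs(1 + cmath.exp(-2j * math.pi * freq_hz * delay_s)) / 2

# Hypothetical spaced omnis ~1 m apart, sound arriving from one side:
# path difference ~1 m, so the inter-mike delay is about 1/343 s (~2.9 ms)
tau = 1.0 / 343.0

print(mono_fold_gain(0.0, tau))                # low frequencies: full level
print(mono_fold_gain(1.0 / (2.0 * tau), tau))  # first null, ~171 Hz: cancels
print(mono_fold_gain(1.0 / tau, tau))          # ~343 Hz: back in phase
```

Real instruments span many such nulls, which is why whole instruments can thin out or vanish in the mono fold-down.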


  #11   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default Mind Stretchers

On Tue, 12 Jun 2012 04:07:31 -0700, KH wrote
(in article ):

On 6/11/2012 6:07 AM, Gary Eickmeier wrote:
wrote in message
...

Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level clues that, in conjunction with
the HRTF of the listener, create a spatial image. That information was
not, however, encoded into the recording except as temporal and level
information. No matter how that information is played back, the signal
reaching the listener cannot be the same as in the venue. Reflecting the
sound cannot, except in the context of listener preference, ameliorate
this constraint.


Keith, I'm not sure what exactly your conceptual problem is,


I'm tempted to believe you. Not convinced, but tempted. I would posit,
however, the conceptual difficulty appears to be yours.

but everyone
knows that stereo operates on temporal and level differences between
channels. You have noticed how those differences can cause the perception of
phantom images between the speakers, right? That spatial information is
encoded into the channels by means of temporal and level differences in the
signals.


Then quit asking questions like "where did the information go?". The
spatial information you are describing is left/right - that's it. That
information *is* encoded in the signal. Up/down, front/back, that
information is not present in two channel recordings. You can create an
illusion of depth and height - not the same thing.



I can't agree with you 100% here. Not with the "stereo is an illusion" part - that
is certainly true enough - but the part about up/down, front/back I have to
take serious issue with. I have been making true stereo recordings of
ensembles of all sizes and types, from small jazz ensembles to large wind
ensembles (concert bands) to full symphony orchestras, for many years, and I
always use some kind of stereo pair. I either use A-B, X-Y, a coincident pair,
or a single stereo mike in M-S mode. All of my recordings have image height
and front-to-back layering of instruments. How, you ask? It's simple: true
stereo is phase coherent. It properly captures the phase relationships
between two closely-spaced microphones that tell the listener (on playback,
through speakers) that one sound is emanating from in front of or from behind
another. It can also tell via these phase cues that, for instance, the brass
are up on risers and the woodwinds are at stage level. Also, in a true stereo
recording the triangle in the percussion section "seems" to hover over that
section just as it does in a real concert hall situation. These aren't
anomalies or "illusions" as in trickery; these are repeatable phenomena that
take advantage of the phase-coherent nature of true stereo recordings and are
well covered in papers by Alan Blumlein et al.


Now, I have observed that reflecting a part of the sound from room surfaces
can cause an image shift toward the reflecting surfaces. This has a twofold
perceptual impact. One, it causes the sound to go outside the speaker boxes
and appear as an aerial image somewhat behind the plane of the speakers,
seeming like the instruments are right there in the room with you, rather
than coming from speakers.


And I have noticed that this sounds contrived, oversized, diffuse, and
not at all realistic. It is, inarguably, inaccurate since the new
spatial distribution of the reproduction cannot possibly be anywhere
close to the actual event.


This type of "slap" reflection that Mr. Eickmeier refers to tells me that his
listening room is way too live. It looks to me like he needs to add some
acoustic padding at strategic locations to knock down this type of room
interaction.

Secondly, it causes an impression of spaciousness
in recordings that contain such information, such as correctly miked
symphonies in a good hall. Most of us have experienced this very audible
difference between directional speakers and more omni type speakers.


Yes, we have. Some of us think that's realism, some of us don't.


OK, fine, now between those two types of sound, one is likely to sound
closer to live than the other. If you think that is just a preference and
worth no further study, then that is the bed you shall lie in.


Taking umbrage at a strawman of your construction is hardly helpful.

If I think
this is a significant point and worth further study, and try to get others
to notice these effects and help me out, then please don't tell me it is all
pointless because you are not interested.


Yet another strawman. Please provide a quote that even intimates any
such thought. As I've said, ad nauseum, and as you've ignored rather
perniciously, is that you are ignoring the role of preference, and want
to divorce it from the process. Ignoring preference is as egregious an
error as ignoring the physics or engineering involved.


Especially since the BEST we can do is so far from reality. Audiophiles tend
to gravitate to some smaller part of the whole enchilada and obsess over it
to try to get it right - often at the expense of other parts of the complete
picture. This is 100% preference. One listener obsesses over imaging and uses
small book-shelf speakers on stands because they image best, while ignoring
the fact that such speakers are often deficient in bass. Another listener
requires that the midrange be right, and the rest of the spectrum be damned.
Still another might be a bass freak with huge sub-woofers that pressurize his
listening room in what he sees as a realistic manner. To pretend that these
choices that disparate audiophiles make aren't personal preferences, is, at
the very least, an arrogant approach to the question. Were it a question of
one speaker system designed according to the precepts of one man (such as our
Mr. Eickmeier, here) then there wouldn't be thousands of different models and
designs of speakers available.

Audiophiles have been trying to
figure out what causes these effects for decades. They have complained about
boxy sounding speakers and the hole in the middle effect and wondered what
makes some systems sound more realistic than others.


And where are all these audiophiles complaining about "hole in the
middle"? I've counted exactly one...you.


Hole in the middle? Then your speakers are too far apart. Put them closer
together until the "hole-in-the-middle" disappears. Easy.
  #12   Report Post  
Posted to rec.audio.high-end
KH KH is offline
external usenet poster
 
Posts: 137
Default Mind Stretchers

On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:

[...]
Stereo has nothing to do with HRTF.

snip

Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level clues that,


And phase as Audio Empire points out.


Well yes, but what is phase except a temporal shift?

in conjunction with the HRTF of the listener, create a spatial image.
That information was not, however, encoded into the recording except
as temporal and level information.


And possibly phase as well.


Ditto

No matter how that information is played back, the signal reaching the
listener cannot be the same as in the venue.


It will never be the same, but that's not the point. The point is
similar enough.


Well, no, the point is it still must be an illusion, because the
information is not in the recording. I agree, it only need be
"sufficient" to fool the listener. But it simply cannot be "the same".


Reflecting the sound cannot, except in the context of listener
preference, ameliorate this constraint.


It's not staright out prooven either way. But I'd say it's rather
improbable. But I'm open to be shown otherwise. That's why I wanted to
see a theory not a nice trick. Theory which would explain that the
needed clues are in the reproduced signal and distractions are either
masked or attenuated enough.


All subject to individual listener response however. Since the
reproduction *must* be different, and must present a different HRTF than
the original, it has to be listener dependent. Can you create a model
that is statistically "better"? Certainly. But then, you must
understand, that statistically, *low* bit-rate MP3's are sonically fine.
Therein lies the rub.

snip


We can't reproduce them, they are not on the recording. What we can do
is to produce an illusion, the efficacy of which is clearly a function
of both engineering efficacy and listener preference.


Yes and no. It could be like Imax-3D -- it's illusion and in fact a
simplistic one -- but majority of people, those with proper binocular
vision perceive the effect.


The "effect", yes. They do not, however...


The "real thing" comes to us as a primarily reverberant field from a
multiplicity of incident angles.


No, it does not, except in a narrow subset of live events.


Oh, in fact it does. In majority of live events it does. You got it
wrong. In your typical concert hall critical distance is about 4m-5m. In
clubs and similar small venues it's even closer. That means that even
while one is sitting in a first row the sound of further away
instruments is dominated by reverberant sound.


I'm not thinking orchestra, I'm thinking small acoustic groups, in small
settings. Oftentimes 10 feet or less.

Many times the direct component (let's think of outside live events for
example, shall we?)


Outside events are almost always reinforced. So there goes that
'natural' soundstage.


Realism is not confined to non-reinforced music.

is the dominant component, and sometimes by wide margins.


It's a very rare situation where it's the dominant component, and virtually never by
a wide margin.


In any amplified outdoor event, it certainly is.


snip
If the spatial is as important as you maintain, then reflecting the
direct portion of the signal is at least as egregious an error as
ignoring the reverberant part of the signal.


This is too simplistic.


I fail to see how, and your description below does not explain it
sufficiently, to my mind.

In fact real properly[*] recorded events are miked at a distance closer
than a typical listener is. Moreover mikes are typically high in the
air, so they get early reflections primarily just from the floor and not
from all the close surroundings of a typical listener (as there aren't any
up there).


Yes, and how does reflecting these floor reflections - that arrive at
the listener from a specific incident angle - from a totally different
incident angle, during replay, provide an accurate representation?

Stereo recordings recorded from a typical listener position
do not sound too spectacular. This is (partly) because that sound is
then replayed in the listener's venue, where there are additional reflections
(nobody listens in an anechoic chamber). So good recordings already take
those additional reflections into account. Thus the additional reflections
are often 'unnatural' -- they contain peaks due to room shape and
dimensions (they incorporate replay-room info); in the case of box speakers
they are much damped in the highs, etc...


I would basically agree. But, "taking into account those additional
reflections" means altering the recording to account for the playback
medium and venue. Such tailoring must make the information on the
recording inaccurate relative to the acoustic in the original venue, no?
Almost analogous to an RIAA pre-emphasis and de-emphasis without a
reference standard.

Keith

  #13   Report Post  
Posted to rec.audio.high-end
KH KH is offline
external usenet poster
 
Posts: 137
Default Mind Stretchers

On 6/12/2012 7:09 PM, Gary Eickmeier wrote:
wrote in message
...

So, you have an epiphany that tells you those of us who prefer direct
radiating speakers are nuts, and reflected sound is the only way to have
realism, BUT you need other people to invest time, money, and energy into
exploring *if* you *may* be right about said epiphany?


I used no such terms. I explained the perceptual effects and correlated them
to radiation pattern etc.


No, the description is mine. You've provided no counterpoint to my
interpretation, however.


And, please, explain why you continue to avoid my simple direct question
about whether if you and I disagree about a system being realistic, is one
of us wrong? Could it be that you know full well that either way you
answer requires that you accept that individual preference is a prime
factor? And that, further, there can be no "paradigm" that explains the
whole theory of "correct" stereo reproduction unless you exclude
preference as a variable? Not *the* variable as you attempt to claim, but
a major variable.


Keith, Floyd Toole's entire career has been devoted to a series of studies
on listener preferences in various loudspeaker and room tests. The
assumption is that the more preferable speakers have some qualities that are
more correct with respect to reproduction.


Did I ask about Floyd Toole? No, I asked you a direct question and
you dodge, weave, obfuscate, and refuse to answer. Why? Never mind, I
know why.

snip
This is essentially what I have been doing on an anecdotal basis for the
last 30 years, after I discovered something very significant about speaker
positioning.


In your opinion.

You say you have never heard of the hole in the middle effect, except from
me. That doesn't put you in a very good light, knowledge wise.


It puts you in a far worse light integrity-wise. You claimed that
audiophiles have complained for decades about this problem. I relayed
that I have not heard such complaints from any audiophiles except you.
Please quote anyone else in this group who has complained about this issue.

You say maybe
you prefer a boxy sound,


That, is a crock. You are being disingenuous and you know it. I have
taken great pains to disabuse you of such misperceptions yet you persist.

or that some people may prefer directional
speakers, and that sounds more like live to you. OK, fine. Dave Moran calls
it the "honking" effect, where the high frequencies narrow in their
radiation pattern as frequency goes up.


And the relevance would be? Dave Moran is an arbiter of my preferences
and perceptions based upon...?


Siegfried Linkwitz asked the musical question straight out: which radiation
pattern, speaker positioning, and room treatments lead to greater realism in
the reproduction? I have been studying these factors for a long time, and
have given my answers and a theory on why it works that way.


Yes you have. And we've debated that theory, only to have you become
more bellicose and confrontational as others have failed to accept your
theory, sans evidence or supporting theoretical construct. This is not
an effective means of persuasion.

If you say it is
no more than a preference one way or another,


You know what, Gary, it is clear you are going to continue to
misrepresent what others (at least I) say, repeatedly, no matter how
often you are corrected, and no matter how clear and unambiguous those
corrections are. You are not arguing a theory, you are pursuing a pogrom.

Since you refuse to pursue a forthright discussion, I'll devote no more
time to it.

Keith

  #14   Report Post  
Posted to rec.audio.high-end
KH KH is offline
external usenet poster
 
Posts: 137
Default Mind Stretchers

On 6/12/2012 8:24 PM, Audio Empire wrote:
On Tue, 12 Jun 2012 04:07:31 -0700, KH wrote
(in ):

On 6/11/2012 6:07 AM, Gary Eickmeier wrote:
wrote in message
...

snip
Then quit asking questions like "where did the information go?". The
spatial information you are describing is left/right - that's it. That
information *is* encoded in the signal. Up/down, front/back, that
information is not present in two channel recordings. You can create an
illusion of depth and height - not the same thing.



I can't agree with you 100% here. Not the "stereo is an illusion" part. That
is certainly true enough, but the part about up/down, front/back, I have to
take serious issue with.


What part of the signal codes for these spatial effects that are not
temporal or level in nature? What am I missing? Clearly, speaker
radiation patterns are designed to present this illusion, but as far as
I can tell, it is illusory relative to the recorded signal. Left/right
data is directly addressable since you have one speaker per channel.

I have been making true stereo recordings of
ensembles of all sizes and types, from small, jazz ensembles to large wind
ensembles (concert bands) to full symphony orchestras for many years and I
always use some kind of stereo pair. I either use A-B, X-Y, a coincident pair
or a single stereo mike in M-S mode. All of my recordings have image height,
and front-to-back layering of instruments. How, you ask? It's simple, true
stereo is phase coherent. It properly captures the phase relationships
between two closely-spaced microphones that tells the listener (on playback,
through speakers) that one sound is emanating from in front of or from behind
another. It can also tell via these phase cues that, for instance, the brass
are up in risers and the woodwinds are at stage level. Also, in a true stereo
recording the triangle in the percussion section "seems" to hover over that
section just as it does in a real concert hall situation. These aren't
anomalies or "illusions" as in trickery, these are repeatable phenomenon that
take advantage of the phase coherent nature of true stereo recordings and is
well covered by papers from Alan Blumlein et al.


Yes, but phase differences are simply temporal differences. Perhaps I
should've been clearer in my usage. I'm not suggesting that phase
differences cannot be used to convey some spatial information, only that
speaker radiation patterns must 'mimic' the source sufficiently to fool
our hearing. The depth and height are not really there in a discrete
sense, as left/right information is; for front/back and high/low, frequency "A"
comes from the same point - thus placement of such signals across a
soundstage is an illusion. Not "trickery", just making use of how we
interpret sound. And many, many illusions are repeatable (visual as
well as auditory). And you can certainly mess that up during replay.
Take a couple of sine waves, out of phase by some degree, and purposely
bounce them off a wall such that the phase relationship changes. How is
that accurate, or helpful, or realistic?

snip

Ignoring preference is as egregious an
error as ignoring the physics or engineering involved.


Especially since the BEST we can do is so far from reality. Audiophiles tend
to gravitate to some smaller part of the whole enchilada and obsess over it
to try to get it right - often at the expense of other parts of the complete
picture. This is 100% preference. One listener obsesses over imaging and uses
small book-shelf speakers on stands because they image best, while ignoring
the fact that such speakers are often deficient in bass. Another listener
requires that the midrange be right, and the rest of the spectrum be damned.
Still another might be a bass freak with huge sub-woofers that pressurize his
listening room in what he sees as a realistic manner. To pretend that these
choices that disparate audiophiles make aren't personal preferences, is, at
the very least, an arrogant approach to the question. Were it a question of
one speaker system designed according to the precepts of one man (such as our
Mr. Eickmeier, here) then there wouldn't be thousands of different models and
designs of speakers available.


Agreed, although Mr. Eickmeier sees the plethora of speaker designs as
clear evidence that "no one knows" what's going on, thus some "unified
theory" is needed.

snip

And where are all these audiophiles complaining about "hole in the
middle"? I've counted exactly one...you.


Hole in the middle? Then your speakers are too far apart. Put them closer
together until the "hole-in-the-middle" disappears. Easy.


Kind of my point. I don't hear people complaining about this "problem"
unless they know nothing about speaker placement.

Keith

  #15   Report Post  
Posted to rec.audio.high-end
Sebastian Kaliszewski Sebastian Kaliszewski is offline
external usenet poster
 
Posts: 82
Default Mind Stretchers

KH wrote:
On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/9/2012 2:09 PM, Gary Eickmeier wrote:

[...]
Stereo has nothing to do with HRTF.

snip

Yes, that is the problem. The signal presented to the listener, in the
venue, has angular, temporal, and level clues that,


And phase as Audio Empire points out.


Well yes, but what is phase except a temporal shift?


You're conflating phase and wavefront arrival. You can have signals 180 degrees
out of phase coming at the same moment.
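The distinction can be put numerically: a pure time delay shifts phase in proportion to frequency, whereas a polarity inversion is 180 degrees at every frequency with zero time offset. A minimal sketch (the 0.5 ms delay is just an illustrative number):

```python
def phase_from_delay_deg(freq_hz, delay_s):
    """Phase shift, in degrees mod 360, that a pure time delay
    imposes on a sine: phi = 360 * f * delay."""
    return (360.0 * freq_hz * delay_s) % 360.0

# A fixed 0.5 ms delay is frequency-dependent in phase terms:
print(phase_from_delay_deg(1000.0, 0.0005))  # 180 deg at 1 kHz
print(phase_from_delay_deg(500.0, 0.0005))   # 90 deg at 500 Hz
print(phase_from_delay_deg(250.0, 0.0005))   # 45 deg at 250 Hz

# A polarity inversion (multiplying the waveform by -1) is 180 deg at
# *all* frequencies simultaneously, with no time offset at all - so the
# two are interchangeable only for a single sine, never for broadband
# music.
```

That is the sense in which phase is a property distinct from timing for anything but a single sine wave.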


in conjunction with the HRTF of the listener, create a spatial image.
That information was not, however, encoded into the recording except
as temporal and level information.


And possibly phase as well.


Ditto


See above. Phase is a property different from timing.


No matter how that information is played back, the signal reaching the
listener cannot be the same as in the venue.


It will never be the same, but that's not the point. The point is
similar enough.


Well, no, the point is it still must be an illusion, because the
information is not in the recording.


Part of the information is. You're making an error 180 degrees from Gary's error, but
still an error.
Due to the projection from a higher to a lower number of dimensions, part of the
information is lost, but part is still retained.

I agree, it only need be
"sufficient" to fool the listener. But it simply cannot be "the same".


But "the same" is not needed. Our ears have finite resolution, and our brains are
sensitive only to some parts of the signal.


Reflecting the sound cannot, except in the context of listener
preference, ameliorate this constraint.


It's not straight-out proven either way. But I'd say it's rather
improbable. But I'm open to being shown otherwise. That's why I wanted to
see a theory, not a nice trick - a theory which would explain that the
needed clues are in the reproduced signal and distractions are either
masked or attenuated enough.


All subject to individual listener response however. Since the
reproduction *must* be different, and must present a different HRTF than
the original, it has to be listener dependent.


Only to a point. Listeners are humans, not superbeings, and all have their
limitations. If you have a device capable of running 30 mph, you can outrun any
human going on foot. IOW, listener dependence has its bounds.

Can you create a model
that is statistically "better"? Certainly. But then, you must
understand, that statistically, *low* bit-rate MP3's are sonically fine.


And high bitrate Oggs, Musepacks or AC3s are sonically indistinguishable.

[...]
The "real thing" comes to us as a primarily reverberant field from a
multiplicity of incident angles.

No, it does not, except in a narrow subset of live events.


Oh, in fact it does. In the majority of live events it does. You got it
wrong. In your typical concert hall the critical distance is about 4-5 m. In
clubs and similar small venues it's even closer. That means that even
while one is sitting in the first row, the sound of the further-away
instruments is dominated by reverberant sound.


I'm not thinking orchestra, I'm thinking small acoustic groups, in small
settings. Oftentimes 10 feet or less.


In small venues the critical distance is also small.


Many times the direct component (lets think of outside live events for
example, shall we?)


Outside events are almost always reinforced. So there goes that
'natural' soundstage.


Realism is not confined to non-reinforced music.


But in the case of reinforced music you don't have a natural audio scene to capture.


is the dominant component, and sometimes by wide margins.


It's a very rare situation where it's the dominant component, and virtually never by
a wide margin.


In any amplified outdoor event, it certainly is.


Ditto. Besides, those are most typically recorded by taping the electrical signals
before they go into the PA reinforcement system. Any auditory scene is then created
in postprocessing.


If the spatial is as important as you maintain, then reflecting the
direct portion of the signal is at least as egregious an error as
ignoring the reverberant part of the signal.


This is too simplistic.


I fail to see how, and your description below does not explain it
sufficiently, to my mind.

In fact, real properly[*] recorded events are miked at a distance closer
than a typical listener sits. Moreover, mikes are typically high in the
air, so they get early reflections primarily just from the floor and not
from all the close surroundings of a typical listener (as there aren't any
up there).


Yes, and how does reflecting these floor reflections - that arrive at
the listener from a specific incident angle - from a totally different
incident angle, during replay, provide an accurate representation?


First, one has to ascertain what accuracy of incident angles is really needed.
That's in fact a part of what I miss from what Gary presented.

Second, those floor reflections at a real-life listener position (i.e. not 10 feet
up, almost above the conductor's head) get re-reflected as well.


Stereo recordings recorded from a typical listener position
do not sound very spectacular. This is (partly) because that sound is
then replayed at the listener's venue, where there are additional reflections
(nobody listens in an anechoic chamber). So good recordings already take
into account those additional reflections. Thus additional reflections
are often 'unnatural' -- they contain peaks due to room shape and
dimensions (they incorporate replay-room info); in the case of box speakers
they are much damped in the highs, etc...


I would basically agree. But, "taking into account those additional
reflections" means altering the recording to account for the playback
medium and venue. Such tailoring must make the information on the
recording inaccurate relative to the acoustic in the original venue, no?


Well, this information is accurate at the miking position(s). Only that position
is chosen

Almost analogous to an RIAA pre-emphasis and de-emphasis without a
reference standard.


Somewhat, I agree. But even without a pre-emphasis standard, no pre-emphasis at all
was worse.
Similarly, recordings miked from a typical listener position tend to present
an unimpressive audio scene rendition.

If someone came up with a good theory allowing convincing audio scene recreation
without sacrificing other audio aspects, some standard could follow -- like: place
mikes always 12 ft above the floor, pointed such and such, etc.

rgds
\SK
--
"Never underestimate the power of human stupidity" -- L. Lang
--
http://www.tajga.org -- (some photos from my travels)


  #16   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default Mind Stretchers

On Wed, 13 Jun 2012 05:55:18 -0700, KH wrote
(in article ):

On 6/12/2012 8:24 PM, Audio Empire wrote:
On Tue, 12 Jun 2012 04:07:31 -0700, KH wrote
(in ):

On 6/11/2012 6:07 AM, Gary Eickmeier wrote:
wrote in message
...

snip
Then quit asking questions like "where did the information go?". The
spatial information you are describing is left/right - that's it. That
information *is* encoded in the signal. Up/down, front/back, that
information is not present in two channel recordings. You can create an
illusion of depth and height - not the same thing.



I can't agree with you 100% here. Not the "stereo is an illusion" part - that
is certainly true enough - but the part about up/down, front/back I have to
take serious issue with.


What part of the signal codes for these spatial effects that are not
temporal or level in nature? What am I missing?


Phase differences! They are critical to that perception.

Clearly, speaker
radiation patterns are designed to present this illusion, but as far as
I can tell, it is illusory relative to the recorded signal. Left/right
data is directly addressable since you have one speaker per channel.

I have been making true stereo recordings of ensembles of all sizes and
types, from small jazz ensembles to large wind ensembles (concert bands) to
full symphony orchestras, for many years, and I always use some kind of
stereo pair. I either use A-B, X-Y, a coincident pair, or a single stereo
mike in M-S mode. All of my recordings have image height and front-to-back
layering of instruments. How, you ask? It's simple: true stereo is phase
coherent. It properly captures the phase relationships between two
closely-spaced microphones that tell the listener (on playback, through
speakers) that one sound is emanating from in front of or from behind
another. It can also tell, via these phase cues, that, for instance, the
brass are up on risers and the woodwinds are at stage level. Also, in a true
stereo recording the triangle in the percussion section "seems" to hover
over that section just as it does in a real concert-hall situation. These
aren't anomalies or "illusions" as in trickery; these are repeatable
phenomena that take advantage of the phase-coherent nature of true stereo
recordings and are well covered by papers from Alan Blumlein et al.


Yes, but phase differences are simply temporal differences.


As I have explained before, they are not temporal in that many different
phases that make up a wavefront can all arrive at the microphone diaphragm
(or the listener's ears) at EXACTLY the same time, and yet be different enough
for the ear to reconstruct, from the different phases entering the room from
the speakers, a three-dimensional aural image containing width, depth, and
height cues.

Perhaps I
should've been clearer in my usage. I'm not suggesting that phase
differences cannot be used to convey some spatial information, only that
speaker radiation patterns must 'mimic' the source sufficiently to fool
our hearing. The depth and height are not really there in a discrete
sense, as the left/right information is; front/back, high/low - frequency "A"
comes from the same point - thus placement of such signals across a
soundstage is an illusion.


OK, I buy that. Yes, it is conveyed into our room as strictly a left and a
right component. It takes both components, merging in the air in the room
between the speaker diaphragm and one's ears to recreate a three-dimensional
sound field. Our ears do the interpretation of what these phase differences
actually mean. It's the brain that hears "within" that soundfield, phase
differences between the right channel signal and the same information in the
left channel and from those cues constructs the "illusion" that this
instrument is further back than that instrument or that this instrument is
playing from a position higher than that instrument. The info is all there
UNLESS the microphones are too far apart. In that case, I suspect the time
delay between the arriving phase variations from any given instrument or
group of instruments is too long for the ear to properly reconstruct
pin-point depth and height info. That has been my experience, anyway. I admit
that I'm guessing at why spaced omnis don't image as well as a real stereo
pair, but there it is.
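Whatever one calls these cues, they reach a two-channel recording as inter-channel time and level differences, and the time component can be measured directly. A sketch (sample rate and delay are hypothetical) showing how cross-correlation recovers a simulated inter-channel delay:

```python
import numpy as np

fs = 48_000                       # sample rate (Hz); all values hypothetical
d = 12                            # simulated inter-channel delay: 12 samples = 0.25 ms

# A noise burst stands in for one channel of a stereo recording
rng = np.random.default_rng(0)
left = rng.standard_normal(fs // 10)
# Right channel: the same signal arriving d samples later
right = np.concatenate([np.zeros(d), left[:-d]])

# Cross-correlation peaks at the lag that best aligns the two channels,
# recovering the inter-channel time difference used as a directional cue
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)
itd_ms = lag / fs * 1000          # 12 samples -> 0.25 ms
```

This is only an illustration of the principle that the "phase" information under discussion is encoded as measurable inter-channel timing, not a model of how the ear processes it.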

Not "trickery", just making use of how we
interpret sound. And many, many illusions are repeatable (visual as
well as auditory). And you can certainly mess that up during replay.
Take a couple of sine waves, out of phase by some degree, and purposely
bounce them off a wall such that the phase relationship changes. How is
that accurate, or helpful, or realistic?

snip

Ignoring preference is as egregious an
error as ignoring the physics or engineering involved.


Especially since the BEST we can do is so far from reality. Audiophiles tend
to gravitate to some smaller part of the whole enchilada and obsess over it
to try to get it right - often at the expense of other parts of the complete
picture. This is 100% preference. One listener obsesses over imaging and
uses
small book-shelf speakers on stands because they image best, while ignoring
the fact that such speakers are often deficient in bass. Another listener
requires that the midrange be right, and the rest of the spectrum be damned.
Still another might be a bass freak with huge sub-woofers that pressurize
his
listening room in what he sees as a realistic manner. To pretend that these
choices that disparate audiophiles make aren't personal preferences, is, at
the very least, an arrogant approach to the question. Were it a question of
one speaker system designed according to the precepts of one man (such as
our
Mr. Eickmeier, here) then there wouldn't be thousands of different models
and
designs of speakers available.


Agreed, although Mr. Eickmeier sees the plethora of speaker designs as
clear evidence that "no one knows" what's going on, thus some "unified
theory" is needed.


That's where he errs. There is no "Unified Theory" simply because no speaker
is perfect, or even close to perfect, so each maker goes off chasing his own
piece of the puzzle. If perfect speakers existed, there would be no need for
these disparate design methodologies. One speaker design, and one design only,
would suffice. I don't see why Mr. Eickmeier has so much trouble with that
concept.


snip

And where are all these audiophiles complaining about "hole in the
middle"? I've counted exactly one...you.


Hole in the middle? Then your speakers are too far apart. Put them closer
together until the "hole-in-the-middle" disappears. Easy.


Kind of my point. I don't hear people complaining about this "problem"
unless they know nothing about speaker placement.


Exactly!

Keith



  #18   Report Post  
Posted to rec.audio.high-end
Gary Eickmeier Gary Eickmeier is offline
external usenet poster
 
Posts: 1,449
Default Mind Stretchers

On Wed Jun 13 23:47:10 2012 Sebastian Kaliszewski wrote:

If someone came up with a good theory allowing convincing audio scene recreation
without sacrificing other audio aspects, some standard could follow -- like: place
mikes always 12 ft above the floor, pointed such and such, etc.


Hi Sebastian -

Welcome to the dogfight!

I was just trying to relate a subject that has been near and dear to
my heart for a long time, and it has turned into a Hatfields and
McCoys pitched battle for some reason. I think audio people have a lot
of dug-in ideas that are hard to change. There is also a lot of
miscommunication here, sometimes because we don't try very hard to
understand the other's point. In any case I didn't intend it to turn
out this way - just wanted to run a few ideas up the flagpole in a
friendly manner and use the simple to complex method to do that. Start
with a few things that we would all agree on, then ratchet it up to
some things that I have discovered.

In order to do your convincing audio scene recreation we first study
What Can We Hear, by means of describing all of the MAJOR categories,
or aspects, of sound that are audible and relate them to the repro
problem. We got down to the spatial characteristics as being the main
stumbling block, and the new paradigm that I attempted to relate as a
way of looking at the problem is the well-known technique of image
modeling. It is just a more visual way of studying the direct and
early reflected parts of the sound fields. If you take a look at the
image model of reproduced sound from speakers, you can compare that to
the live model and see the differences. There are obvious physical
differences in the "shape" of these fields that we can make a little
better with what I call The Big Three - speaker positioning, radiation
pattern, and room acoustics. This is possible because the two rooms,
although different sizes, are geometrically similar.
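For readers unfamiliar with the term, Gary's "image model" corresponds to the image-source method used in room acoustics: mirror the source across each room boundary and treat each mirror image as a secondary source whose extra path length gives the delay of that reflection. A minimal first-order sketch (room dimensions and positions are hypothetical):

```python
import math

def first_order_images(src, room):
    """First-order mirror-image sources of `src` in a rectangular room
    with one corner at the origin and dimensions `room` = (Lx, Ly, Lz)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # reflect across this surface
            images.append(tuple(img))
    return images

def arrival_delays_ms(src, listener, room, c=343.0):
    """Delay (ms) of each first-order reflection relative to the direct sound."""
    direct = math.dist(src, listener)
    return [(math.dist(img, listener) - direct) / c * 1000
            for img in first_order_images(src, room)]

# Hypothetical 6 x 4 x 3 m listening room, speaker and listener positions
delays = arrival_delays_ms(src=(1.0, 2.0, 1.2), listener=(4.0, 2.0, 1.2),
                           room=(6.0, 4.0, 3.0))
```

For these assumed positions the floor, ceiling, and wall reflections all arrive a few milliseconds after the direct sound; comparing such a model for the playback room against one for the original venue is, as far as I can tell, what the "image model" comparison amounts to.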

Maybe I should quit while I am behind - and you can read thru the
thread - but that is basically the theory that you are asking for.
Specifically, it says that the reproduction will sound closest to the
live sound when the image model of the reproduction sound field is as
close to that of the live field as possible. I refer to all audible
characteristics of both fields, which is why I started with the What
Can We Hear thread. To me, this theory is a tautology - an
indisputable fact that is so obvious that it requires no proof. To
others, it is a challenge to long held beliefs.

Gary Eickmeier

  #19   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default Mind Stretchers

On Thu, 14 Jun 2012 17:59:41 -0700, Gary Eickmeier wrote
(in article ):

On Wed Jun 13 23:47:10 2012 Sebastian Kaliszewski wrote:

If someone came up with a good theory allowing convincing audio scene
recreation
without sacrificing other audio aspects, some standard could follow -- like:
place mikes
always 12 ft above the floor, pointed such and such, etc.


Hi Sebastian -

Welcome to the dogfight!

I was just trying to relate a subject that has been near and dear to
my heart for a long time, and it has turned into a Hatfields and
McCoys pitched battle for some reason. I think audio people have a lot
of dug in ideas that are hard to change. There is also a lot of
miscommunication here, sometimes because we don't try very hard to
understand the other's point. In any case I didn't intend it to turn
out this way - just wanted to run a few ideas up the flagpole in a
friendly manner and use the simple to complex method to do that. Start
with a few things that we would all agree on, then ratchet it up to
some things that I have discovered.


It's not a "pitched battle" Gary, it's just that you offered up a "theory"
that has no basis in the physics of acoustics or even in the realm of
psychoacoustics. Add to that the fact that your theory assumes that everyone
is listening in the same way to the same characteristics in various speakers,
and, of course, you're going to get people who disagree with you. An analogy
would be someone going on a food newsgroup and stating: "I've been eating
different foods lately, and I've come to the conclusion that broccoli is the
best food there is." Then, after making that statement, expecting everyone
to agree with them and becoming upset when they find that many people
disagree.

The fact is that there are many different aspects of reproduced music that
people are drawn to according to their individual tastes. There is no
"Unified Speaker Theory" nor can there be until somebody manages to design a
speaker that is perfect in every way. Then that technology will be the way to
design speakers and everything else can be relegated to history. Of course,
such a design, were it even remotely possible, would likely, of necessity,
be very expensive, and other companies (other than the one that came up with the
perfect speaker) would try to make perfect speakers cheaper.
would start to diverge from the "true path". Basically, even if a perfect
speaker were invented, designers with other ideas would introduce their own
vision of perfection and the whole thing would start anew.


In order to do your convincing audio scene recreation we first study
What Can We Hear, by means of describing all of the MAJOR categories,
or aspects, of sound that are audible and relate them to the repro
problem.


Even after you identify all of the things that humans can hear (which has
been done), physics prevents us from building any contrivances that will get
even half of them right. That's why there are so many different approaches to
these problems: cone speakers, planar speakers, dipoles, omnidirectional
speakers like the German MBLs, Air Motion Transformers, ribbons, the list
goes on and on.


We got down to the spatial characteristics as being the main
stumbling block, and the new paradigm that I attempted to relate as a
way of looking at the problem is the well-known technique of image
modeling. It is just a more visual way of studying the direct and
early reflected parts of the sound fields. If you take a look at the
image model of reproduced sound from speakers, you can compare that to
the live model and see the differences. There are obvious physical
differences in the "shape" of these fields that we can make a little
better with what I call The Big Three - speaker positioning, radiation
pattern, and room acoustics. This is possible because the two rooms,
although different sizes, are geometrically similar.


Again, room acoustics for most listening rooms are a tertiary effect at best,
and really can't be part of the equation, because they are something that the
equipment manufacturers have no control over, just as they (and the listener)
have no control over how the recording that they are playing was made.

Maybe I should quit while I am behind - and you can read thru the
thread - but that is basically the theory that you are asking for.
Specifically, it says that the reproduction will sound closest to the
live sound when the image model of the reproduction sound field is as
close to that of the live field as possible.


There is no argument there. But since that's impossible for a dozen or more
reasons that are beyond the control of anyone trying to quantify that theory,
it's pretty futile. Not only that, but your notion is nothing new. Audio
engineers, acousticians, etc, have known these things for decades, but have
realized that without complete control of all parameters, from the source
musicians to the listening venue, it's a useless pursuit.

I refer to all audible
characteristics of both fields, which is why I started with the What
Can We Hear thread. To me, this theory is a tautology - an
indisputable fact that is so obvious that it requires no proof. To
others, it is a challenge to long held beliefs.


No it isn't. It's YOUR conclusions that are the challenge, not the basic
questions. Nobody here (that I have read) challenges your premise:
"...reproduction will sound closest to the live sound when the image model of
the reproduction sound field is as close to that of the live field as
possible."

But everybody challenges your singular vision of how to accomplish this and
even the fact that any one solution can possibly even begin to address this
issue.

I don't think that you purposely twist people's words, but it looks like that
sometimes, given the way you phrase what you believe to be other people's
reactions to your posts.
  #20   Report Post  
Posted to rec.audio.high-end
KH KH is offline
external usenet poster
 
Posts: 137
Default Mind Stretchers

On 6/13/2012 4:47 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:


snip


And phase as Audio Empire points out.


Well yes, but what is phase except a temporal shift?


You're conflating phase and wavefront. You can have signals 180deg out of
phase arriving at the same moment.


What I meant to say was "phase differences". Two identical waves 180deg
out of phase differ only by a temporal shift of half a period (1/(2f)), right?


in conjunction with the HRTF of the listener, create a spatial image.
That information was not, however, encoded into the recording except
as temporal and level information.

And possibly phase as well.


Ditto


See above. Phase is a property different from timing.


With respect to the specific discussion, I don't see how they can be
considered separate properties. For example, suppose we were to take two
instruments that could produce a single pure tone (hypothetically) and
place them on a stage, one 3 m from the mic and one 5 m from the mic,
shifted laterally 1 m. If both instruments were started simultaneously,
calibrated to provide equal signal levels at the microphone, then the
wavefront at the microphone position would, from a direct perspective,
comprise two out-of-phase waves, right? Yet the only difference in the
signals is the arrival times of the respective peaks and troughs, no?
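KH's reduction of phase to arrival time can be checked numerically for a pure tone. A sketch using the 3 m / 5 m distances from his example (the 440 Hz tone is an arbitrary choice):

```python
import math

C = 343.0          # speed of sound (m/s), at roughly 20 degrees C

def phase_difference_deg(r1_m, r2_m, freq_hz):
    """Phase offset (degrees, 0-360) between two equal pure-tone sources at
    distances r1 and r2 from the mic, due purely to the path-length difference."""
    dt = (r2_m - r1_m) / C                # arrival-time difference (s)
    return (dt * freq_hz * 360.0) % 360.0

# KH's hypothetical: sources 3 m and 5 m from the mic, a 440 Hz pure tone
phi = phase_difference_deg(3.0, 5.0, 440.0)
```

For these figures the 2 m path difference yields roughly a 5.8 ms arrival offset, i.e. about 204 degrees of phase at 440 Hz; nothing beyond timing is involved, which is the point being made.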

So add a second microphone, and you have the same signals recorded from
a different position in space. As long as you know the microphone
positions, it's easy to determine the relative positions of the two
instruments, aurally or mathematically. Yet when you add in the effects
of the reverberant sound field you have a whole new set of signals of
varying strengths and arrival times, and thus phase differences. As a
listener, in place of the microphones, even minor head movements allow
you to localize the instruments by sampling different angular
presentations (i.e. the HRTF effect) and analyzing multiple wave fronts.
This depth of information is simply not captured in a stereo recording.
That is the information that is missing; that's the information that
allows us to establish accurate positional data.


snip

Well, no, the point is it still must be an illusion, because the
information is not in the recording.


Part of the information is.


Part, yes. That's the point. Is there enough to make a pretty
convincing reproduction? Clearly yes for a vast number of folks.

You're making an error 180deg from Gary's
error, but still an error.
Due to projection from a higher to a lower number of dimensions, part of the
information is lost, but part is still retained.


The part that is lost is the part that allows us to "make sense" of the
reverberant sound field. Similar to the failings of binaural; the
reverberant field is there, but from a single fixed perspective which
defeats a lot of our ability to localize sounds as we do normally.

I agree, it only need be "sufficient" to fool the listener. But it
simply cannot be "the same".


But "the same" is not needed. Our ears have finite resolution and our
brains are sensitive only to some parts of the signal.


Absolutely true. Now, if we were talking about using that information
and tailoring the reproduction to accentuate the parts to which we are most
sensitive (e.g. perceptual coding), no problem. We're not though.
We're talking about a brute-force approach that adds a lot of spatial
information that is not related to the spatial environment of the venue.
Hence the whole concept of the reproduction as a "separate work of art".


Reflecting the sound cannot, except in the context of listener
preference, ameliorate this constraint.

It's not straight-out proven either way. But I'd say it's rather
improbable. Still, I'm open to being shown otherwise. That's why I wanted to
see a theory, not a nice trick. A theory which would explain that the
needed cues are in the reproduced signal and distractions are either
masked or attenuated enough.


I agree that a real theory would be nice. I think, however, that it's
pretty clear that taking the entire signal, direct and reverberant, and
adding additional phase shifts by reflecting off the front wall, is a
totally indiscriminate approach.


All subject to individual listener response however. Since the
reproduction *must* be different, and must present a different HRTF
than the original, it has to be listener dependent.


Only to a point. Listeners are humans not superbeings and all have their
limitations.


I disagree. Listener limitations are the issue. If we were superbeings,
and all heard "perfectly", then we could, indeed, create some
universally recognized paradigm for perfect reproduction (might not get
there physically, but it's theoretically feasible).

If you have a device capable of running 30 mph you can
outrun any human going on his feet. IOW, listener dependence has its bounds.


I don't think that's true, in this context. Aural limitations that *I*
have are more likely, IMO, to make me prefer a different presentation
than a 20 year old with perfect hearing. IOW, I don't think there can be
a universal standard of sufficiency, precisely because of listener
limitations or preferences.


Can you create a model that is statistically "better"? Certainly. But
then, you must understand that, statistically, *low* bit-rate MP3s
are sonically fine.


And high bitrate Oggs, Musepacks or AC3s are sonically indistinguishable.


That's not the point though, even ignoring that there are likely those
who would disagree with that characterization. The point is: yes, you
can do lots of things to improve the reproduction, things that may
statistically be considered better - possibly by a large margin - but
that does not a "paradigm" make.


snip

In fact, real properly[*] recorded events are miked at a distance closer
than a typical listener sits. Moreover, mikes are typically high in the
air, so they get early reflections primarily just from the floor and not
from all the close surroundings of a typical listener (as there aren't any
up there).


Yes, and how does reflecting these floor reflections - that arrive at
the listener from a specific incident angle - from a totally different
incident angle, during replay, provide an accurate representation?


First, one has to ascertain what accuracy of incident angles is really
needed. That's in fact a part of what I miss from what Gary presented.


Yes, but using his approach, it doesn't matter how much accuracy is
*needed* because there is no selective filtering of any kind being applied.

Second, those floor reflections at a real-life listener position (i.e. not
10 feet up, almost above the conductor's head) get re-reflected as well.


Stereo recordings recorded from a typical listener position
do not sound very spectacular. This is (partly) because that sound is
then replayed at the listener's venue, where there are additional reflections
(nobody listens in an anechoic chamber). So good recordings already take
into account those additional reflections. Thus additional reflections
are often 'unnatural' -- they contain peaks due to room shape and
dimensions (they incorporate replay-room info); in the case of box speakers
they are much damped in the highs, etc...


I would basically agree. But, "taking into account those additional
reflections" means altering the recording to account for the playback
medium and venue. Such tailoring must make the information on the
recording inaccurate relative to the acoustic in the original venue, no?


Well, this information is accurate at the miking position(s). Only that
position is chosen


OK but then you are not talking about "taking into account" the
additional reflections in the context of adjusting the recording to
compensate, right?


Almost analogous to an RIAA pre-emphasis and de-emphasis without a
reference standard.


Somewhat, I agree. But even without a pre-emphasis standard, no pre-emphasis
at all was worse.
Similarly, recordings miked from a typical listener position tend to
present an unimpressive audio scene rendition.


I read your "taking into account" description above as meaning you
adjust the recording to compensate, hence my analogy of RIAA. It appears
I misunderstood you.

Keith


  #21   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default Mind Stretchers

On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote
(in article ):

On 6/13/2012 4:47 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:


snip


And phase as Audio Empire points out.

Well yes, but what is phase except a temporal shift?


You're conflating phase and wavefront. You can have signals 180deg out of
phase arriving at the same moment.


What I meant to say was "phase differences". Two identical waves 180deg
out of phase differ only by a temporal shift of half a period (1/(2f)), right?


in conjunction with the HRTF of the listener, create a spatial image.
That information was not, however, encoded into the recording except
as temporal and level information.

And possibly phase as well.

Ditto


See above. Phase is a property different from timing.


With respect to the specific discussion, I don't see how they can be
considered separate properties. For example, suppose we were to take two
instruments that could produce a single pure tone (hypothetically) and
place them on a stage, one 3 m from the mic and one 5 m from the mic,
shifted laterally 1 m. If both instruments were started simultaneously,
calibrated to provide equal signal levels at the microphone, then the
wavefront at the microphone position would, from a direct perspective,
comprise two out-of-phase waves, right? Yet the only difference in the
signals is the arrival times of the respective peaks and troughs, no?


Not necessarily. First of all, both players might not start the note at the
same point in its cycle even though they start it at the same time, meaning
that their wavefronts might arrive at the diaphragm shifted by that
difference, and might be greater or less than 180 degrees out of phase. Also,
a real flute waveform is more complex than a simple sinewave, and therefore
there will be phase anomalies between the two flutes. Now if we use two
speakers fed by a sinewave generator, the problem will still exist that
unless it's the same generator through both speakers there will still be a
random phase component other than the distance component. But that hardly
tells us anything about the real world.

So add a second microphone, and you have the same signals recorded from
a different position in space. As long as you know the microphone
positions, it's easy to determine the relative positions of the two
instruments, aurally or mathematically. Yet when you add in the effects
of the reverberant sound field you have a whole new set of signals of
varying strengths and arrival times, and thus phase differences. As a
listener, in place of the microphones, even minor head movements allow
you to localize the instruments by sampling different angular
presentations (i.e. the HRTF effect) and analyzing multiple wave fronts.
This depth of information is simply not captured in a stereo recording.


Sure it is. It's captured by only two mikes (ideally), but in the right
circumstances, that's enough.

That is the information that is missing; that's the information that
allows us to establish accurate positional data.


I maintain that in a properly made recording, it's not missing.
  #22
Posted to rec.audio.high-end
KH
Mind Stretchers

On 6/17/2012 6:34 PM, Audio Empire wrote:
On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote
(in ):

On 6/13/2012 4:47 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:


snip

So add a second microphone, and you have the same signals recorded from
a different position in space. As long as you know the microphone
positions, it's easy to determine the relative positions of the two
instruments, aurally or mathematically. Yet when you add in the effects
of the reverberant sound field you have a whole new set of signals of
varying strengths and arrival times, and thus phase differences. As a
listener, in place of the microphones, even minor head movements allow
you to localize the instruments by sampling different angular
presentations (i.e. the HRTF effect) and analyzing multiple wave fronts.
This depth of information is simply not captured in a stereo recording.


Sure it is. It's captured by only two mikes (ideally), but in the right
circumstances, that's enough.


How is it captured? I'm not referring to *soundstage* depth, clearly
that only requires two mics, rather I'm talking about information
density. A listener, with only minute head movements, samples a number
of different wavefronts, providing an information density much greater
than that achieved by any fixed recording setup, whether stereo,
multichannel, or binaural.

That is the information that is missing; that's the information that
allows us to establish accurate positional data.


I maintain that in a properly made recording, it's not missing.


I believe the information to which I'm referring is missing from the
recording. Where, in a stereo recording, is information from multiple
wavefronts, both normal and off-angle, recorded?

There is no doubt that there is sufficient information in a stereo
recording to create a left/right soundstage, as well as depth
localization, and at least an illusion of height, although I admit I
don't have a firm geometric/visual conception of quite how that is achieved.

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.

If I'm missing something here, feel free to enlighten me. As I stated
previously, I don't claim any special expertise in recording
technologies. But many things in our hobby do not require such
expertise to understand.

Keith

  #23
Posted to rec.audio.high-end
Audio Empire
Mind Stretchers

On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote
(in article ):

On 6/17/2012 6:34 PM, Audio Empire wrote:
On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote
(in ):

snip

So add a second microphone, and you have the same signals recorded from
a different position in space. As long as you know the microphone
positions, it's easy to determine the relative positions of the two
instruments, aurally or mathematically. Yet when you add in the effects
of the reverberant sound field you have a whole new set of signals of
varying strengths and arrival times, and thus phase differences. As a
listener, in place of the microphones, even minor head movements allow
you to localize the instruments by sampling different angular
presentations (i.e. the HRTF effect) and analyzing multiple wave fronts.
This depth of information is simply not captured in a stereo recording.


Sure it is. It's captured by only two mikes (ideally), but in the right
circumstances, that's enough.


How is it captured? I'm not referring to *soundstage* depth, clearly
that only requires two mics, rather I'm talking about information
density. A listener, with only minute head movements, samples a number
of different wavefronts, providing an information density much greater
than that achieved by any fixed recording setup, whether stereo,
multichannel, or binaural.


I think that the soundfield, by the time it reaches the audience, has
coalesced into a single whole that is perceived in a certain way from each
location within the audience. The human brain allows us to search within that
soundfield and pick out certain sounds upon which to concentrate, but that's
part of the human ear/intelligence interface that allows us to pick certain
sounds out of a plethora of background. I.E. it's a survival skill that
allows us to pick out the snap of a twig against a waterfall, or for a mother
to distinguish her lost child crying in a crowd. It is not a result of the
orchestra being comprised of many different soundfields which moving our
heads allows us to intersect and sample, and which microphones miss because
they are locked in a single location.

That is the information that is missing; that's the information that
allows us to establish accurate positional data.


I maintain that in a properly made recording, it's not missing.


I believe the information to which I'm referring is missing from the
recording. Where, in a stereo recording, is information from multiple
wavefronts, both normal and off-angle, recorded?


It's not necessary, as there aren't multiple wavefronts, or if there are, both
our microphones and our ears intersect all of them arriving at that point in
space.

There is no doubt that there is sufficient information in a stereo
recording to create a left/right soundstage, as well as depth
localization, and at least an illusion of height, although I admit I
don't have a firm geometric/visual conception of quite how that is achieved.


Subtle phase differences that give our ears the (relative) height of a sound
source. They are captured by microphones, too, in a true stereo recording.

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.


If you accept the premise, then your conclusion is correct. However from my
knowledge and experience, I find that your premise isn't correct.
  #24
Posted to rec.audio.high-end
KH
Mind Stretchers

On 6/18/2012 4:46 PM, Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote (in
):

On 6/17/2012 6:34 PM, Audio Empire wrote:
On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote (in
):

snip


How is it captured? I'm not referring to *soundstage* depth,
clearly that only requires two mics, rather I'm talking about
information density. A listener, with only minute head movements,
samples a number of different wavefronts, providing an information
density much greater than that achieved by any fixed recording
setup, whether stereo, multichannel, or binaural.


I think that the soundfield, by the time it reaches the audience,
has coalesced into a single whole that is perceived in a certain way
from each location within the audience.


I agree, relative to any specific head orientation. But not as the head
is turned to sample different incident angle information.

The human brain allows us to search within that soundfield and pick
out certain sounds upon which to concentrate, but that's part of the
human ear/intelligence interface that allows us to pick certain
sounds out of a plethora of background. I.E. it's a survival skill
that allows us to pick out the snap of a twig against a waterfall, or
for a mother to distinguish her lost child crying in a crowd. It is
not a result of the orchestra being comprised of many different
soundfields which moving our heads allows us to intersect and sample,
and which microphones miss because they are locked in a single
location.


I don't think that's quite accurate. You can think of it in simple
geometric terms. If you look directly at the center of the soundfield,
both ears equidistant from the center of the stage, you're 'sampling'
one perspective of the soundfield. If you turn your head left 10
degrees, there is now a clear ear-to-ear separation in the arrival time
of any given sound that previously reached both ears at the same moment
(from the center of the stage); simultaneously, you will hear a higher
ratio of reverberant to direct information in the left ear and,
possibly, a lower ratio in the right ear. That is a situation the brain
has evolved to interpret into location cues.
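The magnitude of that arrival-time separation can be estimated with Woodworth's classic spherical-head approximation, ITD = (a/c)(theta + sin theta). The head radius below is an assumed typical value, not a figure from the thread:

```python
import math

a = 0.0875  # assumed head radius, m (typical adult)
c = 343.0   # speed of sound, m/s

def itd_woodworth(azimuth_deg):
    """Interaural time difference for a distant source at the given azimuth,
    using Woodworth's spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (a / c) * (theta + math.sin(theta))

# Facing the source gives zero ITD; a 10-degree head turn puts the source
# at 10 degrees azimuth and introduces a measurable interaural delay.
print(f"ITD after a 10-degree head turn: {itd_woodworth(10.0) * 1e6:.0f} microseconds")
```

That comes out to roughly 90 microseconds, well above the commonly cited interaural-delay detection threshold of about 10 microseconds, which is consistent with the claim that even small head turns yield usable localization cues.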


That is the information that is missing; that's the information
that allows us to establish accurate positional data.

I maintain that in a properly made recording, it's not missing.


I believe the information to which I'm referring is missing from
the recording. Where, in a stereo recording, is information from
multiple wavefronts, both normal and off-angle, recorded?


It's not necessary, as there aren't multiple wavefronts, or if there
are, both our microphones and our ears intersect all of them arriving
at that point in space.


Well, I believe that yes, there are multiple wavefronts in the sense
that turning your head will provide a different perspective to each ear,
and those differences in perspective allow the ear and brain to localize
sounds in three dimensions. Obviously, the ability to discriminate
different off-angle perspectives is subject to threshold values, and
sensitivity and precision constraints, all of which will vary to a
certain degree among individuals.

Again, I'm not saying there's not enough information in the recording to
create a very good reproduction. But, if we're talking about some
theory that would create a playback method/environment/setup that would
be a new paradigm of realism, and overcome all preferential effects,
then close won't do.

There is no doubt that there is sufficient information in a stereo
recording to create a left/right soundstage, as well as depth
localization, and at least an illusion of height, although I admit
I don't havee a firm geometric/visual conception of quite how that
is achieved.


Subtle phase differences that give our ears the (relative) height of
a sound source. They are captured by microphones, too, in a true stereo
recording.


That I get. From a visual geometry perspective, I don't have a mental
image that accounts for height differences. Left/right, front/back,
fairly clear. Height, not so much, unless frequency plays a role in
height assessment.


But the ability to sample a virtually endless number of
stereophonic (relative to listener reception) wavefronts, available
to an audience member, does not translate to a recording made from
any fixed perspective.


If you accept the premise, then your conclusion is correct. However
from my knowledge and experience, I find that your premise isn't
correct.


Ignoring threshold effects, I don't see how it could not be correct.
Not in the sense of different overall soundfields intertwined in the
venue, but rather different stereophonic interpretations of the overall
soundfield when evaluated from different incident angles. It's fairly
obvious that when you look ahead, look at the right wall, then look at
the left wall, the sound changes significantly.

Keith

  #25
Posted to rec.audio.high-end
Gary Eickmeier
Mind Stretchers

On Mon Jun 18 23:46:13 2012 Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote



Subtle phase differences that give our ears the (relative) height of a sound
source. They are captured by microphones, too, in a true stereo recording.

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.


If you accept the premise, then your conclusion is correct. However from my
knowledge and experience, I find that your premise isn't correct.


There are no "subtle phase differences" beyond about 700 Hz, at which
point the wavelength becomes too small for the perception of any phase
effects, and pure time difference takes over. Phase has some effect
below that point, but that region is not a biggie in spatial
perception.

Gary Eickmeier


  #26
Posted to rec.audio.high-end
Audio Empire
Mind Stretchers

On Mon, 18 Jun 2012 19:20:16 -0700, KH wrote
(in article ):

On 6/18/2012 4:46 PM, Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote (in
):

On 6/17/2012 6:34 PM, Audio Empire wrote:
On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote (in
):

snip


How is it captured? I'm not referring to *soundstage* depth,
clearly that only requires two mics, rather I'm talking about
information density. A listener, with only minute head movements,
samples a number of different wavefronts, providing an information
density much greater than that achieved by any fixed recording
setup, whether stereo, multichannel, or binaural.


I think that the soundfield, by the time it reaches the audience,
has coalesced into a single whole that is perceived in a certain way
from each location within the audience.


I agree, relative to any specific head orientation. But not as the head
is turned to sample different incident angle information.


I think that's the ear/brain at work and not anything to do with the
soundfield.


The human brain allows us to search within that soundfield and pick
out certain sounds upon which to concentrate, but that's part of the
human ear/intelligence interface that allows us to pick certain
sounds out of a plethora of background. I.E. it's a survival skill
that allows us to pick out the snap of a twig against a waterfall, or
for a mother to distinguish her lost child crying in a crowd. It is
not a result of the orchestra being comprised of many different
soundfields which moving our heads allows us to intersect and sample,
and which microphones miss because they are locked in a single
location.


I don't think that's quite accurate. You can think of it in simple
geometric terms. If you look directly at the center of the soundfield,
both ears equidistant from the center of the stage, you're 'sampling'
one perspective of the soundfield. If you turn your head left 10
degrees, there is now a clear ear-to-ear separation in the arrival time
of any given sound that previously reached both ears at the same moment
(from the center of the stage); simultaneously, you will hear a higher
ratio of reverberant to direct information in the left ear and,
possibly, a lower ratio in the right ear. That is a situation the brain
has evolved to interpret into location cues.


My experiments with microphone placement tell me that this is NOT what's
going on.


That is the information that is missing; that's the information
that allows us to establish accurate positional data.

I maintain that in a properly made recording, it's not missing.

I believe the information to which I'm referring is missing from
the recording. Where, in a stereo recording, is information from
multiple wavefronts, both normal and off-angle, recorded?


It's not necessary, as there aren't multiple wavefronts, or if there
are, both our microphones and our ears intersect all of them arriving
at that point in space.


Well, I believe that yes, there are multiple wavefronts in the sense
that turning your head will provide a different perspective to each ear,
and those differences in perspective allow the ear and brain to localize
sounds in three dimensions. Obviously, the ability to discriminate
different off-angle perspectives is subject to threshold values, and
sensitivity and precision constraints, all of which will vary to a
certain degree among individuals.

Again, I'm not saying there's not enough information in the recording to
create a very good reproduction. But, if we're talking about some
theory that would create a playback method/environment/setup that would
be a new paradigm of realism, and overcome all preferential effects,
then close won't do.

There is no doubt that there is sufficient information in a stereo
recording to create a left/right soundstage, as well as depth
localization, and at least an illusion of height, although I admit
I don't have a firm geometric/visual conception of quite how that
is achieved.


Subtle phase differences that give our ears the (relative) height of
a sound source. They are captured by microphones, too, in a true stereo
recording.


That I get. From a visual geometry perspective, I don't have a mental
image that accounts for height differences. Left/right, front/back,
fairly clear. Height, not so much, unless frequency plays a role in
height assessment.


But the ability to sample a virtually endless number of
stereophonic (relative to listener reception) wavefronts, available
to an audience member, does not translate to a recording made from
any fixed perspective.


If you accept the premise, then your conclusion is correct. However
from my knowledge and experience, I find that your premise isn't
correct.


Ignoring threshold effects, I don't see how it could not be correct.
Not in the sense of different overall soundfields intertwined in the
venue, but rather different stereophonic interpretations of the overall
soundfield when evaluated from different incident angles. It's fairly
obvious that when you look ahead, look at the right wall, then look at
the left wall, the sound changes significantly.

Keith



Different strokes for different folks, I guess. I have done countless
experiments with microphones, and the thing those experiments have taught me
is that it's real easy to overthink this question. There are a number of
fundamentals, and it takes a certain insight to be able to "see" like a
microphone sees (talent, maybe?), but once you understand which microphone
movements will result in an actual change in perspective and which ones
won't, you begin to realize what you are dealing with in terms of the
physics of stereo recording.
  #27
Posted to rec.audio.high-end
Audio Empire
Mind Stretchers

On Mon, 18 Jun 2012 20:21:01 -0700, Gary Eickmeier wrote
(in article ):

On Mon Jun 18 23:46:13 2012 Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote



Subtle phase differences that give our ears the (relative) height of a sound
source. They are captured by microphones, too, in a true stereo recording.

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.


If you accept the premise, then your conclusion is correct. However from my
knowledge and experience, I find that your premise isn't correct.


There are no "subtle phase differences" beyond about 700 Hz, at which
point the wavelength becomes too small for the perception of any phase
effects, and pure time difference takes over. Phase has some effect
below that point, but that region is not a biggie in spatial
perception.

Gary Eickmeier


You're going to have to provide some provenance for that statement.
  #28
Posted to rec.audio.high-end
Sebastian Kaliszewski
Mind Stretchers

Gary Eickmeier wrote:
On Mon Jun 18 23:46:13 2012 Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote



Subtle phase differences that give our ears the (relative) height of a sound
source. They are captured by microphones, too, in a true stereo recording.

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.

If you accept the premise, then your conclusion is correct. However from my
knowledge and experience, I find that your premise isn't correct.


There are no "subtle phase differences" beyond about 700 Hz, at which
point the wavelength becomes too small for the perception of any phase
effects, and pure time difference takes over.


This does not agree with things I have read. Please provide source for that
information...

rgds
\SK

--
"Never underestimate the power of human stupidity" -- L. Lang
--
http://www.tajga.org -- (some photos from my travels)
  #29
Posted to rec.audio.high-end
Gary Eickmeier
Mind Stretchers

"Sebastian Kaliszewski" wrote in message
...
Gary Eickmeier wrote:


There are no "subtle phase differences" beyond about 700 Hz, at which
point the wavelength becomes too small for the perception of any phase
effects, and pure time difference takes over.


This does not agree with things I have read. Please provide source for
that information...


Sorry, thought this was well known. The best reference I found is in the AES
Anthology of Stereophonic Techniques, James Moir, "Stereophonic
Reproduction," Audio Engineering, 1952 October, pp. 26 - 28. I quote in
part:

"These differences justify further discussion. The reason for the difference
in time of arrival at the two ears is evident and requires no further
explanation, but the question immediately arises as to which part of the
sound-wave cycle is accepted by the ear as determining the time of arrival
at that ear. On an impulsive sound having a steep wave front it may be
assumed that the arrival of the wave front is recognized, but on a
repetitive waveform there is difficulty in understanding just how the ear
recognizes the difference between successive cycles with identical waveform.
A high frequency wave passing from right to left will have several cycles
pass the right ear before the first cycle reaches the left ear, and the
right ear may not know just how many cycles have passed at the instant the
first cycle reaches the left ear. This rather suggests that there may be
difficulty in fixing the position of a high frequency source having a
frequency such that more than half to one cycle of the wave can be
accommodated in the space between the ears. Taking the velocity of sound as
33,000 cm/sec and the ear spacing as 21 cm, it might be expected that
frequencies above 800 cps (half wave = 21 cm) might present difficulties in
location and it is worth noting that this is found to be the case in
practice."

In other words, at frequencies below about 800 Hz a time difference results
in an unambiguous phase difference, but at higher frequencies phase makes no
sense. In other well-known literature such as Blauert, the statement about
level vs. time-of-arrival differences is that the signal will be slammed
completely right or left with a level difference of 30 dB or a time difference
of 630 microseconds to 1 ms.
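The arithmetic behind those quoted figures is easy to check. A sketch using 343 m/s for the speed of sound and the 21 cm ear spacing from the quote (Moir's 1952 figure of 330 m/s is why he lands on 800 Hz rather than about 820 Hz):

```python
c = 343.0           # speed of sound, m/s (Moir's quoted figure was 330 m/s)
ear_spacing = 0.21  # effective interaural path length from the quote, m

# Phase becomes ambiguous once half a wavelength fits between the ears,
# i.e. when wavelength / 2 == ear_spacing, so f = c / (2 * ear_spacing).
f_limit = c / (2.0 * ear_spacing)

# Maximum interaural arrival-time difference, for a source directly to one
# side; compare with the ~630 microsecond figure attributed to Blauert.
max_itd = ear_spacing / c

print(f"phase-ambiguity onset: {f_limit:.0f} Hz")
print(f"maximum interaural delay: {max_itd * 1e6:.0f} microseconds")
```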

I note also that with AE's favorite recording technique of coincident
miking, or Blumlein stereo, there is NO time-of-arrival or phase difference
between channels.

Gary Eickmeier


  #30
Posted to rec.audio.high-end
Audio Empire
Mind Stretchers

On Wed, 20 Jun 2012 16:55:04 -0700, Gary Eickmeier wrote
(in article ):

"Sebastian Kaliszewski" wrote in message
...
Gary Eickmeier wrote:


There are no "subtle phase differences" beyond about 700 Hz, at which
point the wavelength becomes too small for the perception of any phase
effects, and pure time difference takes over.


This does not agree with things I have read. Please provide source for
that information...


Sorry, thought this was well known. The best reference I found is in the AES
Anthology of Stereophonic Techniques, James Moir, "Stereophonic
Reproduction," Audio Engineering, 1952 October, pp. 26 - 28. I quote in
part:

"These differences justify further discussion. The reason for the difference
in time of arrival at the two ears is evident and requires no further
explanation, but the question immediately arises as to which part of the
sound-wave cycle is accepted by the ear as determining the time of arrival
at that ear. On an impulsive sound having a steep wave front it may be
assumed that the arrival of the wave front is recognized, but on a
repetitive waveform there is difficulty in understanding just how the ear
recognizes the difference between successive cycles with identical waveform.
A high frequency wave passing from right to left will have several cycles
pass the right ear before the first cycle reaches the left ear, and the
right ear may not know just how many cycles have passed at the instant the
first cycle reaches the left ear. This rather suggests that there may be
difficulty in fixing the position of a high frequency source having a
frequency such that more than half to one cycle of the wave can be
accommodated in the space between the ears. Taking the velocity of sound as
33,000 cm/sec and the ear spacing as 21 cm, it might be expected that
frequencies above 800 cps (half wave = 21 cm) might present difficulties in
location and it is worth noting that this is found to be the case in
practice."

In other words, at frequencies below about 800 Hz a time difference results
in an unambiguous phase difference, but at higher frequencies phase makes no
sense. In other well-known literature such as Blauert, the statement about
level vs. time-of-arrival differences is that the signal will be slammed
completely right or left with a level difference of 30 dB or a time difference
of 630 microseconds to 1 ms.


This MIGHT be true if we're dealing with sine waves, but neither a musical
instrument nor a musical ensemble produces sine waves. Also, the frequency
doesn't matter unless we are talking about two or more sine-wave signals of
the same frequency; music is a complex waveform, and the ear/brain's
interaction with that waveform is, at best, not well understood.

I note also that with AE's favorite recording technique of coincident
miking, or Blumlein stereo, there is NO time-of-arrival or phase difference
between channels.


That's ridiculous. Phase differences start at the SOURCE of the wavefront,
not at the destination (in this case, the microphones, where they already
exist). No two sound sources are ever completely in phase at the same time.
In fact, one instrument will produce a complex waveform that has many
different phase relationships occurring at once within its own sound.


  #31
Posted to rec.audio.high-end
Gary Eickmeier
Mind Stretchers

On Tue Jun 19 02:20:16 2012 KH wrote:

On 6/18/2012 4:46 PM, Audio Empire wrote:
On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote (in

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.

If you accept the premise, then your conclusion is correct. However
from my knowledge and experience, I find that your premise isn't
correct.

Ignoring threshold effects, I don't see how it could not be correct.
Not in the sense of different overall soundfields intertwined in the
venue, but rather different stereophonic interpretations of the overall
soundfield when evaluated from different incident angles. It's fairly
obvious that when you look ahead, look at the right wall, then look at
the left wall, the sound changes significantly.

I have told you that it doesn't work that way. You are setting up a new
sound field within your listening room. If it is done right, then you can
turn your head all you want within that new field, with similar results to
those you get live. I routinely "see" instruments in my listening room, and
can turn my head in subtle movements toward those sounds just as in the live
venue.

It is not a head-related system. The resultant sound need not be from a
fixed perspective. For example, the center channel with dialog for movies, or
a solo singer for stereo. You can go anywhere in the room and it will stay
where it belongs, and be perceived where it belongs, and you can turn toward
it as you please, just as you can toward a left- or right-channel sound, or
anything in between.



Gary Eickmeier


  #32
Posted to rec.audio.high-end
Gary Eickmeier
Mind Stretchers

"Audio Empire" wrote in message
...
On Wed, 20 Jun 2012 16:55:04 -0700, Gary Eickmeier wrote
(in article ):


In other words, at frequencies above 800 Hz a time difference can result
in
a phase difference at low frequencies, but at higher freqs phase makes no
sense. In other well-known literature such as Blauert, the statement
about
level vs time of arrival differences is that the signal will be slammed
completely right or left with level difference of 30 dB or a time
difference
of 630 microseconds to 1 ms.


This MIGHT be true if we're dealing with sine waves, but neither a musical
instrument nor a musical ensemble produces sine waves. Also, the frequecy
doesn't matter unless we are talking about two or more sine wave signals
of
the same frequency, but music is a complex waveform and the ear/brain's
interaction with that waveform is not well understood at best.


Stan Lipshitz, in his famous article "Stereo Microphone Techniques: Are the
Purists Wrong?" (JAES 1986 September) puts it like this:

"When listening live, the signals at the two eardrums differ in time of
arrival, level, and spectral content, and these differences depend on the
source position. The time of arrival difference is due to the physical
spacing of our ears and cannot exceed about 630 microseconds in real life.
This corresponds to a path length difference of about 210 mm, and this in
turn represents a half wavelength at a frequency of around 800 Hz. It thus
follows that for frequencies below about 800 Hz there is an unambiguous
phase relationship between the two ear signals and the source direction, the
ear nearer the source having the leading phase. This is what I shall call
the low frequency regime. This phase difference is frequency dependent; in
fact it varies linearly with frequency since it represents a pure time
delay. At frequencies above about 800 Hz the interaural phase shift can
exceed 180 degrees, and so the ability (on periodic steady state signals) to
discern which ear's signal is leading and which is lagging is lost. So,
clearly, interaural phase relationships would appear to be useful cues only
at low frequencies."

I note also that with AE's favorite recording technique of coincident
miking, or Blumlein stereo, there is NO time-of-arrival or phase difference
between channels.


That's ridiculous. Phase differences start at the SOURCE of the wavefront,
not at the destination (in this case, the microphones, where they already
exist). No two sound sources are ever completely in phase at the same time.
In fact, one instrument will produce a complex waveform that has many
different phase relationships occurring at once within its own sound.


Now you're being silly. Phase differences between what and what? Two sound
sources in phase? If they are two completely different sources, there is no
such thing as phase relationships between them. Phase relationships within
the sound of a single instrument? What in blazes are you talking about?

Gary Eickmeier


  #33
Posted to rec.audio.high-end
KH
Posts: 137
Default Mind Stretchers

On 6/21/2012 4:07 AM, Gary Eickmeier wrote:
On Tue Jun 19 02:20:16 2012 KH wrote:

On 6/18/2012 4:46 PM, Audio Empire wrote:

On Mon, 18 Jun 2012 03:39:56 -0700, KH wrote:

But the ability to sample a virtually endless number of stereophonic
(relative to listener reception) wavefronts, available to an audience
member, does not translate to a recording made from any fixed perspective.


If you accept the premise, then your conclusion is correct. However, from
my knowledge and experience, I find that your premise isn't correct.


Ignoring threshold effects, I don't see how it could not be correct. Not in
the sense of different overall soundfields intertwined in the venue, but
rather different stereophonic interpretations of the overall soundfield
when evaluated from different incident angles. It's fairly obvious that
when you look ahead, look at the right wall, then look at the left wall,
the sound changes significantly.


I have told you that it doesn't work that way.


Yes, and A) you can "tell" me anything you want, but assertions lacking
data are not compelling in the least, and B) you continually misread,
misunderstand, and/or misconstrue what I write. As with:


You are setting up a new sound field within your listening room.


A non-cogent statement interjected into a discussion about *venue* and
*recording environment*. Nothing to do with the reproduction except in
the context of what information is not captured on the recording.

Keith


  #34
Posted to rec.audio.high-end
Audio Empire
Posts: 1,193
Default Mind Stretchers

On Thu, 21 Jun 2012 04:07:18 -0700, Gary Eickmeier wrote
(in article ):

"Audio Empire" wrote in message
...
On Wed, 20 Jun 2012 16:55:04 -0700, Gary Eickmeier wrote
(in article ):


In other words, a time of arrival difference translates into a phase
difference that is an unambiguous cue at low frequencies, but at higher
frequencies phase makes no sense as a cue. In other well-known literature,
such as Blauert, the statement about level vs. time of arrival differences
is that the signal will be slammed completely right or left by a level
difference of 30 dB or a time difference of 630 microseconds to 1 ms.


This MIGHT be true if we're dealing with sine waves, but neither a musical
instrument nor a musical ensemble produces sine waves. Also, the frequency
doesn't matter unless we are talking about two or more sine wave signals of
the same frequency, but music is a complex waveform, and the ear/brain's
interaction with that waveform is, at best, not well understood.


Stan Lipshitz, in his famous article "Stereo Microphone Techniques: Are the
Purists Wrong?" (JAES, September 1986) puts it like this:

"When listening live, the signals at the two eardrums differ in time of
arrival, level, and spectral content, and these differences depend on the
source position. The time of arrival difference is due to the physical
spacing of our ears and cannot exceed about 630 microseconds in real life.
This corresponds to a path length difference of about 210 mm, and this in
turn represents a half wavelength at a frequency of around 800 Hz. It thus
follows that for frequencies below about 800 Hz there is an unambiguous
phase relationship between the two ear signals and the source direction, the
ear nearer the source having the leading phase. This is what I shall call
the low frequency regime. This phase difference is frequency dependent; in
fact it varies linearly with frequency since it represents a pure time
delay. At frequencies above about 800 Hz the interaural phase shift can
exceed 180 degrees, and so the ability (on periodic steady state signals) to
discern which ear's signal is leading and which is lagging is lost. So,
clearly, interaural phase relationships would appear to be useful cues only
at low frequencies."

I note also that with AE's favorite recording technique of coincident
miking, or Blumlein stereo, there is NO time of arrival or phase difference
between channels.
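
The geometry behind that claim can be sketched numerically. This is a
simplified model of an ideal coincident Blumlein pair (two figure-8 patterns
crossed at 90 degrees); the function name and angle convention are my own:

```python
import math

def blumlein_gains(azimuth_deg):
    """Channel gains of an ideal coincident Blumlein pair: two figure-8
    microphones crossed at 90 degrees, aimed 45 degrees either side of
    centre. Because both capsules occupy the same point in space, the only
    interchannel difference a source direction produces is level, never
    time of arrival or phase."""
    a = math.radians(azimuth_deg)     # source angle, positive = toward left
    left = math.cos(a - math.pi / 4)  # figure-8 aimed 45 degrees left
    right = math.cos(a + math.pi / 4) # figure-8 aimed 45 degrees right
    return left, right

# A centred source lands equally in both channels; moving it off-centre
# changes only the level ratio between the channels.
centre = blumlein_gains(0)      # equal left/right gains
off_axis = blumlein_gains(30)   # left gain now exceeds right gain
```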


That's ridiculous. Phase differences start at the SOURCE of the wavefront,
not at the destination (in this case, the microphones, where they already
exist). No two sound sources are ever completely in phase at the same time.
In fact, one instrument will produce a complex waveform that has many
different phase relationships occurring at once within its own sound.


Now you're being silly. Phase differences between what and what? Two sound
sources in phase? If they are two completely different sources, there is no
such thing as phase relationships between them. Phase relationships within
the sound of a single instrument? What in blazes are you talking about?


This isn't worth discussing. If you don't understand sound and how it works
well enough to grasp the concept of the phase difference between fundamental
and harmonics, or between one instrument and another playing together, then
there is no use in my trying to explain it.
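
AE's point here — that the phase relationship between a fundamental and its
harmonics shapes the resulting waveform — is easy to demonstrate. A minimal
sketch (the 100 Hz fundamental and the amplitudes are arbitrary choices of
mine, purely for illustration):

```python
import math

def sample_tone(t, harmonic_phase):
    """A 100 Hz fundamental plus a half-amplitude second harmonic whose
    phase relative to the fundamental is adjustable."""
    return (math.sin(2 * math.pi * 100 * t)
            + 0.5 * math.sin(2 * math.pi * 200 * t + harmonic_phase))

# Sample one full period of the fundamental at each of two harmonic phases.
N = 1000
ts = [n / (100 * N) for n in range(N)]
wave_a = [sample_tone(t, 0.0) for t in ts]          # harmonic in phase
wave_b = [sample_tone(t, math.pi / 2) for t in ts]  # harmonic shifted 90 deg

# Identical frequencies and amplitudes, yet the waveform shapes differ:
# the in-phase version peaks near 1.3, the shifted one near 0.75.
peak_a = max(wave_a)
peak_b = max(wave_b)
```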