#81 | Posted to rec.audio.high-end | Andrew Haley
Subject: In Mobile Age, Sound Quality Steps Back

Scott wrote:
On May 17, 7:43 am, Andrew Haley wrote:
Scott wrote:
On May 15, 1:42 pm, bob wrote:
On May 15, 12:49 pm, "Harry Lavo" wrote:


Cutting "live music" out of the equation is what is wrong with
much of the "objectivist" philosophy extant today.


So tell us Harry, how close does your system sound to the last time
you had a symphony orchestra in your living room?


Why would you ask that? The correct, or at least better question
would be how close does his system sound to the last time he went to
see a good symphony orchestra in a good concert hall with good seats?


There seems to be a presumption here that the sound in a concert
hall is ideal. But there are fairly well-known acoustic phenomena
such as the "seat-dip effect" where there is a dip of some 10-15 dB
over two octaves, centred on about 150 Hz. (This is just an
example: real halls have other problems too.) We can to some
extent compensate for this when we listen at concerts, but it's
highly questionable whether we want the sound of real halls in our
homes.
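To give a feel for the scale of that dip, here is a rough numeric sketch. It models the seat-dip notch as a Gaussian in log-frequency; the function name and all parameter values are illustrative only, not measured hall data:

```python
import math

def seat_dip_attenuation(freq_hz, center_hz=150.0, depth_db=12.0, width_octaves=2.0):
    """Attenuation (dB) of a hypothetical seat-dip notch.

    Models the dip as a Gaussian in log2-frequency: full depth at the
    center frequency, falling to half depth about one octave either side.
    Parameter values are illustrative, not measured hall data.
    """
    octaves_from_center = math.log2(freq_hz / center_hz)
    # Choose sigma so the dip spans roughly width_octaves between
    # its half-depth points (FWHM -> standard deviation).
    sigma = width_octaves / 2.355
    return -depth_db * math.exp(-0.5 * (octaves_from_center / sigma) ** 2)

for f in (75, 150, 300, 600):
    print(f"{f:4d} Hz: {seat_dip_attenuation(f):6.1f} dB")
```

Even this toy model makes the point: a broad two-octave notch of that depth removes a large slice of the low midrange that no home playback chain reproduces.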

This is a matter of goals: do we want to replicate the
concertgoer's experience, or the "pure" sound of a performance,
whatever that may be? There are no simple answers.


You raise an important issue. Yes, the presumption is that the sound
of the concert hall is ideal for a symphonic orchestra. But this is
too broad to be true. There are bad halls and there are bad seats in
many good halls. When *I* talk about live acoustic music as a
reference, I am referring to live music that excels. That means
excellent music played on excellent instruments by excellent
musicians in an excellent hall, from an excellent position in that
hall. The reason to strive for such sound is that, IMO, it sets the
standard for aesthetic musical beauty.


Well, yeah. But the claim was that "I know what real music sounds
like in a real space, and that sets the standard for musical
reproduction." But that seems to me more of a Platonic ideal than
anything that happens in reality, or at least not very often. My
experience, if I close my eyes at a concert and try to imagine myself
at home, is that the sound of real halls is very far from ideal. And
also, occasionally it would be nice to turn the volume up a little bit
-- or to turn down what appears to be an outing from the emphysema
ward in the rows behind me.

One other thing that's worth mentioning is the highly non-uniform
radiation pattern of string instruments, and the way they sound very
different (brighter, clearer) close up than in the body of the concert
hall. OK, so you can insist on ultra-minimal microphone setups to
replicate the sound of that hall, as it would be heard by a member of
the audience. But then you're prioritizing the sound of the hall over
the sound of the music! I think that much reverberation sounds
excessive in the home, and you would hear the music more clearly with
a bit less reverb than you'd get in a hall.

I know this may sound heretical, but maybe the sound at the position
of the microphones above the orchestra is *better* than that of any
seat in the hall. Also, maybe it really is useful for a balance
engineer to be able to turn up the soloist in a concerto.

In other words, I am denying that your ideal sound of an excellent
hall from an excellent position in that hall is any sort of ideal at
all.

Andrew.
#82 | Posted to rec.audio.high-end | Audio Empire

On Tue, 18 May 2010 06:49:44 -0700, Arny Krueger wrote
(in article ):

"Harry Lavo" wrote in message


I do have concern about your first answer and about the
screening in general.

You mentioned the speakers were screened. It is hard to
find material that is acoustically transparent and yet
visually opaque.


It was not opaque cloth. The effect of opacity was achieved by lighting,
like a theatrical scrim.

Were the two sets of speakers at all
muffled in the high frequencies by the screening? Were
they visible in vague outline?


No and no.

Secondly, if the speakers were separately and
independently placed for best sound, wouldn't listeners
be able to tell, just by slight shifts in soundstage, which
speakers were which?


No. Both speakers projected soundstages that were alike enough, and there
was nothing about them that gave any clues as to the speakers' technology.

And finally, you say that the speakers did not affect
each other's soundstaging. For that to be highly likely,
the larger speakers most likely would have to be planar
or electrostatic in nature, presenting their "edge" to
the smaller speakers.


I'm not going to try to match reality up with someone's personal acoustical
theory.


My question is what was used to ensure that the two speakers were exactly the
same loudness across the entire audio spectrum? Also, how much did the fact
that the Behringers are self-powered while the "not named" $12,000 audiophile
speakers required a separate (and entirely different) amp to power them
affect the results?
#83 | Posted to rec.audio.high-end | Arny Krueger

"Audio Empire" wrote in message


My question is what was used to ensure that the two
speakers were exactly the same loudness across the entire
audio spectrum?


Didn't happen. The speakers had slightly different frequency responses at
various listening locations.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?


We didn't know or care how the speakers we were listening to were made. It
was all about sound quality.

#84 | Posted to rec.audio.high-end | Scott

On May 17, 2:49 pm, Audio Empire wrote:
On Mon, 17 May 2010 07:07:58 -0700, Scott wrote
(in article ):





On May 16, 6:07 pm, Audio Empire wrote:
On Sun, 16 May 2010 11:37:25 -0700, Scott wrote


Bias controls do not make it any more difficult to make those
determinations. It is other aspects of the design and execution of any
comparison that will determine if they will tell you which speakers
are "the most accurate" or what you like in a given price range.
Nothing about blind protocols should ever prevent an otherwise
effective test for determining those aspects of audio from doing so.
All bias controls will do is control the biases they are designed to
control from affecting the outcome. IF the bias controls are designed
and implemented well.


Bias-controlled tests ultimately compare one set of speaker compromises
to another set of compromises, and tell me very little about which is the
more accurate.


How does removing the bias controls of any given test allow the test
to tell you *more* about which speaker is more accurate? If a given
test is telling you little about which is the more accurate speaker,
then the flaw in that test lies in the design of that test, not in any
particular bias controls that may be implemented in that test, unless
those specific bias controls are causing some sort of problem. That is
not an intrinsic property of bias controls. If that is happening, then
the bias controls are being poorly designed or poorly implemented.


Any test one designs for measuring the relative accuracy of speakers
against some sort of reference by ear can only be helped by well
designed and well executed bias controls. If you disagree then please
offer an argument to support *that.* There is no reason to talk about
other bias controlled test designs that are simply not designed to
measure perceived relative accuracy of loudspeakers.


I think you misunderstand me. Comparing one speaker to another using bias
controlled tests like DBT and ABX tells me nothing in and of itself.


As I said, let's not talk about tests that are poorly designed for the
task at hand. ABX is merely a small subset of bias controlled testing.
There is more in this world than an ABX test when it comes to bias
controls. The vast majority of DBTs in this world actually are not
ABX. You have to design a test that measures what you want to measure.
In this case it is accuracy of speakers. *Then* you design bias
controls to take bias out of the equation. This isn't about ABX.
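Since ABX keeps coming up as though it were the whole of bias-controlled testing, it is worth noting how little statistical machinery an ABX run actually involves. A minimal sketch (the function is hypothetical, not taken from any real ABX package): under the null hypothesis that the listener is only guessing, the number of correct trials is binomial with p = 0.5:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value for an ABX run: the probability of
    getting at least `correct` answers right out of `trials` by pure
    guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials
print(round(abx_p_value(12, 16), 4))
```

The protocol part of ABX is just the forced choice and the hidden assignment; the scoring is this one line of arithmetic. The design questions Scott raises (what to listen for, over what time span, with what material) sit entirely outside it.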

HOWEVER,
if I am allowed to have this same setup over a long period of time (say
several hours to several days), using recordings of my own choosing, I will
be able to compare BOTH to my memory of what real, live music sounds like
and be able to tell which of the two speakers is the more "realistic" (or,
if you prefer, accurate to my memory of the sound of live music).


OK now please tell me how if you were to do this with bias controls in
place it would make the audition less informative?


Of course, this
assumes that an accurate DBT can be devised for speakers, which I
seriously doubt.


Why? All we are talking about is controlling bias effects.



For instance: How do you normalize such a test?


How do you normalize such a test under sighted conditions? Do it that
way and then implement the bias controls.


Suppose one speaker is 89
dB/Watt and the other is 93 dB/Watt? You'd have to use a really accurate SPL
meter. Few have that. I have a Radio Shack SPL meter like most of us, but
it's probably not accurate enough to set speaker levels to within less than
1 dB for such a test.



Seems like an issue that is completely independent of bias controls.
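For reference, the arithmetic behind that 89 vs. 93 dB/W gap is plain dB conversion; a small sketch (the helper names are my own, not from any measurement tool):

```python
def power_ratio_db(db):
    """Power ratio corresponding to a level difference in dB."""
    return 10 ** (db / 10)

def voltage_ratio_db(db):
    """Voltage (amplitude) ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

sensitivity_gap = 93 - 89  # dB/W difference between the two speakers
print(f"power into the less sensitive speaker: {power_ratio_db(sensitivity_gap):.2f}x")
print(f"input trim on the more sensitive one:  {voltage_ratio_db(-sensitivity_gap):.3f}x")
```

A 4 dB sensitivity gap means the less sensitive speaker needs roughly 2.5 times the amplifier power, or, equivalently, the more sensitive speaker's input must be trimmed to about 0.63 of its voltage. The trim itself is easy; verifying it acoustically at the listening seat is the hard part, and that difficulty exists with or without blinding.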


Secondly, speakers (and rooms) are NOT amplifiers or CD
decks with ruler-flat frequency response. How do you make them the same
level? You certainly don't want to put T-pads between the amp and speakers
to equalize them, as that would screw up the impedance matching between amp
and speaker.



A fair question. But, again, how is this an issue with bias controls?
How do bias controls affect this problem?

All that I can come up with is that you not only need two sets of
speakers for such a test, you'll also need two IDENTICAL stereo amplifiers
with some way to trim them on their inputs to give equal SPL for both the
89 dB/Watt speaker and the 93 dB/Watt speaker - and at what frequency? Each
speaker can vary wildly from one frequency to another, and these frequency
response anomalies are exacerbated by the room in which the test is being
conducted, as well as by the placement of each set of speakers in that room,
and it would be difficult to have both test samples occupy the same space
at the same time. Thirdly, you can set them both to produce a single
frequency, say 1 kHz, at exactly the same level (less than 1 dB difference),
but what happens when you switch frequencies to, say, 400 Hz or 5 kHz? One
speaker could exhibit as much as 6 dB difference in volume (or more) from
the other, depending upon whether speaker "A", for instance, has a 3 dB peak
at 400 Hz (with respect to 1 kHz) and speaker "B" has a 3 dB trough at
400 Hz (again referenced to 1 kHz).



All legitimate concerns, none of which have anything to do with bias
controls. You face these issues whether you listen with biases in play
or controlled.
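The peak-and-trough scenario above can be worked through with made-up numbers that mirror the example (they are not any real speaker's data):

```python
# Hypothetical on-axis responses, in dB relative to each speaker's own 1 kHz level
response_a = {400: +3.0, 1000: 0.0, 5000: -1.0}
response_b = {400: -3.0, 1000: 0.0, 5000: +2.0}

def residual_mismatch(a, b):
    """Per-frequency level difference remaining after the two speakers
    have been matched at 1 kHz (positive = A plays louder than B)."""
    return {freq: a[freq] - b[freq] for freq in a}

for freq, diff in sorted(residual_mismatch(response_a, response_b).items()):
    print(f"{freq:5d} Hz: {diff:+.1f} dB")
```

Matching at a single frequency leaves a 6 dB swing at 400 Hz in this toy case, which is exactly the worry being raised; but, as the reply notes, that residual exists identically in a sighted comparison.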




It seems to me that such a test would be incredibly difficult to pull off
in any environment but an anechoic chamber (to eliminate room interaction),
and even then would only really work for two speakers whose frequency
response curves were very similar. Even so, people's biases are going to
still come into play. If one likes big bass, the speaker which has the best
bass is going to be his pick, every time. If a listener likes pin-point
imaging, he's going to pick the speaker that images the better of the two -
every time.

These are just a few of my real-world doubts as to the efficacy of DBT
testing for speakers and why I believe that they are not only impractical
(because they would be darned difficult to set up), but would not yield any
kind of a consensus as to which speaker was the most accurate.



None of the concerns you have expressed are affected by the
implementation of well designed bias controls. You certainly have
touched upon some of the many issues facing any consumer in comparing
speakers, but those issues are issues when comparing under sighted
conditions.

#85 | Posted to rec.audio.high-end | Audio Empire

On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


My question is what was used to ensure that the two
speakers were exactly the same loudness across the entire
audio spectrum?


Didn't happen. The speakers had slightly different frequency responses at
various listening locations.


That disqualifies the results in my estimation.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?


We didn't know or care how the speakers we were listening to were made. It
was all about sound quality.


My point is that you were listening to TWO variables: the two amps and the
two speakers. That pretty much disqualifies the results as well.

If your point in citing this DBT was to make the case for that testing
methodology for analyzing speakers, I think you picked a poor example.

You can see that the methodology in this test (as you have explained it) is
seriously flawed, do you not?



#86 | Posted to rec.audio.high-end | Scott

On May 18, 7:56 am, Andrew Haley wrote:
Scott wrote:
On May 17, 7:43 am, Andrew Haley wrote:
Scott wrote:
On May 15, 1:42 pm, bob wrote:
On May 15, 12:49 pm, "Harry Lavo" wrote:


Cutting "live music" out of the equation is what is wrong with
much of the "objectivist" philosophy extant today.


So tell us Harry, how close does your system sound to the last time
you had a symphony orchestra in your living room?


Why would you ask that? The correct, or at least better question
would be how close does his system sound to the last time he went to
see a good symphony orchestra in a good concert hall with good seats?

There seems to be a presumption here that the sound in a concert
hall is ideal. But there are fairly well-known acoustic phenomena
such as the "seat-dip effect" where there is a dip of some 10-15 dB
over two octaves, centred on about 150 Hz. (This is just an
example: real halls have other problems too.) We can to some
extent compensate for this when we listen at concerts, but it's
highly questionable whether we want the sound of real halls in our
homes.


This is a matter of goals: do we want to replicate the
concertgoer's experience, or the "pure" sound of a performance,
whatever that may be? There are no simple answers.


You raise an important issue. Yes, the presumption is that the sound
of the concert hall is ideal for a symphonic orchestra. But this is
too broad to be true. There are bad halls and there are bad seats in
many good halls. When *I* talk about live acoustic music as a
reference, I am referring to live music that excels. That means
excellent music played on excellent instruments by excellent
musicians in an excellent hall, from an excellent position in that
hall. The reason to strive for such sound is that, IMO, it sets the
standard for aesthetic musical beauty.


Well, yeah. But the claim was that "I know what real music sounds
like in a real space, and that sets the standard for musical
reproduction."


Who are you quoting here? Not me. But this claim of someone else's
still goes to the heart of the matter. Most playback sounds little
like anything live. IME even bad halls sound better than most playback
when we are talking orchestral music.


But that seems to me more of a Platonic ideal than
anything that happens in reality, or at least not very often.


I would disagree very strongly. I pretty consistently get much better
sound from live orchestral music than I hear from playback in general,
by a country mile. Now if we limit the comparison to my system with my
choice of minimalist recordings, then the comparison actually gets a bit
more interesting and competitive. But that is because I use my
experience with live music and my experience with hifi to build such a
system and collection of recordings. Now when I compare my system and
those recordings to the vast majority of other systems and orchestral
recordings that I have heard, it's no contest. Mine is in another
league. That was the point of it all.


My
experience, if I close my eyes at a concert and try to imagine myself
at home, is that the sound of real halls is very far from ideal.



That is your experience not mine. Most of my concert experience over
the past few years has been at Disney Hall. So maybe I am just
spoiled. They really raised the bar with that hall. The very best
sound I have ever experienced from any orchestra was there. No hifi
has ever touched that experience.


And
also, occasionally it would be nice to turn the volume up a little bit
-- or to turn down what appears to be an outing from the emphysema
ward in the rows behind me.


It depends on where you sit doesn't it? I think I made that point
already.



One other thing that's worth mentioning is the highly non-uniform
radiation pattern of string instruments, and the way they sound very
different (brighter, clearer) close up than in the body of the concert
hall. OK, so you can insist on ultra-minimal microphone setups to
replicate the sound of that hall, as it would be heard by a member of
the audience. But then you're prioritizing the sound of the hall over
the sound of the music!



No. Concert halls over the past few hundred years have been designed
with those instruments in mind, and vice versa: instruments have
developed with orchestras, and the halls they play in, in mind. The
sound of the music is the sound of the instruments and the hall. This
sound has already been given priority by the development of orchestral
music over a few centuries.


I think that much reverberation sounds
excessive in the home, and you would hear the music more clearly with
a bit less reverb than you'd get in a hall.


If you are hearing the reverb of your listening room then you are
hearing distortion. If you are hearing the reverb of the original hall
and still don't like it that is an opinion you get to have. I don't
share it. I have heard enough multimiked orchestral recordings to know
I hate them. That is an opinion I get to have.



I know this may sound heretical, but maybe the sound at the position
of the microphones above the orchestra is *better* than that of any
seat in the hall. Also, maybe it really is useful for a balance
engineer to be able to turn up the soloist in a concerto.



It's just a different aesthetic value. One that I simply do not share.



In other words, I am denying that your ideal sound of an excellent
hall from an excellent position in that hall is any sort of ideal at
all.


You are free to not share my aesthetic values. As for your denial, I
deny it. I am quite certain about what I like and I am quite certain
that I am not alone in my aesthetic values. So it is a sort of ideal
that actually has a pretty broad base for those with the experience to
have a meaningful opinion on the subject.

#87 | Posted to rec.audio.high-end | Audio Empire

On Wed, 19 May 2010 11:19:46 -0700, Scott wrote
(in article ):

On May 17, 2:49 pm, Audio Empire wrote:
On Mon, 17 May 2010 07:07:58 -0700, Scott wrote
(in article ):





On May 16, 6:07 pm, Audio Empire wrote:
On Sun, 16 May 2010 11:37:25 -0700, Scott wrote


Bias controls do not make it any more difficult to make those
determinations. It is other aspects of the design and execution of any
comparison that will determine if they will tell you which speakers
are "the most accurate" or what you like in a given price range.
Nothing about blind protocols should ever prevent an otherwise
effective test for determining those aspects of audio from doing so.
All bias controls will do is control the biases they are designed to
control from affecting the outcome. IF the bias controls are designed
and implemented well.


Bias-controlled tests ultimately compare one set of speaker compromises
to another set of compromises, and tell me very little about which is the
more accurate.


How does removing the bias controls of any given test allow the test
to tell you *more* about which speaker is more accurate? If a given
test is telling you little about which is the more accurate speaker,
then the flaw in that test lies in the design of that test, not in any
particular bias controls that may be implemented in that test, unless
those specific bias controls are causing some sort of problem. That is
not an intrinsic property of bias controls. If that is happening, then
the bias controls are being poorly designed or poorly implemented.


Any test one designs for measuring the relative accuracy of speakers
against some sort of reference by ear can only be helped by well
designed and well executed bias controls. If you disagree then please
offer an argument to support *that.* There is no reason to talk about
other bias controlled test designs that are simply not designed to
measure perceived relative accuracy of loudspeakers.


I think you misunderstand me. Comparing one speaker to another using bias
controlled tests like DBT and ABX tells me nothing in and of itself.


As I said, let's not talk about tests that are poorly designed for the
task at hand. ABX is merely a small subset of bias controlled testing.
There is more in this world than an ABX test when it comes to bias
controls. The vast majority of DBTs in this world actually are not
ABX. You have to design a test that measures what you want to measure.
In this case it is accuracy of speakers. *Then* you design bias
controls to take bias out of the equation. This isn't about ABX.

HOWEVER,
if I am allowed to have this same setup over a long period of time (say
several hours to several days), using recordings of my own choosing, I will
be able to compare BOTH to my memory of what real, live music sounds like
and be able to tell which of the two speakers is the more "realistic" (or,
if you prefer, accurate to my memory of the sound of live music).


OK now please tell me how if you were to do this with bias controls in
place it would make the audition less informative?


It wouldn't. But most bias-controlled tests aren't designed to allow those
circumstances. For instance, they are usually held in venues with which I am
not familiar and to which I don't have the kind of access I mentioned above.
Secondly, they are rarely set up so that I can listen by myself using the
music I know. If my conditions could be met with a DBT, then the answer to
your question would be that a bias-controlled audition would NOT be less
informative.


Of course, this
assumes that an accurate DBT can be devised for speakers, which I
seriously doubt.


Why? All we are talking about is controlling bias effects.


That's a tall order, especially with speakers. Not so difficult with
electronics though.



For instance: How do you normalize such a test?


How do you normalize such a test under sighted conditions? Do it that
way and then implement the bias controls.


You don't. You just listen (and take notes). Since I'm not comparing one
speaker directly to another, there's no need to match levels to within 1 dB
or less, there's no problem with making sure that each speaker system is
optimally located, etc.

Suppose one speaker is 89
dB/Watt and the other is 93 dB/Watt? You'd have to use a really accurate SPL
meter. Few have that. I have a Radio Shack SPL meter like most of us, but
it's probably not accurate enough to set speaker levels to within less than
1 dB for such a test.



Seems like an issue that is completely independent of bias controls.


Not at all. Human perception will always bias toward the louder of two
sources, which makes level matching de rigueur for eliminating bias.


Secondly, speakers (and rooms) are NOT amplifiers or CD
decks with ruler-flat frequency response. How do you make them the same
level? You certainly don't want to put T-pads between the amp and speakers
to equalize them, as that would screw up the impedance matching between amp
and speaker.



A fair question. But, again, how is this an issue with bias controls?
How do bias controls affect this problem?


Those are "bias control" items.

All that I can come up with is that you not only need two sets of
speakers for such a test, you'll also need two IDENTICAL stereo amplifiers
with some way to trim them on their inputs to give equal SPL for both the
89 dB/Watt speaker and the 93 dB/Watt speaker - and at what frequency? Each
speaker can vary wildly from one frequency to another, and these frequency
response anomalies are exacerbated by the room in which the test is being
conducted, as well as by the placement of each set of speakers in that room,
and it would be difficult to have both test samples occupy the same space
at the same time. Thirdly, you can set them both to produce a single
frequency, say 1 kHz, at exactly the same level (less than 1 dB difference),
but what happens when you switch frequencies to, say, 400 Hz or 5 kHz? One
speaker could exhibit as much as 6 dB difference in volume (or more) from
the other, depending upon whether speaker "A", for instance, has a 3 dB peak
at 400 Hz (with respect to 1 kHz) and speaker "B" has a 3 dB trough at
400 Hz (again referenced to 1 kHz).



All legitimate concerns, none of which have anything to do with bias
controls. You face these issues whether you listen with biases in play
or controlled.


I don't get your point. These ARE all bias control issues. They are NOT
sighted bias issues or even expectational bias issues, but they are audible
bias issues. The human brain will always pick out the loudest source as the
best. It may be that the louder speaker is NOT the better of the two, but in
an ABX or DBT, the louder speaker will always predominate. Seems to me that
in order for such a test to be relevant, both speakers must be level
matched, just as an amplifier or a CD player must be level matched, and for
the same reason.

It seems to me that such a test would be incredibly difficult to pull off
in any environment but an anechoic chamber (to eliminate room interaction),
and even then would only really work for two speakers whose frequency
response curves were very similar. Even so, people's biases are going to
still come into play. If one likes big bass, the speaker which has the best
bass is going to be his pick, every time. If a listener likes pin-point
imaging, he's going to pick the speaker that images the better of the two -
every time.

These are just a few of my real-world doubts as to the efficacy of DBT
testing for speakers and why I believe that they are not only impractical
(because they would be darned difficult to set up), but would not yield any
kind of a consensus as to which speaker was the most accurate.



None of the concerns you have expressed are affected by the
implementation of well designed bias controls. You certainly have
touched upon some of the many issues facing any consumer in comparing
speakers, but those issues are issues when comparing under sighted
conditions.


I don't agree at all. Level matching across the audible spectrum is just as
necessary in a speaker DBT as it would be when comparing electronic
components. If you can't do that, then a speaker DBT where the levels aren't
precisely matched would be just as worthless as an amplifier DBT where the
levels weren't precisely matched.

#88 | Posted to rec.audio.high-end | Harry Lavo

"Audio Empire" wrote in message
...
On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


My question is what was used to ensure that the two
speakers were exactly the same loudness across the entire
audio spectrum?


Didn't happen. The speakers had slightly different frequency responses at
various listening locations.


That disqualifies the results in my estimation.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?


We didn't know or care how the speakers we were listening to were made.
It
was all about sound quality.


My point is that you were listening to TWO variables, The two amps and the
two speakers. That pretty much disqualifies the results as well.

If your point in citing this DBT was to make points for that testing
methodology of analyzing speakers, I think you picked a poor example.

You can see that the methodology is seriously flawed, in this test (as you
have explained it), do you not?


It raises another point as well. And that is that even a seemingly simple
DBT is extremely taxing to do well in a home setting, making such tests
totally impractical in most home situations. The problem with that is
simple: so long as anybody who offers an opinion about the "sound" of a
piece of gear is immediately accused of deluding themselves for not having
done a DBT, it pretty well destroys much of the rationale for having an
audio newsgroup in the first place.

#89 | Posted to rec.audio.high-end | Audio Empire

On Wed, 19 May 2010 17:16:07 -0700, Harry Lavo wrote
(in article ):

"Audio Empire" wrote in message
...
On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


My question is what was used to ensure that the two
speakers were exactly the same loudness across the entire
audio spectrum?

Didn't happen. The speakers had slightly different frequency responses at
various listening locations.


That disqualifies the results in my estimation.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?

We didn't know or care how the speakers we were listening to were made.
It
was all about sound quality.


My point is that you were listening to TWO variables, The two amps and the
two speakers. That pretty much disqualifies the results as well.

If your point in citing this DBT was to make points for that testing
methodology of analyzing speakers, I think you picked a poor example.

You can see that the methodology is seriously flawed, in this test (as you
have explained it), do you not?


It raises another point as well. And that is that even a seemingly simple
DBT is extremely taxing to do well in a home setting, making such tests
totally impractical in most home situations.


This is certainly part of my contention. DBT and ABX tests are
methodologically daunting even for electronics, where, ostensibly, the
components have ruler-flat frequency response and the same speakers are used
for each unit being compared. That makes level matching a fairly
straightforward (if not altogether easy) endeavor. With speakers, everything
is more difficult: from matching the levels of two speaker systems in the
same room when frequency response differences between the two sets of
speakers can be all over the place, to making sure that the speakers in
question are optimally located in space, to making sure that the listeners
cannot tell with either their eyes or their ears which of the two systems is
playing at any given time, to measuring the SPL at the listening positions
to make sure that even at ONE frequency (much less over the entire
spectrum), the two sets of speakers are level matched.

The problem with that is
simple....so long as anybody who offers an opinion about the "sound" of a
piece of gear is immediately accused of deluding themselves for not having
done a dbt, it pretty well destroys much of the rationale for having an
audio newsgroup in the first place.


What many need to understand (in my opinion) about evaluating audio equipment
is WHEN it is useful to use objective test methodologies (DBT, measurements,
etc) and WHEN it is more useful to use subjective evaluation methods. I know
that many here would find that idea heresy, but I think that it can be seen
from the exchange here over the last few days, that this is the most useful
approach.


  #90   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Audio Empire" wrote in message

On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in
message

My question is what was used to insure that the two
speakers were exactly the same loudness across the
entire audio spectrum?


Didn't happen. The speakers had slightly different
frequency responses at various listening locations.


That disqualifies the results in my estimation.


If you set the bar this high, then all possible results are disqualified.

My comment is also in error. The speakers were matched using a "Perceptual
Transfer Function" measurement device. I posted a number of other references
that describe this device in more detail, in another post. Here is one of
them so that this post can stand on its own:

http://www.aes.org/e-lib/browse.cfm?elib=9942

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?


We didn't know or care how the speakers we were
listening to were made. It was all about sound quality.


My point is that you were listening to TWO
variables: the two amps and the two speakers. That
pretty much disqualifies the results as well.


Not at all. The amps used were exactly as recommended and supplied by
the suppliers of the two loudspeakers. Again, the proposed standard makes it
impossible to do relevant tests.





  #91   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Harry Lavo" wrote in message


It raises another point as well. And that is that even a
seemingly simple dbt is extremely taxing to do well in a
home setting.


I think we can take this comment as showing that there are vast differences
in the technical resources that various people bring to bear on problems
that seem to vex many audiophiles. We're just trying to help and shed light.

Obviously some of us have resources that others don't have. Disqualifying
listening tests because casual audiophiles can't do them for themselves
seems to make no sense.

...making such tests totally impractical in
most home situations.


Nobody is saying that every casual audiophile can do what some of us can do
as a matter of course.

Disqualifying tests because they require resources that not every casual
audiophile has access to automatically disqualifies virtually every
evaluation that is done by any of the various audio magazines.

If Harry wants to disqualify each and every issue and every review ever done
by TAS and Stereophile on the grounds that their listening evaluations are
too sophisticated for the average audiophile to duplicate, then he can be my
guest! ;-)

  #92   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Audio Empire" wrote in message


This is certainly part of my contention. DBT and ABX
tests are methodologically daunting even for electronics
where, ostensibly, the electronics have ruler-flat
frequency response and the same speakers are used for
each unit being compared.


Methodologically daunting?

Does this mean that you are unable to properly level-match electronic
components?

That makes level matching a
fairly straightforward (if not altogether easy) endeavor.


??????????

Didn't you just say it is "methodologically daunting"?

With speakers, everything is more difficult. From
matching levels of two speaker systems in the same room
when frequency response differences between the two sets
of speakers can be all over the place, to making sure
that the speakers in question are optimally located in
space, to making sure that the listeners cannot tell with
either their eyes or their ears which of the two systems
is playing at any given time, to measuring the SPL at the
listening positions to make sure that even at ONE
frequency (much less over the entire spectrum), the two
sets of speakers are level matched.


I think two different kinds of experiments are being confused.

In our speaker evaluations the question we were evaluating was not: "Do they
sound exactly the same". That they did not sound exactly the same was a
given.

As I've been pointing out all along, and which I repeated today, we were
evaluating speakers in accordance with the following set of questions:


1. Speakers Disappear

2. Local Acoustics Not Heard

3. Images Lateral Localization

4. Images Depth Localization

5. Ambience non-Localized

The following relates to the degree to which the liveness of music can be
enjoyed by more than one listener in the room:

6. Freedom of Movement

The problem with that is
simple....so long as anybody who offers an opinion about
the "sound" of a piece of gear is immediately accused of
deluding themselves for not having done a dbt, it pretty
well destroys much of the rationale for having an audio
newsgroup in the first place.


This is a straw man. There is no such general rule that has ever been
applied.

What many need to understand (in my opinion) about
evaluating audio equipment is WHEN it is useful to use
objective test methodologies (DBT, measurements, etc)
and WHEN it is more useful to use subjective evaluation
methods.


To which I say "Doctor cure thyself".

I also see a false dichotomy, one that says that DBTs are *not* subjective.

A confusion I sense is the false idea that an evaluation methodology that
controls bias is not subjective. To get there, one has to invent some new
definition of subjective. Listening tests have long been thought to be
subjective, bias controlled or not.



  #93   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default In Mobile Age, Sound Quality Steps Back

On May 20, 9:37 am, Audio Empire wrote:

What many need to understand (in my opinion) about evaluating audio equipment
is WHEN it is useful to use objective test methodologies (DBT, measurements,
etc) and WHEN it is more useful to use subjective evaluation methods. I know
that many here would find that idea heresy, but I think that it can be seen
from the exchange here over the last few days, that this is the most useful
approach.


You seem to be under the misimpression that level-matching ONLY
matters in objective comparisons. If you are (foolishly) relying on
your long-term memory of one speaker in comparing it to another, it
still matters whether you were listening at the same level or not. If
you are comparing two speakers side-by-side but not blind, it still
matters whether they are playing at the same level or not.

Playing speakers at different levels alters their sound. Period. If
you want to compare speakers based on sound quality, you need to level-
match, however you compare them. Otherwise, you're comparing them
based on how you happened to set the volume control.

bob
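[Moderator's aside: bob's level-matching point reduces to simple decibel
arithmetic. A rough sketch, using the hypothetical sensitivity figures
mentioned elsewhere in this thread (93 dB/W vs. 89 dB/W), of the input
trim needed to bring the more sensitive speaker down to the level of the
less sensitive one:]

```python
def spl_difference_db(sens_a_db, sens_b_db):
    # Difference in SPL (dB) between two speakers driven with equal power
    return sens_a_db - sens_b_db

def voltage_trim_ratio(delta_db):
    # Voltage (amplitude) ratio that attenuates the louder speaker by delta_db,
    # from the definition dB = 20 * log10(V1 / V2)
    return 10 ** (-delta_db / 20)

delta = spl_difference_db(93, 89)   # 4 dB louder at equal drive
trim = voltage_trim_ratio(delta)
print(delta, round(trim, 3))        # 4 0.631 -> drive the louder speaker at ~63% voltage
```

[This only matches overall level, of course; it says nothing about the
frequency-dependent differences also being argued about in this thread.]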

  #94   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default In Mobile Age, Sound Quality Steps Back

"Audio Empire" wrote in message
...
On Wed, 19 May 2010 17:16:07 -0700, Harry Lavo wrote
(in article ):

"Audio Empire" wrote in message
...
On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


My question is what was used to insure that the two
speakers were exactly the same loudness across the entire
audio spectrum?

Didn't happen. The speakers had slightly different frequency responses
at
various listening locations.

That disqualifies the results in my estimation.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?

We didn't know or care how the speakers we were listening to were made.
It
was all about sound quality.


My point is that you were listening to TWO variables: the two amps and
the two speakers. That pretty much disqualifies the results as well.

If your point in citing this DBT was to make points for that testing
methodology of analyzing speakers, I think you picked a poor example.

You can see that the methodology is seriously flawed, in this test (as
you
have explained it), do you not?


It raises another point as well. And that is that even a seemingly
simple
dbt is extremely taxing to do well in a home setting....making such tests
totally impractical in most home situations.


This is certainly part of my contention. DBT and ABX tests are
methodologically daunting even for electronics where, ostensibly, the
electronics have ruler-flat frequency response and the same speakers are
used
for each unit being compared. That makes level matching a fairly
straightforward (if not altogether easy) endeavor. With speakers,
everything
is more difficult. From matching levels of two speaker systems in the same
room when frequency response differences between the two sets of speakers
can
be all over the place, to making sure that the speakers in question are
optimally located in space, to making sure that the listeners cannot tell
with either their eyes or their ears which of the two systems is playing
at
any given time, to measuring the SPL at the listening positions to make
sure
that even at ONE frequency (much less over the entire spectrum), the two
sets
of speakers are level matched.

The problem with that is
simple....so long as anybody who offers an opinion about the "sound" of a
piece of gear is immediately accused of deluding themselves for not
having
done a dbt, it pretty well destroys much of the rationale for having an
audio newsgroup in the first place.


What many need to understand (in my opinion) about evaluating audio
equipment
is WHEN it is useful to use objective test methodologies (DBT,
measurements,
etc) and WHEN it is more useful to use subjective evaluation methods. I
know
that many here would find that idea heresy, but I think that it can be
seen
from the exchange here over the last few days, that this is the most
useful
approach.


I agree, but I would simplify things.....blind testing is required when
doing serious research. It is not required when assembling a home audio
system. Nor is it reasonable to insist that it be done before commenting on
impressions of sound.


  #95   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default In Mobile Age, Sound Quality Steps Back

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


It raises another point as well. And that is that even a
seemingly simple dbt is extremely taxing to do well in a
home setting.


I think we can take this comment as showing that there are vast
differences
in the technical resources that various people bring to bear on problems
that seem to vex many audiophiles. We're just trying to help and shed
light.

Obviously some of us have resources that others don't have. Disqualifying
listening tests because casual audiophiles can't do them for themselves
seems to make no sense.

...making such tests totally impractical in
most home situations.


Nobody is saying that every casual audiophile can do what some of us can
do
as a matter of course.


Then your friends need to learn to hold their tongues whenever one of those
casual audiophiles makes a comment reflecting his opinion about the "sound"
of a piece of gear, or his enjoyment of a piece of gear above a minimum price
point. It is one thing to point out common understanding; it is another to
do it in a way that accuses the casual audiophile of being a fool who is
deluding himself, which often seems to be the tone in many audio newsgroups.



Disqualifying tests because they require resources that not every casual
audiophile has access to automatically disqualifies virtually every
evaluation that is done by any of the various audio magazines.

If Harry wants to disqualify each and every issue and every review ever
done
by TAS and Stereophile on the grounds that their listening evaluations are
too sophisticated for the average audiophile to duplicate, then he can be
my
guest! ;-)


Stereophile's reviews are not "tests"; they are subjective evaluations of the
sound and functionality of the gear under review, written both as a guide
(for those who trust this particular reviewer) and as a form of
entertainment. John Atkinson's bench work is electronic
measurement.....tests of a sort, but they don't purport to tell you how
things "sound".



  #96   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Harry Lavo" wrote in message


....blind testing is
required when doing serious research. It is not required
when assembling a home audio system.


While blind testing is not required in every case, it doesn't seem to be an
unreasonable tool to use, were one basing his component choices on serious
research.

I find it easy to agree with the idea that people who disregard blind
testing and lack the interest required to use it as a tool, even when it is
easy to do, are not all that serious.

Nor is it
reasonable to insist that it be done before commenting on
impressions of sound.


It is impossible, in a country with free speech, to require that people do
anything, even think superficially, before they comment in public. ;-)


  #97   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default In Mobile Age, Sound Quality Steps Back

On May 19, 5:15 pm, Audio Empire wrote:
On Wed, 19 May 2010 11:19:46 -0700, Scott wrote
(in article ):





On May 17, 2:49 pm, Audio Empire wrote:
On Mon, 17 May 2010 07:07:58 -0700, Scott wrote
(in article ):


On May 16, 6:07 pm, Audio Empire wrote:
On Sun, 16 May 2010 11:37:25 -0700, Scott wrote


Bias controls do not make it any more difficult to make those
determinations. It is other aspects of the design and execution of any
comparison that will determine if they will tell you which speakers
are "the most accurate" or what you like in a given price range.
Nothing about blind protocols should ever prevent an otherwise
effective test for determining those aspects of audio from doing so.
All bias controls will do is control the biases they are designed to
control from affecting the outcome. IF the bias controls are designed
and implemented well.


Bias controlled tests ultimately compare one set of speaker compromises
to another set of compromises, and tell me very little about which is the
more accurate.


How does removing the bias controls of any given test allow the test
to tell you *more* about which speaker is more accurate? If a given
test is telling you little about which is the more accurate speaker,
then the flaw in that test lies in the design of that test, not in any
particular bias controls that may be implemented in that test, unless
those specific bias controls are causing some sort of problem. That is
not an intrinsic property of bias controls. If that is happening, then
the bias controls are being poorly designed or poorly implemented.


Any test one designs for measuring the relative accuracy of speakers
against some sort of reference by ear can only be helped by well
designed and well executed bias controls. If you disagree then please
offer an argument to support *that.* There is no reason to talk about
other bias controlled test designs that are simply not designed to
measure perceived relative accuracy of loudspeakers.


I think you misunderstand me. Comparing one speaker to another using bias
controlled tests like DBT and ABX tells me nothing in and of itself.


As I said, lets not talk about tests that are poorly designed for the
task at hand. ABX is merely a small subset of bias controlled testing.
There is more in this world than an ABX test when it comes to bias
controls. The vast majority of DBTs in this world actually are not
ABX. You have to design a test that measures what you want to measure.
In this case it is accuracy of speakers. *Then* you design bias
controls to take bias out of the equation. This isn't about ABX.
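[Moderator's aside: for completeness, when an ABX test *is* the right
tool, its results are conventionally scored against chance with simple
binomial arithmetic. A minimal sketch; the function name and the trial
counts are hypothetical, not from any test described in this thread:]

```python
from math import comb

def abx_p_value(correct, trials):
    # One-sided probability of scoring at least `correct` hits by guessing (p = 0.5)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(12, 16), 3))   # 0.038 -> unlikely to be guessing at the usual 0.05 level
```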


HOWEVER, if I am allowed to have this same setup over a long period of
time (say several hours to several days), using recordings of my own
choosing, I will be able to compare BOTH to my memory of what real, live
music sounds like and be able to tell which of the two speakers is the
more "realistic" (or, if you prefer, accurate to my memory of the sound
of live music).


OK now please tell me how if you were to do this with bias controls in
place it would make the audition less informative?


It wouldn't.


My point exactly. Thank you.

But most bias controlled tests aren't designed to allow those
circumstances.


This is why I didn't want to talk about them. Tests that aren't
designed for the purpose we are speaking about are irrelevant.




Of course, this
assumes that an accurate DBT test can be devised for speakers, which I
seriously doubt.


Why? All we are talking about is controlling bias effects.


That's a tall order, especially with speakers. Not so difficult with
electronics though.


"Tall order" is not IMO a very good argument. I agree that it is a
tall order. But I don't agree that such a tall order is impossible.




For instance: How do you normalize such a test?


How do you normalize such a test under sighted conditions? Do it that
way and then implement the bias controls.


You don't.


Then it is a problem with or without bias controls.

You just listen (and take notes). Since I'm not comparing one
speaker directly to another, there's no need to match levels to within 1 dB
or less, there's no problem with making sure that each speaker system is
optimally located, etc.


Really? These things are not a problem?


Suppose one speaker is 89
dB/Watt and the other is 93 dB/Watt? You'd have to use a really accurate
SPL meter. Few have that. I have a Radio Shack SPL meter like most of us,
but it's probably not accurate enough to set speaker levels within less
than 1 dB for such a test.


Seems like an issue that is completely independent of bias controls.


Not at all. Human perception will always bias toward the louder of two
sources, which makes level matching de rigueur for eliminating bias.


How is this not true under sighted conditions?




Secondly, speakers (and rooms) are NOT amplifiers or CD
decks with ruler-flat frequency response. How do you make them the same
level? You certainly don't want to put T-pads between the amp and
speakers to equalize them, as that would screw up the impedance matching
between amp and speaker.


A fair question. But, again, how is this an issue with bias controls?
How do bias controls affect this problem?


Those are "bias control" items.


Nope. Level matching is the same issue in accuracy comparison tests
with or without bias controls in play.





All that I can come up with is that you not only need two sets of
speakers for such a test, you'll also need two IDENTICAL stereo amplifiers
with some way to trim them on their inputs to give equal SPL for both the
89 dB/Watt speaker and the 93 dB/Watt speaker - and at what frequency? Each
speaker can vary wildly from one frequency to another, and these frequency
response anomalies are exacerbated by the room in which the test is being
conducted, as well as by the placement of each set of speakers in that
room, and it would be difficult to have both test samples occupy the same
space at the same time. Thirdly, you can set them both to produce a single
frequency, say 1 kHz, at exactly (less than 1 dB difference) the same
level, but what happens when you switch frequencies to, say, 400 Hz or
5 kHz? One speaker could exhibit as much as 6 dB difference in volume (or
more) from the other, depending upon whether speaker "A", for instance,
has a 3 dB peak at 400 Hz (with respect to 1 kHz) and speaker "B" has a
3 dB trough at 400 Hz (again referenced to 1 kHz).


All legitimate concerns none of which have anything to do with bias
controls. You face these issues whether you listen with biases in play
or controlled.


I don't get your point. These ARE all bias control issues.


No they are not. Bias control issues are about controlling biases that
are in the head, not in the actual sound.

They are NOT
sighted bias issues or even expectational bias issues, but they are
audible bias issues.


But they do not disappear under sighted conditions so they are not a
valid reason to claim bias controls make for an inferior test.

The human brain will always pick out the loudest source as the
best.


Well, no it won't, but that is another subject. The problem exists with
or without bias controls in place, so it isn't a legitimate argument
against the use of bias controls. It is a legitimate argument for
addressing levels as a factor in preferences. But that issue exists
under sighted or blind conditions.

It may be that the louder speaker is NOT the better of the two, but in
an ABX or DBT, the louder speaker will always predominate.


So it won't under sighted conditions? Are you claiming that seeing the
speakers will eliminate this effect?

Seems to me that
in order for such a test to be relevant, both speakers must be level
matched, just as an amplifier or a CD player must be level matched, and
for the same reason.


Seems to me this is an issue under sighted conditions as well as
blind.






It seems to me that such a test would be incredibly difficult to pull off
in any environment but an anechoic chamber (to eliminate room interaction),
and even then would only really work for two speakers whose frequency
response curves were very similar. Even so, people's biases are going to
still come into play. If one likes big bass, the speaker which has the best
bass is going to be his pick, every time. If a listener likes pin-point
imaging, he's going to pick the speaker that images the better of the two -
every time.


These are just a few of my real-world doubts as to the efficacy of DBT
testing for speakers, and why I believe that they are not only impractical
(because they would be darned difficult to set up), but would not yield any
kind of a consensus as to which speaker was the most accurate.


None of the concerns you have expressed are affected by the
implementation of well designed bias controls. You certainly have
touched upon some of the many issues facing any consumer in comparing
speakers, but those issues are issues when comparing under sighted
conditions.


I don't agree at all. Level matching across the audible spectrum is just as
necessary in a speaker DBT as it would be when comparing electronic
components.


No it is not. It's not necessary when comparing electronic components
either. The idea of comparing things is to evaluate how they sound, not
to eliminate how they sound.

If you can't do that, then a speaker DBT where the levels aren't
precisely matched would be just as worthless as an amplifier DBT where
the levels weren't precisely matched.


No it wouldn't. Speakers have their own sound. *That* is what you are
comparing in a bias controlled test for accuracy against a reference.
You don't have to mess around with them. Just set them up properly
and compare them. Sighted or blind.
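[Moderator's aside: the single-frequency matching problem Audio Empire
describes above can be put in numbers. A small sketch with invented
response figures (a 3 dB peak vs. a 3 dB trough at 400 Hz, per the
example in the post): match the two speakers exactly at 1 kHz, and the
mismatch at other frequencies remains.]

```python
# Invented on-axis responses (dB SPL at 1 W), keyed by frequency in Hz
speaker_a = {400: 92.0, 1000: 89.0, 5000: 88.0}   # 3 dB peak at 400 Hz
speaker_b = {400: 86.0, 1000: 89.0, 5000: 90.0}   # 3 dB trough at 400 Hz

# Trim speaker A so the pair matches exactly at the 1 kHz reference
trim_db = speaker_b[1000] - speaker_a[1000]

# Residual level difference (A minus B) at each measured frequency
residuals = {f: speaker_a[f] + trim_db - speaker_b[f] for f in speaker_a}
print(residuals)   # {400: 6.0, 1000: 0.0, 5000: -2.0} -- still 6 dB apart at 400 Hz
```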
  #98   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Harry Lavo" wrote in message

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


It raises another point as well. And that is that even
a seemingly simple dbt is extremely taxing to do well
in a home setting.


I think we can take this comment as showing that there
are vast differences
in the technical resources that various people bring to
bear on problems that seem to vex many audiophiles.
We're just trying to help and shed light.

Obviously some of us have resources that others don't
have. Disqualifying listening tests because casual
audiophiles can't do them for themselves seems to make
no sense.

...making such tests totally impractical in
most home situations.


Nobody is saying that every casual audiophile can do
what some of us can do
as a matter of course.


Then your friends need to learn to hold their tongues
whenever one of those casual audiophiles makes a comment
reflecting his opinion about the "sound" of a piece of
gear, or his enjoyment of a piece of gear above a minimum
price point.


Harry, what's unclear to you about the concept of freedom of speech and
respect for differences of opinions?

On occasion some of us disagree with something that is said, and we voice
that disagreement.

You have a problem with me expressing my opinion?

It is one thing to point out common
understanding; it is another to do it in a way that
accuses the casual audiophile of being a fool who is
deluding himself, which often seems to be the tone in
many audio newsgroups.


Here's your next challenge Harry - show me calling anybody a fool or
delusional.

If you can't then you would of course owe me a public apology.

Disqualifying tests because they require resources
that not every casual audiophile has access to
automatically disqualifies virtually every
evaluation that is done by any of the various audio
magazines.


If Harry wants to disqualify each and every issue and
every review ever done
by TAS and Stereophile on the grounds that their
listening evaluations are too sophisticated for the
average audiophile to duplicate, then he can be my
guest! ;-)


Stereophile's reviews are not "tests",


Harry, you really need to learn how to read. I did not say anything in the
paragraph about Stereophile's reviews being tests.

Harry, if you want to make up statements and then argue with them, please be
my guest but don't expect me to take such posts very seriously! ;-)

they are subjective
evaluations of the sound and functionality of the gear
under review.


That would be a paraphrase of what I just said. I used the exact word
evaluation and I did not use the word test.

??????????????

  #99   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 08:26:05 -0700, bob wrote
(in article ):

On May 20, 9:37 am, Audio Empire wrote:

What many need to understand (in my opinion) about evaluating audio equipment
is WHEN it is useful to use objective test methodologies (DBT, measurements,
etc) and WHEN it is more useful to use subjective evaluation methods. I know
that many here would find that idea heresy, but I think that it can be seen
from the exchange here over the last few days, that this is the most useful
approach.


You seem to be under the misimpression that level-matching ONLY
matters in objective comparisons. If you are (foolishly) relying on
your long-term memory of one speaker in comparing it to another, it
still matters whether you were listening at the same level or not. If
you are comparing two speakers side-by-side but not blind, it still
matters whether they are playing at the same level or not.

Playing speakers at different levels alters their sound. Period. If
you want to compare speakers based on sound quality, you need to level-
match, however you compare them. Otherwise, you're comparing them
based on how you happened to set the volume control.

bob


Congratulations, that's my point. The fact that one cannot level match two
speakers due to the magnitude of the frequency response differences between
them, disqualifies them from being the subject of a properly conducted DBT.
The louder one will always be chosen as being "better" - even if it is not.
  #100   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 12:07:35 -0700, Scott wrote
(in article ):

OK now please tell me how if you were to do this with bias controls in
place it would make the audition less informative?


It wouldn't.


My point exactly. Thank you.

But most bias controlled tests aren't designed to allow those
circumstances.


This is why I didn't want to talk about them. Tests that aren't
designed for the purpose we are speaking about are irrelevant.


Tell that to the likes of those who insist that DBTs work with speakers. They
are the ones with whom I'm debating, not you. You "get it."


  #101   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 08:25:56 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


This is certainly part of my contention. DBT and ABX
tests are methodologically daunting even for electronics
where, ostensibly, the electronics have ruler-flat
frequency response and the same speakers are used for
each unit being compared.


Methodologically daunting?

Does this mean that you are unable to properly level-match electronic
components?


No, it means that one has to have a comparator and someone to operate it in
a manner such that this operator doesn't know which DUT he/she is selecting.
It means that levels must be matched to within a fraction of a dB (the
closer the better). It means that one has to (ideally) assemble a listening
panel and line up the components under test.

That makes level matching a
fairly straightforward (if not altogether easy) endeavor.


??????????

Didn't you just say it is "methodologically daunting"?

With speakers, everything is more difficult. From
matching levels of two speaker systems in the same room
when frequency response differences between the two sets
of speakers can be all over the place, to making sure
that the speakers in question are optimally located in
space, to making sure that the listeners cannot tell with
either their eyes or their ears which of the two systems
is playing at any given time, to measuring the SPL at the
listening positions to make sure that even at ONE
frequency (much less over the entire spectrum), the two
sets of speakers are level matched.


I think two different kinds of experiments are being confused.

In our speaker evaluations the question we were evaluating was not: "Do they
sound exactly the same". That they did not sound exactly the same was a
given.


Of course, they won't sound alike, that's the entire reason for the test in
the first place, isn't it? To determine what those differences are and
perhaps come to some sort of conclusion about which is best? But they MUST be
level matched (just like amps, preamps, or other electronic components) or
you won't know whether the differences you hear are level differences or
quality differences. What's good for the goose....

As I've been pointing out all along, and which I repeated today, we were
evaluating speakers in accordance with the following set of questions:


1. Speakers Disappear

2. Local Acoustics Not Heard

3. Images Lateral Localization

4. Images Depth Localization

5. Ambience non-Localized

The following relates to the degree to which the liveness of music can be
enjoyed by more than one listener in the room:

6. Freedom of Movement


And this frees speakers from the same rules of engagement as DBTs for other
types of components, how?

  #102   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 11:31:22 -0700, Arny Krueger wrote
(in article ):

"Harry Lavo" wrote in message


....blind testing is
required when doing serious research. It is not required
when assembling a home audio system.


While blind testing is not required in every case, it doesn't seem to be an
unreasonable tool to use, were one basing his component choices on serious
research.


Yes, where applicable. Speakers are not "where applicable", in my estimation.


  #103   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 10:11:49 -0700, Harry Lavo wrote
(in article ):

"Audio Empire" wrote in message
...
On Wed, 19 May 2010 17:16:07 -0700, Harry Lavo wrote
(in article ):

"Audio Empire" wrote in message
...
On Tue, 18 May 2010 14:26:48 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message


My question is what was used to insure that the two
speakers were exactly the same loudness across the entire
audio spectrum?

Didn't happen. The speakers had slightly different frequency responses
at
various listening locations.

That disqualifies the results in my estimation.

Also, how much did the fact that the
Behringers are self-powered and the "not named" $12000
audiophile speakers required a separate (and entirely
different) amp to power them affect the results?

We didn't know or care how the speakers we were listening to were made.
It
was all about sound quality.


My point is that you were listening to TWO variables, The two amps and
the
two speakers. That pretty much disqualifies the results as well.

If your point in citing this DBT was to make points for that testing
methodology of analyzing speakers, I think you picked a poor example.

You can see that the methodology is seriously flawed, in this test (as
you
have explained it), do you not?

It raises another point as well. And that is that even a seemingly
simple
dbt is extremely taxing to do well in a home setting....making such tests
totally impractical in most home situations.


This is certainly part of my contention. DBT and ABX tests are
methodologically daunting even for electronics where, ostensibly, the
electronics have ruler-flat frequency response and the same speakers are
used
for each unit being compared. That makes level matching a fairly
straightforward (if not altogether easy) endeavor. With speakers,
everything
is more difficult. From matching levels of two speaker systems in the same
room when frequency response differences between the two sets of speakers
can
be all over the place, to making sure that the speakers in question are
optimally located in space, to making sure that the listeners cannot tell
with either their eyes or their ears which of the two systems is playing
at
any given time, to measuring the SPL at the listening positions to make
sure
that even at ONE frequency (much less over the entire spectrum), the two
sets
of speakers are level matched.

The problem with that is
simple....so long as anybody who offers an opinion about the "sound" of a
piece of gear is immediately accused of deluding themselves for not
having
done a dbt, it pretty well destroys much of the rationale for having an
audio newsgroup in the first place.


What many need to understand (in my opinion) about evaluating audio
equipment
is WHEN it is useful to use objective test methodologies (DBT,
measurements,
etc) and WHEN it is more useful to use subjective evaluation methods. I
know
that many here would find that idea heresy, but I think that it can be
seen
from the exchange here over the last few days, that this is the most
useful
approach.


I agree, but I would simplify things.....blind testing is required when
doing serious research. It is not required when assembling a home audio
system. Nor is it reasonable to insist that it be done before commenting on
impressions of sound.



I'll buy that. But I will say that blind testing is nonetheless useful for
exposing "snake oil" such as myrtle wood blocks, speaker cable elevators (as
well as speaker cable), green pens, $500 AC line cords, $4000/pair
interconnects etc, for what they are, audibly worthless bling. These tests
are also useful for showing that a $5000 amplifier isn't necessarily better
sounding than a $500 amp of similar power rating.

  #104   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default In Mobile Age, Sound Quality Steps Back

On May 20, 10:12 pm, Audio Empire wrote:

Congratulations, that's my point. The fact that one cannot level match two
speakers due to the magnitude of the frequency response differences between
them, disqualifies them from being the subject of a properly conducted DBT.
The louder one will always be chosen as being "better" - even if it is not.

I'll thank you not to twist my words. I said no such thing. In fact,
it is quite possible to level-match between two speakers. Sean Olive
does it all the time. His research would be pointless if he did not.

My point was that level-matching is essential for ANY comparison,
objective or subjective. Equal loudness differences do not disappear,
just because you are a subjectivist.

bob

  #105   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Audio Empire" wrote in message

On Thu, 20 May 2010 12:07:35 -0700, Scott wrote
(in article ):

OK now please tell me how if you were to do this with
bias controls in place it would make the audition less
informative?

It wouldn't.


My point exactly. Thank you.

But most bias controlled tests aren't designed to allow
those circumstances.


This is why I didn't want to talk about them. Tests that
aren't designed for the purpose we are speaking about
are irrelevant.


Tell that to the likes of those who insist that DBTs work
with speakers. They are the ones with whom I'm debating,
not you. You "get it."


The fact that DBTs can work with speakers is rather obvious from the many
writings of Floyd Toole and Sean Olive. I presume that you are completely
unaware of this well-known, highly important, and widely highly regarded
work?

Here is a good starting point:

F.E. Toole and S.E. Olive, "Hearing is Believing vs. Believing is Hearing:
Blind vs. Sighted Listening Tests and Other Interesting Things", 97th
Convention, Audio Eng. Soc., Preprint No. 3894 (1994 Nov.).

And this paper is even online:

http://www.harman.com/EN-US/OurCompa...steningLab.pdf




  #106   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Audio Empire" wrote in message

On Thu, 20 May 2010 08:25:56 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in
message

This is certainly part of my contention. DBT and ABX
tests are methodologically daunting even for electronics
where, ostensibly, the electronics have ruler-flat
frequency response and the same speakers are used for
each unit being compared.


Methodologically daunting?


Does this mean that you are unable to properly
level-match electronic components?


No it means that one has to have a comparator and someone
to operate it in a manner that this operator doesn't know
which DUT he/she is selecting.


I guess you are unaware of the fact that one of the purposes of any of the
DBT comparators is to eliminate the need for a separate operator.

I guess you are unaware that many DBTs can be done using a computer with a
high quality audio interface, and that the comparator can be one of several
freely downloadable pieces of software.


It means that levels must
be matched to within a fraction of a dB (the closer the
better). It means that one has to (ideally) assemble a
listening panel and line up the components under test.


Level matching is generally understood to be a requirement of even sighted
evaluations. Ditto for the need to line up listeners and components.

Looks to me like a common fallacy promoted by DBT detractors who condemn
DBTs for requirements that they have, that are also requirements for sighted
evaluations.

BTW, how do you do a sighted evaluation of components without first
obtaining them? ;-)






  #107   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default In Mobile Age, Sound Quality Steps Back

On May 20, 7:45 pm, bob wrote:
On May 20, 10:12 pm, Audio Empire wrote:

Congratulations, that's my point. The fact that one cannot level match two
speakers due to the magnitude of the frequency response differences between
them, disqualifies them from being the subject of a properly conducted DBT.
The louder one will always be chosen as being "better" - even if it is not.

I'll thank you not to twist my words. I said no such thing. In fact,
it is quite possible to level-match between two speakers. Sean Olive
does it all the time. His research would be pointless if he did not.

My point was that level-matching is essential for ANY comparison,
objective or subjective. Equal loudness differences do not disappear,
just because you are a subjectivist.



You can't level match speakers that are different in design. But the
idea that this prevents the use of blind protocols in the subjective
evaluation of the relative merits of speaker systems v. a live
reference is pretty absurd. All that is involved in removing sighted
biases is making sure the testees don't know what they are comparing.
Levels are a completely different issue that is completely independent
of the effects of sighted biases. But you can't level match speakers.
If Sean Olive thinks he is actually matching levels, that is just one
of several mistakes he is making IMO. The others are pretty
straightforward: a limited choice of source material and an insistence
on not optimising the placement or environment of competing designs
are the other big issues I have with his methodologies. But that is
the subject of another thread.

  #108   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default In Mobile Age, Sound Quality Steps Back

"Scott" wrote in message


You can't level match speakers that are different in
design.


Why not?

Let's presume that we level match the speakers with our PTF measuring
device positioned at the "sweet spot" that will be used during the
comparison.



  #109   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Thu, 20 May 2010 19:45:39 -0700, bob wrote
(in article ):

On May 20, 10:12 pm, Audio Empire wrote:

Congratulations, that's my point. The fact that one cannot level match two
speakers due to the magnitude of the frequency response differences between
them, disqualifies them from being the subject of a properly conducted DBT.
The louder one will always be chosen as being "better" - even if it is not.

I'll thank you not to twist my words. I said no such thing. In fact,
it is quite possible to level-match between two speakers. Sean Olive
does it all the time. His research would be pointless if he did not.


I say that it cannot ordinarily be done. Speakers vary too much in frequency
response characteristics. Match a pair at, say, 400 Hz, and at 60 Hz or 5,000
Hz one or the other may be as much as 6 dB different from the other one.
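The arithmetic behind this objection can be sketched in a few lines. The SPL figures below are invented purely for illustration (not from any actual measurement), as is the helper function name: two speakers matched at one frequency can still differ by several dB at others.

```python
def db_offset(resp_a, resp_b, freq_hz):
    """Level difference in dB between two measured responses at one frequency.
    Each response maps frequency (Hz) -> measured SPL (dB)."""
    return resp_a[freq_hz] - resp_b[freq_hz]

# Hypothetical on-axis SPL measurements for two speakers, matched at 400 Hz:
speaker_a = {60: 82.0, 400: 88.0, 5000: 86.0}
speaker_b = {60: 88.0, 400: 88.0, 5000: 83.0}

print(db_offset(speaker_a, speaker_b, 400))   # 0.0  (matched here)
print(db_offset(speaker_a, speaker_b, 60))    # -6.0 (B is 6 dB louder here)
print(db_offset(speaker_a, speaker_b, 5000))  # 3.0  (A is 3 dB louder here)
```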

My point was that level-matching is essential for ANY comparison,
objective or subjective. Equal loudness differences do not disappear,
just because you are a subjectivist.


Where do I say that I disagree with that, Bob? My point is that it cannot be
done in the ordinary course of things. Sure, you can introduce graphic or
parametric equalizers into the equation to equalize these frequency response
disparities, but what would that prove about the speakers themselves?

bob



  #110   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Fri, 21 May 2010 06:46:02 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

On Thu, 20 May 2010 08:25:56 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in
message

This is certainly part of my contention. DBT and ABX
tests are methodologically daunting even for electronics
where, ostensibly, the electronics have ruler-flat
frequency response and the same speakers are used for
each unit being compared.

Methodologically daunting?


Does this mean that you are unable to properly
level-match electronic components?


No it means that one has to have a comparator and someone
to operate it in a manner that this operator doesn't know
which DUT he/she is selecting.


I guess you are unaware of the fact that one of the purposes of any of the
DBT comparators is to eliminate the need for a separate operator.


The only ones I've seen have been home-made and use a switch and an operator.


I guess you are unaware that many DBTs can be done using a computer with a
high quality audio interface, and that the comparator can be one of several
freely downloadable pieces of software.


I am unaware of that. But it doesn't change my basic premise one iota.



  #111   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default In Mobile Age, Sound Quality Steps Back

On May 21, 1:14 pm, Audio Empire wrote:
On Thu, 20 May 2010 19:45:39 -0700, bob wrote

I'll thank you not to twist my words. I said no such thing. In fact,
it is quite possible to level-match between two speakers. Sean Olive
does it all the time. His research would be pointless if he did not.


I say that it cannot ordinarily be done. Speakers vary too much in frequency
response characteristics. Match a pair at, say, 400 Hz, and at 60 Hz or 5,000
Hz one or the other may be as much as 6 dB different from the other one.


Well, of course you don't want to level-match at every point on the
frequency spectrum. That would defeat the purpose, since the
subjective effect of audible FR differences is one of the key things
you want to identify.

But you need some way to equalize overall levels if you want to draw
any sort of reasonable conclusions about sound quality. I don't know
how Olive does it in his work, but I would think that using pink noise
to level-match would usefully improve any speaker comparison, and is
certainly not beyond the means of any audiophile.
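A minimal sketch of the pink-noise approach bob describes, assuming a simulated microphone capture rather than any particular test rig (the function names and the simulated 6 dB offset are illustrative only):

```python
import numpy as np

def pink_noise(n, seed=0):
    """Approximate pink (1/f power) noise via spectral shaping of white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]           # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)    # 1/f power -> 1/sqrt(f) amplitude
    return np.fft.irfft(spectrum, n)

def rms_db(x):
    """Broadband RMS level of a signal in dB (relative)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def match_gain_db(capture_a, capture_b):
    """Gain in dB to apply to speaker B so that its broadband pink-noise
    level at the listening position matches speaker A's."""
    return rms_db(capture_a) - rms_db(capture_b)

# Simulated microphone captures: speaker B plays 6 dB quieter overall.
noise = pink_noise(1 << 16)
capture_a = noise
capture_b = noise * 10 ** (-6.0 / 20.0)
print(round(match_gain_db(capture_a, capture_b), 2))  # 6.0
```

Note that this matches only the overall broadband level, exactly as bob suggests; the per-frequency differences between the two speakers remain audible, which is the point of the comparison.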

My point was that level-matching is essential for ANY comparison,
objective or subjective. Equal loudness differences do not disappear,
just because you are a subjectivist.


Where do I say that I disagree with that, Bob?


Neither did you acknowledge it, despite several promptings. But if
level-matching is important for any comparison, then it certainly
cannot be used as an argument against blind comparisons in particular.

Again: Any speaker comparison you do can be improved by doing exactly
the same comparison blind, if your specific goal is to evaluate sound
quality.

bob

  #112   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default In Mobile Age, Sound Quality Steps Back

On May 21, 7:57 am, "Arny Krueger" wrote:
"Scott" wrote in message


You can't level match speakers that are different in
design.


Why not?

Let's presume that we level match the speakers with our PTF measuring
device positioned at the "sweet spot" that will be used during the
comparison.


Using what as a source signal? Do you think the transients will be
level matched with different speakers? Do you think we will have level
match at all frequencies? What about comb filtering which will be
unique to each design? How do you level match for that? What of
dipoles or other differences in dispersion? Doesn't that affect levels
during the decay of a transient?

IMO the best choice is to level optimise with musical material that
is going to be used for any comparison. The idea is to find the better
speaker no? Gotta compare each design at their best.
  #113   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Fri, 21 May 2010 12:43:56 -0700, Scott wrote
(in article ):

On May 21, 7:57 am, "Arny Krueger" wrote:
"Scott" wrote in message


You can't level match speakers that are different in
design.


Why not?

Let's presume that we level match the speakers with our PTF measuring
device positioned at the "sweet spot" that will be used during the
comparison.


Using what as a source signal? Do you think the transients will be
level matched with different speakers? Do you think we will have level
match at all frequencies? What about comb filtering which will be
unique to each design? How do you level match for that? What of
dipoles or other differences in dispersion? Doesn't that affect levels
during the decay of a transient?

IMO the best choice is to level optimise with musical material that
is going to be used for any comparison. The idea is to find the better
speaker no? Gotta compare each design at their best.


This is my contention as well. Some people here seem so enamored with
bias-controlled tests that they fail to see those instances where such tests
won't work. I, myself, fully believe that bias-controlled tests for things
like CD decks, preamplifiers, amplifiers, even vinyl playing setups and
microphones are THE gold standard; useful and very revealing. I also know
that such tests ruthlessly uncover the mythology in (most) so-called
audiophile "tweaks" and, of course, in speaker cables and interconnects. But
for the reasons cited above, I simply cannot see how a bias-controlled test
on speakers could or would be either legitimate, or very revealing of
anything concrete. Add to the above-mentioned difficulties the near
impossibility of being able to set up BOTH pairs of speakers under evaluation
in the optimum room location (it is impossible for two masses to occupy the
same space at the same time). I also find it unlikely that speakers could be
set up in a way that, either from sight or from location, the listeners
couldn't tell which pair was playing at any one time. One could conduct the
test in total darkness, I suppose, in which case listeners might be able to
hear that the sound was emanating from two different locations but, never
having seen the speakers, they wouldn't know which they were hearing at any
given time. Unfortunately, total darkness presents its own problems...

  #114   Report Post  
Posted to rec.audio.high-end
Audio Empire Audio Empire is offline
external usenet poster
 
Posts: 1,193
Default In Mobile Age, Sound Quality Steps Back

On Fri, 21 May 2010 11:49:22 -0700, bob wrote
(in article ):

On May 21, 1:14 pm, Audio Empire wrote:
On Thu, 20 May 2010 19:45:39 -0700, bob wrote

snip

Again: Any speaker comparison you do can be improved by doing exactly
the same comparison blind, if your specific goal is to evaluate sound
quality.

bob


I say that for speakers, it's not necessary or even desirable to evaluate
them that way. Our own individual likes and dislikes will do that for us
very nicely, thank you.
  #115   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default In Mobile Age, Sound Quality Steps Back

Harry Lavo wrote:

And a few other questions: Who were the listeners.....studio pros,
audiophiles, SWM audio club members, the Boston Audio Society, college
students, random off-the-street people, or whom? And finally, who (if
anybody) sponsored the test?


What is the point of this interrogation?


You don't think knowing who sponsored a test that found a $500 minimonitor
to be equally preferred to $12,000 speakers isn't germane? Suppose I told
you the test found a $500 turntable/cartridge to be equally preferred to a
state-of-the-art CD player playing the same recording....you don't think
you'd want to know under what auspices the test was held, among whom, and
whether or not it was sponsored by the manufacturer of the record player?


If the recording were really 'the same', it would mean the CD had been made
from the LP output. In which case a finding of a preference trend in a blind
test would be peculiar indeed.

Of course, loudspeakers DO sound different, so here, a preference trend is not
a priori peculiar.

And assuming that the test was not sponsored or rigged somehow, would you
not want to know what music was used, and how familiar the people listening
would be to that kind of music, how accustomed they might be to listening to
speakers similar to either of the speakers under test, or whether or not
they had even ever heard anything similar (perhaps only earbuds)?


Olive and Toole have written about how music is chosen, how listeners
are trained, what metrics are used, etc., in their publications on measuring
loudspeaker preference. Perhaps you should read them -- again, apparently,
from the below -- before flinging accusations?

I have read much, if not all, of the Olive/Harman literature up to about two
years ago. I recall one test that found the preferences of trained and
largely untrained listeners to have come out similar.....and that was a test
conducted in a rather austere testing environment, not in a relaxed home
setting, for the specific purpose of finding how comparable their ratings
were. I am not aware of any independent third-party replication of such a
test. Are you? If so, perhaps you could share it with us with a
descriptive summary and a citation?


A third-party replication would require a third party to build a blind
testing apparatus for loudspeakers, as Harman did. So far, no one seems to
have stepped up to the plate. Cost may be a factor.

As for the 'austere' versus 'relaxed home environment', your employment of such
rhetoric is no substitute for scientific critique.

You know this, I'd bet most of the participants on this thread know this,
and Floyd Toole even wrote it all up in his recent book 'Sound
Reproduction' -- which was glowingly reviewed by Kal Rubinson in
Stereophile -- for those who don't.


Wow! One test, cited by one of its constructors, in a book viewed favorably
by a Stereophile reviewer. That is impressive!


No, Harry, far from one test. Toole's book is a summary of decades of
tests, by many researchers.

Don't get me wrong....I'm not knocking Olive's test....but it was just
that....one test, and done for a specific purpose....to find out how many
hours of training had to be imbued or found in listeners in order to
get comparable ratings of a loudspeaker's objective qualities in a test
facility. It was hardly the holy grail of speaker testing. And he has done
other interesting tests as well....useful, I guess, for Harman's development
of car radios, single-box systems, etc., not just (or even necessarily
primarily) hi-fi speakers. But as I said in my earlier post, there is no
evidence that this research has put Harman ahead of the pack when it comes
to audiophile preferences.


"You guess" indeed. Harman's high-end speaker systems -- under the JBL and
Revel labels -- most certainly have been designed with the Toole/Olive test
results in mind. And your point about audiophile preference is absurdly
miscast. DBTs show that 'audiophile preference' *imagines* itself to be
sound-based, when it's not. Audiophile preference for loudspeakers is easily
rendered incoherent by ever-present biasing factors. When those factors are
controlled for, the preferences become far more coherent across different
listeners -- both trained and untrained, both 'in' the audio industry, and
not. THAT is the point of the scientific results.

But hey, don't take that all from me. Why not just engage Sean directly over at
AVSforum? Or on his blog?

--
-S
We have it in our power to begin the world over again - Thomas Paine


  #116   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default In Mobile Age, Sound Quality Steps Back

Jenn wrote:
In article ,
Steven Sullivan wrote:


Audio Empire wrote:


I'll bet that the 400 mini-monitors don't have as much or as good quality
bass as did the $12000 system nor could it load the room like a big system.


Sure, you can design tests which minimize differences in things like
amplifiers and speakers. I could easily construct a DBT where a small
mini-monitor and a large full-range system would sound as similar as
possible
- I'd just play solo harpsichord or flute music, or something similar that
has no bass and little in the way of dynamic contrast.


I'm sure you could, but why do you assume Arny's test was like that?


I see no such assumption.


What part of 'I'll bet' implies no assumption?

--
-S
We have it in our power to begin the world over again - Thomas Paine
  #117   Report Post  
Posted to rec.audio.high-end
vlad vlad is offline
external usenet poster
 
Posts: 131
Default In Mobile Age, Sound Quality Steps Back

On May 21, 3:27 pm, Audio Empire wrote:
On Fri, 21 May 2010 11:49:22 -0700, bob wrote
(in article ):



On May 21, 1:14 pm, Audio Empire wrote:
On Thu, 20 May 2010 19:45:39 -0700, bob wrote

snip

Again: Any speaker comparison you do can be improved by doing exactly
the same comparison blind, if your specific goal is to evaluate sound
quality.


bob


I say that for speakers, it's not necessary or even desirable to evaluate
them that way. Our own individual likes and dislikes will do that for us
very nicely, thank you.


I beg to disagree.

Definitely, when you are picking speakers for your room/system you
can use any criteria; nobody argues here about this. The same way, when
I am picking speakers for my room I am the only judge (maybe my wife
too :-). However, if somebody whose expertise in the field of speakers
I trust made a speaker comparison, then I definitely will pay
attention to the results of this comparison. And maybe it will change my
choice of speakers. So speaker comparisons do make sense for some
people. At least they create some obstacle for charlatans peddling
overpriced mediocrity (no particular references :-).

Thx

vlad
  #118   Report Post  
Posted to rec.audio.high-end
Jenn[_2_] Jenn[_2_] is offline
external usenet poster
 
Posts: 2,752
Default In Mobile Age, Sound Quality Steps Back

In article ,
Steven Sullivan wrote:

Jenn wrote:
In article ,
Steven Sullivan wrote:


Audio Empire wrote:


I'll bet that the 400 mini-monitors don't have as much or as good
quality
bass as did the $12000 system nor could it load the room like a big
system.

Sure, you can design tests which minimize differences in things like
amplifiers and speakers. I could easily construct a DBT where a small
mini-monitor and a large full-range system would sound as similar as
possible
- I'd just play solo harpsichord or flute music, or something similar
that
has no bass and little in the way of dynamic contrast.

I'm sure you could, but why do you assume Arny's test was like that?


I see no such assumption.


What part of 'I'll bet' implies no assumption?


Unless I'm mistaken, AE's "I'll bet" comment was concerning the quantity
and quality of bass, not whether or not a difference in bass was
detected in the test.

  #119   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default In Mobile Age, Sound Quality Steps Back

On May 21, 3:30 pm, Steven Sullivan wrote:
Harry Lavo wrote:

And a few other questions: Who were the listeners.....studio pros,
audiophiles, SWM audio club members, the Boston Audio Society, college
students, random off-the-street people, or whom? And finally, who (if
anybody) sponsored the test?


What is the point of this interrogation?

You don't think knowing who sponsored a test that found a $500 minimonitor
to be equally preferred to $12,000 speakers isn't germane? Suppose I told
you the test found a $500 turntable/cartridge to be equally preferred to a
state-of-the-art CD player playing the same recording....you don't think
you'd want to know under what auspices the test was held, among whom, and
whether or not it was sponsored by the manufacturer of the record player?

If the recording were really 'the same', it would mean the CD had been made
from the LP output. In which case a finding of a preference trend in a blind
test would be peculiar indeed.


Oh c'mon. It means the CD and the LP were sourced from the same
recording. CDs and LPs are copies of "the recording" on two different
media.



Of course, loudspeakers DO sound different, so here, a preference trend is
not a priori peculiar.


So also do LPs and CDs sourced from the same recording in most cases.



And assuming that the test was not sponsored or rigged somehow, would you
not want to know what music was used, and how familiar the people listening
would be to that kind of music, how accustomed they might be to listening to
speakers similar to either of the speakers under test, or whether or not
they had even ever heard anything similar (perhaps only earbuds)?


Olive and Toole have written about how music is chosen, how listeners
are trained, what metrics are used, etc., in their publications on measuring
loudspeaker preference. Perhaps you should read them -- again, apparently,
from the below -- before flinging accusations?


Did Toole and Olive do the comparison between the $500 speakers
and the $12,000 speakers Arny is talking about?

  #120   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default In Mobile Age, Sound Quality Steps Back

"Jenn" wrote in message
...
In article ,
Steven Sullivan wrote:

Jenn wrote:
In article ,
Steven Sullivan wrote:


Audio Empire wrote:


I'll bet that the 400 mini-monitors don't have as much or as good
quality
bass as did the $12000 system nor could it load the room like a big
system.

Sure, you can design tests which minimize differences in things
like
amplifiers and speakers. I could easily construct a DBT where a
small
mini-monitor and a large full-range system would sound as similar
as
possible
- I'd just play solo harpsichord or flute music, or something
similar
that
has no bass and little in the way of dynamic contrast.

I'm sure you could, but why do you assume Arny's test was like that?


I see no such assumption.


What part of 'I'll bet' implies no assumption?


Unless I'm mistaken, AE's "I'll bet" comment was concerning the quantity
and quality of bass, not whether or not a difference in bass was
detected in the test.


I think you are probably correct.

Now we know that apparently the speakers were modified (equalized?) by
something called the "Perceptual Transfer Function" so perhaps the little
guys had some bass added (or the big guys had some bass subtracted)? A bit
of mud in the water, that.
