Olive and Toole
January 24th 06, 07:45 PM
The full .pdf files are available either from Harman's website, or from Sean Olive.
Or I can e-mail the full files to you.
2.7 Blind versus Sighted Listening Tests (from a Harman white paper)

It is generally accepted among scientists that psychometric experiments must be performed double blind. For audio tests, this means the identities of the components under test cannot be made known to the listener, and the experimenter cannot directly control or administer the actual test.

In 1996, Toole and Olive [2] conducted blind versus sighted loudspeaker tests showing that both experienced and inexperienced listeners' judgments were significantly influenced by factors such as price, brand name, size and cosmetics. In fact, the effect of these biases in the sighted tests was larger than any other significant factor found in the blind tests, including loudspeaker, position and program interactions. These experiments clearly show that an accurate and unbiased measurement of sound quality requires that the tests be done blind.
To remove these biases from listening tests in the MLL, an acoustically transparent but visually opaque curtain is placed between the products and the listeners so that they do not know the identities of the products under test. All other associated equipment in the signal path is also out of sight and locked in an equipment rack, since the performance and paranoia of some listeners can be affected simply by the knowledge that a certain brand of interconnect or CD player is in the signal path.

The front screen consists of a black open-knit polyester cloth chosen for its acoustic transparency and used as grille cloth in many of our loudspeakers. The material is attached to a large automated curtain roller so it can be easily lowered and raised with an infrared remote control. Weights are attached to a seam in the bottom so the cloth retains its tautness when in use. Retractable curtains made of the same material surround the listeners to hide the identities of loudspeakers located at the sides and rear of the listening room. Figures 4 and 5 show the front, side and rear curtains fully retracted when not in use, and Figure 8 shows the curtains in place during an actual listening test.
__________________________________________________________________________
BLIND vs. SIGHTED TESTS - SEEING IS BELIEVING (from a paper by Floyd Toole)

Knowledge of the products that are being evaluated is generally understood to be a powerful source of psychological bias. In scientific tests of many kinds, and even in wine tasting, considerable effort is expended to ensure the anonymity of the devices or substances being subjectively evaluated. In audio, though, things are more relaxed, and otherwise serious people persist in the belief that they can ignore such factors as price, size, brand, etc. In some of the "great debate" issues, like amplifiers, wires, and the like, there are assertions that disguising the product identity prevents listeners from hearing differences that are in the range of extremely small to inaudible. That debate shows no signs of slowing down. In the category of loudspeakers and rooms, however, there is no doubt that differences exist and are clearly audible. To satisfy ourselves that the additional [...]
The results are very clear, and strongly supportive of the scientific view. Figure 4 shows that, in subjective ratings of four loudspeakers, the differences in ratings caused by knowledge of the products are as large as, or larger than, those attributable to the differences in sound alone. The two left-hand striped bars are scores for loudspeakers that were large, expensive and impressive looking; the third bar is the score for a well designed, small, inexpensive, plastic three-piece system. The right-hand bar represents a moderately expensive product from a competitor that had been highly rated by respected reviewers.

When listeners entered the room for the sighted tests, their positive verbal reactions to the big, beautiful speakers and the jeers for the tiny sub/sat system foreshadowed dramatic ratings shifts - in opposite directions. The handsome competitor's system got a higher rating; so much for employee loyalty.

Other variables were also tested, and the results indicated that, in the sighted tests, listeners substantially ignored large differences in sound quality attributable to position in the listening room and to program material. In other words, knowledge of the product identity was at least as important a factor in the tests as the principal acoustical factors. Incidentally, many of these listeners were very experienced and, some of them thought, able to ignore the visually stimulated biases [6].

At this point, it is correct to say that, with adequate experimental controls, we are no longer conducting "listening tests"; we are performing "subjective measurements".
SUBJECTIVISM vs. OBJECTIVISM - IN CONCLUSION

In this lengthy summary we have covered a lot of topics. Much of it was matter-of-factly technical, driven by data and the need to measure, and much of it was subjective, driven by the desire to understand what we can hear. All of it was oriented towards creating loudspeakers that sound better.

The literature of audio continues to be sprinkled with letters and articles debating the merits of science in audio. The subjectivist stance is that "to hear is to believe", and that is all that matters. Some of the arguments conjure images of white-coated engineers with putty in their ears, designing audio equipment and not caring how it sounds, only how it measures. I have never met such a person in my 30 years in audio science and engineering.

The simple fact is that, without science, there would be no audio as we know it. Without extensive and meticulous subjective evaluation, there would be no audio science as we know it. Without audio science, audio engineering reverts to trial and error. So, where does this leave us? Clearly, to be successful in this business, one must be actively involved with both the objective and subjective sides.

A faith in the scientific method is not a blind faith. It is a faith built on a growing trust that measurements can guide us to produce better sounding products at every price level, for every application. The proof, as always, is in the listening, and one MUST listen.

The Harman International loudspeaker companies, JBL, Infinity, and Revel, have invested heavily in measurement facilities that allow them to take the fullest advantage of existing audio science. They have invested in talented engineers who understand and respect the scientific method, good sound and great music. They have invested in elaborate listening rooms where they can enjoy and criticize the fruits of their labors. There are people on staff with many years of experience in successfully probing the frontiers of knowledge in product design and audio science, and they are equipped to continue those investigations, to push those frontiers.

The arrival of multichannel audio for films required some adjustments in the performance objectives of speakers, certainly at the high end. Multichannel music is another, as yet ill-defined, challenge. More speakers in rooms means less consumer tolerance for large boxes. Merging loudspeakers with rooms is not easy, and it is the one remaining large challenge for our industry. We are working on all of these fronts. Stay tuned.
dave weil
January 24th 06, 08:02 PM
On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>The full .pdf files are available either from Harman's website, or from Sean
>Olive
>
>Or I can e-mail the full files to you.
>
>2.7 Blind versus Sighted Listening Tests From a Harman white paper
snipping copyrighted material...
Apparently, these gentlemen don't buy into the "dbts shouldn't be used
for components with BIG differences".
January 25th 06, 12:37 AM
"dave weil" > wrote in message
...
> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>
>>The full .pdf files are available either from Harman's website, or from
>>Sean
>>Olive
>>
>>Or I can e-mail the full files to you.
>>
>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>
> snipping copyrighted material...
>
> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
> for components with BIG differences".
How would you know?
dave weil
January 25th 06, 02:34 AM
On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>
>"dave weil" > wrote in message
...
>> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>>
>>>The full .pdf files are available either from Harman's website, or from
>>>Sean
>>>Olive
>>>
>>>Or I can e-mail the full files to you.
>>>
>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>
>> snipping copyrighted material...
>>
>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>> for components with BIG differences".
>How would you know?
Don't most speakers sound pretty different?
January 25th 06, 04:55 PM
"dave weil" > wrote in message
...
> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>
>>
>>"dave weil" > wrote in message
...
>>> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>>>
>>>>The full .pdf files are available either from Harman's website, or from
>>>>Sean
>>>>Olive
>>>>
>>>>Or I can e-mail the full files to you.
>>>>
>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>
>>> snipping copyrighted material...
>>>
>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>> for components with BIG differences".
>>How would you know?
>
> Don't most speakers sound pretty different?
>
Why the snip? I did not reproduce the entire paper of either author; they were excerpts.
Speakers made by the same manufacturer tend to sound similar.
If they are doing DBTs of very dissimilar-sounding speakers, I would be interested to know what information they are trying to gather. A DBT would not normally be used for dissimilar-sounding speakers. These tests were most likely done to determine how much sighted bias influences people's buying decisions, as compared to evaluating speakers on sound quality alone. Seems reasonable to me.
January 25th 06, 08:01 PM
"dave weil" > wrote in message
...
> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>
>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>
>>> snipping copyrighted material...
>>>
>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>> for components with BIG differences".
>>How would you know?
>
> Don't most speakers sound pretty different?
You'd think so, wouldn't you? Nevertheless, when experts listen to speakers whose identity is unknown to them, they refuse to make any hard and fast judgments about the sound. Their comments tend to "regress to the mean". This doesn't surprise me in the least; are you surprised?
Norm Strong
dave weil
January 25th 06, 08:28 PM
On Wed, 25 Jan 2006 16:55:19 GMT, > wrote:
>"dave weil" > wrote in message
...
>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>
>>>
>>>"dave weil" > wrote in message
...
>>>> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>>>>
>>>>>The full .pdf files are available either from Harman's website, or from
>>>>>Sean
>>>>>Olive
>>>>>
>>>>>Or I can e-mail the full files to you.
>>>>>
>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>
>>>> snipping copyrighted material...
>>>>
>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>> for components with BIG differences".
>>>How would you know?
>>
>> Don't most speakers sound pretty different?
>>
>
>Why the snip? I did not reproduce the entire paper of either author, they
>were excerpts.
Quite lengthy excerpts. I just wanted to be on the safe side.
>Speakers made by the same manufacturer tend to sound similar.
>
>If they are doing DBT's of very dissimilar sound, I would be interested to
>know what information they are trying to gather.
So, are you saying that they're only trying to do dbts of their own
speakers? That seems rather silly (except in terms of possibly trying
to identify "brand quirks").
arena, a company would be interested in knowing the sonic preferences
of the consumer in relation to the whole market. In fact, that's
exactly what these tests did - they used competitors' products as
well.
> Normally for dissimilar
>sounding speakers a DBT would not normally be used.
Well, that wasn't the case with these tests.
> These tests were most
>likely done to determine how much sighted bias influences people's buying
>decisions as compared to evaluating speakers on sound quality alone.
Well yes. And they used completely different systems. Or are you
saying that the speaker systems presumably sounded so similar that
dbts were necessary?
> Seems reasonable to me.
Well, most people think that dbts aren't very useful for speaker
comparisons, mainly because speakers tend to sound so different. Or do
I have things wrong?
January 25th 06, 08:35 PM
"dave weil" > wrote in message
...
> On Wed, 25 Jan 2006 16:55:19 GMT, > wrote:
>
>>"dave weil" > wrote in message
...
>>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>>
>>>>
>>>>"dave weil" > wrote in message
...
>>>>> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>>>>>
>>>>>>The full .pdf files are available either from Harman's website, or
>>>>>>from
>>>>>>Sean
>>>>>>Olive
>>>>>>
>>>>>>Or I can e-mail the full files to you.
>>>>>>
>>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>>
>>>>> snipping copyrighted material...
>>>>>
>>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>>> for components with BIG differences".
>>>>How would you know?
>>>
>>> Don't most speakers sound pretty different?
>>>
>>
>>Why the snip? I did not reproduce the entire paper of either author, they
>>were excerpts.
>
> Quite lengthy excerpts. I just wanted to be on the safe side.
>
>>Speakers made by the same manufacturer tend to sound similar.
>>
>>If they are doing DBT's of very dissimilar sound, I would be interested to
>>know what information they are trying to gather.
>
> So, are you saying that they're only trying to do dbts of their own
> speakers?
I'm saying if you want to know for sure, ask them.
> That seems rather silly (except in terms of possibly trying
> to identify "brand quirks").
As has been discussed here before, sometimes manufacturers do DBTs to find out
if something they changed made any difference.
> One would think that in the speaker
> arena, a company would be interested in knowing the sonic preferences
> of the consumer in relation to the whole market.
That makes perfect sense as well.
> In fact, that's
> exactly what these tests did - they used competitors' products as
> well.
>
>> Normally for dissimilar
>>sounding speakers a DBT would not normally be used.
>
> Well, that wasn't the case with these tests.
>
>> These tests were most
>>likely done to determine how much sighted bias influences people's buying
>>decisions as compared to evaluating speakers on sound quality alone.
>
> Well yes. And they used completely different systems. Or are you
> saying that the speaker systems presumably sounded so similar that
> dbts were necessary?
>
>> Seems reasonable to me.
>
> Well, most people think that dbts aren't very useful for speaker
> comparisons, mainly because speakers tend to sound so different. Or do
> I have things wrong?
Probably. Best bet is to ask them.
I've sent and received e-mail from Sean Olive; I see no reason why you can't
do the same.
I posted what I did to keep things lively. :-)
January 25th 06, 08:40 PM
> wrote in message
...
>
> "dave weil" > wrote in message
> ...
>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>
>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>
>>>> snipping copyrighted material...
>>>>
>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>> for components with BIG differences".
>>>How would you know?
>>
>> Don't most speakers sound pretty different?
>
> You'd think so, wouldn't you. Nevertheless, when experts listen to
> speakers whose identity is unknown to them, they refuse to make any hard
> and fast judgments about the sound. Their comments tend to "regress to
> the mean". This doesn't surprise me in the least; are you surprised?
>
> Norm Strong
>
I have also seen that some people believe that, because of the widespread use of CAD programs, things are tending to sound more alike from brand to brand. Same programs and the same Thiele/Small (T/S) models, and you wind up with less difference than before the advent of these programs. Naturally there are still differences in drivers that can make them sound different, but even then, one would think that if you get flat response, good phase behavior, and all the other criteria that go into making speakers sound good, they would sound less dissimilar than before these programs came into common use.
Any thoughts on this, Norm?
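[Editor's sketch of the Thiele/Small point in the post above: the standard closed-box relations give any designer the same system resonance Fc and total Q (Qtc) from the same driver parameters, whatever CAD tool is used. The driver numbers below are invented.]

# [Invented driver parameters; the closed-box relations themselves are the
# standard small-signal Thiele/Small results.]
import math

def closed_box(fs_hz, qts, vas_litres, vb_litres):
    """System resonance and total Q of a driver in a sealed box."""
    alpha = vas_litres / vb_litres        # compliance ratio Vas/Vb
    fc = fs_hz * math.sqrt(1.0 + alpha)   # closed-box resonance frequency
    qtc = qts * math.sqrt(1.0 + alpha)    # closed-box total Q
    return fc, qtc

fc, qtc = closed_box(fs_hz=28.0, qts=0.38, vas_litres=90.0, vb_litres=45.0)
print(f"Fc = {fc:.1f} Hz, Qtc = {qtc:.2f}")  # same inputs give the same answer in any tool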
dave weil
January 25th 06, 09:21 PM
On Wed, 25 Jan 2006 12:01:21 -0800, > wrote:
>
>"dave weil" > wrote in message
...
>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>
>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>
>>>> snipping copyrighted material...
>>>>
>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>> for components with BIG differences".
>>>How would you know?
>>
>> Don't most speakers sound pretty different?
>
>You'd think so, wouldn't you. Nevertheless, when experts listen to speakers
>whose identity is unknown to them, they refuse to make any hard and fast
>judgments about the sound. Their comments tend to "regress to the mean".
>This doesn't surprise me in the least; are you surprised?
Actually, no. It just indicates the fallacy of tying "specs" to "results". After all, I'm pretty sure that the tested specs of those speakers are quite different.
I'm pretty sure that unsighted tests make a lot of things sound alike, even things that aren't *really* alike at all, even when using measured specs.
January 25th 06, 11:51 PM
"dave weil" > wrote in message
...
> On Wed, 25 Jan 2006 12:01:21 -0800, > wrote:
>
>>
>>"dave weil" > wrote in message
...
>>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>>
>>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>>
>>>>> snipping copyrighted material...
>>>>>
>>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>>> for components with BIG differences".
>>>>How would you know?
>>>
>>> Don't most speakers sound pretty different?
>>
>>You'd think so, wouldn't you. Nevertheless, when experts listen to
>>speakers
>>whose identity is unknown to them, they refuse to make any hard and fast
>>judgments about the sound. Their comments tend to "regress to the mean".
>>This doesn't surprise me in the least; are you surprised?
>
> Actually, no. It just indicates the fallacy of tying "specs" to
> "results". After all, I'm pretty sure that the tested specs of those
> speakers tested are quite different.
>
> i'm pretty sure that unsighted tests make a lot of things sound alike,
> even things that aren't *really* alike at all, even when using
> measured specs.
Or maybe they only sound different when people listen sighted and their bias
comes into play.
Seems much more likely, since we already know that things that sound
different enough will be perceived as such in blind listening.
If it were not so, there would be no blind listening tests, nor would they
be the standard for determining subtle differences.
dave weil
January 26th 06, 12:21 AM
On Wed, 25 Jan 2006 23:51:32 GMT, > wrote:
>> Actually, no. It just indicates the fallacy of tying "specs" to
>> "results". After all, I'm pretty sure that the tested specs of those
>> speakers tested are quite different.
>>
>> i'm pretty sure that unsighted tests make a lot of things sound alike,
>> even things that aren't *really* alike at all, even when using
>> measured specs.
>
>Or maybe they only sound different when people listen sighted and their bias
>comes into play.
OK. I guess that "specs" can be tossed out as some sort of calibrating
factor.
Or are you saying that most speakers actually spec out the same (or
close enough to be indistinguishable)?
Steven Sullivan
January 26th 06, 03:41 PM
dave weil > wrote:
> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
> >The full .pdf files are available either from Harman's website, or from Sean
> >Olive
> >
> >Or I can e-mail the full files to you.
> >
> >2.7 Blind versus Sighted Listening Tests From a Harman white paper
> snipping copyrighted material...
> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
> for components with BIG differences".
That's because for such components, dbts are still necessary to
study *preference* for *sound*.
--
-S
"If men were angels, no government would be necessary." - James Madison (1788)
Steven Sullivan
January 26th 06, 03:46 PM
dave weil > wrote:
> On Wed, 25 Jan 2006 12:01:21 -0800, > wrote:
> >
> >"dave weil" > wrote in message
> ...
> >> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
> >>
> >>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
> >>>>
> >>>> snipping copyrighted material...
> >>>>
> >>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
> >>>> for components with BIG differences".
> >>>How would you know?
> >>
> >> Don't most speakers sound pretty different?
> >
> >You'd think so, wouldn't you. Nevertheless, when experts listen to speakers
> >whose identity is unknown to them, they refuse to make any hard and fast
> >judgments about the sound. Their comments tend to "regress to the mean".
> >This doesn't surprise me in the least; are you surprised?
> Actually, no. It just indicates the fallacy of tying "specs" to
> "results". After all, I'm pretty sure that the tested specs of those
> speakers tested are quite different.
Toole's and Olive's work is, among other things, about figuring out
*which* tested specs matter for loudspeakers. And they have a
considerable amount of data on that correlation. You might consider
actually reading some of it before commenting.
> i'm pretty sure that unsighted tests make a lot of things sound alike,
> even things that aren't *really* alike at all, even when using
> measured specs.
Of course you're 'pretty sure' of that, since you rely on sighted
tests -- which scientists have considered to be pretty bad evidence
for many decades now.
--
-S
"If men were angels, no government would be necessary." - James Madison (1788)
Steven Sullivan
January 26th 06, 03:47 PM
dave weil > wrote:
> On Wed, 25 Jan 2006 23:51:32 GMT, > wrote:
> >> Actually, no. It just indicates the fallacy of tying "specs" to
> >> "results". After all, I'm pretty sure that the tested specs of those
> >> speakers tested are quite different.
> >>
> >> i'm pretty sure that unsighted tests make a lot of things sound alike,
> >> even things that aren't *really* alike at all, even when using
> >> measured specs.
> >
> >Or maybe they only sound different when people listen sighted and their bias
> >comes into play.
> OK. I guess that "specs" can be tossed out as some sort of calibrating
> factor.
'specs' and 'independent bench test results' are not the same thing, you
do realize that, right?
--
-S
"If men were angels, no government would be necessary." - James Madison (1788)
Arny Krueger
January 26th 06, 03:50 PM
"Steven Sullivan" > wrote in message
> dave weil > wrote:
>> On Wed, 25 Jan 2006 23:51:32 GMT, >
>> wrote:
>
>>>> Actually, no. It just indicates the fallacy of tying
>>>> "specs" to "results". After all, I'm pretty sure that
>>>> the tested specs of those speakers tested are quite
>>>> different.
>>>>
>>>> i'm pretty sure that unsighted tests make a lot of
>>>> things sound alike, even things that aren't *really*
>>>> alike at all, even when using measured specs.
>>>
>>> Or maybe they only sound different when people listen
>>> sighted and their bias comes into play.
>
>> OK. I guess that "specs" can be tossed out as some sort
>> of calibrating factor.
>
> 'specs' and 'independent bench test results' are not the
> same thing, you do realize that, right?
What Dave knows and what Dave says are often two different things. He can be
smarter than he seems. He's just fishing for attention.
George M. Middius
January 26th 06, 04:24 PM
Sillybot the Audio Poseur is still bloviating flatulently.
> That's because for such components, dbts are still necessary to
> study *preference* for *sound*.
Necessary? For whom? Not for you, hypocrite. It takes a heck of a lot of
nerve to make such pompous pronouncements about what is "necessary" for
Normals to do when you yourself have NEVER undergone the rituals you
prescribe.
Please have your programming updated to include shame, Silly. Or at least
a smidgen of self-awareness. Otherwise you'll never know why Normals laugh
themselves silly at your posturing.
Get back to us when you've finally undertaken your FIRST DBT, Simpy.
January 26th 06, 05:05 PM
> wrote in message
ink.net...
>
> > wrote in message
> ...
>>
>> "dave weil" > wrote in message
>> ...
>>> On Wed, 25 Jan 2006 00:37:58 GMT, > wrote:
>>>
>>>>>>2.7 Blind versus Sighted Listening Tests From a Harman white paper
>>>>>
>>>>> snipping copyrighted material...
>>>>>
>>>>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>>>>> for components with BIG differences".
>>>>How would you know?
>>>
>>> Don't most speakers sound pretty different?
>>
>> You'd think so, wouldn't you. Nevertheless, when experts listen to
>> speakers whose identity is unknown to them, they refuse to make any hard
>> and fast judgments about the sound. Their comments tend to "regress to
>> the mean". This doesn't surprise me in the least; are you surprised?
>>
>> Norm Strong
>>
> I have also seen that some people believe that because of the widespread
> use of CAD programs, things are tending to sound more alike from brand to
> brand because of it. Same programs, same model of T/S and you wind up
> with less difference than before the advent of these programs. Naturally
> there are still differences in drivers that can make them sound different,
> but even then, one would think that if you get flat response and good
> phase behavior, and all the other criteria that go into making speakers
> sound good, they could still sound less dissimilar than before these
> programs came into common use.
>
> Any thoughts on this, Norm?
I would imagine that expertise in any field gradually converges on the same
result. CAD may speed up the process, but it aims at the same result. If
this were not true, then one design or the other would have to be
inadequate. Of course it's possible to set different goals for performance,
emphasizing one desired result or another. But, if designers are aiming for
the same result, they should gradually get closer to that result--and to
each other.
Norm
dave weil
January 26th 06, 07:14 PM
On Thu, 26 Jan 2006 15:41:35 +0000 (UTC), Steven Sullivan
> wrote:
>dave weil > wrote:
>> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
>
>> >The full .pdf files are available either from Harman's website, or from Sean
>> >Olive
>> >
>> >Or I can e-mail the full files to you.
>> >
>> >2.7 Blind versus Sighted Listening Tests From a Harman white paper
>
>> snipping copyrighted material...
>
>> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
>> for components with BIG differences".
>
>That's because for such components, dbts are still necessary to
>study *preference* for *sound*.
Oh, you mean for like amps and cables and CD players...
...or are you just making up protocol as you go?
Steven Sullivan
January 26th 06, 07:29 PM
dave weil > wrote:
> On Thu, 26 Jan 2006 15:41:35 +0000 (UTC), Steven Sullivan
> > wrote:
> >dave weil > wrote:
> >> On Tue, 24 Jan 2006 19:45:36 GMT, > wrote:
> >
> >> >The full .pdf files are available either from Harman's website, or from Sean
> >> >Olive
> >> >
> >> >Or I can e-mail the full files to you.
> >> >
> >> >2.7 Blind versus Sighted Listening Tests From a Harman white paper
> >
> >> snipping copyrighted material...
> >
> >> Apparently, these gentlemen don't buy into the "dbts shouldn't be used
> >> for components with BIG differences".
> >
> >That's because for such components, dbts are still necessary to
> >study *preference* for *sound*.
> Oh, you mean for like amps and cables and CD players...
er...yes, that would be true...assuming they were likely to pass
a DBT for *audible difference* first. Which is likely for
electromechanical devices like loudspeakers, but much less so for
amps, cables and CD players.
Is this too hard for you to understand? I can try to use shorter
words if need be. But I'm sure it's been explained before.
> ...or are you just making up protocol as you go?
Given that your questions make increasingly less sense -- I have to ask --
have you done *any* reading of the literature? Or are you just
posting reflexively?
--
-S
"If men were angels, no government would be necessary." - James Madison (1788)
January 26th 06, 08:41 PM
"dave weil" > wrote in message
...
> On Wed, 25 Jan 2006 23:51:32 GMT, > wrote:
>
>>> Actually, no. It just indicates the fallacy of tying "specs" to
>>> "results". After all, I'm pretty sure that the tested specs of those
>>> speakers tested are quite different.
>>>
>>> i'm pretty sure that unsighted tests make a lot of things sound alike,
>>> even things that aren't *really* alike at all, even when using
>>> measured specs.
>>
>>Or maybe they only sound different when people listen sighted and their
>>bias
>>comes into play.
>
> OK. I guess that "specs" can be tossed out as some sort of calibrating
> factor.
>
> Or are you saying that most speakers actually spec out the same (or
> close enough to be indistinguishable)?
Are you pretending to be stupid again?
EddieM
February 2nd 06, 01:24 AM
nyob123 wrote:
[quoted Harman white paper and Toole excerpts snipped]
... In the category of loudspeakers and rooms, however, there is no doubt that
differences exist and are clearly audible. To satisfy ourselves that the
additional
[...additional what?]