Why DBTs in audio do not deliver (was: Finally ... The Furutech CD-do-something)



 
 
  #1  
Old July 1st 03, 04:02 PM
Bob Marcus

Darryl Miyaguchi wrote in message:
>
> For what it's worth, I have performed enough ABX testing to convince
> myself that it's possible for me to detect volume differences < 0.5 dB
> using music, so I doubt very highly that a group test would fail to
> show that 1.75 dB differences on a variety of different music are not
> audible using a DBT.
>

I think it's generally acknowledged that such differences are audible.
Mirabel seems to be arguing that, given what he claims is a 1.75 dB
difference, every member of Greenhill's panel should have scored at
or near perfection, and the fact that they didn't bespeaks some flaw
in Greenhill's methodology.

I'm not yet convinced that there really was a 1.75 dB difference here,
however. What Greenhill says about the 24-gauge cable is:

"Its 1.8-ohm resistance resulted in a 1.76-dB insertion loss with an
8-ohm resistive load."
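
As an aside, that 1.76-dB figure is just the voltage divider formed by the
cable's series resistance and the resistive load. A rough sketch of the
arithmetic in Python, using only the numbers quoted above:

    import math

    R_cable = 1.8   # series resistance of the 24-gauge cable, ohms
    R_load = 8.0    # purely resistive test load, ohms
    loss_db = 20 * math.log10(R_load / (R_load + R_cable))
    print(round(loss_db, 2))   # -1.76, i.e. a 1.76-dB insertion loss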

How does this translate to the specific test in question, which used a
recording of a male a cappella chorus (where the fundamental tones, at
least, range from 60 to less than 1000 Hz)?

Greenhill only level-matched a single pink noise test, and the only
times he discusses levels in the article appear to be in reference to
pink noise tests. E.g.:

"A 1- to 2-dB decrease in sound level was measured for the 24-gauge
wire during the pink noise listening tests."

I freely admit that I'm out of my element here, but I don't think we
can automatically assume that there was a similar difference in SPL
when listening to the choral music.

Hopefully, someone with some technical expertise can shed some further
light on this.

bob
  #2  
Old July 1st 03, 04:20 PM
Arny Krueger

"ludovic mirabel" > wrote in message
news:[email protected]

> (KikeG) wrote in message
> et>...


>>
(ludovic mirabel) wrote in message
>> news:<[email protected]>...
>>
>> I haven't read Greenhill's test report, but it seems there's some
>> controversy over what you are saying. Even if that were true, it
>> would suggest there were some problems with the test, since anyone in
>> my family could ABX that wideband level difference quite easily.


>> Enrique (Kike For friends)


> Apologies for dealing just with this for the time being. It concerns
> intellectual honesty, something I happen to be touchy about.
> The cable test's ("Stereo Review", Aug. '83) proctor and reporter
> was the immaculately "objectivist" L. Greenhill, still alive and writing
> for "The Stereophile".


Looking at the calendar, I see that in two months it will be 20 years since
this test was published. Considering editing and publishing delays, it's
already been 20 years since the test was done. If this were the only test
ever done in the history of man, or if all or the vast majority of the DBTs
done since then agreed with its results, then citing it would make some
sense. Regrettably, DBTs and even ABX tests involving level differences have
been done many times since then, and very many of those listening tests have
provided far more sensitive results.

Therefore, discussion of Greenhill's 1983 test as if it were indicative,
representative or binding on what's happening today is futile and
misleading.

Anybody who wishes to do DBTs to investigate the audibility of level
differences can do so easily using files they can freely download from the
PCABX web site. I think it would be interesting for people to report the
results they obtain with those files.
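
For anyone who would rather make their own comparison files than download
them, here is a rough sketch of the idea in Python (the soundfile package and
the file names are my own assumptions; this is not how the PCABX files were
produced):

    import soundfile as sf   # assumed to be installed; any WAV I/O library would do

    def make_attenuated_copy(infile, outfile, db=-0.5):
        # Write a copy of infile attenuated by `db` decibels (hypothetical helper).
        data, rate = sf.read(infile)
        gain = 10.0 ** (db / 20.0)        # -0.5 dB is a gain of about 0.944
        sf.write(outfile, data * gain, rate)

    # e.g. make_attenuated_copy("reference.wav", "reference_minus_half_dB.wav"),
    # then run the original and the copy through any ABX comparator.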

IME the audibility of level differences for people with normal hearing and
typical listening environments is closer to 0.5 dB than the 1.75 dB reported
by Greenhill.

Since individual listeners have different test environments and different
ears, their results can reasonably be expected to vary.

In fact, if the results didn't vary, it would suggest that there is
something wrong with the test procedure, since it would not be demonstrating
the existence of well-known differences in individual listening acuity.
However, it is equally well known that somewhere around level differences
of 0.2 dB, nobody hears nuttin'.

  #3  
Old July 1st 03, 11:42 PM
Darryl Miyaguchi

On 1 Jul 2003 15:10:59 GMT, ludovic mirabel wrote:

>We're talking about Greenhill's old test not because it is perfect but
>because no better COMPONENT COMPARISON tests are available. In fact
>none have been published since 1990, according to MTry's and Ramplemann's
>bibliographies.


I frequently see the distinction being made between audio components
and other audio related things (such as codecs) when it comes to
talking about DBT's. What is the reason for this?

In my opinion, there are two topics which should not be mixed up:

1) The effectiveness of DBT's for determining whether an audible
difference exists
2) The practical usefulness of DBT's for choosing one audio
product (component or codec) over another.

> I am not knowledgeable enough to decide on differences between
>your and Greenhill's interpretation of the methods and results.
>In my simplistic way I'd ask you to consider the following:
>PINK NOISE signal: 10 out of 11 participants got the maximum possible
>correct answers: 15 out of 15, i.e. 100%. ONE was 1 guess short. He got
>only 14 out of 15.
> When MUSIC was used as a signal, 1 (ONE) listener got 15 correct,
>1 got 14 and one got 12. The others had results ranging from 7 and 8
>through 10 to (1) 11.
> My question is: was there ANY significant difference between
>those two sets of results? Is there a *possibility* that music
>disagrees with ABX or ABX with music?


Even between two samples of music (no pink noise involved), I can
certainly believe that a listening panel might have more or less
difficulty in determining if they hear an audible difference. It
doesn't follow that music in general is interfering with the ability
to discriminate differences when using a DBT.

> I would appreciate it if you would try to make it simple, leaving
>"confidence levels" and such out of it. You're talking to ordinary
>audiophiles wanting to hear if your test will help them decide what
>COMPONENTS to buy.


See my first comments. It's too easy to mix up the topic of the
sensitivity of DBT's as instruments for detecting audible differences
with the topic of the practicality of using DBT's to choose hifi
hardware. The latter is impractical for the average audiophile.

> Who can argue with motherhood? The problem is that there are NO
>ABX COMPONENT tests being published- neither better nor worse, NONE.
> I heard of several audio societies considering them. No results.
>Not from the objectivist citadels: Detroit and Boston. Why? Did they
>pan out?


I can think of a couple of reasons:

1. It's expensive and time-consuming to perform this type of testing.
2. The audible differences are, in actuality, too subtle to hear, ABX
or not. Why bother with such a test?

Then there is the possibility that you seem to be focusing on,
ignoring the above two:

3. DBT's in general may be decreasing the ability to hear subtle
differences.

Which of the above reasons do you think are most likely?

>> Moving away from the question Greenhill was investigating (audible
>> differences between cables) and focusing only on DBT testing and
>> volume differences: it is trivial to perform a test of volume
>> difference, if the contention is being made that a DBT hinders the
>> listener from detecting 1.75 dB of volume difference. Especially if
>> the listeners have been trained specifically for detecting volume
>> differences prior to the test.
>> However, such an experiment would be exceedingly uninteresting, and I
>> have doubts it would sway the opinion of anybody participating in this
>> debate.
>>

> The volume difference was just a side effect of a comparison between
>cables.
>And yes, TRAINED people would do better than Greenhill's "expert
>audiophiles", i.e. rank amateurs just like us. Would some, though, do
>better than others and some remain untrainable? Just like us.


I have no doubt that there are some people who are unreliable when it
comes to performing a DBT. In a codec test using ABC/HR, if
somebody rates the hidden reference worse than the revealed reference
(both references are identical), his listening opinion is either
weighted less or thrown out altogether.
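
To make that screening rule concrete, here is a rough sketch in Python; the
1.0-to-5.0 scale is the usual ABC/HR grading scale, and the strict
keep-or-discard rule and the example data are only illustrative, since a real
test may down-weight rather than discard:

    def trustworthy(hidden_ref_rating, revealed_ref_rating=5.0):
        # The hidden reference is the same signal as the revealed reference,
        # so rating it lower means "hearing" a difference that cannot exist.
        return hidden_ref_rating >= revealed_ref_rating

    # Keep only listeners who never rate the hidden reference below the
    # revealed reference (illustrative data, strict keep/discard rule).
    ratings = {"listener A": [5.0, 5.0, 5.0], "listener B": [5.0, 3.5, 5.0]}
    kept = [name for name, scores in ratings.items()
            if all(trustworthy(r) for r in scores)]
    print(kept)   # ['listener A'] -- listener B's 3.5 flags him as unreliable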

>> For what it's worth, I have performed enough ABX testing to convince
>> myself that it's possible for me to detect volume differences < 0.5 dB
>> using music, so I doubt very highly that a group test would fail to
>> show that 1.75 dB differences on a variety of different music are not
>> audible using a DBT.
>>

> I can easily hear a 1 dB difference between channels, and a change
>of 1 dB.
>What I can't do is to have 80 dB changed to 81 dB, then be asked if
>the third unknown is 80 or 81 dB, and be consistently correct.
>Perhaps I could if I trained as much as you have done. Perhaps not.
>Some others could, some couldn't. We're all different. Produce a test
>which will be valid for all ages, genders, extents of training, innate
>musical and ABXing abilities, all kinds of musical experience and
>preference. Then prove BY EXPERIMENT that it works for COMPARING
>COMPONENTS.
>So that anyone can do it and, if he gets a null result, BE CERTAIN that
>with more training or different musical experience he would not hear
>what he did not hear before. And perhaps just get on with widening his
>musical experience and then compare again (with his eyes covered if he is
>marketing-susceptible).
>Let's keep it simple. We're audiophiles here. We're talking about
>MUSICAL REPRODUCTION DIFFERENCES between AUDIO COMPONENTS. I looked
>at your internet graphs. They mean zero to me. I know M. Levinsohn,
>Quad, Apogee, Acoustat, not the names of your codecs. You assure me
>that they are relevant. Perhaps. Let's see BY EXPERIMENT if they do.
>In the meantime enjoy your lab work.
>Ludovic Mirabel


Are you really telling me that you didn't understand the gist of the
group listening test I pointed you to?

For one thing, it says that although people have different individual
preferences about how they evaluate codec quality, as a group, they
can identify trends. This, despite the variety of training, hearing
acuity, audio equipment, and listening environment.

Another point is that it would be more difficult to identify trends if
such a study included the opinions of people who judge the hidden
reference to be worse than the revealed reference (simultaneously
judging the encoded signal to be the same as the revealed reference).
In other words, there are people whose listening opinions can't be
trusted, and the DBT is designed to identify them.

The last point is that I can see no reason why such procedures could
not (in theory, if perhaps not in practical terms) be applied to audio
components. Why don't you explain to me what the difference is (in
terms of sensitivity) between using DBT's for audio codecs and using
DBT's for audio components?

Darryl Miyaguchi
  #5  
Old July 1st 03, 11:44 PM
Bob Marcus

ludovic mirabel wrote in message:

> PINK NOISE signal: 10 out of 11 participants got the maximum possible
> correct answers: 15 out of 15, i.e. 100%. ONE was 1 guess short. He got
> only 14 out of 15.
> When MUSIC was used as a signal, 1 (ONE) listener got 15 correct,
> 1 got 14 and one got 12. The others had results ranging from 7 and 8
> through 10 to (1) 11.
> My question is: was there ANY significant difference between
> those two sets of results? Is there a *possibility* that music
> disagrees with ABX or ABX with music?


I suspect a major reason it's more difficult to hear level differences
in music is that the actual level is constantly changing. But of
course this effect wouldn't be limited to listening in ABX tests. It
would be harder to discern level differences in *any* comparison
involving music.

<snip>

> The problem is that there are NO
> ABX COMPONENT tests being published- neither better nor worse, NONE.


Possible explanations for this:

1) People did further tests, but didn't get them published because
they arrived at the same result, and no one publishes "old news."

2) People stopped trying because they had no reason to believe they
*would* get different results.

<snip>
> >

> I can easily hear a 1 dB difference between channels, and a change
> of 1 dB.
> What I can't do is to have 80 dB changed to 81 dB, then be asked if
> the third unknown is 80 or 81 dB, and be consistently correct.


Two questions:

1) What do you mean by "consistent"? 100% of the time, or just with
statistical reliability?

2) Were you able to switch instantaneously between them? Audiophiles
pooh-pooh this, but it's certainly easier to hear level differences
when you can switch instantaneously.
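
To put "statistical reliability" in concrete terms: with 15 trials per
listener, as in Greenhill's test, the odds of scoring k or better by pure
guessing are a simple binomial sum. A rough Python sketch (my own arithmetic,
not Greenhill's analysis):

    from math import comb

    def p_by_chance(k, n=15, p=0.5):
        # Probability of getting k or more correct out of n by guessing alone.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    for k in (15, 14, 12, 11, 10):
        print(f"{k}/15: {p_by_chance(k):.4f}")
    # 15/15 and 14/15 are far beyond chance; 11/15 (p ~ 0.06) and 10/15
    # (p ~ 0.15) are not far from what guessing alone would produce.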

> Perhaps I could if I trained as much as you have done. Perhaps not.
> Some others could, some couldn't. We're all different. Produce a test
> which will be valid for all ages, genders, extents of training, innate
> musical and ABXing abilities, all kinds of musical experience and
> preference.


You're assuming that if you can't hear a difference that some other
people can hear, then the test isn't right for you. But maybe you just
can't hear that difference. As you say, we are all different.

> Then prove BY EXPERIMENT that it works for COMPARING
> COMPONENTS.


How would you prove such a thing?

> So that anyone can do it and if he gets a null result BE CERTAIN that
> with more training or different musical experience he would not hear
> what he did not hear before.


The only way to be certain of this would be to train himself, and then
take the test again. OTOH, if there is no documented case of anyone
ever hearing such a difference, it might be a waste of time to try to
find out if you are the exception.

> And perhaps just get on with widening his
> musical experience


I'm not aware of any evidence that musical experience is particularly
helpful in these kinds of tests. That's not what "training" is about
in this context.

>and then compare again (with his eyes covered if he is
> marketing-susceptible)


If??? Everyone is susceptible to sighted bias (which has nothing
necessarily to do with "marketing").

> Let's keep it simple. We're audiophiles here. We're talking about
> MUSICAL REPRODUCTION DIFFERENCES between AUDIO COMPONENTS. I looked
> at your internet graphs. They mean zero to me. I know M. Levinsohn,
> Quad, Apogee, Acoustat not the names of your codecs. You assure me
> that they are relevant. Perhaps. Let's see BY EXPERIMENT if they do.


So far as I can tell, the only experiment that would satisfy you would
be one that confirmed your own beliefs about what is and is not
audible. I'm afraid we can't do that.

bob
  #6  
Old July 2nd 03, 04:15 AM
Steven Sullivan

Darryl Miyaguchi > wrote:
>>Let's keep it simple. We're audiophiles here. We're talking about
>>MUSICAL REPRODUCTION DIFFERENCES between AUDIO COMPONENTS. I looked
>>at your internet graphs. They mean zero to me. I know M. Levinsohn,
>>Quad, Apogee, Acoustat not the names of your codecs. You assure me
>>that they are relevant. Perhaps. Let's see BY EXPERIMENT if they do.
>>In the meantime enjoy your lab work.
>>Ludovic Mirabel


> Are you really telling me that you didn't understand the gist of the
> group listening test I pointed you to?


Whether he realizes it or not, he's telling you *he* doesn't comprehend them.

--
-S.
  #7  
Old July 2nd 03, 05:42 AM
Nousaine

Darryl Miyaguchi
wrote:

In some parts I will reply to the post to which Mr Miyaguchi is replying:

>On 1 Jul 2003 15:10:59 GMT, ludovic mirabel wrote:
>
>>We're talking about Greenhill's old test not because it is perfect but
>>because no better COMPONENT COMPARISON tests are available. In fact
>>none have been published since 1990, according to MTry's and Ramplemann's
>>bibliographies.


This is simply not true. I have personally published two double blind tests
subsequent to 1990, one of which covered three different wires.

>
>I frequently see the distinction being made between audio components
>and other audio related things (such as codecs) when it comes to
>talking about DBT's. What is the reason for this?
>
>In my opinion, there are two topics which should not be mixed up:
>
>1) The effectiveness of DBT's for determining whether an audible
>difference exists
>2) The practical usefulness of DBT's for choosing one audio
>product (component or codec) over another.


Actually if nominally competent components such as wires, parts, bits and
amplifiers have never been shown to materially affect the sound of reproduced
music in normally reverberant conditions, why would ANYONE need to conduct more
experimentation, or any listening test, to choose between components? Simply
choose the one with the other non-sonic characteristics (features, price,
terms, availability, cosmetics, style...) that suit your fancy.

Indeed 20 years ago, when I still had a day job, Radio Shack often had the
"perfect" characteristic to guide purchase, which was "open on Sunday."
>> I am not knowledgeable enough to decide on differences between
>>your and Greenhill's interpretation of the methods and results.
>>In my simplistic way I'd ask you to consider the following:
>>PINK NOISE signal: 10 out of 11 participants got the maximum possible
>>correct answers: 15 out of 15, i.e. 100%. ONE was 1 guess short. He got
>>only 14 out of 15.
>> When MUSIC was used as a signal, 1 (ONE) listener got 15 correct,
>>1 got 14 and one got 12. The others had results ranging from 7 and 8
>>through 10 to (1) 11.
>> My question is: was there ANY significant difference between
>>those two sets of results? Is there a *possibility* that music
>>disagrees with ABX or ABX with music?


No: it just means that with the right set of music, 2 dB is at the threshold.
Don't forget that listening position affects this stuff too. Also, Mr Atkinson
would say that perhaps the lower-scoring subjects didn't have personal control
of the switching.

>Even between two samples of music (no pink noise involved), I can
>certainly believe that a listening panel might have more or less
>difficulty in determining if they hear an audible difference. It
>doesn't follow that music in general is interfering with the ability
>to discriminate differences when using a DBT.


Actually it simply shows that pink noise and other test signals are the most
sensitive of programs. It may be possible to divulge a 'difference' with noise
that would never be encountered with any known program material.

It's also possible that certain programs, such as Arny Krueger's special
signals, might disclose differences that may never be encountered with
commercially available music (or other) programs. So?

>> I would appreciate it if you would try to make it simple, leaving
>>"confidence levels" and such out of it. You're talking to ordinary
>>audiophiles wanting to hear if your test will help them decide what
>>COMPONENTS to buy.


As before: you have never been precluded from making any purchase decision
by scientific evidence; why should any disclosure affect that now or in the
future?

Examination of the extant body of controlled listening tests will give any
enthusiast enough information to aid in making good decisions. Even IF the
existing evidence shows that wire is wire (and it does), how does that preclude
any person from making any purchase decision? In my way of thinking it just
might be useful for a given individual to know what has gone before (and what
hasn't).

I still don't see how this could do anything but IMPROVE decision-making.

>See my first comments. It's too easy to mix up the topic of the
>sensitivity of DBT's as instruments for detecting audible differences
>with the topic of the practicality of using DBT's to choose hifi
>hardware. The latter is impractical for the average audiophile.


No, it's not. Just because 0-60 times, skid-pad figures and EPA mileage tests
cannot be run by the typical individual doesn't mean that they cannot be used
to improve decision-making. Likewise, the body of controlled listening test
results can be very useful to any individual who wishes to use it to guide
decisions.

Otherwise the only information one has is "guidance" from sellers, anecdotal
reports and "open" listening tests. The latter, of course, is quite subject to
non-sonic influence.

So IMO, a person truly interested in maximizing the sonic-quality throughput of
his system simply MUST examine the results of bias controlled listening tests
OR fall prey to non-sonic biasing factors, even if they are inadvertent.

>
>> Who can argue with motherhood? The problem is that there are NO
>>ABX COMPONENT tests being published- neither better nor worse, NONE.
>> I heard of several audio societies considering them. No results.
>>Not from the objectivist citadels: Detroit and Boston. Why? Did they
>>pan out?


Given the two dozen controlled listening tests of power amplifiers published
through 1991, doesn't it seem that no one needs to conduct more? Wires? The last
test I published was in 1995. Not recent enough?

Why not? No manufacturer has EVER produced a single bias controlled experiment
that showed their wires had a sound of their own in over 30 years. Why should
one expect one now?

I certainly can't do it, although I've given it my level best (no pun intended).
IOW, I can't produce an experiment that shows nominally competent wires
ain't wires ... 'cuz they ain't.

>I can think of a couple of reasons:
>
>1. It's expensive and time-consuming to perform this type of testing.
>2. The audible differences are, in actuality, too subtle to hear, ABX
>or not. Why bother with such a test?


Why bother performing a sound quality "test" that the manufacturers of the
equipment can't produce? IF amps ain't amps, wires ain't wires and parts ain't
parts, then why haven't the makers and sellers of this stuff produced repeatable
bias controlled listening tests that show this to be untrue?

>Then there is the possibility that you seem to be focusing on,
>ignoring the above two:
>
>3. DBT's in general may be decreasing the ability to hear subtle
>differences.


Actually they preclude the ability to "hear" non-sonic differences.

>Which of the above reasons do you think are most likely?
>
>>> Moving away from the question Greenhill was investigating (audible
>>> differences between cables) and focusing only on DBT testing and
>>> volume differences: it is trivial to perform a test of volume
>>> difference, if the contention is being made that a DBT hinders the
>>> listener from detecting 1.75 dB of volume difference. Especially if
>>> the listeners have been trained specifically for detecting volume
>>> differences prior to the test.
>>> However, such an experiment would be exceedingly uninteresting, and I
>>> have doubts it would sway the opinion of anybody participating in this
>>> debate.
>>>

>> The volume difference was just a side effect of a comparison between
>>cables.
>>And yes, TRAINED people would do better than Greenhill's "expert
>>audiophiles", i.e. rank amateurs just like us. Would some, though, do
>>better than others and some remain untrainable? Just like us.


I think Ludovic is "untrainable" because he will accept only answers he already
believes are true.

>I have no doubt that there are some people who are unreliable when it
>comes to performing a DBT. In a codec test using ABC/HR, if
>somebody rates the hidden reference worse than the revealed reference
>(both references are identical), his listening opinion is either
>weighted less or thrown out altogether.


What you are describing is 'reverse significance', which is typically an
inadvertent form of internal bias.

>>> For what it's worth, I have performed enough ABX testing to convince
>>> myself that it's possible for me to detect volume differences < 0.5 dB
>>> using music, so I doubt very highly that a group test would fail to
>>> show that 1.75 dB differences on a variety of different music are not
>>> audible using a DBT.
>>>

>> I can easily hear a 1 dB difference between channels, and a change
>>of 1 dB.
>>What I can't do is to have 80 dB changed to 81 dB, then be asked if
>>the third unknown is 80 or 81 dB, and be consistently correct.
>>Perhaps I could if I trained as much as you have done. Perhaps not.
>>Some others could, some couldn't. We're all different. Produce a test
>>which will be valid for all ages, genders, extents of training, innate
>>musical and ABXing abilities, all kinds of musical experience and
>>preference. Then prove BY EXPERIMENT that it works for COMPARING
>>COMPONENTS.
>>So that anyone can do it and, if he gets a null result, BE CERTAIN that
>>with more training or different musical experience he would not hear
>>what he did not hear before. And perhaps just get on with widening his
>>musical experience and then compare again (with his eyes covered if he is
>>marketing-susceptible).
>>Let's keep it simple. We're audiophiles here. We're talking about
>>MUSICAL REPRODUCTION DIFFERENCES between AUDIO COMPONENTS. I looked
>>at your internet graphs. They mean zero to me. I know M. Levinsohn,
>>Quad, Apogee, Acoustat, not the names of your codecs. You assure me
>>that they are relevant. Perhaps. Let's see BY EXPERIMENT if they do.
>>In the meantime enjoy your lab work.
>>Ludovic Mirabel

>
>Are you really telling me that you didn't understand the gist of the
>group listening test I pointed you to?
>
>For one thing, it says that although people have different individual
>preferences about how they evaluate codec quality, as a group, they
>can identify trends. This, despite the variety of training, hearing
>acuity, audio equipment, and listening environment.
>
>Another point is that it would be more difficult to identify trends if
>such a study included the opinions of people who judge the hidden
>reference to be worse than the revealed reference (simultaneously
>judging the encoded signal to be the same as the revealed reference).
>In other words, there are people whose listening opinions can't be
>trusted, and the DBT is designed to identify them.


That result identifies a form of experimental bias, does it not?

>The last point is that I can see no reason why such procedures could
>not (in theory, if perhaps not in practical terms) be applied to audio
>components. Why don't you explain to me what the difference is (in
>terms of sensitivity) between using DBT's for audio codecs and using
>DBT's for audio components?
>
>Darryl Miyaguchi


There is no difference. It seems to me that this poster may have never taken a
bias controlled listening test or, if he has, the results didn't fit with
previously held expectations. It's much easier to argue with the existing
evidence than to prove that you can hear things that no human has been able to
demonstrate when not peeking.

As I've said before, there are many proponents of a high-end sound for wires,
amps and parts ... but, so far, no one (in over 30 years) has ever produced a
single repeatable bias controlled experiment that shows that nominally competent
products in a normally reverberant environment (listening room) have any sonic
contribution of their own.

Nobody! Never! How about some evidence? I'll believe in Bigfoot ... just show
me the body!
  #8  
Old July 2nd 03, 04:27 PM
KikeG

ludovic mirabel wrote in message:

> We're talking about Greenhill's old test not because it is perfect but
> because no better COMPONENT COMPARISON tests are available. In fact
> none have been published since 1990, according to MTry's and Ramplemann's
> bibliographies.


I gave a link to one in my previous message, related to soundcards and
a DAT (
http://www.hydrogenaudio.org/index.p...hl=pitch&st=0&
). It revealed some audible differences. Soundcards and DATs are audio
components, aren't they?

There's another one concerning just a soundcard, here:
http://www.hydrogenaudio.org/index.p...d6a76f1f8d0738
It finally revealed no audible differences.

About Greenhill's test and level differences:

> PINK NOISE signal: 10 out of 11 participants got the maximum possible
> correct answers: 15 out of 15, i.e. 100%. ONE was 1 guess short. He got
> only 14 out of 15.
> When MUSIC was used as a signal, 1 (ONE) listener got 15 correct,
> 1 got 14 and one got 12. The others had results ranging from 7 and 8
> through 10 to (1) 11.
> My question is: was there ANY significant difference between
> those two sets of results? Is there a *possibility* that music
> disagrees with ABX or ABX with music?
> I would appreciate it if you would try to make it simple, leaving
> "confidence levels" and such out of it. You're talking to ordinary
> audiophiles wanting to hear if your test will help them decide what
> COMPONENTS to buy.


Citing Greenhill's article, referring to the 24-gauge cable: "Its 1.8-ohm
resistance resulted in a 1.76-dB insertion loss with an 8-ohm
resistive load".

I don't know if this has been addressed before, but this 1.76 dB loss
corresponds to a purely resistive load. Speakers are quite different
from purely resistive loads, in the sense that their impedance varies
with frequency and is higher than the nominal value over most of the
spectrum. So, these 1.8 ohms in series with a real-world speaker would
result in an overall attenuation below 1.76 dB for a wideband signal
spanning the audible spectrum. Also, the attenuation will vary with
frequency: maximum attenuation occurs at the frequencies where the
speaker impedance is at its minimum, and there is little attenuation at
frequencies where the speaker impedance is at its maximum. So, the
overall attenuation will depend on the spectrum of the music used.
There's a possibility that the choral music has most of its content at
frequencies where the attenuation was not high, but it's difficult to
know without access to the actual music used and the speaker impedance
curve.
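
To illustrate, here is a rough sketch in Python; the impedance values are
invented for illustration (they are not the speaker Greenhill used), and the
magnitude |Z| is treated as if it were purely resistive, ignoring phase:

    import math

    R_SERIES = 1.8   # ohms, the 24-gauge cable from the article

    # Hypothetical loudspeaker impedance magnitudes (ohms) at a few frequencies.
    speaker_z = {60: 5.5, 120: 14.0, 250: 7.0, 500: 6.0, 1000: 8.0, 4000: 12.0}

    for freq in sorted(speaker_z):
        z = speaker_z[freq]
        loss_db = 20 * math.log10(z / (z + R_SERIES))
        print(f"{freq:5d} Hz: {loss_db:5.2f} dB")
    # Loss is greatest where the impedance dips (about -2.5 dB at 5.5 ohms) and
    # smallest where it peaks (about -1.1 dB at 14 ohms), so the level difference
    # actually heard depends on where the music's energy falls.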

That said, I tried yesterday to ABX a 1.7 dB wideband (frequency-constant)
level attenuation on a musical sample. Result: 60/60 in a
couple of minutes, not a single miss. It is obvious to hear, but one
could argue I'm trained.

Despite that, I claim that any person who does not have serious
hearing problems would be able to ABX a 1.7 dB wideband level
difference on any kind of real-world music, trained or not in
ABX testing, after just a couple of minutes spent explaining the basics
of ABX testing to him.

Now, you make the point that you have to be trained in ABXing to be
good at it. I say that you have to be trained in *any* method you use
to be good at it. Also, ABX testing per se requires little training.
What takes more training is learning to detect some kinds of
differences reliably, whether you use ABX or not. Serious ABX training is
required only for detecting very subtle differences, just as in
every other area where high performance is required.

And finally, an analogy: you can't evaluate driving comfort in cars
without driving them, so you have to learn how to drive in order to
evaluate driving comfort. Driving a car is the only reliable way to
evaluate driving comfort, whether you are good at it or not. And
so on...
  #10  
Old July 3rd 03, 03:59 PM
S888Wheel

Tom said

>
>Actually if nominally competent components such as wires, parts, bits and
>amplifiers have never been shown to materially affect the sound of reproduced
>music in normally reverberant conditions, why would ANYONE need to conduct more
>experimentation, or any listening test, to choose between components? Simply
>choose the one with the other non-sonic characteristics (features, price,
>terms, availability, cosmetics, style...) that suit your fancy.
>


That is the $64,000 "if".

Tom said

>
>Examination of the extant body of controlled listening tests will give any
>enthusiast enough information to aid in making good decisions. Even IF the
>existing evidence shows that wire is wire (and it does), how does that preclude
>any person from making any purchase decision? In my way of thinking it just
>might be useful for a given individual to know what has gone before (and what
>hasn't).


Well, so far I don't see it the way you do. I must at this point thank you for
the articles on this subject you sent me when I asked for the alleged body of
empirical evidence that proved your position on the audible differences of
amplifiers. The "body of evidence" you sent me that constituted actual
evidence, raw data, was not much of a body. Only two articles out of the six
you sent had raw data ("Can you trust your ears" by Tom Nousaine and "Do all
amplifiers sound the same" by David Clark), and only the test you conducted had
it in a useful table which could allow for the examination of trends such as
learning curves or fatigue curves. First, this is not much of a body of
evidence. Second, if we are to draw conclusions from the results, we would have
to conclude that some people can hear differences between amps and some amps
sound different than some other amps. Of course it would be a mistake to draw
conclusions from those tests by themselves because they simply are not that
conclusive. If what you sent me is the best evidence out there, and if what you
sent me is any significant portion of the much talked about "extant body of
controlled listening tests available", then I don't see how anyone can draw any
strong conclusions one way or another.

Tom said

>
>So IMO, a person truly interested in maximizing the sonic-quality throughput of
>his system simply MUST examine the results of bias controlled listening tests
>OR fall prey to non-sonic biasing factors, even if they are inadvertent.
>


I examined the results contained in the articles you sent me and do not
find them conclusive. Unfortunately, four of the six articles you sent me had no
raw data to examine and only offered conclusions. Given the fact that the two
articles that did offer raw data drew conclusions that I find questionable, I
have trouble feeling confident about the conclusions drawn in the other
articles, which are missing the raw data. So I find the evidence to date that I
have seen less than helpful in making purchase decisions.
 



