Darryl Miyaguchi
 
Default Why DBTs in audio do not deliver (was: Finally ... The Furutech CD-do-something)

On 1 Jul 2003 15:10:59 GMT, (ludovic mirabel)
wrote:

> We're talking about Greenhill's old test not because it is perfect but
> because no better COMPONENT COMPARISON tests are available. In fact
> none have been published, according to MTry's and Ramplemann's
> bibliographies, since 1990.


I frequently see a distinction being made between audio components
and other audio-related things (such as codecs) when it comes to
talking about DBT's. What is the reason for this?

In my opinion, there are two topics which should not be mixed up:

1) The effectiveness of DBT's for determining whether an audible
difference exists.
2) The practical usefulness of DBT's for choosing one audio
producer (component or codec) over another.

> I am not knowledgeable enough to decide on differences between
> your and Greenhill's interpretation of the methods and results.
> In my simplistic way I'd ask you to consider the following:
> PINK NOISE signal: 10 out of 11 participants got the maximum possible
> correct answers: 15 out of 15, i.e. 100%. ONE was 1 guess short. He got
> only 14 out of 15.
> When MUSIC was used as a signal 1 (ONE) listener got 15 correct,
> 1 got 14 and one 12. The others had results ranging from 7 and 8
> through 10 to (1) 11.
> My question is: was there ANY significant difference between
> those two sets of results? Is there a *possibility* that music
> disagrees with ABX or ABX with music?


Even between two samples of music (no pink noise involved), I can
certainly believe that a listening panel might have more or less
difficulty in determining if they hear an audible difference. It
doesn't follow that music in general is interfering with the ability
to discriminate differences when using a DBT.
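Whether those two sets of scores differ "significantly" can be put in numbers with a one-sided binomial test: the probability of getting at least k of 15 trials right by pure guessing. A minimal Python sketch (the function name is mine, not anything from the thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of scoring at least `correct` out of `trials`
    in an ABX run by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Scores of the kind quoted above, 15 trials each:
for score in (15, 14, 12, 11, 10, 8):
    print(f"{score}/15: p = {abx_p_value(score, 15):.4f}")
```

By this yardstick the pink-noise scores (15/15, 14/15) are overwhelmingly significant, 12/15 still clears p < 0.05 (p ≈ 0.018), but 11/15 (p ≈ 0.059) and below do not.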

> I would appreciate it if you would try and make it simple, leaving
> "confidence levels" and such out of it. You're talking to ordinary
> audiophiles wanting to hear if your test will help them decide what
> COMPONENTS to buy.


See my first comments. It's too easy to mix up the topic of the
sensitivity of DBT's as instruments for detecting audible differences
with the topic of the practicality of using DBT's to choose hifi
hardware. The latter is impractical for the average audiophile.

> Who can argue with motherhood? The problem is that there are NO
> ABX COMPONENT tests being published - neither better nor worse, NONE.
> I heard of several audio societies considering them. No results.
> Not from the objectivist citadels: Detroit and Boston. Why? Did they
> pan out?


I can think of a couple of reasons:

1. It's expensive and time consuming to perform this type of testing.
2. The audible differences are, in actuality, too subtle to hear, ABX
or not. Why bother with such a test?

Then there is the possibility that you seem to be focusing on,
ignoring the above two:

3. DBT's in general may be decreasing the ability to hear subtle
differences.

Which of the above reasons do you think is most likely?

>> Moving away from the question Greenhill was investigating (audible
>> differences between cables) and focusing only on DBT testing and
>> volume differences: it is trivial to perform a test of volume
>> difference, if the contention is being made that a DBT hinders the
>> listener from detecting 1.75 dB of volume difference. Especially if
>> the listeners have been trained specifically for detecting volume
>> differences prior to the test.
>> However, such an experiment would be exceedingly uninteresting, and I
>> have doubts it would sway the opinion of anybody participating in this
>> debate.

> The volume difference was just a by-effect of a comparison between
> cables.
> And yes, TRAINED people would do better than Greenhill's "Expert
> audiophiles", i.e. rank amateurs just like us. Would some, though, do
> better than the others and some remain untrainable? Just like us.


I have no doubt that there are some people who are unreliable when it
comes to performing a DBT. In a codec test using ABC/HR, if
somebody rates the hidden reference worse than the revealed reference
(both references are identical), his listening opinion is either
weighted less or thrown out altogether.
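The screening rule described above can be sketched in a few lines. This is only an illustration assuming a simple numeric quality scale; the `Rating` type and `trustworthy` helper are hypothetical names, not from any real ABC/HR tool:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    hidden_ref: float    # listener's score for the hidden reference
    revealed_ref: float  # listener's score for the revealed reference
    coded: float         # listener's score for the encoded signal

def trustworthy(r: Rating) -> bool:
    # The hidden and revealed references are the same signal, so a
    # listener who rates the hidden one *worse* was audibly guessing.
    return r.hidden_ref >= r.revealed_ref

ratings = [Rating(5.0, 5.0, 3.5),   # consistent listener: kept
           Rating(2.5, 5.0, 5.0)]   # rated the reference worse: discarded
kept = [r for r in ratings if trustworthy(r)]
```

In practice a tolerance or down-weighting scheme might replace the hard cut, as the post notes ("weighted less or thrown out altogether").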

For what it's worth, I have performed enough ABX testing to convince
myself that it's possible for me to detect volume differences of 0.5 dB
using music, so I doubt very highly that a group test would fail to
show that 1.75 dB differences on a variety of different music are
audible using a DBT.
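To make "enough ABX testing" concrete: the score needed before a run is unlikely to be guesswork follows from the same binomial arithmetic. A small sketch (the p < 0.05 criterion is my assumption, chosen for illustration):

```python
from math import comb

def min_correct(trials: int, alpha: float = 0.05) -> int:
    """Smallest score out of `trials` whose one-sided binomial
    p-value under pure guessing (p = 0.5) falls below `alpha`."""
    total = 2 ** trials
    tail = 0
    for k in range(trials, -1, -1):
        tail += comb(trials, k)
        if tail / total >= alpha:
            return k + 1
    return 0

for n in (10, 15, 20, 25):
    print(f"{n} trials: need {min_correct(n)} correct")
# 10 trials -> 9, 15 -> 12, 20 -> 15, 25 -> 18
```

So a 15-trial run needs 12 correct; longer runs let proportionally weaker hit rates (90%, 80%, 75%, 72%) reach the same confidence.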

> I can easily hear a 1 dB difference between channels, and a change
> of 1 dB.
> What I can't do is to have 80 dB changed to 81 dB, then be asked if
> the third unknown is 80 or 81 dB, and be consistently correct.
> Perhaps I could if I trained as much as you have done. Perhaps not.
> Some others could, some couldn't. We're all different. Produce a test
> which will be valid for all ages, genders, extents of training, innate
> musical and ABXing abilities, all kinds of musical experience and
> preference. Then prove BY EXPERIMENT that it works for COMPARING
> COMPONENTS.
> So that anyone can do it and if he gets a null result BE CERTAIN that
> with more training or different musical experience he would not hear
> what he did not hear before. And perhaps just get on widening his
> musical experience and then compare again (with his eyes covered if he
> is marketing-susceptible).
> Let's keep it simple. We're audiophiles here. We're talking about
> MUSICAL REPRODUCTION DIFFERENCES between AUDIO COMPONENTS. I looked
> at your internet graphs. They mean zero to me. I know M. Levinsohn,
> Quad, Apogee, Acoustat, not the names of your codecs. You assure me
> that they are relevant. Perhaps. Let's see BY EXPERIMENT if they are.
> In the meantime enjoy your lab work.
> Ludovic Mirabel


Are you really telling me that you didn't understand the gist of the
group listening test I pointed you to?

For one thing, it says that although people have different individual
preferences about how they evaluate codec quality, as a group, they
can identify trends. This, despite the variety of training, hearing
acuity, audio equipment, and listening environment.

Another point is that it would be more difficult to identify trends if
such a study included the opinions of people who judge the hidden
reference to be worse than the revealed reference (simultaneously
judging the encoded signal to be the same as the revealed reference).
In other words, there are people whose listening opinions can't be
trusted, and the DBT is designed to identify them.

The last point is that I can see no reason why such procedures could
not (in theory, if perhaps not in practical terms) be applied to audio
components. Why don't you explain to me what the difference is (in
terms of sensitivity) between using DBT's for audio codecs and using
DBT's for audio components?

Darryl Miyaguchi