Arny Krueger

Why DBTs in audio do not deliver (was: Finally ... The Furutech

wrote in message


After checking the test description, I see that the test has a built-in
assumption that the reference is best, and asks the subject to rate the
codec in terms of the amount of "annoying degradation".


I think it is safe to say that the ABC/hr (ABC with hidden reference) test
is based on the widely accepted concept of "sonic accuracy".
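For readers unfamiliar with the protocol, here is a minimal sketch of how an ABC/hr trial is graded. The function and variable names are hypothetical (actual test software differs); the grading scale is the ITU-R five-grade impairment scale, running from 5.0 (imperceptible difference from the reference) down to 1.0 (very annoying).

```python
import random

# Hypothetical sketch of one ABC/hr trial. The subject hears the known
# reference A plus two unknowns, B and C; one is the hidden reference and
# one is the codec under test, with the assignment randomized and blind.
IMPAIRMENT_SCALE = (1.0, 5.0)  # ITU-R five-grade impairment scale

def diff_grade(grade_codec, grade_hidden_ref):
    """Difference grade: codec grade minus hidden-reference grade.

    Negative values mean the subject heard the codec as degraded.
    A positive value means the hidden reference was graded *worse*
    than the codec -- the situation discussed in this thread.
    """
    return grade_codec - grade_hidden_ref

def run_trial(grade_b, grade_c):
    # Blind, randomized assignment of the codec to slot B or C.
    codec_is_b = random.random() < 0.5
    if codec_is_b:
        return diff_grade(grade_b, grade_c)
    return diff_grade(grade_c, grade_b)
```

Because the known reference A pins the top of the scale, a subject who prefers the codec can express that only indirectly, by grading the hidden reference below 5.0.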

If a subject is consistently rating a hidden reference as degraded
under those conditions, I would think that what's happening is this:


The subject hears that particular codec as euphonic. That doesn't fit the
test's assumption, and there is no way to rate the codec as better than
the reference. The easy way out of that dilemma, rather than thinking it
through far enough to recognize what has happened, is to assume that the
reference must be the better one, as the tester says it should be, and
that if the subject thinks otherwise they must have misidentified it. So
they pick the better-sounding one as the reference and report degradation
on the other.


Within the concept of sonic accuracy, there is no way that anything can
sound better than perfect reproduction of the reference.

If you discard the data from the people who did this, rather than
recognizing what it means, wouldn't that create a severe bias in this
testing procedure against any codec that sounds euphonic to some
people?


Within the context of the sonic accuracy model, there is no such thing as
euphonic coloration. All audible changes represent inaccurate reproduction,
which is undesirable. This is reasonable because there is no universal
agreement about what constitutes euphonic coloration.
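The questioner's concern can be made concrete with a small Monte Carlo sketch. The numbers and distributions below are entirely invented, not real listening data; the point is only the direction of the effect: if some fraction of trials come from listeners who genuinely prefer the codec, and those trials are discarded as "misidentifications", the surviving mean difference grade is pushed downward.

```python
import random

random.seed(0)

def simulate(n_trials=10_000, euphonic_fraction=0.3):
    """Hypothetical model: most trials show mild degradation (negative
    diff grade), but a minority hear the codec as euphonic and grade
    the hidden reference lower (positive diff grade)."""
    kept, discarded = [], []
    for _ in range(n_trials):
        if random.random() < euphonic_fraction:
            d = random.uniform(0.0, 1.0)   # codec preferred
            discarded.append(d)            # dropped as a "misidentification"
        else:
            d = random.uniform(-1.5, 0.0)  # codec heard as degraded
            kept.append(d)
    all_mean = sum(kept + discarded) / n_trials
    kept_mean = sum(kept) / len(kept)
    return all_mean, kept_mean

all_mean, kept_mean = simulate()
# Discarding the positive trials makes the codec look strictly worse:
assert kept_mean < all_mean
```

Under these invented distributions the mean over all trials lands around -0.4 while the mean over the kept trials lands around -0.75; the exact figures are arbitrary, but the one-sided direction of the bias is what the question above is pointing at.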