Mkuller
 
Default A comparative versus evaluative, double-blind vs. sighted control test

wrote:
Such a 'verifying' test has not been done for a very simple reason:

The blind protocols have been SHOWN to be sensitive down to the lowest
instantaneous loudness that results in a signal at the auditory nerve.

It is a waste of time to do a 'verifying' test when the test validates itself,
based on already well-known research data.


This sounds like the old "don't confuse me with facts, I've already made up my
mind" argument. You guys seem positive you're right in spite of being a small
minority in the audiophile universe. Isn't there a chance you are mistaken?
Until you admit that possibility, I wouldn't expect much help in coming up with
some type of *verification test* for DBTs in audio. In which case we can just
continue the endless debate forever ("perfect DBTs forever" - apologies to
Sony).

Sure, DBTs have been shown to be sensitive down to the threshold of human
hearing when the *one-dimensional* artifact being tested for is *known* and
*quantified* and the subjects are *trained* to recognize it. In an audio
component DBT, *none* of these factors is present. That is a very different use
of the test from what is seen in published clinical research studies.

In audio, the test is *open-ended*: what the listeners are listening for (a
*multi-dimensional difference*) is *unknown* and *not quantified*, and there is
no training of the subjects, because they cannot be trained to hear something
that might not be there. Music is the only meaningful program source, yet it is
recognized by clinical researchers to be insensitive to audible differences in
DBTs.
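For readers unfamiliar with how such listening tests are usually scored, here is a minimal sketch (my own illustration, not from the original discussion) of the exact one-sided binomial test commonly applied to an ABX trial series; the function name abx_p_value and the 16-trial example are hypothetical.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring
    at least `correct` out of `trials` by pure guessing (p = 0.5
    per trial). A small p-value suggests the listener was not
    merely guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct identifications out of 16 ABX trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # about 0.0384, below the conventional 0.05 cutoff
```

Note that this only quantifies *whether* a difference was detected under the test's conditions; it says nothing about whether the test conditions themselves mask an audible difference, which is exactly the point in dispute here.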

Until there is a definitive *verification test* for DBTs between audio
components using music, there is no proof that a DBT does not mask or obscure
the very audible differences it is being used to detect.
Regards,
Mike