  #79
Howard Ferstler
 
Subject: science vs. pseudo-science

(ludovic mirabel) wrote in message news:Ilpcb.570671$o%2.255805@sccrnsc02...
"normanstrong" wrote in message news:UBlcb.425745$cF.131919@rwcrnsc53...
"ludovic mirabel" wrote in message news:5s%bb.555712$Ho3.96892@sccrnsc03...
"All Ears" wrote in message news:97Fbb.406782$cF.126279@rwcrnsc53...
big snip

See below:
I'm at somewhat of a disadvantage, never having read the article under
consideration. Nevertheless, it seems that we're talking about a test
in which 15 trained individuals each made 5 attempts to recognize a
2 dB distortion--for a total of 75 attempts. (I hope I got this
right.)

Some subjects aced the test, getting all 5 right. According to Mr.
Mirabel, this was because those listeners actually heard the
difference, while the ones that got only 1 right out of 5 tries were.
. . .were what? What can we say about those individuals, in the same
way that we credited the perfect scorers with more sensitive hearing?
After all, even guessing at random without listening at all will, on
average, give a better score than 1 out of 5. Were these people just
unlucky? If so, couldn't we equally say that the perfect scorers were
just lucky?

If I wanted to find out whether the perfect scorers really were just
lucky, I'd run the test again, this time having each of those
individuals complete a total of 75 trials. If they got 61% correct,
then they're no better than the average subject from the first round.
Finally, I'd pick the single subject who did the very best and have
him run the test again, this time all 75 trials. My guess is that he
would be right 61% of the time, which would validate the original
supposition that the high scores were luck.

Norm Strong
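[Editor's note: Norm's luck argument is easy to check with a quick binomial calculation. The sketch below (plain Python; the figures--15 subjects, 5 trials each, a 61% group score--are taken from the post) assumes each two-choice ABX trial is an independent coin flip for a listener who hears no difference:]

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# A pure guesser has a 50/50 shot per two-choice ABX trial, so the
# chance of acing all 5 trials by luck alone is:
p_ace = 0.5 ** 5  # 1/32, about 3%

# But with 15 subjects all guessing, the chance that AT LEAST ONE of
# them aces the short test is substantial:
p_any_ace = 1 - (1 - p_ace) ** 15  # roughly 0.38

# Norm's proposed 75-trial retest is far more discriminating: a pure
# guesser scores 61% or better (46+ of 75) only a few percent of the time.
p_61_of_75 = sum(binom_pmf(k, 75, 0.5) for k in range(46, 76))
```

In other words, with 15 short runs a perfect score or two is expected by chance, while 61% over 75 trials would be hard to dismiss as luck--which is exactly why the long retest is the interesting experiment.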


Norman, I agree with you. The interesting results are those of the
better performers: either THEY heard it or they didn't.
Everything possible should have been done to find out. Their results
should have been followed up until no doubt remained either way. You
can guess what you like, but guesses don't replace statistics.
My point is exactly that Masters, Clark, etc. were not interested
enough.
Neither was their publisher.
As a result we get a homogenised, blended result proving that Mr.
Average rules.


The limitations of our poor "Mr. Average" notwithstanding, even you
will have to admit that the ABX tests done by Masters, Clark, Green,
etc. have indicated that the differences people did manage to hear (by
your reckoning, at least) were small by the standards most listeners
would apply.

In other words, this debate basically involves hair-splitting
differences. Now, I am well aware that the typical high-ender is often
obsessed with hair-splitting differences (I am that way myself,
particularly in the product reviews I have done, though mainly with
speaker, surround-processor, and subwoofer performance), but even such
individuals will have to admit that said differences would be very
hard to hear during typical, "music for enjoyment" listening sessions.
If they were not hard to hear, the people taking ABX tests with amps
and wires would not have to struggle so much with (and supposedly be
all stressed out by) the test procedures. Serious differences would be
spotted immediately.

Anyway, in other parts of the thread you go on and on about what
Clark, Masters, Green, etc. have done, and you debate endlessly about
what it all means. Why not just do an ABX test yourself (with a real
ABX device) and see what YOU come up with? Do the work and see whether
or not the differences you hear sighted (between a known A and a known
B) show up when you switch to X. I simply cannot see what the big deal
is when it comes to the ABX issue.
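[Editor's note: for readers unfamiliar with the mechanics of "switching to X," the trial logic can be sketched as a small simulation. This is a hypothetical illustration, not any real ABX box's behavior; the function name and the p_correct parameter are invented for the sketch:]

```python
import random

def abx_session(p_correct, n_trials=16, seed=None):
    """Simulate one ABX session.

    Each trial, the hidden X is a random copy of A or B, and the
    listener identifies it correctly with probability p_correct
    (0.5 = pure guessing, 1.0 = a reliably audible difference).
    Returns the number of correct identifications.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(rng.random() < p_correct for _ in range(n_trials))

# A guesser over Norm's proposed 75-trial retest hovers around 50%,
# no matter how confident the sighted comparisons felt beforehand.
guesser_score = abx_session(0.5, n_trials=75, seed=42)
```

The point of the simulation is the asymmetry the post describes: during the open A-vs.-B phase nothing is scored, so "cheating" is free; only the hidden-X trials generate the count that the statistics are run on.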

The interesting thing about all these DBT debates I have been
observing is that the only alternative for some people appears to be
sighted comparisons. For them, it is necessary to know what is playing
in order to know what to listen for, or something like that. And of
course, they claim that the stress caused by the ABX protocol (or any
other DBT protocol) or the supposed "rushed" listening involved causes
their ears to clog up.

However, I see this as poppycock. Basically, a sighted comparison
allows the participant to cheat. He may cheat to fool others, or he
may cheat to fool himself. That is it, pure and simple. Remember,
though, that the ABX device (adjusted so that the levels are precisely
matched) lets him compare the known A and B components and cheat
during that part of the procedure all he wants. It is only when he
switches to X that the pressure is on and he has to deliver the goods.

Yeah, I can see how that would make some people sweat, because it
means a lot to some of them to have a preferred product come out the
winner (why this need for a preferred product to win continues to
amaze me, unless we are talking about the guy who designed the thing
or sells it). And of course there is the issue of discovering that
those golden ears may not be so golden after all. The latter may be
the most stressful thing of all for those who do not have a commercial
stake in the results.

Howard Ferstler