Harry Lavo
 
Weakest Link in the Chain

"Audio Guy" wrote in message
news:upXMb.43728$sv6.119711@attbi_s52...
In article ntMMb.41503$xy6.112391@attbi_s02,
"Harry Lavo" writes:
"chung" wrote in message
news:iSKMb.39300$5V2.57843@attbi_s53...
Harry Lavo wrote:
"chung" wrote in message
...
RBernst929 wrote:
You know, Mr. Pinkerton, I'm getting a little peeved reading about your 100%
certainty of objectivity. If YOUR belief system includes so-called "objective"
paradigms, then I accept that as your prerogative. However, your assertions
that other people's perceptions are wrong because they don't correlate with
measurements are pompous. If you really believe that we currently know all
there is to know about equipment, measurements, and "objectivity", that is
your right. But I'm here to tell you that everyone lives through their
perceptions, including scientists and objectivists. Please be so humble as to
leave room for perceptions as reality, because I do hear a difference in
cables and bi-wiring even if you say I shouldn't.

That's why he and some of us are putting up the $4K pool: to motivate you to
prove us wrong.

The difference between us is, I can acknowledge your point of view without
disparaging it or saying it's wrong. If science has taught us anything, it is
that we know very little about physical processes.

That's not true. We have a good understanding of the limits of human hearing,
and of the psychological effects leading to perceived differences. Sure, we
don't know everything, but we know a lot, especially about electrical
engineering as applied to audio reproduction.


As you know from previous discussions, Chung, there is a widespread belief
among audiophiles that the test itself is flawed at revealing most audible
perceptions other than volume differences, frequency differences, and
distortion artifacts.

What else is there?


I guess you only read posts that reinforce your prior beliefs? I spent a lot
of time here outlining a double-blind proto-monadic test that could serve as
a control, explained the test, and explained its value as a control. Do you
not recall?


I recall that you over and over again promote a single, unduplicated test
that you think shows that some of your unfounded criticisms of standard DBTs
might be valid, yet it had nothing to do with component comparison, much less
open-ended comparison, which is often the rallying cry of the anti-DBTers
against DBTs. So explain again why *your* test is superior?


Thank you for remembering. However, you don't quite remember accurately.
The control test I proposed is similar to the Oohashi et al. test in that it
is evaluative over a range of qualitative factors, done in a relaxed state
and environment, and with repeated hearing of full musical excerpts. But it
is not a duplicate of that test. The arguments for such tests have been made
here for years...long before the Oohashi article was published. He and his
researchers apparently reached the same conclusion...that it was a more
effective way of testing for the purposes under study...which were
semi-open-ended evaluations of the musical reproduction. Double-blind, by
the way, as was my proposed test.
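To make the shape of that design concrete: the evaluative stage of a
proto-monadic test amounts to collecting blind attribute ratings for each
device separately and comparing the rating sets afterwards, rather than
switching back and forth between devices. Below is a minimal sketch of how
such ratings might be scored; the attribute names, scale, and numbers are
entirely hypothetical.

# Sketch of scoring the evaluative (rating) stage of a blind proto-monadic test.
# Hypothetical data: each listener hears one device at a time (blind, full
# musical excerpts, relaxed setting) and rates it on qualitative attributes;
# the two sets of ratings are compared statistically afterwards.
from scipy.stats import ttest_rel

# ratings[device][attribute] -> per-listener scores, same listener order for A and B
ratings = {
    "A": {"timbre": [7, 6, 8, 7, 6, 7, 8, 6], "soundstage": [6, 7, 7, 6, 6, 7, 7, 6]},
    "B": {"timbre": [6, 6, 7, 7, 6, 6, 7, 6], "soundstage": [7, 7, 8, 7, 7, 8, 7, 7]},
}

for attribute in ratings["A"]:
    a, b = ratings["A"][attribute], ratings["B"][attribute]
    stat, p = ttest_rel(a, b)  # paired t-test across the same listeners
    print(f"{attribute}: mean A={sum(a)/len(a):.2f}  mean B={sum(b)/len(b):.2f}  p={p:.3f}")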

The test is good enough for designers like Paradigm, Harman Kardon, KEF,
etc., no? You trust the so-called audiophiles, some of them believing a
cable needs to be burned in, or you trust the engineers at Paradigm or KEF,
or the designers of codecs?


Again you are joining Stewart in repeating an (at best) half-truth ad
nauseam here. Those firms use DBTs for specific purposes, to listen for
specific things that they "train" their listeners to hear. That is called
"development". In the food industry we used such tests for color, for
texture and "mouth feel", for saltiness, and for other flavor
characteristics. That is a far cry from a final, open-ended evaluation
whereby, when you start, you simply have a new component, are not sure what
you are looking for, and are simply listening and trying to determine if /
how the new component sounds vs. the old. It is called "open-ended" testing
for a reason...


Yes, because it never ends, and so never gets to a result.


This is just rhetoric and is absolute nonsense! :-(

...could reason to believe that conventional DBT or ABX testing is not the
best way to do it and may mask certain important factors that may be more
apparent in more relaxed listening.


And again, your only defense is that DBT results don't agree with
your opinions. DBTs can and have been done over long periods with
relaxed listening, and the results are the same.


On Tom's say-so, and without any detailed description or published data. I'm
talking about a rigorous, scientific test of DBT, control proto-monadic, and
sighted open-ended testing, with careful sample selection, proctoring,
statistical analysis, and peer-reviewed publication. Once that is done I
will be happy to accept what conclusions emerge. It hasn't been done, and so
*assertions* that comparative DBTs such as ABX are appropriate are just
that: assertions.
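For reference, the statistical analysis behind a conventional comparative
ABX run is nothing exotic; it is a one-sided binomial test against chance. A
rough sketch follows (the trial counts are made up for illustration):

# Exact one-sided binomial p-value for an ABX run: the probability of getting
# at least `correct` of `trials` answers right by guessing alone (chance = 0.5).
from math import comb

def abx_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

print(abx_p_value(12, 16))  # ~0.038 -> 12/16 correct is significant at the 0.05 level
print(abx_p_value(10, 16))  # ~0.227 -> 10/16 correct is consistent with guessing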

The issue isn't so much the blind vs. the sighted as it is the comparative
vs. the evaluative...and while a double-blind evaluative test (such as the
proto-monadic "control test" I outlined) may be the ideal, it is so
difficult to do that it is simply not practical for home listeners treating
it as a hobby to undertake. So as audiophiles not convinced of the validity
of conventional DBTs for open-ended evaluation, we turn to the process of
more open-ended evaluative testing as a better bet despite possible sighted
bias.


Again only because you don't agree with the results.


Will you please stop saying that. I have no particular stake in this...I am
not a proponent of cable differences. I use mostly 12-gauge twisted pair in
my own system...chosen by sighted listening as offering everything the more
exotic cables I tested offered, and better than some. I have an MBA with a
strong dose of operations analysis and behavioral psychology...my thinking
is pretty disciplined. The inability of the "objectivists" here to
acknowledge the primacy of their (your) belief system drives me up the wall.
That, and the fact that I have twenty years of helping to design and analyze
marketing research tests, is why I am one of the people here who have
challenged the conventional assumptions.

Of course we've been over this many times here. But obviously you don't
even care to acknowledge the issue.


There are quite a few issues you obviously don't care to acknowledge
yourself, such as JNDs, known bias from knowing a change has been
made, etc. How confident are you that you can overcome all of these
known biases? Why do you think DBTs were invented?


I certainly acknowledge those things. That's why I proposed a control test
along with both DBT and open-ended alternative tests. However, barring such
a definitive control test, I choose open-ended evaluative sighted testing
for most purposes in evaluating components, over comparative DBTs. I choose
it because I believe the type of error that can result is less troublesome.

Also, as has been pointed out, no control tests have *ever* been done on
these techniques against other forms of open-ended evaluative testing of
audio components. *THAT* is why most of us are totally uninterested in the
$4000 challenge (in addition to the fact that it has only been vaguely
promised and the money itself doesn't physically exist in a pool...but it
does make a great stick to wave at people, doesn't it).

Why don't you all pool your $4000, do a definitive control test, and if it
supports your position, write it up and submit it for peer review.

Which self-respecting scientific journal will be interested in publishing an
experiment that agrees with existing knowledge? Now if you can show that
cables that measure within 0.1 dB from 20 Hz to 20 kHz can be distinguished,
*that's* worth publishing.
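To put that 0.1 dB figure in perspective, here is a back-of-the-envelope
check of how much series cable resistance it allows, assuming nothing more
than a purely resistive 8-ohm load (real speaker impedances vary with
frequency, so this is only a rough bound):

# What a 0.1 dB level difference implies for speaker cable, assuming a purely
# resistive 8-ohm load (a simplification; real speaker impedances vary).
from math import log10

def insertion_loss_db(r_cable_ohms, r_load_ohms=8.0):
    # voltage-divider loss from the cable's series resistance into the load
    return 20 * log10(r_load_ohms / (r_load_ohms + r_cable_ohms))

# series resistance that produces exactly 0.1 dB of loss into 8 ohms
r_limit = 8.0 * (10 ** (0.1 / 20) - 1)
print(f"{r_limit:.3f} ohms")                   # ~0.093 ohms (round trip)
print(f"{insertion_loss_db(r_limit):.2f} dB")  # -0.10 dB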


I'm talking about a comparative DBT vs. a control DBT for evaluative
listening. That has not been done and has not been published. This is a bit
(quite a bit, I think) of a red herring.


Not at all. Chung is quite right that proposing such a test would just get
big yawns from the scientific community, but your *test* is one huge red
herring IMO.


You are welcome to your opinion, but I still believe I am right. Design a
really well-done experiment and it will get published. Or, alternatively,
show me one rejection from an established journal, with a written assessment
of a test that has never been done before, stating that it is not worth
publishing because it is "old news".

Then if it gets accepted you'll get your $4000 worth, and all the free time
you won't have to argue here will allow you time to enjoy more music.



I'll take the lack of response to indicate a willingness to concede the
point. Otherwise I'd have to think you are once again not taking my
critique/proposal seriously. :-)


Why would he, when you don't seem to take the mounds of evidence about the
results of DBTs seriously? And please, no reams of text about 1.75 dB
differences, etc., etc., as I glaze over every time I read it, so don't
bother.


I take them seriously, I just don't take them as definitive without a proper
control test. End of discussion.