#57
ludovic mirabel
science vs. pseudo-science

wrote in message ...
ludovic mirabel wrote:


Be kind. Be useful and instructive. Skip "starting points" and
"trajectories"; they might take us back to ancient Egypt and Babylon.
Just let's have the ground research for ABX in comparing components.
Readers such as Mr. Wheel have been waiting and asking for such
evidence for a long, long time.


JjNunes answers:
As has been gone over many times, there is a body of evidence supporting
the fact that the test is sensitive down to the physical limits of the
hearing system. That limit is defined as the lowest instantaneous loudness
that results in a detectable signal at the auditory nerve. That level is
well known and is used routinely as a reference in the more sophisticated
hearing tests in anechoic chambers.
There is really no disagreement among professionals in psychoacoustic
research that the test validates itself as described above and the body
of evidence (the books mentioned are just some of the references) supports
the results.

Where is the "ground research for COMPONENT COMPARISON BY ABX"?
Analogies and inferences from other areas will not do (see below).
Translated for clarity, your evidence means just this much:
psychometricians find that selected, trained subjects with normal
hearing will still hear normally while ABXing. A great hearing test.
Bully for psychometrics' "professionals". A shame, though, that they
will not do any component comparisons. Those are left for us ordinary
audiophiles, or what are they for?
What has it all got to do with comparing components for their
MUSICAL reproduction differences? Something more complicated is
involved: a zillion different brains of a zillion "audiophiles".
Beethoven and Klemperer would have been disbarred from psychometric
research. What a shame!

An analogy may illustrate the point. There is a deliberate exaggeration
to help illustrate it:

Writing a boatload of peer-reviewed papers and books to show that
the test validates itself is like doing the same to demonstrate
the effectiveness of scalpels in surgery.

You don't use a scalpel to cut bread. It works better in the
surgical theatre, and you don't use psychometric tests to distinguish
between audio components.
Psychometricians keep out of it. Perhaps they know something. Or can
you give a reference to the contrary?

In such a situation, there is no need to write volumes. At least, that
is how I understand it to be viewed within the field. Somebody can correct
me if they are interested. (Maybe there are very old references about
scalpels.) But the point is made.

And the point is? (Sorry, couldn't resist this generous opening.)

You have mentioned that you have a problem with no defined end point.
Most of the 'softer' (for lack of a better word) sciences are like that.
I think it's unreasonable to dismiss them on that alone. But it seems
to be the thing in a 'postmodern' culture. I don't like postmodernism,
especially the thought of knowledge being a utility.

Your thoughts on postmodern thinking are appreciated. But I'm not
looking in RAHE for new insights into the theory of knowledge, but for
something very much simpler. I'll quote my text from one week ago that
also appears to have slipped your attention ("The endless debate",
Sept 13):
DBT2: Use in research, including psychoacoustics. Subjects are
trained and the hopeless rejected, i.e. they are selected. A known
artefact (a certain amount of distortion, frequency bumps, etc.) is
introduced; the subject either hears it or does not. Period.

Something else started being called "DBT", which out of courtesy
I will call "DBT3", suggested for comparing components: a randomly
collected test population. Different ages, gender, hearing ability,
training and aptitude for the test protocol, different musical
exposure and interest. *No objective target to aim at*, so no one can
tell who is right and who is wrong. The few who hear, or the many who
don't? Consequently the proctor's verdict is by majority vote, the lowest
common denominator. The whole thing is as subjectivist as could be and
certainly not replicable by another panel.

Bibliographies have been posted on RAHE in the past. You may have to wade
through a lot of stuff to find them in Google, but I remember seeing them.
(it was probably before you or I arrived, I think)

Well, I've done more than see them. I reviewed Rampelmann's and
Motry's bibliographies and culled ALL the published ABX component
comparisons by audiophile panels, all of which appeared in the '80s.
(None have been published since, but the talk-talk about how wonderful
ABX is continued; lots of smoke but no fire, lots of theory but no
practical results.) This review was quoted and discussed here in the
past two years ad nauseam. Sorry this too slipped your attention. Even
more sorry for myself, having to repeat it all every few weeks for the
benefit of anyone newly appeared on the horizon. (For quotes see P.S.)
ALL gave "they all sound the same" results, and so will any others,
guaranteed. When you collect a bunch of "audiophiles", most of them
will perform in the middle and give you random, coin-throw results.
Only in this strange kind of "research" are the few who heard MORE
than the average added to the overall results. Why? Because of the
agenda: cables, amps, everything MUST sound the same; it sounds the
same to US "researchers", and the "measurements" (that we have as of
year 2003) are the same. All those engineers such as Palavicini,
Meidtner, Strickland, Hafler are con-men or deluded, and only the RAHE
experts know how to show them up.
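The arithmetic behind that objection can be sketched. A minimal Python sketch, using only the standard library and entirely hypothetical numbers (a panel of 20 listeners doing 16 ABX trials each, 18 of whom guess at chance and 2 of whom genuinely hear a difference; none of these figures come from any published test): pooling everyone's trials into one majority score can fail to reach significance even though the two real discriminators individually pass easily.

```python
from math import comb

def p_at_least(hits, trials):
    """Exact one-sided binomial tail: the chance of scoring >= `hits`
    correct in `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# Hypothetical panel: 18 pure guessers plus 2 genuine discriminators.
trials_each = 16
guessers, discriminators = 18, 2
guesser_hits = 8          # a guesser's expected 50% score
discriminator_hits = 14   # a listener who really hears the difference

# Each discriminator, judged individually, is clearly significant...
p_individual = p_at_least(discriminator_hits, trials_each)

# ...but pooled with 18 guessers, the panel total is not.
pooled_hits = guessers * guesser_hits + discriminators * discriminator_hits
pooled_trials = (guessers + discriminators) * trials_each
p_pooled = p_at_least(pooled_hits, pooled_trials)

print(f"individual: {discriminator_hits}/{trials_each}, p = {p_individual:.4f}")
print(f"pooled:     {pooled_hits}/{pooled_trials}, p = {p_pooled:.3f}")
```

Under these assumed numbers the individual score is significant well below the usual 5% criterion while the pooled score is not, which is the sense in which a majority-vote verdict is the lowest common denominator.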

As for comparing components, blind methods are considered mandatory in
validating codec quality, and a codec is a component, as is an amplifier,
etc. The only difference is that a codec is software while an amplifier
(cable, CD player, etc.) is hardware. In other words, it isn't considered
a practical problem, as I understand it.

Great: testing codecs is just the same as testing the musical
characteristics of a component. Then please, test some components.
Audiophiles are not in the market for codecs. And Mr. Jjnunes,
reasoning by inference does not wash.

(description of position snipped for brevity)

Pity. It contained your statement that ABX was "the best known way".
To which I said:
Mr. Jjnunes, this is a strange statement. Are you saying that
audiophiles don't care about "the best known way" to discern
differences between components before buying?


It means that they can use any method they want to make them happy.
There is nothing strange about that. They don't HAVE to use blind testing;
obviously, many audiophiles are happy not to. By the same token, just
because most choose not to use it, it doesn't mean that the test is wrong
scientifically.

This is a change from "the best known way". Is it just "not wrong
scientifically", whatever that may mean, or is it "the best known way"?
You can define "science" for your convenience. I define a "test"
as something reproducible by the targeted population from individual
to individual. ABX is not that.
But if it is "the best known way", then you're intellectually
duty-bound to recommend it. I'll tell you in secret: it is not that and
it is not a "test". There ain't no "test" with general audiophile
validity. Neither "best" nor "worst". Nohow, nowhere. In science,
bluster and opinions do not replace evidence.
To quote the paragraph you omitted: "2) the 'best known way' (i.e.
ABX/DBT for comparing components, available for the last 30 years, L.M.)
is not usable on this earth by human beings. Writing paper and angelic
choir are another thing altogether."
I said:
What's wrong with this
picture? Well, listen carefully this time; all of it has been said many
times before but seems to have slipped past you.
ALL, but ALL, ABX component comparison tests with an average
audiophile panel, as reported by their proctors, failed to verify ANY
differences, "subtle" ("subtle" for you, for me, or for Glenn Gould?)
or "gross", between cables, preamps, amps, CD players and DACs.
Which proves one of two things: 1) there ARE no differences between
anything and anything else in audio. None, neither subtle nor gross.


I suppose you are referring to Nousaine's tests.

Definitely not. Nousaine's tests are for a few individuals at the
most. I'm referring to PUBLISHED panel test results.

I recall some have said
they weren't as sensitive as they could be, but it wasn't really bad.
It's not true that there are never any differences.

If you want to know for yourself, the best way is to do your own tests.
But, nobody HAS to do them. That's unreasonable. My position is really
more moderate than some.

Why on earth would I buy a $600.00 switch to find out that in my
hands ABX makes it all "sound the same"?
If you would just stop listening to Glenn Gould, everything would be
alright. ;-)


And if you and others just gave up the quaint idea that there
must be a "test" to measure subjective, individual perceptions of
complex signals like music (in no other sphere of sensory preferences,
just in audio, are we so blessed)... RAHE would become a useful forum
for the exchange of personal experiences. And the credible opinions of
credible witnesses would be interesting to others with similar
interests, and so on. Just like the opinions of the mag. reviewers.
Ludovic Mirabel
Mr. Jjnunes, do me and your readers a favour and read the threads. The
following appeared in my reply to Mr. ABrams 3/52 ago. The thread is
still current.
Representative conclusions of the ABX developers (Clark, Masters,
etc.) proctoring the ABX listening tests:
Quoted on 3rd Sept '03 in the "Endless debate" thread
Masters, Ian G. and Clark, D. L., "Do All CD Players Sound the Same?",
Stereo Review, pp. 50-57 (January 1986)
Conclusions signed by D. L. Clark:
"......it is difficult to imagine a real-life situation in which
audible differences could be reliably detected or in which one player
(CD player, L.M.) would be consistently preferred 'for its sound alone'"

Greenhill, Laurence, "Speaker Cables: Can You Hear the Difference?",
Stereo Review (Aug 1983)

Conclusions signed by Larry Greenhill:
"This project was unable to validate the sonic benefits claimed for
exotic speaker cables over common 16-gauge zipcord. We can only
conclude that there is little advantage besides the pride of ownership
in using these thick, expensive wires."

In '89 a rather elaborate listening test for the audibility of
distortion was performed (Masters and Clark, Stereo Review, Jan. '89).
Various types of distortion with different signals were tested. There
were 15 TRAINED listeners (gender?). At a 2 dB distortion level,
playing "natural music", the "average" rate of correct hits
was 61% (barely above the minimum statistically significant level of
60%). The individual scores varied from a perfect 5/5 to 1/5.
Similar discrepancies were observed in phase-shift recognition.
Authors' conclusion: "Distortion has to be very gross and the signal
very simple for it to be noticed" ... by the "average".
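For readers wondering where that "minimum statistically significant level" sits, here is a hedged back-of-the-envelope check in Python. It assumes 5 trials per listener (my reading of the "5/5 to 1/5" individual scores, not something the article states outright), so 15 listeners give 75 trials in total, and uses the exact binomial tail for pure guessing:

```python
from math import comb

def p_at_least(hits, trials):
    """One-sided binomial tail: chance of >= `hits` correct
    out of `trials` by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# Assumed from the article: 15 listeners x 5 trials each = 75 trials.
trials = 15 * 5

# Smallest hit count whose chance probability drops below 5%.
threshold = next(h for h in range(trials + 1) if p_at_least(h, trials) < 0.05)
print(f"{threshold}/{trials} correct ({threshold / trials:.0%}) "
      f"is the 5% significance cutoff")

# The reported "average" of 61% correct:
hits_61 = round(0.61 * trials)   # 46 of 75
print(f"61% of 75 = {hits_61} hits, p = {p_at_least(hits_61, trials):.3f}")
```

Under this assumed trial count, the 5% cutoff lands right around the 60% mark the article cites, so a panel average of 61% is indeed only barely distinguishable from guessing, even while individual scores range from 5/5 to 1/5.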
Will it do for the time being?