Posted to rec.audio.opinion
elmir2m@shaw.ca
Subject: Better Than ABX?


Harry Lavo wrote:
"Arny Krueger" wrote in message
...
"Jenn" wrote in
message

In article ,
"ScottW" wrote:


If the listener has control of the source
selector...which IMO, they should...
they can do whatever they want to obtain maximum comfort
with their selection...quick switch, long passage
listening,
music, pink noise, etc.

That seems like a positive.


People here ranting against ABX
are generally not looking for solutions....they're
looking for excuses.


Solutions to what?


ABX is a solution to the well-known problem of listener bias.



Let's try to make it accurate, Arny. ABX is a solution to the well-known
problem of positive listener differentiation bias, when using test signals
and artifacts on which the respondent has been trained and proven to be
reliable in differentiating. That is what ABX is.

Here is what it is not.

IT IS NOT a test that eliminates negative listener differentiation bias
(who'd ever think *anybody* might have such biases)
IT IS NOT a test that can be run without listener training (absolutely
essential, and the antithesis of open-ended evaluation)
IT IS NOT a test that everybody can validly use (only roughly half qualify
at H-K)
IT IS NOT a test for proving some sound difference *doesn't* exist (can't
prove a negative, and not designed to)
IT IS NOT a test that has been verified to be valid when used for open-ended
evaluation of the performance of audio components reproducing music
(open-ended listening cannot be reduced to a single artifact for training).

Furthermore:

ABX *IS* a test that, in order to do open-ended, direct evaluation of audio
components, must be run with an ABX box that is no longer available, and
whose contacts may or may not audibly influence the sound.

AND ABX *IS NOT* a program that can be run on a computer to do open-ended
evaluation of actual components in use.

Since Scottie's challenge I have thought long and hard about ABX and how it
is used, and can usefully be used, in product development (which I have a
background in, although only briefly in the audio field). Here is what I
have concluded:

Scientific research:

ABX may have great value in the audiometric field, where it was first used
in audio, in order to determine human thresholds for various forms of
distortion, including compression artifacts. It is best used and most
sensitive with test signals to which listeners can be trained. Even so, a
careful screening of panel members is required. Within these conditions, it
serves as a useful research tool...for scientific inquiry.
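For what it's worth, the arithmetic behind a standard ABX run is just a
binomial tail probability: how likely is a listener's score by pure guessing?
A minimal Python sketch (the function name and the 16-trial example are my
own illustration, not part of any ABX standard):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or more out of `trials`
    by pure guessing (p = 0.5 per trial) in an ABX test."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 of 16 correct would usually be read as significant,
# since the chance of guessing that well is under 5%.
print(round(abx_p_value(12, 16), 4))  # → 0.0384
```

Note that this only quantifies positive identification; it says nothing
about the training, screening, or panel-selection issues raised above.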

ABX has very little value in the actual development of audio gear. I've
examined the process from several different angles and have concluded it
would be useful in only a few cases. Consider these common development
scenarios:

Practical Development Efforts:

* The manufacturer cost-reduces a product by substituting cheaper parts or
redesigning a circuit and wants to know if anybody can hear a difference.
How do you train for "no difference"? How do you screen out poor
performers? Can it prove a difference doesn't exist? No. While some ABX
testing might give the manufacturer some comfort level if all subjects failed
to differentiate, the test cannot conclusively prove a negative and cannot
even be well run.

* Ditto for the manufacturer who wants to make a spot-on copy of an existing
competitive product. Same caveat as above.

* The manufacturer has a new hotshot development engineer or team who
completely redesigns a product, and the manufacturer wants to know if the
product is perceived as better (it had better be, else why spend all that
money). In this case, the manufacturer would want to know if there is a
difference, but he would much more want to know preferences, which subsume
differences. He'd want to know overall preference between old and new, and
perhaps between new and some of the competition. He'd not only want to know
the extent of preference, he'd also want to examine the reasons for
preference among those who preferred the old and among those who preferred
the new. This requires a preference test, which would almost certainly be
used instead of an ABX-style differentiation test.

* I can't think of an instance where a manufacturer would deliberately
engineer in a change and want to know after the fact whether it "made a
difference", as opposed to "making the product cheaper or better". In other
words, the list is exhausted except for almost pure research purposes. And
only the Harman Group and perhaps Panasonic and Sony are large enough to
finance such research commercially.

Practical Open-Ended Evaluation of Audio Components

* The purchaser doesn't really want to buy something "different"; he wants
to buy something if it sounds "better" to him. This requires a preference
test. If the purchaser doesn't want to trust his sighted judgement, he can
set up a blind or double-blind preference test, assuming he can get some
assistance, and it will actually be slightly simpler than the ABX test.
Most consumers will forgo such rigorous testing on the basis that they can
live with any sighted bias and positive differentiation bias, and that the
more rigorous test is too demanding of time and manpower resources to be
worthwhile. This is doubtless helped by the fact that most audio consumers
don't spend a fortune (relative to their income) on their equipment,
particularly if they upgrade over time. ABX testing has virtually no useful
role to play in this case, as it is even more cumbersome than a double-blind
preference test and provides little or no more in the way of practical and
useful information. This assumes, of course, that it has first actually been
validated for the purpose of open-ended auditioning of audio components
playing music. In addition, an ABX test requires training on the artifacts
to be differentiated, and these won't initially be known in open-ended
testing.

Use of ABX by Reviewers

* ABX might be useful for reviewers in an occasional *validation* mode
(again, if it is itself validated first). But it is far too cumbersome to be
used on an ongoing basis, for the same reasons as outlined above for
consumers.


==================================

Harry, you just made an excellent exhaustive survey of ABX testing AS
APPLIED TO COMPARISON OF MUSICAL REPRODUCTION BY DIFFERENT AUDIO
COMPONENTS.

It is predictable that it will make no impact in the ABX chapel. The
pipedream promise of an infallible consumer guide to audio is too
attractive for a reasoned argument. And the scientific test tells you that
you may just as well listen to your computer whiz's loudspeaker
"system".
In addition, most of those who try switching from A to B and then to X
soon find that they can no longer tell one piece of music from the
other, let alone one amp from another. See the notes on the "performance",
or rather lack of it, of most of Sean Olive's subjects, who still knew what
they liked best even though their answers to the difference question
were abysmally poor.

As for "training"; by the time they are trained for ABX they no longer
need the ritual.. They are accurate listeners.

All you'll get this time will be a repeat of how good ABX is in audio
research. They can't quote any successes in well-planned trials of its
application to component comparison. Why? Because none exist.

Since none exist, ABX for audio listeners does not exist either. It is
time the chapel preachers showed the professionals that they are
serious researchers. Polemics in RAO are not it.

It is a sheer waste of time, and a waste of your knowledge and intelligence,
to treat it seriously.
Ludovic Mirabel