#1  Posted to rec.audio.high-end
William Eckle
Subject: New pseudo ABX?

The classic ABX listening test is a chore to set up and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference; if no
difference can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly questionable.
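For readers unfamiliar with how such a test is scored: an ABX run is a series of forced-choice trials, and the result is judged against the odds of guessing. A minimal sketch in Python (the 16-trial run and the 0.05 threshold are conventional choices, not anything mandated by ABX itself):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 correct out of 16 would happen by chance only about 0.2% of the
# time, so such a run is strong evidence a difference was heard.
print(round(abx_p_value(14, 16), 4))  # → 0.0021
```

Scores near chance (8 of 16, say) prove nothing either way; that asymmetry is why a single passing run is informative while a failing run is not.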

Here is a new way to perform the same kind of test, with greatly
simplified methods, that creates a context where a possible
difference can be shown to exist by listening alone, in a
scientifically valid way.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.

-=Bill Eckle=-

Vanity Web Page at:
http://www.wmeckle.com

#2  Posted to rec.audio.high-end
Harry Lavo
Subject: New pseudo ABX?

"William Eckle" wrote in message
The classic ABX listening test is a chore to set up and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference; if no
difference can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly questionable.


Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be dropped
from any testing. The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing, where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are listening
for.

Open-end evaluation of audio gear does not work this way. The human brain
tries to relate the sound to a real sound, and doesn't even know "what" to
listen for...timbre, soundspace, subtle distortions, etc. That seems to be
why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose. Put simply, it violates the first cardinal principle of test
design...that is, to prevent any aspect of the test from intervening as a
variable.

ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears. This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).

Here is a new way to perform the same kind of test, with greatly
simplified methods, that creates a context where a possible
difference can be shown to exist by listening alone, in a
scientifically valid way.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.


I think others will argue that it is likely you will be able to measure
some difference, but you won't be able to hear it. And others will claim
they can hear it. Perhaps an ABX test under this circumstance with some
training might be of some value. But to do all this in a well controlled
test is, again, a research project and not an amateur, home-oriented test.

#3  Posted to rec.audio.high-end
Steven Sullivan
Subject: New pseudo ABX?

It's certainly not a new method, but it's nice to have new software that
automates the process.

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
#4  Posted to rec.audio.high-end
bob
Subject: New pseudo ABX?

On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).

The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing,


This is also false.

where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are listening
for.

Open-end evaluation of audio gear does not work this way. The human brain
tries to relate the sound to a real sound, and doesn't even know "what" to
listen for...timbre, soundspace, subtle distortions, etc. That seems to be
why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose.


This is idle speculation by someone who knows nothing about the
subject he's talking about. There isn't a shred of real evidence for
any of it. It's pseudoscience.

Put simply, it violates the first cardinal principle of test
design...that is to prevent any aspect of the test from intervening as a
variable.


Again, you haven't a shred of evidence that ABX or ABC/hr tests
interfere with perception in any way.

ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears.


It is certainly true that it is easier to hear differences in level
and FR using test tones than using music. This has nothing to do with
any particular test. It has to do with the way the human hearing
mechanism works, and it is true no matter what listening method you
use.

This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).


Where you won't find it. But if anyone wants to read what Greenhill
actually found (rather than someone's re-invention of it), e-mail me
and I can send you the article.

Harry is peddling pure pseudoscience here.

bob

#5  Posted to rec.audio.high-end
Steven Sullivan
Subject: New pseudo ABX?

Harry Lavo wrote:
"William Eckle" wrote in message
The classic ABX listening test is a chore to set up and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference; if no
difference can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly questionable.


Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be dropped
from any testing. The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing, where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are listening
for.


Open-end evaluation of audio gear does not work this way. The human brain
tries to relate the sound to a real sound, and doesn't even know "what" to
listen for...timbre, soundspace, subtle distortions, etc. That seems to be
why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose. Put simply, it violates the first cardinal principle of test
design...that is, to prevent any aspect of the test from intervening as a
variable.


...and then, once these 'differences' have been 'heard' to the
listener's satisfaction, it's time to see if they're real...via a
blind comparison. You just can't get around that.

ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears. This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).


Nonsense, Harry. ABX has been used to detect the difference between .wav source and 320 kbps
mp3s -- which are *extremely* difficult to tell apart. Sensitivity is certainly reduced for
comparisons (sighted or blind) when music is used RATHER THAN TEST TONES. This is not a
function of sighted versus blind, and the Greenhill tests do not say otherwise.

Here is a new way to perform the same kind of test, with greatly
simplified methods, that creates a context where a possible
difference can be shown to exist by listening alone, in a
scientifically valid way.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.


I think others will argue that it is likely you will be able to measure
some difference, but you won't be able to hear it. And others will claim
they can hear it. Perhaps an ABX test under this circumstance with some
training might be of some value. But to do all this in a well controlled
test is, again, a research project and not an amateur, home-oriented test.


The main value of a difference test is where it produces a null (and by
null, I mean residual levels below the practical or theoretical limits of human
hearing). There, subjectivists would have to invoke some new sort of audible effect,
akin to homeopathy, where vanishingly small amounts of 'medicine' are said to
effect cures. I wouldn't put it past them.
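The null described here can be put in numbers by subtracting the two time-aligned signals and expressing the residual relative to the reference. A minimal sketch with synthetic signals (the 0.01% level mismatch is purely illustrative, not from any test discussed in this thread):

```python
import numpy as np

def residual_db(a: np.ndarray, b: np.ndarray) -> float:
    """RMS of the difference signal, in dB relative to the RMS of `a`
    (more negative = deeper null)."""
    diff_rms = np.sqrt(np.mean((a - b) ** 2))
    ref_rms = np.sqrt(np.mean(a ** 2))
    return 20 * np.log10(diff_rms / ref_rms)

# A 1 kHz tone against a copy 0.01% higher in level nulls to -80 dB,
# far below any plausible audibility for a residual under music.
t = np.arange(48000) / 48000
a = np.sin(2 * np.pi * 1000 * t)
print(round(residual_db(a, 1.0001 * a)))  # → -80
```

In practice the two signals must be sample-aligned and gain-matched before subtracting, or the residual reflects the alignment error rather than any real device difference.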

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason


#6  Posted to rec.audio.high-end
Arny Krueger
Subject: New pseudo ABX?

"Harry Lavo" wrote in message

"William Eckle" wrote in message


The classic ABX listening test is a chore to set up and
perform.


Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today, and a useful approximation for a wide
variety of tests. It's easier than going to a store and doing a careful job
of comparing components using one of the house's systems.

Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


There ain't no such thing. The world is full of differences that are
measurable but not audible at all. So, one caveat is that the difference has
to be audible, and not all differences are audible. Secondly, the audibility
of many differences is contingent on a wide variety of influences, two of
the stronger ones being the sensitivity of the listener and some
properties of the music being used for the comparison that may be
non-obvious.

Substitute some element for another which is thought to
possibly make an audible difference; if no difference
can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly
questionable.


Or, you picked the wrong music or associated components. Or, the listener's
sensitivity was atypically poor for any number of different reasons.

Unfortunately, this is not true.


So far so good.

The researchers at
Harman Kardon have found that over 40% of people, even
with careful training, cannot reliably distinguish even
known differences in their training, and have to be
dropped from any testing.


Ironically, these same listeners would no doubt report a plethora of audible
differences in a sighted evaluation. ;-)

The test is as much a test of
the listener as it is of the items under test.


Name a subjective test methodology where this *isn't* true. Some people
like to play a silly game where they assert that a certain general problem
that afflicts just about any test is a problem of just their least favorite
kind of listening evaluation.

Moreover,
both ABX and ABC/hr were developed and optimized
specifically for codec testing,


Simply not true of ABX, since ABX was developed and popularized before the
audio world even knew what a codec in the modern sense was.

where easily identifiable
distortions can be used at various levels of impact to
"train" listeners,


Surely true for more than just codecs, so this is also a false claim.

who then listen to the blind samples
but "knowing" what they are listening for.


Which opens yet another general problem with subjective tests. Differences
are very often easier to detect if you know what to listen for, so how do
you know what to listen for until someone has first actually heard it?

Open-end evaluation of audio gear does not work this way.


Actually, open-end evaluation doesn't work at all. It is open-ended all
right - it contains poor-to-non-existent safeguards against false positives.

The human brain tries to relate the sound to a real
sound, and doesn't even know "what" to listen
for...timbre, soundspace, subtle distortions, etc.


A truism that all by itself can't support any particular viewpoint.

That
seems to be why open-ended listening via
ABX results in almost immediate listening fatigue...it is
a totally unnatural use of the technique for this purpose.


Fact is that lots of people report fatigue while listening by *any* means
that efficiently rejects false positives. The reason is pretty obvious, most
of their former experiences were all about false positives. Take those away
and they suddenly have to produce more than fond wishes that they heard a
difference.

Put simply, it violates the first cardinal principle of
test design...that is to prevent any aspect of the test
from intervening as a variable.


There ain't no such rule.

ABX can be used for crude audio measures....volume,
frequency shifts in white noise, etc.


This is one of those statements that immediately raises a red flag. Broadband
white noise, being an unweighted, varying, and random combination of all
frequencies, is very much immune to audible effects due to frequency shifts.
Therefore it is a nonsense statement, because it is contingent on something
that doesn't ever happen.

As soon as it
comes to listening to music, sensitivity decreases or
disappears.


Name a subjective test methodology where this *isn't* true.

This isn't just speculation


No, it's mostly false and misleading claims that shouldn't even qualify as
reasonable speculation.

....review the
Greenhill tests in Stereo Review (search index at their
site).


Greenhill was an advocate of ABX at the time those tests were written and
published. I spoke with him personally for hours in those days.

Here is a new way to perform the same kind of test, with
greatly simplified methods, that creates a context where
a possible difference can be shown to exist by listening
alone, in a scientifically valid way.


http://theaudiocritic.com/blog/index...Id=35&blogId=1


Don't miss the link at the end of this short article
that describes the software and methods used.


Been there, done that many times. The problem with this methodology is that
it is easily overwhelmed by trivial differences such as phase shift of any
kind but linear phase. You end up with two waves that sound very much
alike, but generate relatively large difference signals. The irrelevant
information in the difference signal masks the difference that was being
listened for.
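The point about phase is easy to demonstrate numerically: a delay of ten microseconds, far too small to hear as such, still leaves a difference signal only about 24 dB below the reference, which can swamp the far smaller residuals one is actually hunting for. A sketch with synthetic tones (all values illustrative):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)
# The same 1 kHz tone delayed by 10 microseconds: perceptually a
# non-event on its own, but it dominates the subtraction.
b = np.sin(2 * np.pi * 1000 * (t - 10e-6))

diff_rms = np.sqrt(np.mean((a - b) ** 2))
ref_rms = np.sqrt(np.mean(a ** 2))
print(round(20 * np.log10(diff_rms / ref_rms), 1))  # → -24.0
```

A -24 dB residual from an inaudible delay illustrates why a large difference signal, by itself, says little about audibility.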

I think others will argue that it is likely you will be
able to measure some difference, but you won't be able
to hear it.


This can easily happen, because test equipment has become very, very
sensitive.

And others will claim they can hear it.


You may hear a large difference signal, but is it all that relevant?

Perhaps an ABX test under this circumstance with some
training might be of some value.


You can learn how to do that at www.pcabx.com .

But to do all this in a
well controlled test is, again, a research project and
not an amateur, home-oriented test.


www.pcabx.com has links to everything you need to do high quality listening
tests as easily as possible, given the inherent problems with listening
tests in general, such as the ones I mentioned above.

#7  Posted to rec.audio.high-end
Steven Sullivan
Subject: New pseudo ABX?

Arny Krueger wrote:
"Harry Lavo" wrote in message

"William Eckle" wrote in message


The classic ABX listening test is a chore to set up and
perform.


Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today, and a useful approximation for a wide
variety of tests. It's easier than going to a store and doing a careful job
of comparing components using one of the house's systems.


Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


There ain't no such thing. The world is full of differences that are
measurable but not audible at all. So, one caveat is that the difference has
to be audible, and not all differences are audible. Secondly, the audibility
of many differences is contingent on a wide variety of influences, two of
the stronger ones being the sensitivity of the listener and some
properties of the music being used for the comparison that may be
non-obvious.


However, it only takes one positive ABX test run to demonstrate that
the person who took the test could differentiate the sound of A and B.
(Assuming of course that the test was set up properly.) And it only takes
one person demonstrably hearing a difference to 'prove' that the two things sound
different. That's not the same as saying 'anyone' will be able to hear
the difference, of course.

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
#8  Posted to rec.audio.high-end
Harry Lavo
Subject: New pseudo ABX?

"bob" wrote in message
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use ABX
with NO training?


The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing,


This is also false.


Wishful thinking.


where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are listening
for.

Open-end evaluation of audio gear does not work this way. The human brain
tries to relate the sound to a real sound, and doesn't even know "what" to
listen for...timbre, soundspace, subtle distortions, etc. That seems to be
why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose.


This is idle speculation by someone who knows nothing about the
subject he's talking about. There isn't a shred of real evidence for
any of it. It's pseudoscience.


This is reality, as even people who desire to take the test often drop out
before even 15 samples for this very reason.


Put simply, it violates the first cardinal principle of test
design...that is to prevent any aspect of the test from intervening as a
variable.


Again, you haven't a shred of evidence that ABX or ABC/hr tests
interfere with perception in any way.


More evidence (admittedly anecdotal) than has been presented to validate
that it works for open-ended evaluation of audio components.


ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears.


It is certainly true that it is easier to hear differences in level
and FR using test tones than using music. This has nothing to do with
any particular test. It has to do with the way the human hearing
mechanism works, and it is true no matter what listening method you
use.


To paraphrase: you haven't a shred of evidence that long-term, exploratory
tests paired with short-term comparisons don't overcome this limitation. We are
talking MUSIC, after all...not white noise.

This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).


Where you won't find it. But if anyone wants to read what Greenhill
actually found (rather than someone's re-invention of it), e-mail me
and I can send you the article.


And if you wish to email me I can send you an accurate and complete Excel
table of the results.


Harry is peddling pure pseudoscience here.


You've heard from a true ABX believer. Ask for the validation testing showing
that this technique, developed very specifically for codec distortions, works
as the best tool for open-ended evaluation of audio components.

#9  Posted to rec.audio.high-end
Harry Lavo
Subject: New pseudo ABX?

"Steven Sullivan" wrote in message
Harry Lavo wrote:
"William Eckle" wrote in message
The classic ABX listening test is a chore to set up and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference; if no
difference can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly questionable.


Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be dropped
from any testing. The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing, where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are listening
for.


Open-end evaluation of audio gear does not work this way. The human brain
tries to relate the sound to a real sound, and doesn't even know "what" to
listen for...timbre, soundspace, subtle distortions, etc. That seems to be
why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose. Put simply, it violates the first cardinal principle of test
design...that is, to prevent any aspect of the test from intervening as a
variable.


...and then, once these 'differences' have been 'heard' to the
listener's satisfaction, it's time to see if they're real...via a
blind comparison. You just can't get around that.


Sure you can. You simply don't have to require 100% "proof" to make an
audio purchase.


ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears. This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).


Nonsense, Harry. ABX has been used to detect the difference between .wav
source and 320 kbps mp3s -- which are *extremely* difficult to tell apart.
Sensitivity is certainly reduced for comparisons (sighted or blind) when
music is used RATHER THAN TEST TONES. This is not a function of sighted
versus blind, and the Greenhill tests do not say otherwise.


I didn't say it did. What I said was that the Greenhill test clearly shows
that the test is sensitive to differences in levels and in white noise,
while remaining insensitive with choral music as the source. It says nothing
about other tests, pro or con. But it does have relevance for ABX.


Here is a new way to perform the same kind of test, with greatly
simplified methods, that creates a context where a possible
difference can be shown to exist by listening alone, in a
scientifically valid way.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.


I think others will argue that it is likely you will be able to measure
some difference, but you won't be able to hear it. And others will claim
they can hear it. Perhaps an ABX test under this circumstance with some
training might be of some value. But to do all this in a well controlled
test is, again, a research project and not an amateur, home-oriented test.


The main value of a difference test is where it produces a null (and by
null, I mean residual levels below the practical or theoretical limits of
human hearing). There, subjectivists would have to invoke some new sort of
audible effect, akin to homeopathy, where vanishingly small amounts of
'medicine' are said to effect cures. I wouldn't put it past them.


It must be nice to be so certain. In the Pro Audio Digest thread of March
that I read at J. Junes' suggestion last night, one of the correspondents
(screened for participation by highly regarded practical professional
engineers) anecdotally told of a blind test whereby a friend with a very
high level of success could pick out two identical samples (that met the
null test and proved bit-identical) on two different brands of gold-plated
CD disks. He wasn't challenged.

Moreover, the general consensus of the group (which included Jim Johnson and
Dan Lavry) was that the CD cutoff was too low and artifacts were often
audible as a result, including pre-ringing and/or phase shift, and that 64K
was the necessary minimum to avoid even the possibility of problems. Please
note that this directly contradicts Arny's recent assertions here that CDs
are audibly at the level of ultimate transparency(1), and that the
66khz/20bit recommendation of the Japanese hi-rez group in the mid-90's was
nothing but marketing-driven propaganda(2).

What I took away after following the discussion, which was one of two major
topics for the month, was that real engineers are very aware of what is NOT
known (including standards of audibility and a means of simulating what the
ear really can hear)...and that the folks here and elsewhere who are so sure
they know the truth are not real scientists or engineers. On the other
hand, some of us have been saying that for years and shouldn't be
surprised. But it is nice to have the real pros reinforce the opinion.

Footnotes:

(1) A. Krueger, RAHE, "Cable Upgrade Solutions", March 26: "The CD format
has always
and continues to provide sonically transparent recording and reproduction of
music....a format that is far from perfect can
be sonically transparent."

(2) A. Krueger, RAHE, "Need Vinyl to Digital Advice", March 8 (in reply to
HL):

"HL:The Japanese
consortium that investigated hi-rez in the mid '90's
concluded that 66khz/20 bits was the minimum beyond which
differences could not be heard.


"AK: Poor quality tests by advocates of a now-failed technology does not
establish a scientific fact. "

#10  Posted to rec.audio.high-end
Harry Lavo
Subject: New pseudo ABX?

"Arny Krueger" wrote in message
"Harry Lavo" wrote in message

"William Eckle" wrote in message


The classic ABX listening test is a chore to set up and
perform.


Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today,


* This means to determine which bit-rate of MP3 or other codec to choose.
Very hi-fi.

and a useful approximation for a wide variety of tests. It's easier than
going to a store and doing a careful job of comparing components using one
of the house's systems.


* Yeah, take that amplifier and shove it into your PC. Ditto the tuner.
Maybe the CD player. Wonderful way to listen to components.


Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


* If such a difference is strong enough to overcome PC digitization.

There ain't no such thing. The world is full of differences that are
measurable but not audible at all. So, one caveat is that the difference
has to be audible, and not all differences are audible. Secondly, the
audibility of many differences is contingent on a wide variety of
influences, two of the stronger ones being the sensitivity of the listener
and some properties of the music being used for the comparison that may be
non-obvious.


* Who are you speaking to? Not me.


Substitute some element for another which is thought to
possibly make an audible difference; if no difference
can be heard, then claims of amp or wire etc.
differences are moot and any subjective claims highly
questionable.


Or, you picked the wrong music or associated components. Or, the
listener's sensitivity was atypically poor for any number of different
reasons.


* Who are you speaking to? Not me.

Unfortunately, this is not true.


So far so good.

The researchers at
Harman Kardon have found that over 40% of people, even
with careful training, cannot reliably distinguish even
known differences in their training, and have to be
dropped from any testing.


Ironically, these same listeners would no doubt report a plethora of
audible differences in a sighted evaluation. ;-)


* Straw man supposition, at best. Flame at worst.


The test is as much a test of
the listener as it is of the items under test.


Name a subjective test methodology where this *isn't* true. Some people
like to play a silly game where they assert that a certain general problem
that afflicts just about any test is a problem of just their least favorite
kind of listening evaluation.


* The point is: if you want to choose hi-fi, you listen yourself. If you
want a scientific test, you prescreen people for sensitivity. You don't use
ABX willy-nilly at random....you do a carefully constructed, controlled
scientific test, or you just use your ears and do your best.

Moreover,
both ABX and ABC/hr were developed and optimized
specifically for codec testing,


Simply not true of ABX, since ABX was developed and popularized before the
audio world even knew what a codec in the modern sense was.


Tell that to the ITU standards committee who state otherwise.


where easily identifiable
distortions can be used at various levels of impact to
"train" listeners,


Surely true for more than just codecs, so this is also a false claim.


Okay.....train for "natural timbre" across a wide variety of instruments.


who then listen to the blind samples
but "knowing" what they are listening for.


Which opens yet another general problem with subjective tests. Differences
are very often easier to detect if you know what to listen for, so how do
you know what to listen for until someone has first actually heard it?


You don't ... that's partly the point. Open-ended evaluation is called that
because you don't start with preconceptions or "knowing" differences...you
listen for some time on a wide variety of music, form some hypothesis
vis-à-vis differences when those differences crystallize via your ear/brain
synthesis, and then switch to synched A-B comparisons across some sections
of music that get at it, and then move on to another audio manifestation
that has revealed itself to you. When you are done, you've compiled an
audio profile of the component under test. Such a profile may be
(usually is) somewhat multi-dimensional.


Open-end evaluation of audio gear does not work this way.


Actually, open-end evaluation doesn't work at all. It is open-ended all
right - it contains poor-to-non-existent safeguards against false
positives.


Yes, but it also does not result in false negatives.


The human brain tries to relate the sound to a real
sound, and doesn't even know "what" to listen
for...timbre, soundspace, subtle distortions, etc.


A truism that all by itself can't support any particular viewpoint.


Wrong...it works directly against ABX and its need for "known distortion or
flaw" and "staged training".


That
seems to be why the result of open-ended listening via
ABX results in almost immediate listening fatigue...it is
a totally unnatural use of the technique for this purpose.


Fact is that lots of people report fatigue while listening by *any* means
that efficiently rejects false positives. The reason is pretty obvious:
most of their former experiences were all about false positives. Take those
away and they suddenly have to produce more than fond wishes that they
heard a difference.


Well, that is the objectivist hypothesis. Proof that it is correct?


Put simply it violates the first cardinal principle of
test design...that is to prevent any aspect of the test
from intervening as a variable.


There ain't no such rule.


Evidence that you don't really know that much about test design. Absolutely
there is such a rule.

ABX can be used for crude audio measures....volume,
frequency shifts in white noise, etc.


This is one of those statements that immediately raises a red flag.
Broadband white noise, being an unweighted, varying and random combination
of all frequencies, is very much immune to audible effects due to frequency
shifts. Therefore it is a nonsense statement, because it is contingent on
something that doesn't ever happen.


Okay, I should have said "frequency shifts in white noise reproduction". We
are (or at least I am) talking about open-ended evaluation of audio
components, you know.


As soon as it
comes to listening to music, sensitivity decreases or
disappears.


Name a subjective test methodology where this *isn't* true.


Long term listening.


This isn't just speculation


No, it's mostly false and misleading claims that shouldn't even qualify as
reasonable speculation.


Where is your "IMO"? Hardly a fact.


....review the
Greenhill tests in Stereo Review (search index at their
site).


Greenhill was an advocate of ABX at the time those tests were written and
published. I spoke with him personally for hours in those days.


Great for you. Your point?

Here is a new way to perform the same kind of test with
greatly simplified methods that create the context where
a possible difference can be shown to exist by listening
alone and also scientifically valid.


http://theaudiocritic.com/blog/index...Id=35&blogId=1


Don't miss the link at the end of this short article
that describes the software and methods used.


Been there, done that many times. The problem with this methodology is
that
it is easily overwhelmed by trivial differences such as phase shift of any
kind but linear phase. You end up with two waves that sound very much
alike, but generate relatively large difference signals. The irrelevant
information in the difference signal masks the difference that was being
listened for.
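The swamping effect described above is easy to sketch numerically (plain Python, with hypothetical numbers chosen purely for illustration): a delay of just 20 µs between two otherwise identical 1 kHz tones leaves a difference signal only about 18 dB below the tones themselves, even though a sub-sample time shift is not, by itself, an audible change.

```python
import math

def rms(xs):
    """Root-mean-square level of a sampled signal."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Two 1 kHz tones sampled at 48 kHz. The second is delayed by 20 microseconds
# (less than one sample period) -- a pure time shift that does not change how
# the tone sounds on its own.
fs, f, delay = 48000, 1000.0, 20e-6
n = 4800  # 100 ms: an exact number of cycles, so the RMS sums are clean
a = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
b = [math.sin(2 * math.pi * f * (t / fs - delay)) for t in range(n)]

# The "difference signal" a null test listens to:
diff = [x - y for x, y in zip(a, b)]
print(round(20 * math.log10(rms(diff) / rms(a)), 1))  # -18.0 (dB): a large residual
```

The residual dwarfs the thresholds a null test is supposed to probe, which is the point: irrelevant linear phase difference masks whatever was actually being listened for.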


Well, maybe Acel just listens to sine waves.


I think others will argue that it is likely you will be
able to measure some difference, but you won't be able
to hear it.


This can easily happen, because test equipment has become very, very
sensitive.

And others will claim they can hear it.


You may hear a large difference signal, but is it all that relevant?

Perhaps an ABX test under this circumstance with some
training might be of some value.


You can learn how to do that at www.pcabx.com.

But to do all this in a
well controlled test is, again, a research project and
not an amateur, home-oriented test.


www.pcabx.com has links to everything you need to do high quality
listening
tests as easily as possible, given the inherent problems with listening
tests in general, such as the ones I mentioned above.


It would have helped very much if you had responded either to me or to Bill
Eckle, but not both at once.



  #11   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default New psudo ABX ?

"Steven Sullivan" wrote in message

Arny Krueger wrote:
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...


The classic abx listening test is a chore to setup and
perform.


Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today, and
a useful approximation for a wide variety of tests.
It's easier than going to a store and doing a careful
job of comparing components using one of the house's
systems.


Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


There ain't no such thing. The world is full of
differences that are measurable but not audible at all.
So, one caveat is that the difference has to be audible,
and not all differences are audible. Secondly, the
audibility of many differences is contingent on a wide
variety of influences, two of the stronger ones being
the sensitivity of the listener, and some properties of
the music being used for the comparison that may be
non-obvious.


However, it only takes one positive ABX test run to
demonstrate that
the person who took the test could differentiate the
sound of A and B. (Assuming of course that the test was
set up properly.)


You have to be careful. Every once in a while people win on 100:1 shots.

And it only takes
one person demonstrably hearing a difference, to 'prove'
that the two things sound different. That's not the same
as saying 'anyone' will be able to hear
the difference, of course.


We never found any "golden ears" with ABX. We did find tin ears. We did find
people who got lucky, but couldn't duplicate their short-run results with
longer runs.
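The odds behind lucky short runs are straightforward binomial arithmetic (a sketch, not from the thread; the trial counts are illustrative): a perfect short run is weak evidence, which is why it so often fails to replicate over longer runs.

```python
from math import comb

def p_value(correct, trials):
    """Chance of getting at least `correct` of `trials` ABX trials
    right by pure guessing (one-sided binomial tail, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(p_value(5, 5))              # 0.03125 -- about 1 guesser in 32 aces a 5-trial run
print(round(p_value(12, 16), 3))  # 0.038 -- 12/16 correct is still only marginal
print(round(p_value(15, 20), 4))  # 0.0207 -- the same skill must hold up over more trials
```

On this arithmetic, a 16-trial run needs 12 or more correct answers to clear the conventional p < 0.05 threshold; longer runs are what separate luck from hearing.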

  #12   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default New psudo ABX ?

Harry Lavo wrote:
"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use ABX
with NO training?


What does it say about the average music listener's ability to correctly
identify differences in a sighted comparison? There, one is compounding
a lack of training with a surfeit of bias.

Yet that's the regimen used most commonly in audio equipment 'reviewing'.

Any wonder that the 'audiophile' is something of a joke?
  #13   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default New psudo ABX ?

On Mar 30, 11:28 am, "Harry Lavo" wrote:
"bob" wrote in message

...

On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use ABX
with NO training?


Nobody *needs* training to take an ABX test. The purpose of training
is to heighten the subject's ability to hear the difference being
tested. An ABX test tells you what you can hear right now, whether
you've had training or not. In particular, if someone claims he can
hear a difference between A and B, then he should need no training at
all in order to demonstrate that ability in a blind test. That he
usually can't do so merely proves he's a phony.

The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing,


This is also false.


Wishful thinking.


That ABX was developed to test codecs?

where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are
listening
for.


Open-end evaluation of audio gear does not work this way. The human
brain
tries to relate the sound to a real sound, and doesn't even know "what"
to
listen for...timbre, soundspace, subtle distortions, etc. That seems to
be
why the result of open-ended listening via ABX results in almost
immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose.


This is idle speculation by someone who knows nothing about the
subject he's talking about. There isn't a shred of real evidence for
any of it. It's pseudoscience.


This is reality, as even people who desire to take the test often drop out
before even 15 samples for this very reason.


Your whole paragraph was pseudoscience, not just the last bit of
nonsense. You're making everything up as you go along, Harry. As for
subjects dropping out from "listening fatigue," I can cite you
numerous published studies where this did not happen. Can you cite any
where it did?

Put simply it violates the first cardinal principle of test
design...that is to prevent any aspect of the test from intervening as a
variable.


Again, you haven't a shred of evidence that ABX or ABC/hr tests
interfere with perception in any way.


More evidence (admittedly anecdotal) than has been presented to validate
that it works for open-ended evaluation of audio components.


You don't even know what "open-ended evaluation of audio components"
means. It's just a nonsense phrase you throw into every post because
you haven't any real arguments.

ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears.


It is certainly true that it is easier to hear differences in level
and FR using test tones than using music. This has nothing to do with
any particular test. It has to do with the way the human hearing
mechanism works, and it is true no matter what listening method you
use.


To paraphrase, you haven't a shred of evidence that long term, exploratory
tests
paired with short term comparisons don't overcome this limitation. We are
talking MUSIC, after all...not white noise.


We are talking masking. Something else you appear to know nothing
about. Do you honestly believe that masking doesn't happen when you
listen to music, Harry? Yes, I believe you do.

bob

  #14   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default New psudo ABX ?

Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"William Eckle" wrote in message
om...
The classic abx listening test is a chore to setup and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference and if no
difference can be heard then the claim of amp or wire etc.
differences are moot and any subjective claims highly questionable.


Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing. The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed and
optimized specifically for codec testing, where easily identifiable
distortions can be used at various levels of impact to "train" listeners,
who then listen to the blind samples but "knowing" what they are
listening
for.


Open-end evaluation of audio gear does not work this way. The human
brain
tries to relate the sound to a real sound, and doesn't even know "what"
to
listen for...timbre, soundspace, subtle distortions, etc. That seems to
be
why the result of open-ended listening via ABX results in almost
immediate
listening fatigue...it is a totally unnatural use of the technique for this
purpose. Put simply it violates the first cardinal principle of test
design...that is to prevent any aspect of the test from intervening as a
variable.


...and then, once these 'differences' have been 'heard' to the
listener's satisfaction, it's time to see if they're real...via a
blind comparison. You just can't get around that.


Sure you can. You simply don't have to require 100% "proof" to make an
audio purchase.


Now you're moving the goalposts. I'm talking about establishing audible difference, period,
not making purchasing decisions. There was nothing about 'purchases' in the stuff you quoted.
For the 10,000th time, no one says anyone 'has' to have 100% proof of audible difference *to
make a purchase*. But when people ask (perhaps in relation to a purchase, perhaps not)
whether one piece of gear SOUNDS 'better' than another (implying difference), or assert same,
they are making an assumption that either stands or falls based on evidence. And 'evidence'
from sighted comparison is inevitably prone to bias, whether the comparison is 'open-ended' or
not. If you're happy with that, fine, just don't act like you've got more than '50%' proof
for your claim. If you do want to determine if *you've* heard a *real* difference, then
you cannot get around the need for a blind comparison.

ABX can be used for crude audio measures....volume, frequency shifts in
white noise, etc. As soon as it comes to listening to music, sensitivity
decreases or disappears. This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their site).


Nonsense, Harry. ABX has been used to detect the difference between .wav
source and 320 kbps
mp3s -- which are *extremely* difficult to tell apart. Sensitivity is
certainly reduced for
comparisons (sighted or blind) when music is used RATHER THAN TEST TONES.
This is not a
function of sighted versus blind, and the Greenhill tests do not say
otherwise.


I didn't say it did. What I said was that the Greenhill test clearly shows
that the test is sensitive to differences in levels and in white noise,
while remaining insensitive with choral music as the source. It says nothing
about other tests, pro or con. But it does have relevance for ABX.


All audio comparisons, sighted or not, are affected by significant changes in level.
And it is well-known from psychoacoustics that test signals can reveal differences that are
masked by more complex signals. So why single out ABX? These effects operate generally and
are well-known to people who advocate scientific standards of 'audio reviewing'. If there is
ignorance of these factors, it's more likely to be on the part of the sighted-paradigm
advocates.

Here is a new way to perform the same kind of test with greatly
simplified methods that create the context where a possible
difference can be shown to exist by listening alone and also
scientifically valid.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.


I think others will argue that it is likely you will be able to measure
some difference, but you won't be able to hear it. And others will claim
they can hear it. Perhaps an ABX test under this circumstance with some
training might be of some value. But to do all this in a well controlled
test is, again, a research project and not an amateur, home-oriented
test.


The main value of a difference test is where it produces a null (and by
null,
is meant residual levels below the practical or theoretical limits of
human
hearing). There, subjectivists would have to invoke some new sort of
audible effect,
akin to homeopathy, where vanishingly small amounts of 'medicine' are said
to
effect cures. I wouldn't put it past them.


It must be nice to be so certain. In the Pro Audio Digest thread of March
that I read at J. Junes' suggestion last night, one of the correspondents
(screened for participation by highly regarded practical professional
engineers) anecdotally told of a blind test whereby a friend with a very
high level of success could pick out two identical samples (that met the
null test and proved bit-identical) on two different brands of gold-plated
CD disks. He wasn't challenged.


Perhaps he should have been. But then again, what's one more anecdotal report, no doubt
lacking details of test conditions, number of trials, statistics? I've seen 'trained
professionals' in audio and in engineering (as well as 'audio engineers') make some outright
nonsensical claims before.

Moreover, the general consensus of the group (which included Jim Johnson and
Dan Lavry) was that the CD cutoff was too low and artifacts were often
audible as a result, including pre-ringing and/or phase shift, and that 64K
was the necessary minimum to avoid even the possibility of problems.


Doesn't sound like an *inherently* audible result of the technology to me.
I'm sure both Lavry and JJ are aware that technology can be implemented
well, or suboptimally.

Please
note that this directly contradicts Arny's recent assertions here that CD's
are audibly at the level of ultimate transparency(1), and that the
66khz/20bit recommendation of the Japanese hi-rez group in the mid-90's was
nothing but marketing-driven propaganda(2).


No, it doesn't contradict (1), as long as JJ and Lavry used a qualifier like 'often' and Arny
didn't use the qualifier 'always' to mean that 'a CD is always transparent to its
source'. Which it appears he didn't.

Do Lavry and JJ claim that 16/44 simply *cannot* be transparent to source?

What I took away after following the discussion, which was one of two major
topics for the month, was that real engineers are very aware of what is NOT
known (including standards of audibility and a means of simulating what the
ear really can hear)...and that the folks here and elsewhere who are so sure
they know the truth are not real scientists or engineers. On the other
hand, some of us have been saying that for years and shouldn't be
surprised. But it is nice to have the real pros reinforce the opinion.


Hilarious that you're now becoming a fan of Lavry and JJ. I've been reading their stuff for
some years now.

Btw, how did you access the Pro list? I couldn't find it.

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
  #15   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default New psudo ABX ?

Harry Lavo wrote:
"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...


The classic abx listening test is a chore to setup and
perform.


Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today,


* This means to determine which bit-rate of MP3 or other codec to choose.
Very hi-fi.


Have you ever tried to tell a well-encoded high-bitrate MP3 from source?

and a useful approximation for a wide
variety of tests. It's easier than going to a store and doing a careful
job
of comparing components using one of the house's systems.


* Yeah, take that amplifier and shove it into your PC. Ditto the tuner.
Maybe the CD player. Wonderful way to listen to components.


I suspect Arny is referring to ABX tests generally, not ABX software -- the latter
is a tool for comparing sound files.

Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


* If such difference is strong enough to overcome PC digitalization.


'PC digitalization' being different from 'digitalization' exactly how?

Simply not true of ABX, since ABX was developed and popularized before the
audio world even knew what a codec in the modern sense was.


Tell that to the ITU standards committee who state otherwise.


ITU claims ABX was developed to test audio codecs?? How are they
defining 'codec'?

You don't ... that's partly the point. Open-ended evaluation is called that
because you don't start with preconceptions or "knowing" differences...


Of course you do, Harry -- that's what bias is. These 'preconceptions' may
be conscious, or not. But they're there. Just seeing that two pieces of gear
*look* different can be enough to induce 'preconceptions' of audible difference.
Just 'knowing' that you are going to listen to more than one thing, is enough.

And around and around we go....

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason


  #16   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default New psudo ABX ?

Arny Krueger wrote:
"Steven Sullivan" wrote in message

Arny Krueger wrote:
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...


The classic abx listening test is a chore to setup and
perform.


Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today, and
a useful approximation for a wide variety of tests.
It's easier than going to a store and doing a careful
job of comparing components using one of the house's
systems.


Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


There ain't no such thing. The world is full of
differences that are measurable but not audible at all.
So, one caveat is that the difference has to be audible,
and not all differences are audible. Secondly, the
audibility of many differences is contingent on a wide
variety of influences, two of the stronger ones being
the sensitivity of the listener, and some properties of
the music being used for the comparison that may be
non-obvious.


However, it only takes one positive ABX test run to
demonstrate that
the person who took the test could differentiate the
sound of A and B. (Assuming of course that the test was
set up properly.)


You have to be careful. Every once in a while people win on 100:1 shots.


True. There's no 'absolute' proof. But if the one person did enough
trials, the chance of a 'miracle' is thereby reduced. I include
a large trial number in 'proper set up', if we're going to put all
our bets on one person.

And it only takes
one person demonstrably hearing a difference, to 'prove'
that the two things sound different. That's not the same
as saying 'anyone' will be able to hear
the difference, of course.


We never found any "golden ears" with ABX. We did find tin ears. We did find
people who got lucky, but couldn't duplicate their short-run results with
longer runs.


Ah, probability....

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
  #17   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default New psudo ABX ?

bob wrote:
On Mar 30, 11:28 am, "Harry Lavo" wrote:
"bob" wrote in message

...

On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use ABX
with NO training?


Nobody *needs* training to take an ABX test. The purpose of training
is to heighten the subject's ability to hear the difference being
tested. An ABX test tells you what you can hear right now, whether
you've had training or not. In particular, if someone claims he can
hear a difference between A and B, then he should need no training at
all in order to demonstrate that ability in a blind test. That he
usually can't do so merely proves he's a phony.


Well, a phony knows he's lying. Assuming sincerity, I'd say it proves he's 'probably wrong'
at best, or 'probably deluded' at worst.

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
  #18   Report Post  
Posted to rec.audio.high-end
[email protected] mpresley@earthlink.net is offline
external usenet poster
 
Posts: 102
Default New psudo ABX ?

Getting back to the original article, isn't this similar to the Bob
Carver "trick" he used a long time ago when he attempted to match
the "sound" of his solid state amplifier to a C-J tube amp for the boys at
Stereophile, and then a Levinson ML-2 in a production run of one model of
his amplifiers? I'd have to investigate, but it kind of seems similar in a
way.

mp
  #20   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default New psudo ABX ?

On Mar 30, 11:32 am, "Harry Lavo" wrote:

I didn't say it did. What I said was that the Greenhill test clearly shows
that the test is sensitive to differences in levels and in white noise,
while remaining insensitive with choral music as the source. It says nothing
about other tests, pro or con. But it does have relevance for ABX.


Denial of the existence of masking noted.

snip

It must be nice to be so certain. In the Pro Audio Digest thread of March
that I read at J. Junes' suggestion last night, one of the correspondents
(screened for participation by highly regarded practical professional
engineers) anecdotally told of a blind test whereby a friend with a very
high level of success could pick out two identical samples (that met the
null test and proved bit-identical) on two different brands of gold-plated
CD disks. He wasn't challenged.


And I notice you're not challenging him, either.

Moreover, the general consensus of the group (which included Jim Johnson and
Dan Lavry) was that the CD cutoff was too low and artifacts were often
audible as a result, including pre-ringing and/or phase shift, and that 64K
was the necessary minimum to avoid even the possibility of problems. Please
note that this directly contradicts Arny's recent assertions here that CD's
are audibly at the level of ultimate transparency(1), and that the
66khz/20bit recommendation of the Japanese hi-rez group in the mid-90's was
nothing but marketing-driven propaganda(2).


Yes, it's long been recognized by a lot of technical types that
16/44.1 might not be quite enough for perfect transparency. And how
did they figure this out? Blind tests, Harry.

What I took away after following the discussion, which was one of two major
topics for the month, was that real engineers are very aware of what is NOT
known (including standards of audibility and a means of simulating what the
ear really can hear)...and that the folks here and elsewhere who are so sure
they know the truth are not real scientists or engineers. On the other
hand, some of us have been saying that for years and shouldn't be
surprised. But it is nice to have the real pros reinforce the opinion.


Harry slays his straw man. Well, not really. Harry slays his straw man
while citing the same tests he claims his straw men are erroneously
citing. Harry really ought to try to get his story straight.

bob


  #21   Report Post  
Posted to rec.audio.high-end
Gary Eickmeier Gary Eickmeier is offline
external usenet poster
 
Posts: 1,449
Default New psudo ABX ?

William Eckle wrote:
The classic abx listening test is a chore to setup and perform. Its
clear strength is the ability to determine if a difference, any
difference, can be shown to exist by listening alone in a
scientifically valid way. Substitute some element for another which
is thought to possibly make an audible difference and if no
difference can be heard then the claim of amp or wire etc.
differences are moot and any subjective claims highly questionable.

Here is a new way to perform the same kind of test with greatly
simplified methods that create the context where a possible
difference can be shown to exist by listening alone and also
scientifically valid.

http://theaudiocritic.com/blog/index...Id=35&blogId=1

Don't miss the link at the end of this short article that describes
the software and methods used.


This is in no way a substitute for a valid listening test. It is a
measurement, a technical curiosity and no more. A given result in this
test may or may not be audible, the answer to which can only be
determined by... a listening test!

Gary Eickmeier
  #22   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default New psudo ABX ?

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...

The classic abx listening test is a chore to setup and
perform.

Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today,


* This means to determine which bit-rate of MP3 or other codec to choose.
Very hi-fi.


Have you ever tried to tell a well-encoded high-bitrate MP3 from source?


I have listened to 192k MP3 on the main system on which I normally play the
CDs, and after five minutes my ears bled.


and a useful approximation for a wide
variety of tests. It's easier than going to a store and doing a
careful
job
of comparing components using one of the house's systems.


* Yeah, take that amplifier and shove it into your PC. Ditto the tuner.
Maybe the CD player. Wonderful way to listen to components.


I suspect Arny is referring to ABX tests generally, not ABX software --
the latter
is a tool for comparing sound files.


His exact quote in its totality, as above:

AK: Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today, and
a useful approximation for a wide variety of tests.
It's easier than going to a store and doing a careful
job of comparing components using one of the house's
systems.


Sounds like a PC test to me.


Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


* If such difference is strong enough to overcome PC digitalization.


'PC digitalization' being different from 'digitalization' exactly how?

Simply not true of ABX, since ABX was developed and popularized before
the
audio world even knew what a codec in the modern sense was.


Tell that to the ITU standards committee who state otherwise.


ITU claims ABX was developed to test audio codecs?? How are they
defining 'codec'?


It was developed to help test telephone transmission techniques, which were
early forerunners of what we nowadays call audio codecs.


You don't ... that's partly the point. Open-ended evaluation is called
that
because you don't start with preconceptions or "knowing" differences...


Of course you do, Harry -- that's what bias is. These 'preconceptions'
may be conscious, or not. But they're there. Just seeing that two pieces
of gear *look* different can be enough to induce 'preconceptions' of
audible difference.
Just 'knowing' that you are going to listen to more than one thing, is
enough.


You miss my point completely. Whether deliberately or not I do not know.
Perhaps if you hadn't cut two-thirds of it out.....


And around and around we go....


And up and down go the horses....

  #23
Posted to rec.audio.high-end
Harry Lavo
Default New psudo ABX ?

"bob" wrote in message
...
On Mar 30, 11:28 am, "Harry Lavo" wrote:
"bob" wrote in message

...

On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon
have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.


This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use
ABX with NO training?


Nobody *needs* training to take an ABX test. The purpose of training
is to heighten the subject's ability to hear the difference being
tested. An ABX test tells you what you can hear right now, whether
you've had training or not. In particular, if someone claims he can
hear a difference between A and B, then he should need no training at
all in order to demonstrate that ability in a blind test. That he
usually can't do so merely proves he's a phony.


You ignore the fact that Harman rejects over 40% of people as simply unable
to be consistent in the use of an ABX test. Since obviously 100% of people
can listen to two components playing music, this means that USING THE ABX
TEST many people are inadequate to use the test. It doesn't mean they are
phonies. Nor does it mean they couldn't hear a difference using other tests.
Or using no tests. All you can conclude is that USING THE ABX TEST, they
can hear no difference with any consistency. Even when there is one. So in
using the test without knowing if there is a difference, if one "flunks" the
test, then one does not know whether that is because there is no difference,
or because one is not good at ABX'ing. And this cannot be determined without
pre-screening. Your oversimplifications undermine any legitimacy to your
argument.
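The statistical point under dispute here can be made concrete. A minimal sketch (the trial counts are illustrative only, not taken from any of the posts) of how an ABX score is judged against chance with a one-tailed binomial test:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-tailed binomial p-value: the probability of scoring at least
    `correct` out of `trials` by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct would be unlikely by guessing alone...
print(round(abx_p_value(12, 16), 3))  # 0.038
# ...but 9 of 16 is entirely consistent with guessing.
print(round(abx_p_value(9, 16), 3))   # 0.402
```

A null result says only that the score is consistent with guessing; by itself it cannot separate "no audible difference" from "this listener could not hear it under these conditions," which is why the pre-screening question matters.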


The test is as much a test of the listener as it is of
the items under test. Moreover, both ABX and ABC/hr were developed
and
optimized specifically for codec testing,


This is also false.


Wishful thinking.


That ABX was developed to test codecs?


Telephone transmission compression, yes. A forerunner.


where easily identifiable
distortions can be used at various levels of impact to "train"
listeners,
who then listen to the blind samples but "knowing" what they are
listening
for.


Open-ended evaluation of audio gear does not work this way. The human
brain tries to relate the sound to a real sound, and doesn't even know
"what" to listen for...timbre, soundspace, subtle distortions, etc. That
seems to be why open-ended listening via ABX results in almost immediate
listening fatigue...it is a totally unnatural use of the technique for
this purpose.


This is idle speculation by someone who knows nothing about the
subject he's talking about. There isn't a shred of real evidence for
any of it. It's pseudoscience.


This is reality, as even people who desire to take the test often drop
out
before even 15 samples for this very reason.


Your whole paragraph was pseudoscience, not just the last bit of
nonsense. You're making everything up as you go along, Harry. As for
subjects dropping out from "listening fatigue," I can cite you
numerous published studies where this did not happen. Can you cite any
where it did?


Anecdotally, in several efforts at testing here on usenet (not in this
forum) by people actively attempting to use the technique (not
subjectivists), yes. Can I cite the posts, no....some of them were
seven-eight years ago.

Put simply, it violates the first cardinal principle of test
design...that is, to prevent any aspect of the test from intervening as
a variable.


Again, you haven't a shred of evidence that ABX or ABC/hr tests
interfere with perception in any way.


More evidence (admittedly anecdotal) than has been presented to validate
that it works for open-ended evaluation of audio components.


You don't even know what "open-ended evaluation of audio components"
means. It's just a nonsense phrase you throw into every post because
you haven't any real arguments.


That is pure baloney. I have defined it over and over and have done so just
recently. You just don't want to acknowledge what it is, for there is no
way it can be accommodated in an ABX test.


ABX can be used for crude audio measures....volume, frequency shifts
in
white noise, etc. As soon as it comes to listening to music,
sensitivity
decreases or disappears.


It is certainly true that it is easier to hear differences in level
and FR using test tones than using music. This has nothing to do with
any particular test. It has to do with the way the human hearing
mechanism works, and it is true no matter what listening method you
use.


To paraphrase, you haven't a shred of evidence that long term,
exploratory
tests
paired with short term comparisons don't overcome this limitation. We
are
talking MUSIC, after all...not white noise.


We are talking masking. Something else you appear to know nothing
about. Do you honestly believe that masking doesn't happen when you
listen to music, Harry? Yes, I believe you do.


I believe masking happens when listening for differences in an ABX test. I
also believe "sharpening" happens when listening in extended listening
tests, if one is a careful listener. Music is not just "sound", a fact
that you and others ignore time and time again, at the expense of
undermining your own arguments.

  #24
Posted to rec.audio.high-end
Harry Lavo
Default New psudo ABX ?

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon
have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.

This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use
ABX with NO training?


What does it say about the average music listener's ability to correctly
identify difference in a sighted comparison? There, one is compounding
a lack of training, with a surfeit of bias.


What it does is rely on the listener's ability to judge the natural sound of
music. If the person is good at it, he will probably reach a valid
conclusion. If he has a tin ear, he probably will reach a "no difference"
or "random difference" conclusion, both of which are okay for him. But he
will judge based on music heard in a more natural way without any need to be
"trained". And he can listen multidimensionally, rather than as a test
subject.


Yet that's the regimen used most commonly in audio equipment 'reviewing'.

Any wonder that the 'audiophile' is something of a joke?


And any wonder why the attempts to impose sterile and artificial tests are
mostly just simply ignored?

  #25
Posted to rec.audio.high-end
Arny Krueger
Default New psudo ABX ?

"Harry Lavo" wrote in message

"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo"
wrote:
Unfortunately, this is not true. The researchers at
Harman Kardon have found that over 40% of people, even
with careful training, cannot reliably
distinguish even known differences in their training,
and have to be dropped
from any testing.


This is false. Harman wasn't even training people to
"distinguish differences." All of its subjects could
*distinguish* the differences; what they couldn't do was
correlate those differences to specific variations in
frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high.
Anyone who's read Sean Olive's work would understand
this (assuming they wanted to understand it).


So what does this say about the average music listener's
ability to use ABX with NO training?


What does this say about anybody's ability to reliably hear differences with
or without training?

As usual we see a post that tries to blame the problems of the world on ABX.
One core problem is the horrific propensity of sighted evaluations to
produce false positives. If the number of positive results used to
judge is based on a listening methodology that spawns false positives like
pregnant Pacific salmon produce eggs, what can compete?
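The compounding behind this false-positive complaint is simple to sketch (the 10% per-audition rate below is purely hypothetical, chosen only to show the growth):

```python
def chance_of_false_positive(per_test_rate: float, n_comparisons: int) -> float:
    """Probability that at least one of n independent sighted comparisons
    yields a spurious "I hear a difference" report, given a fixed
    per-comparison false-positive rate."""
    return 1 - (1 - per_test_rate) ** n_comparisons

# At a (hypothetical) 10% bias-driven false-positive rate per audition,
# a spurious positive becomes near-certain over twenty auditions.
print(round(chance_of_false_positive(0.10, 20), 2))  # 0.88
```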

The test is as much a test of the listener as it is of
the items under test.


True of any listening test involving subtle differences, or even differences
that aren't glaringly obvious.

Moreover, both ABX and ABC/hr
were developed and optimized specifically for codec
testing,


This is also false.


Wishful thinking.


No, factually wrong. ABX was developed and popularized before testing codecs
was a serious business. Back in the middle 1970s, MP3 was 20 or more years
in the future.

where easily identifiable
distortions can be used at various levels of impact to
"train" listeners, who then listen to the blind samples
but "knowing" what they are listening
for.


Open-ended evaluation of audio gear does not work this
way. The human brain tries to relate the sound to a real
sound, and doesn't even know "what" to listen
for...timbre, soundspace, subtle distortions, etc. That
seems to be why open-ended listening via ABX results in
almost immediate listening fatigue...it is a totally
unnatural use of the technique for this purpose.


This is idle speculation by someone who knows nothing
about the subject he's talking about. There isn't a
shred of real evidence for any of it. It's pseudoscience.


Especially given that "open-ended" listening is whatever Harry wants it to
be today.

This is reality, as even people who desire to take the
test often drop out before even 15 samples for this very
reason.


No foundation has been laid for this claim.

Put simply, it violates the first cardinal principle of
test design...that is, to prevent any aspect of the test
from intervening as a variable.


Again, you haven't a shred of evidence that ABX or
ABC/hr tests interfere with perception in any way.


More evidence (admittedly anecdotal) than has been
presented to validate that it works for open-ended
evaluation of audio components.


Key words: "admittedly anecdotal". IOW there's nothing but unsubstantiated
stories to back it up. As other posters have said, urban legends, but in
this case urban legends known to only one person.

ABX can be used for crude audio measures....volume,
frequency shifts in white noise, etc. As soon as it
comes to listening to music, sensitivity decreases or
disappears.


It is certainly true that it is easier to hear
differences in level and FR using test tones than using
music. This has nothing to do with any particular test.
It has to do with the way the human hearing mechanism
works, and it is true no matter what listening method
you use.


To paraphrase, you haven't a shred of evidence that long
term, exploratory tests
paired with short term comparisons don't overcome this
limitation. We are talking MUSIC, after all...not white
noise.


What is an exploratory test?

I have an idea of what an exploratory test is, and in no way does it
conflict with the use of ABX, ABC/hr or whatever.

This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their
site).


Where you won't find it. But if anyone wants to read
what Greenhill actually found (rather than someone's
re-invention of it), e-mail me and I can send you the
article.


Agreed, I believe I have the text of those tests on my hard drive.

And if you wish to email me I can send you an accurate
and complete Excel table of the results.


Seems like rehashing just one test done about 20 years ago is pretty
senseless, anyhow.

Harry is peddling pure pseudoscience here.


You've heard from a true ABX believer.


Who might that be?

Ask for the
validation test that this technique, developed very
specifically for codec distortions, works as the best
tool for open-ended evaluation of audio components.


This is nonsense. ABX predates codec tests by about 20 years. I won't say
that there weren't codecs way back then, but the ones that provided
meaningful amounts of compression had so many audible faults that they were
only proposed for use with telephones, not hifis.



  #26
Posted to rec.audio.high-end
Arny Krueger
Default New psudo ABX ?

"Harry Lavo" wrote in message

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...

The classic abx listening test is a chore to setup
and perform.

Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today,


* This means to determine which bit-rate of MP3 or
other codec to choose. Very hi-fi.


Have you ever tried to tell a well-encoded high-bitrate
MP3 from source?


I have listened to 192k MP3 on the main system on which
I normally play the CDs, and after five minutes my ears
bled.


The answer would thus be "no", as the question presumes bias controls,
and none are in evidence. A good 192K codec can be a suitable challenge for a
listener, once the classic prejudices of audiophilia are kept under control.

Let Harry provide a CD track of his choosing. It will be returned to him
repeated 16 times, some repetitions precise copies, and some repetitions
round-tripped through 192 kbps MP3-land. Harry will tell us which are the "ear
bleed" tracks.

and a useful approximation for a wide
variety of tests. It's easier than going to a store
and doing a carefuljob
of comparing components using one of the house's
systems.


* Yeah, take that amplifier and shove it into your PC.


A ludicrous and dismissive comment.

Ditto the tuner. Maybe the CD player. Wonderful way to
listen to components.


Proof?

I suspect Arny is referring to ABX tests generally, not
ABX software -- the latter
is a tool for comparing sound files.


His exact quote in its totality, as above:


AK: Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today, and
a useful approximation for a wide variety of tests.
It's easier than going to a store and doing a careful
job of comparing components using one of the house's
systems.


Sounds like a PC test to me.


PCs are widely used for audio production purposes. Perhaps Harry has an "ear
bleed" remark to go with all of the recordings he already has that were,
unbeknownst to him, produced on a PC? ;-)

Its clear strength is the ability to determine
if a difference, any difference, can be shown to
exist by listening alone in a scientifically valid
way.


* If such difference is strong enough to overcome PC
digitalization.


'PC digitalization' being different from
'digitalization' exactly how?


Harry has no response - he must be agreeing with the obvious point that
there need be no difference between PC digitization and other forms of
digitization.

Interesting that Harry chooses a word whose primary meaning is:

"To administer digitalis in a dosage sufficient to achieve the maximum
therapeutic effect without producing toxic symptoms."

Simply not true of ABX, since ABX was developed and
popularized before the
audio world even knew what a codec in the modern sense
was.


Tell that to the ITU standards committee who state
otherwise.


ITU claims ABX was developed to test audio codecs?? How
are they defining 'codec'?


It was developed to help test telephone transmission
techniques, which were early forerunners of what we
nowadays call audio codecs.


I don't know about this.

You don't ... that's partly the point. Open-ended
evaluation is called that
because you don't start with preconceptions or
"knowing" differences...


Of course you do, Harry -- that's what bias is. These
'preconceptions' may be conscious, or not. But they're
there. Just seeing that two pieces of gear *look*
different can be enough to induce 'preconceptions' of
audible difference.
Just 'knowing' that you are going to listen to more than
one thing, is enough.


Harry also did not respond to this in any meaningful way.

  #27
Posted to rec.audio.high-end
mpresley@earthlink.net
Default New psudo ABX ?

Gary Eickmeier wrote:

This is in no way a substitute for a valid listening test. It is a
measurement, a technical curiosity and no more. A given result in this
test may or may not be audible, the answer to which can only be
determined by... a listening test!


What you say is true, strictly speaking. You are just describing the idea
that all empirical phenomena are presumed to be contingent. But
practically it is not so relevant. I can use a sort of paraphrase that
actually came from The Audio Critic many years ago.

The idea was that if you wanted to investigate fast animals you'd probably
want to check out the gazelle, a couple of the big predator cats, a
thoroughbred race horse and so forth. You'd probably not waste your time
investigating a pig, or clocking an opossum. On the other hand, if someone
credible told you that they'd observed a fast pig, it might be a different
matter and out of curiosity you might want to check it out.

It is the same here. With this test, if the measurements show no (or very
little) difference, and upon further investigation by listening you then
conclude that whatever differences you actually measured are
indistinguishable in your listening test, then you can rightly conclude and
have grounds for certainty that in the future audible differences are not
going to manifest below this threshold. It is just reasoning by induction.

mp
  #28
Posted to rec.audio.high-end
bob
Default New psudo ABX ?

On Mar 31, 10:23 am, "Harry Lavo" wrote:

You ignore the fact that Harman rejects over 40% of people as simply unable
to be consistent in the use of an ABX test.


Well, let's see a reference for this, please. Harman certainly
screened lots of people out, but not because they were "unable" to
"use" any test. Harman screened them out because their hearing wasn't
good enough for the research Harman was doing. If Harman used ABX
tests at all, it was because they know ABX tests WORK. Your
tendentious misinterpretation of Harman's work is astounding.

Let's assume, just for the sake of argument, that there really are
people who are "unable to use ABX" (as opposed to "unable to get the
results Harry wants"). Why would Harman screen them out? Harman wasn't
using ABX tests in its research. Why would it care whether people
could "use" them or not?

snip

I believe masking happens when listening for differences in an ABX test. I
also believe "sharpening" happens when listening in extended listening
tests, if one is a careful listener. Music is not just "sound", a fact
that you and others ignore time and time again, at the expense of
undermining your own arguments.


Here in the reality-based community, Harry, masking happens all the
time, no matter how you're listening. But in order to hold on to your
baseless beliefs about ABX tests, you have to deny that masking occurs
when listening to music. That's how far out of the scientific world
you've stepped, Harry. Your arguments resemble those of the
Creationist who has to deny carbon dating and genetic mutation.

bob
  #29
Posted to rec.audio.high-end
Steven Sullivan
Default New psudo ABX ?

Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"William Eckle" wrote in message
om...

The classic abx listening test is a chore to setup and
perform.

Depends. PCABX is an exact test for many kinds of listening tests that
people are interested in today,


* This means to determine which bit-rate of MP3 or other codec to choose.
Very hi-fi.


Have you ever tried to tell a well-encoded high-bitrate MP3 from source?


I have listened to 192k MP3 on the main system on which I normally play the
CDs, and after five minutes my ears bled.


And of course, you weren't listening blind, right?

If you were, that must've been an extremely poorly made MP3. I doubt
you could tell the ones I make apart from source.


and a useful approximation for a wide
variety of tests. It's easier than going to a store and doing a
careful
job
of comparing components using one of the house's systems.


* Yeah, take that amplifier and shove it into your PC. Ditto the tuner.
Maybe the CD player. Wonderful way to listen to components.


I suspect Arny is referring to ABX tests generally, not ABX software --
the latter
is a tool for comparing sound files.


His exact quote in its totality, as above:


AK: Depends. PCABX is an exact test for many kinds of
listening tests that people are interested in today, and
a useful approximation for a wide variety of tests.
It's easier than going to a store and doing a careful
job of comparing components using one of the house's
systems.


Sounds like a PC test to me.


True, I don't see how one can easily compare components
using PCABX, *if* they aren't among the data set Arny has provided.

Its clear strength is the ability to determine
if a difference, any difference, can be shown to exist
by listening alone in a scientifically valid way.


* If such difference is strong enough to overcome PC digitalization.


'PC digitalization' being different from 'digitalization' exactly how?

Simply not true of ABX, since ABX was developed and popularized before
the
audio world even knew what a codec in the modern sense was.


Tell that to the ITU standards committee who state otherwise.


ITU claims ABX was developed to test audio codecs?? How are they
defining 'codec'?


It was developed to help test telephone transmission techniques, which were
early forerunners of what we nowadays call audio codecs.


You don't say?

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason
  #30
Posted to rec.audio.high-end
Steven Sullivan
Default New psudo ABX ?

Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon
have
found that over 40% of people, even with careful training, cannot
reliably
distinguish even known differences in their training, and have to be
dropped
from any testing.

This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).


So what does this say about the average music listener's ability to use
ABX with NO training?


What does it say about the average music listener's ability to correctly
identify difference in a sighted comparison? There, one is compounding
a lack of training, with a surfeit of bias.


What it does is rely on the listener's ability to judge the natural sound of
music. If the person is good at it, he will probably reach a valid
conclusion. If he has a tin ear, he probably will reach a "no difference"
or "random difference" conclusion, both of which are okay for him. But he
will judge based on music heard in a more natural way without any need to be
"trained". And he can listen multidimensionally, rather than as a test
subject.


Your argument presumes what it needs to prove.

Yet that's the regimen used most commonly in audio equipment 'reviewing'.

Any wonder that the 'audiophile' is something of a joke?


And any wonder why the attempts to impose sterile and artificial tests are
mostly just simply ignored?


The only place they should be 'imposed' is in the context of professionally
reviewing the sound...where the stated intent is to inform the consumer (though
the actual intent may be to sell advertising space).

That they are 'ignored' there is a testament to the bankruptcy of
professional 'audiophile' journalism.

('Ignored' is in quotes because in fact, magazines like Stereophile
actually print anti-DBT screeds).

___
-S
"As human beings, we understand the world through simile, analogy,
metaphor, narrative and, sometimes, claymation." - B. Mason


  #31
Posted to rec.audio.high-end
Harry Lavo
Default New psudo ABX ?

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo"
wrote:
Unfortunately, this is not true. The researchers at
Harman Kardon have found that over 40% of people, even
with careful training, cannot reliably
distinguish even known differences in their training,
and have to be dropped
from any testing.

This is false. Harman wasn't even training people to
"distinguish differences." All of its subjects could
*distinguish* the differences; what they couldn't do was
correlate those differences to specific variations in
frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high.
Anyone who's read Sean Olive's work would understand
this (assuming they wanted to understand it).


So what does this say about the average music listener's
ability to use ABX with NO training?


What does this say about anybody's ability to reliably hear differences
with
or without training?


It doesn't say anything about other tests...what it does say is 40%+ of
people have difficulty getting a valid result using ABX even when there are
known subtle differences.


As usual we see a post that tries to blame the problems of the world on
ABX.
One core problem is the horrific propensity of sighted evaluations to
produce false positives. If the number of positive results used to
judge is based on a listening methodology that spawns false positives like
pregnant Pacific salmon produce eggs, what can compete?

The test is as much a test of the listener as it is of
the items under test.


True of any listening test involving subtle differences, or even
differences
that aren't glaringly obvious.


Wrong for two reasons.

First, it is a particularly bad problem with ABX because that difficulty
creates a false obstacle...e.g., in Greenhill's test, suppose 40% of the
testers who heard no difference had been thrown out as incompetent testers.
The results among those remaining would have looked less like chance,
wouldn't they? Many other tests don't require throwing out testers to be
valid...they have different ways of dealing with the issue.

Second, the problem is test specific. It is quite possible that those
people would have no problem doing a blind A-B preference test. Or doing a
monadic or protomonadic semantic scaling test. This gets at the stress
factor that is seemingly inherent in substituting an identification test for
a preference test or a "liking test" when it comes to music. And this in
turn gets at the fact that identification tests per se reduce emotional
involvement to zip. That's one reason ABC/hr is a better technique than
ABX, although it is still a quasi-id test. It at least allows the sound of
the real world to be a factor.
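The arithmetic behind the exclusion argument above is easy to check. A sketch (all accuracy figures hypothetical: listeners who genuinely hear the difference answering correctly 70% of the time, the rest guessing at 50%) of what pooling non-discriminating listeners does to a panel's aggregate score:

```python
def pooled_accuracy(n_hearers: int, n_guessers: int, p_hear: float = 0.7) -> float:
    """Expected fraction of correct ABX answers when listeners who hear
    the difference (accuracy p_hear) are pooled with pure guessers (0.5)."""
    n = n_hearers + n_guessers
    return (n_hearers * p_hear + n_guessers * 0.5) / n

print(pooled_accuracy(6, 0))  # all discriminators: score stays at p_hear
print(pooled_accuracy(6, 4))  # 40% guessers drag the pooled score toward 0.5
```

Whether such listeners should be screened out before pooling, or whether the pooled score is itself the answer, is precisely the disagreement in this thread.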


Moreover, both ABX and ABC/hr
were developed and optimized specifically for codec
testing,


This is also false.


Wishful thinking.


No, factually wrong. ABX was developed and popularized before testing
codecs
was a serious business. Back in the middle 1970s, MP3 was 20 or more years
in the future.


Exactly...and compressed telephone transmissions were the forerunner of
today's codecs. In fact, as you well know, JJ moved from the former to the
latter.


where easily identifiable
distortions can be used at various levels of impact to
"train" listeners, who then listen to the blind samples
but "knowing" what they are listening
for.


Open-ended evaluation of audio gear does not work this
way. The human brain tries to relate the sound to a real
sound, and doesn't even know "what" to listen
for...timbre, soundspace, subtle distortions, etc. That
seems to be why open-ended listening via ABX results in
almost immediate listening fatigue...it is a totally
unnatural use of the technique for this purpose.


This is idle speculation by someone who knows nothing
about the subject he's talking about. There isn't a
shred of real evidence for any of it. It's pseudoscience.


Especially given that "open-ended" listening is whatever Harry wants it to
be today.


No, open-ended listening has been repeatedly defined here by me. And you
know it...there probably are a half dozen posters to this forum who could
paraphrase it.


This is reality, as even people who desire to take the
test often drop out before even 15 samples for this very
reason.


No foundation has been laid for this claim.


See below.


Put simply, it violates the first cardinal principle of
test design...that is, to prevent any aspect of the test
from intervening as a variable.


Again, you haven't a shred of evidence that ABX or
ABC/hr tests interfere with perception in any way.


More evidence (admittedly anecdotal) than has been
presented to validate that it works for open-ended
evaluation of audio components.


Key words: "admittedly anecdotal". IOW there's nothing but unsubstantiated
stories to back it up. As other posters have said, urban legends, but in
this case urban legends known to only one person.


I seem to recall another poster here who refers to anecdotal evidence when
asked to back his claims. Sauce for the Goose, Arny??? (Except you don't
admit that is the basis for your evidence until pressed to the wall).


ABX can be used for crude audio measures....volume,
frequency shifts in white noise, etc. As soon as it
comes to listening to music, sensitivity decreases or
disappears.


It is certainly true that it is easier to hear
differences in level and FR using test tones than using
music. This has nothing to do with any particular test.
It has to do with the way the human hearing mechanism
works, and it is true no matter what listening method
you use.


To paraphrase: you haven't a shred of evidence that
long-term exploratory tests paired with short-term
comparisons don't overcome this limitation. We are
talking MUSIC, after all...not white noise.


What is an exploratory test?

I have an idea of what an exploratory test is, and in no way does it
conflict with the use of ABX, ABC/hr or whatever.


An exploratory test is a test whereby one is trying to get a "fix" on the
sonic characteristics of a piece of audio gear, Arny. And the whole ABX and
ABC/hr test protocols are wrong for this sort of thing.....they are not
designed for music, and trying to use them for this is simply false
"science".


This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their
site).


Where you won't find it. But if anyone wants to read
what Greenhill actually found (rather than someone's
re-invention of it), e-mail me and I can send you the
article.


Agreed, I believe I have the text of those tests on my hard drive.


Many of us have that test on our hard drives, Arny. Your point??


And if you wish to email me I can send you an accurate
and complete Excel table of the results.


Seems like rehashing just one test done about 20 years ago is pretty
senseless, anyhow.

Harry is peddling pure pseudoscience here.


You've heard from a true ABX believer.


Who might that be?


Who am I responding to here, Arny? Hint: it is not you.


Ask for the
validation test that this technique, developed very
specifically for codec distortions, works as the best
tool for open-ended evaluation of audio components.


This is nonsense. ABX predates codec tests by about 20 years. I won't say
that there weren't codecs way back then, but the ones that provided a
meaningful amount of compression had so many audible faults that they were
only proposed for use with telephones, not hifis.


Which is exactly when and why ABX was developed/codified by the ITU...and
then found not to be so good, so later replaced by ABC/hr for this purpose.
But that is hardly finding subtle flaws in a piece of equipment's dynamic
reproduction of music, now, is it Arny?

  #32   Posted to rec.audio.high-end
Harry Lavo (Posts: 735)
Default New psudo ABX ?

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at Harman Kardon have
found that over 40% of people, even with careful training, cannot reliably
distinguish even known differences in their training, and have to be
dropped from any testing.

This is false. Harman wasn't even training people to "distinguish
differences." All of its subjects could *distinguish* the differences;
what they couldn't do was correlate those differences to specific
variations in frequency response. That is a much harder task, which is
why the failure rate, even after training, was so high. Anyone who's
read Sean Olive's work would understand this (assuming they wanted to
understand it).

So what does this say about the average music listener's ability to use
ABX with NO training?

What does it say about the average music listener's ability to correctly
identify differences in a sighted comparison? There, one is compounding
a lack of training with a surfeit of bias.


What it does is rely on the listener's ability to judge the natural sound
of music. If the person is good at it, he will probably reach a valid
conclusion. If he has a tin ear, he probably will reach a "no difference"
or "random difference" conclusion, both of which are okay for him. But he
will judge based on music heard in a more natural way without any need to
be "trained". And he can listen multidimensionally, rather than as a test
subject.


Your argument presumes what it needs to prove.


Well, let's see how unreasonable or reasonable the presumptions are!

1) "What it does is rely on the listener's ability to judge the natural sound
of music." Now, we are judging the ability of audio components to reproduce
live music with verisimilitude...that is the task. Can you suggest an
alternative means of making that judgment that is better?

2) "If the person is good at it, he will probably reach a valid conclusion."
This is a truism. If a person is good at judging via the above approach,
his judgments with regard to the ability of audio components to reproduce
live music with verisimilitude will be good, do you not expect? Would the
opposite likely be true?

3) "If he has a tin ear, he probably will reach a 'no difference' or 'random
difference' conclusion, both of which are okay for him." This needs to be
broken down. Assuming you understand that by "tin ear" I mean he seems
unable to tell really good sound from average sound from very poor sound,
why would the results be anything but as stated...he either can't hear any
difference, or what he hears is essentially random. Do you have a problem
with this?

4) "But he will judge based on music heard in a more natural way without any
need to be 'trained'." In other words, however he listens to music, however
carefully or sloppily or in the background, however full of discernment or
ignorance of live instruments, that is the approach he will bring to
open-ended listening. Is there some more likely alternative I have
overlooked? No? Then let's move on.

5) "And he can listen multidimensionally, rather than as a test subject."
Meaning, if his mind drifts off to the bass line, and then notices that the
snares seem really realistic behind the bass, and that then the alto has
"air" around it, and that by golly this is a terrific reproduction of a jazz
trio, he can do so without being asked to focus on "which reproduction of
bass sounds like 'X'"...a single sound and a single focus, which is
one-dimensional.

Okay...that's the end of my dissection. What part of the above needs to be
proven to be reasonable and likely true? And versus what alternative?


Yet that's the regimen used most commonly in audio equipment
'reviewing'.

Any wonder that the 'audiophile' is something of a joke?


And any wonder why the attempts to impose sterile and artificial tests
are
mostly just simply ignored?


The only place they should be 'imposed' is in the context of
professionally
reviewing the sound...where the stated intent is to inform the consumer
(though
the actual intent may be to sell advertising space).


A magazine is not a lab. A Consumer Reports devoted to audio would be out
of business in no time...the potential market is simply not large enough.
Instead, the magazine reviewers essentially try to relate to the equipment
as audiophiles. And audiophiles learn which reviewers' judgments square with
their own, and therefore which to trust highly (and which not to). This is
how the magazines approach the subject, and this is how audiophiles use and
understand the reviews. Given the cost and complexity of a really tight and
controlled scientific test, is this not a reasonable approach for a consumer
hobbyist magazine to take? Do you suppose this might be so strongly the case
that most audio hobby magazines, no matter where they originate in the
world, seem to follow this approach?

That they are 'ignored' there is a testament to the bankruptcy of
professional 'audiophile' journalism.


Or perhaps is a testimony to the financial astuteness of their owners and
publishers.

('Ignored' is in quotes because in fact, magazines like Stereophile
actually print anti-DBT screeds).


One man's screed is another man's alternative take on things.

  #33   Posted to rec.audio.high-end
bob (Posts: 670)
Default New psudo ABX ?

On Mar 31, 3:29 pm, "Harry Lavo" wrote:

It doesn't say anything about other tests...what it does say is 40%+ of
people have difficulty getting a valid result using ABX even when there are
known subtle differences.


Still waiting for a reference.

snip

First, it is a particularly bad problem with ABX because that difficulty
creates a false obstacle. E.g., in Greenhill's test, suppose the 40% of
testers who heard no difference had been thrown out as incompetent testers.


But there's no evidence that there *were* "incompetent testers" (a
category we still haven't established the existence of).

The results among those remaining would have looked less like chance,
wouldn't they?


Given that 5 out of 6 tests had positive outcomes, it's hard to
imagine how this could be. (And excluding the 4 worst performers
overall from the one negative test wouldn't get you a positive result,
either.)

Many other tests don't require throwing out testers to be
valid...they have different ways of dealing with the issue.

Second, the problem is test specific. It is quite possible that those
people would have no problem doing a blind A-B preference test . Or doing a
monadic or protomonadic semantic scaling test. This gets at the stress
factor that is seemingly inherent in substituting an identification test for
a preference test or a "liking test" when it comes to music. And this in
turn gets at the fact that identification tests per se reduce emotional
involvement to zip.


The preceding paragraph is entirely empiricism-free. Harry has no
evidence that any other test is better than ABX. None. It is baseless
belief.

That's one reason ABC/hr is a better technique than
ABX, although it is still a quasi-id test.


ABC/hr ISN'T a better test; it's a different test with a different
purpose. You really ought to study up on these things before you write
about them, Harry.

snip

This is reality, as even people who desire to take the
test often drop out before even 15 samples for this very
reason.


No foundation has been laid for this claim.


See below.


I looked. Ain't nothin' there.

snip

An exploratory test is a test whereby one is trying to get a "fix" on the
sonic characteristics of a piece of audio gear, Arny.


That's not even a test. And, of course, there's nothing to stop
someone from doing this "exploration" before taking any test. Indeed,
people who claim to hear a difference between cables or amps or
whatever have presumably already done this "exploration." Which makes
it doubly puzzling why they can't find ANY blind test that will
confirm the results of that "exploration."

And the whole ABX and
ABC/hr test protocols are wrong for this sort of thing.....they are not
designed for music, and trying to use them for this is simply false
"science".


So if they weren't designed for music, just what source material did
the researchers use in the codec tests, Harry?

This isn't just speculation....review the
Greenhill tests in Stereo Review (search index at their
site).


Where you won't find it. But if anyone wants to read
what Greenhill actually found (rather than someone's
re-invention of it), e-mail me and I can send you the
article.


Agreed, I believe I have the text of those tests on my hard drive.


Many of us have that test on our hard drives, Arny. Your point??



And if you wish to email me I can send you an accurate
and complete Excel table of the results.


Seems like rehashing just one test done about 20 years ago is pretty
senseless, anyhow.


Harry is peddling pure pseudoscience here.


You've heard from a true ABX believer.


Who might that be?


Who am I responding to here, Arny? Hint: it is not you.



Ask for the
validation test that this technique, developed very
specifically for codec distortions, works as the best
tool for open-ended evaluation of audio components.


This is nonsense. ABX predates codec tests by about 20 years. I won't say
that there weren't codecs way back then, but the ones that provided a
meaningful amount of compression had so many audible faults that they were
only proposed for use with telephones, not hifis.


Which is exactly when and why ABX was developed/codified by the ITU...and
then found not to be so good, so later replaced by ABC/hr for this purpose.


Poppycock. Nobody "replaced" ABX. They developed a variant of ABX for
a different purpose.

bob

  #34   Posted to rec.audio.high-end
Arny Krueger (Posts: 17,262)
Default New psudo ABX ?

"Harry Lavo" wrote in message


It must be nice to be so certain. In the Pro Audio
Digest thread of March that I read at J. June's
suggestion last night, one of the correspondents
(screened for participation by highly regarded practical
professional engineers) anecdotally told of a blind test
whereby a friend, with a very high level of success,
could pick out two identical samples (which met the null
test and proved bit-identical) on two different brands
of gold-plated CD disks. He wasn't challenged.


That's because that is a well-known historical problem that has nothing to
do with resolution, per se.

Moreover, the general consensus of the group (which
included Jim Johnston and Dan Lavry) was that the CD
cutoff was too low and artifacts were often audible as a
result, including pre-ringing and/or phase shift, and
that 64K was the necessary minimum to avoid even the
possibility of problems. Please note that this directly
contradicts Arny's recent assertions here that CDs are
audibly at the level of ultimate transparency(1), and
that the 66khz/20bit recommendation of the Japanese
hi-rez group in the mid-90s was nothing but
marketing-driven propaganda(2).


Without supporting quotes, this is just another anecdote.

BTW, here's some interesting reading:

http://www.paudio.com/Pages/presenta...ity/sld001.htm
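Whatever the audibility claim is worth, "pre-ringing" itself is an uncontroversial property of linear-phase brick-wall filters and is easy to show numerically. A minimal sketch (numpy; the tap count and cutoff are illustrative choices, not any particular DAC's actual reconstruction filter):

```python
import numpy as np

fs = 44100.0              # CD sample rate
fc = 20000.0              # cutoff near the band edge (illustrative)
n_taps = 101
n = np.arange(n_taps) - (n_taps - 1) / 2

# Windowed-sinc lowpass: the textbook linear-phase brick-wall filter.
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n)
h *= np.blackman(n_taps)  # window to tame truncation ripple
h /= h.sum()              # unity gain at DC

peak = int(np.argmax(np.abs(h)))
pre = float(np.sum(h[:peak] ** 2))       # ringing energy BEFORE the main tap
post = float(np.sum(h[peak + 1:] ** 2))  # ringing energy after it

# Linear phase forces a symmetric impulse response, so exactly as much
# ringing precedes the peak as follows it: that is the pre-ringing.
print(peak, pre, post)
```

A minimum-phase design with the same magnitude response pushes all of that energy after the peak, which is the trade-off the filter/hi-rez debates argue about; whether any of it is audible through music is exactly what the thread disputes.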

  #35   Posted to rec.audio.high-end
bob (Posts: 670)
Default New psudo ABX ?

On Mar 31, 8:20 pm, "Harry Lavo" wrote:
2) "If the person is good at it, he will probably reach a valid conclusion."
This is a truism.


No, it's actually a falsehood. No matter how good someone is at "open-
ended evaluation of audio components" (whatever that is), we know that
he will frequently come to demonstrably incorrect conclusions. That's
what's wrong with "open-ended evaluation of audio
components" (whatever that is): It produces a high incidence of false
positives. But, as Steven noted, you are assuming that little problem
away.

bob


  #36   Posted to rec.audio.high-end
Arny Krueger (Posts: 17,262)
Default New psudo ABX ?

"Harry Lavo" wrote in message

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"bob" wrote in message
...
On Mar 28, 11:21 pm, "Harry Lavo" wrote:
Unfortunately, this is not true. The researchers at
Harman Kardon have found that over 40% of people, even
with careful training, cannot reliably distinguish even
known differences in their training, and have to be
dropped from any testing.

This is false. Harman wasn't even training people to
"distinguish differences." All of its subjects could
*distinguish* the differences; what they couldn't do
was correlate those differences to specific variations
in frequency response. That is a much harder task,
which is why the failure rate, even after training,
was so high. Anyone who's read Sean Olive's work would
understand this (assuming they wanted to understand it).

So what does this say about the average music listener's
ability to use ABX with NO training?


What does this say about anybody's ability to reliably
hear differences with
or without training?


It doesn't say anything about other tests...what it does
say is 40%+ of people have difficulty getting a valid
result using ABX even when there are known subtle
differences.


If you can't compare ABX to other testing methods, what's your point Harry?

  #37   Posted to rec.audio.high-end
Gary Eickmeier (Posts: 1,449)
Default New psudo ABX ?

wrote:

What you say is true, strictly speaking. You are just describing the idea
that all empirical phenomena are presumed to be contingent. But
practically it is not so relevant. I can use a paraphrase that
actually came from The Audio Critic many years ago.

The idea was that if you wanted to investigate fast animals you'd probably
want to check out the gazelle, a couple of the big predator cats, a
thoroughbred race horse and so forth. You'd probably not waste your time
investigating a pig, or clocking an opossum. On the other hand, if someone
credible told you that they'd observed a fast pig, it might be a different
matter and out of curiosity you might want to check it out.

It is the same here. With this test, if the measurements show no (or very
little) difference, and upon further investigation by listening you then
conclude that whatever differences you actually measured are
indistinguishable in your listening test, then you can rightly conclude and
have grounds for certainty that in the future audible differences are not
going to manifest below this threshold. It is just reasoning by induction.
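The measurement half of that reasoning is what audio people call a null (difference) test: time-align and level-match two captures, subtract, and see how far the residual sits below the signal. A toy sketch (numpy; the tone and the injected noise are stand-ins for real device captures):

```python
import numpy as np

def null_depth_db(ref: np.ndarray, dut: np.ndarray) -> float:
    """RMS of the difference (null) signal in dB relative to the RMS
    of the reference. More negative = deeper null = closer to identical."""
    rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
    return 20.0 * np.log10(rms(ref - dut) / rms(ref))

# Toy example: a 1 kHz tone vs. the same tone plus low-level noise
# standing in for a device's residual difference.
t = np.arange(44100) / 44100.0
ref = np.sin(2 * np.pi * 1000.0 * t)
dut = ref + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

print(round(null_depth_db(ref, dut), 1))   # a null in the region of -57 dB
```

The inductive step in the paragraph above then says: if the measured null sits far below any plausible audibility threshold, differences reported later are unlikely to originate in the gear.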


It is not a pseudo ABX test. It is not even a listening test. He can
call it something else. It is certainly not "the same kind of test with
greatly simplified methods."

Gary Eickmeier
  #38   Posted to rec.audio.high-end
Harry Lavo (Posts: 735)
Default New psudo ABX ?

"bob" wrote in message
...
On Mar 31, 3:29 pm, "Harry Lavo" wrote:

It doesn't say anything about other tests...what it does say is 40%+ of
people have difficulty getting a valid result using ABX even when there
are
known subtle differences.


Still waiting for a reference.

snip


Well, if you are impatient...how about contacting Sean himself?

First, it is a particularly bad problem with ABX because that difficulty
creates a false obstacle...eg. in Greenhills test, suppose 40% of the
testers who heard no difference had been thrown out as incompetent
testers.


But there's no evidence that there *were* "incompetent testers" (a
category we still haven't established the existence of).


You certainly can't tell if they are incompetent if you haven't developed a
screening protocol to find them. Can you, Bob?


The results among those remaining would have looked less like chance,
wouldn't they?


Given that 5 out of 6 tests had positive outcomes, it's hard to
imagine how this could be. (And excluding the 4 worst performers
overall from the one negative test wouldn't get you a positive result,
either.)


There were six tests, only two of which had positive outcomes even summing
across all 11 testers.


Many other tests don't require throwing out testers to be
valid...they have different ways of dealing with the issue.

Second, the problem is test specific. It is quite possible that those
people would have no problem doing a blind A-B preference test . Or
doing a
monadic or protomonadic semantic scaling test. This gets at the stress
factor that is seemingly inherent in substituting an identification test
for
a preference test or a "liking test" when it comes to music. And this in
turn gets at the fact that identification tests per se reduce emotional
involvement to zip.


The preceding paragraph is entirely empiricism-free. Harry has no
evidence that any other test is better than ABX. None. It is baseless
belief.


Agreed that it is empiricism-free. But it is not anecdote-free. As to
other tests, where other types of testing have been used, there hasn't been
such a plethora of anecdotally reported problems. So you can view this as an
informed opinion by one who spent a good portion of his life helping design
and approve tests, albeit in a different sensory field.


That's one reason ABC/hr is a better technique than
ABX, although it is still a quasi-id test.


ABC/hr ISN'T a better test; it's a different test with a different
purpose. You really ought to study up on these things before you write
about them, Harry.


It was and is a better test for telephone transmission testing, which is
what it was developed for. And it is a better test not on my say-so, but
because it gives better (e.g. more discriminating) test results regarding
the human voice. Because, the ITU says, test subjects have a reference for
human voices in the real world. Now, interestingly, the ABC/hr test in some
regards represents a semantic differential test (somewhat similar to one of
the suggested alternatives for home evaluation of gear) more than it does
the original ABX test.
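For readers who haven't seen the protocol, the trial structure under discussion can be sketched in a few lines. This is a rough outline of a hidden-reference trial in the spirit of ABC/hr (the triple-stimulus, hidden-reference method standardized in ITU-R BS.1116 is the canonical form), not a full implementation of the standard, and the grades are hypothetical:

```python
import random

def abc_hr_trial(grade, rng):
    """One hidden-reference trial: the listener hears the open
    reference plus two stimuli, which are the reference and the item
    under test in random (blind) order. Both are graded on the 5.0
    (imperceptible) to 1.0 (very annoying) impairment scale. Returns
    the difference grade, grade(test) - grade(hidden reference);
    a negative score means the impairment was detected."""
    order = ["ref", "test"]
    rng.shuffle(order)                             # blind presentation order
    grades = {stim: grade(stim) for stim in order}
    return grades["test"] - grades["ref"]

# Hypothetical listener who hears a slight impairment in the test item:
listener = {"ref": 5.0, "test": 4.2}.get
print(abc_hr_trial(listener, random.Random(0)))    # negative difference grade
```

The feature Harry is leaning on is visible in the structure: the listener's anchor is a known reference rather than a bare "which one is X" identification, which he argues is closer to judging reproduction against a remembered real sound.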

snip

This is reality, as even people who desire to take the
test often drop out before even 15 samples for this very
reason.


No foundation has been laid for this claim.


See below.


I looked. Ain't nothin' there.


Well, it was anecdotal and you have snipped it. So you don't accept
anecdotes as even worthy of tentative credibility?

snip

An exploratory test is a test whereby one is trying to get a "fix" on the
sonic characteristics of a piece of audio gear, Arny.


That's not even a test. And, of course, there's nothing to stop
someone from doing this "exploration" before taking any test. Indeed,
people who claim to hear a difference between cables or amps or
whatever have presumably already done this "exploration." Which makes
it doubly puzzling why they can't find ANY blind test that will
confirm the results of that "exploration."


No, an exploratory test is where you move swiftly from open-ended listening
into short-snippet comparative listening. That is a form of a test...simply
not one you approve of.


And the whole ABX,
ABC/hr test prtocols are wrong for this sort of thing.....they are not
designed for music, and trying to use them for this is simply false
"science".


So if they weren't designed for music, just what source material did
the researchers use in the codec tests, Harry?


They were listening for specific distortion artifacts...that is what the
training is for. For example: here is the sound as it affects cymbal
reproduction, at various levels; at what level can you identify it? That
is not "listening to music". Not open-ended. And not even close-ended.
Instead you have been trained to listen for a specific type of non-musical
distortion.

snip, to shorten

...

Which is exactly when and why ABX was developed/codified by the ITU...and
then found not to be so good, so later replaced by ABC/hr for this
purpose.


Poppycock. Nobody "replaced" ABX. They developed a variant of ABX for
a different purpose.


They developed it specifically because participants knew what real human
voices sounded like and wanted to use that as a frame of reference for
evaluation, but ABX didn't allow that. And that test became the standard
for telephone transmission testing as a result. Notice any similarities to
the reproduction of music, Bob?

  #39   Posted to rec.audio.high-end
Harry Lavo (Posts: 735)
Default New psudo ABX ?

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


It must be nice to be so certain. In the Pro Audio
Digest thread of March that I read at J. June's
suggestion last night, one of the correspondents
(screened for participation by highly regarded practical
professional engineers) anecdotally told of a blind test
whereby a friend, with a very high level of success,
could pick out two identical samples (which met the null
test and proved bit-identical) on two different brands
of gold-plated CD disks. He wasn't challenged.


That's because that is a well-known historical problem that has nothing to
do with resolution, per se.


I see...my guess is this comes as a surprise to most here. Care to
elaborate?


Moreover, the general consensus of the group (which
included Jim Johnston and Dan Lavry) was that the CD
cutoff was too low and artifacts were often audible as a
result, including pre-ringing and/or phase shift, and
that 64K was the necessary minimum to avoid even the
possibility of problems. Please note that this directly
contradicts Arny's recent assertions here that CDs are
audibly at the level of ultimate transparency(1), and
that the 66khz/20bit recommendation of the Japanese
hi-rez group in the mid-90s was nothing but
marketing-driven propaganda(2).


Without supporting quotes, this is just another anecdote.


Anecdotal here, and never positioned as anything other than a report.
Anybody can go read it for themselves...just sign up for the digest version
and pull the report for the entire month of March and read it.


BTW, here's some interesting reading:

http://www.paudio.com/Pages/presenta...ity/sld001.htm


  #40   Posted to rec.audio.high-end
Harry Lavo (Posts: 735)
Default New psudo ABX ?

"bob" wrote in message
...
On Mar 31, 8:20 pm, "Harry Lavo" wrote:
2) "If the person is good at it, he will probably reach a valid
conclusion."
This is a truism.


No, it's actually a falsehood. No matter how good someone is at "open-
ended evaluation of audio components" (whatever that is), we know that
he will frequently come to demonstrably incorrect conclusions. That's
what's wrong with "open-ended evaluation of audio
components" (whatever that is): It produces a high incidence of false
positives. But, as Steven noted, you are assuming that little problem
away.


I said "probably reach a valid conclusion". That implies more often than
not. Where is your proof that a listener well-versed in live performances
of acoustic instruments in a variety of venues (my qualifiers to the above
statement) cannot be right more often than not, even in a sighted
comparison? The possibility of bias does not mandate bias, and the
possibility of error does not mandate error. This is a common objectivist
oversight.
