  #1   HE2005: The Great Debate

The recording of the Atkinson vs Arny Krueger debate at
Home Entertainment 2005 is now available. Go to:
http://www.stereophile.com/news/050905debate/

John Atkinson
Editor, Stereophile
  #2

John Atkinson wrote:
The recording of the Atkinson vs Arny Krueger debate at
Home Entertainment 2005 is now available. Go to:
http://www.stereophile.com/news/050905debate/

John Atkinson
Editor, Stereophile



Thanks for posting this, John, and also for hosting it. I'm sure it
won't change any minds, but a civil airing of views is always welcome.

Would anyone who was present care to point themselves out in the
accompanying photograph, and/or take credit for any of the questions
asked?

bob
  #3   Steven R. Rochlin

Bob,

I am in the front row, in the red shirt to JA's right in the pic
( http://www.stereophile.com/images/ne...atdebate.4.jpg ). I was busy
typing as fast as I could to 'record' the event and post it online that
evening. My writings concerning the event can be read at
http://www.enjoythemusic.com/hifi2005/ .

Enjoy the Music,

Steven R. Rochlin
http://www.EnjoyTheMusic.com


Where you can find:

Superior Audio, The Absolute Sound,
Review Magazine, The $ensible Sound,
Audiophile Audition, The Audiophile Voice...
....and MUCH more!

  #4

bob wrote:
Would anyone who was present care to point themselves out in
the accompanying photograph, and/or take credit for any of the
questions asked?


Looking at the photo of the audience, here's who I can identify
in the front row (R-L): John Marks, unknown, Jason Serinus,
Steven Rochlin, Art Dudley. To the left of my head is the
Show's AV guy with the roving mike. Standing, addressing Mr.
Krueger and myself is Harry Lavo.

John Atkinson
Editor, Stereophile
  #5   Steven Sullivan

bob wrote:
Would anyone who was present care to point themselves out in the
accompanying photograph, and/or take credit for any of the questions
asked?


I thought I already had -- I asked the question that was directed to
JA, about the conclusions he drew from his 'conversion from objectivist
to subjectivist' experience.

Harry Lavo asked Arny the question about monadic listening tests.

The question to Arny about absolute phase was from a Primedia/Stereophile
employee, I think. At least, he was sitting in front of me, and I was
sitting amidst a pack of Primedia editorial and support staff.



--

-S
It's not my business to do intelligent work. -- D. Rumsfeld, testifying
before the House Armed Services Committee


  #6   Harry Lavo

bob wrote:
Would anyone who was present care to point themselves out in the
accompanying photograph, and/or take credit for any of the questions
asked?


Well, that last photo was taken during my long-winded spiel. I'm the guy
with the mic.

Harry

  #7

Steven Sullivan wrote:
The question to Arny about absolute phase was from a
Primedia/Stereophile employee, I think.


It was Stereophile columnist John Marks.

John Atkinson, Editor, Stereophile
  #8   Jim Cate

Reading the above excerpts from the DBT debate, I have the following (to
me, obvious) questions:

1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method, instead of deriding the whole
concept of blind testing? For example, if the listening times are too
short in your opinion, wouldn't extending the listening times be the
logical response? If instruments for converting the signal are deemed
questionable or unreliable, wouldn't modifying the instruments, or
removing them and using simple switching circuitry and level balancing
(a sketch of that step appears just after this post), be a logical
response? - If, that is, one truly wants such tests to succeed.

2. The complaint is made that such tests are typically or often
"inconclusive," and therefore not of consequence or value. But isn't
that missing the whole point? - The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles. Among other factors, including one's own
listening experience, it can reveal which components, or upgrades, may
be likely to produce the most audible improvements, and at what price.
(And such reviews and reports by others are of importance to many of us
who don't live in a major metropolitan area or have the budget and time
to travel to various dealers and shows. - Although I do listen to major
components, particularly speakers, before making a purchase, the usual
recommendation that everyone should listen carefully to every component
of interest is often impracticable.)

3. The suggestion that the performance of audio components is really a
matter of personal, subjective taste, much like the difference between
orchestras, musicians, etc., is highly misleading, if we are talking
about high-fidelity audio, that is. Music, and our preferences therein,
are subjective, but the wires, transistors, resistors, magnets, and
acoustics entailed in reproducing music are governed by the laws of
physics.

4. As a long-time Stereophile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or its price? (Note that I am not suggesting that the
tests have to be totally DBT a la Arny's system, or the like, but
merely that they be listening tests in which the reviewer isn't told
what component he or she is listening to at least part of the time.) To
sidestep one of the usual objections, I would suggest that you add a
question inquiring whether the listener would be willing to pay a few
dollars more to cover the costs of such tests. I would also suggest
that your poll or inquiry be conducted without your usual propaganda
about the limitations and uncertainty of such tests.

Jim Cate
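
As an illustration of the level balancing mentioned in point 1: a
minimal sketch, assuming two time-aligned numpy captures of the same
program material through each device. The signal names, the 1.5 dB
offset, and the synthetic data are illustrative, not from the thread.

    # Level-balancing sketch: compute the gain (in dB) needed to match
    # two devices' output levels before a blind comparison. x_a and x_b
    # are assumed to be time-aligned captures of the same program.
    import numpy as np

    def rms_db(x):
        # RMS level on a relative dB scale.
        return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

    def matching_gain_db(x_a, x_b):
        # Gain to apply to B so its RMS level matches A.
        return rms_db(x_a) - rms_db(x_b)

    # Synthetic check: make B 1.5 dB hotter than A, then match it.
    rng = np.random.default_rng(0)
    x_a = rng.normal(scale=0.1, size=48000)
    x_b = x_a * 10 ** (1.5 / 20)
    gain = matching_gain_db(x_a, x_b)          # about -1.5 dB
    x_b_matched = x_b * 10 ** (gain / 20)
    assert abs(matching_gain_db(x_a, x_b_matched)) < 0.01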

  #9   Steven Sullivan

Jim Cate wrote:
1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method instead of deriding the whole
concept of blind testing? [snip]

Yes.

2. The complaint is made that such tests are typically or often
"inconclusive," and therefore not of consequence or value. But isn't
that missing the whole point? [snip]

Yes.

3. The suggestion that the performance of audio components is really a
matter of personal, subjective taste, much like the difference between
orchestras, musicians, etc., is highly misleading. [snip]

Indeed.

4. As a long-time Stereophile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or its price? [snip]


Consider the fallout -- from subscribers with rigs costing upwards of
$10K, and more importantly, from advertisers who make and market the stuff
-- when there turns out to be little correlation between price and
performance in such evaluations.



--
-S
It's not my business to do intelligent work. -- D. Rumsfeld, testifying
before the House Armed Services Committee
  #10

Jim Cate wrote:
1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method instead of deriding the whole
concept of blind testing?



Absolutely.



For example, if the listening times are too short, in your opinion,
wouldn't extending the listening times be the logical response? If
instruments for converting the signal are deemed questionable or
unreliable, wouldn't modifying the instruments, or removing them and
using simple switching circuitry and level balancing, be a logical
response? - If, that is, one truly wants such tests to succeed.



Most definitely. However, this does not seem to apply to John's
anecdote, since neither John nor any of the other participants, as far
as we know, had any issues with that particular single-blind test, and
they were quite satisfied with the protocols and the results of that test.





2. The complaint is made that such tests are typically or often
"inconclusive," and therefore not of consequence or value. But isn't
that missing the whole point? - The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles.



It could be; however, those kinds of tests only look for differences.
They do not allow one to evaluate the value of differences should they
be found. I'm not sure that difficulty in hearing differences in ABX
DBTs really comments on the value of differences should they exist.
When one looks at John's anecdote and accepts it at face value, it does
suggest that differences missed or even imagined are significant enough
in long-term listening for *some* to go out and buy a more expensive
amp.





Among other factors, including one's own listening experience, it can
reveal which components, or upgrades, may be likely to produce the most
audible improvements, and at what price.


Or it may not. It will at the very least be a great deal more work for
the reviewers. Perhaps that is why *none* of the audio journals,
including those that subscribe to the objectivist approach to audio, do
DBTs on components up for review.





(And such reviews and reports by others are of importance to many of us
who don't live in a major metropolitan area or have the budget and time
to travel to various dealers and shows. - Although I do listen to major
components, particularly speakers, before making a purchase, the usual
recommendation that everyone should listen carefully to every component
of interest is often impracticable.)




I suspect doing DBTs of any merit on every component up for review
would be every bit as impracticable for any audio journal.





3. The suggestion that the performance of audio components is really a
matter of personal, subjective taste, much like the difference between
orchestras, musicians, etc., is highly misleading, if we are talking
about high-fidelity audio, that is. Music, and our preferences therein,
are subjective, but the wires, transistors, resistors, magnets, and
acoustics entailed in reproducing music are governed by the laws of
physics.



Then what do you say to the objectivist who has done a blind test that
resulted in a null, and then goes out armed with this knowledge and buys
a less expensive amp, only to find in the long run that the change has
rendered home listening unpleasant? Does objectivism demand that
audiophiles somehow change what they perceive? I have yet to find an
objectivist who can answer this question. They all want to change the
circumstances of the question to suit their approach to audio.




4. As a long-time Stereophile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or its price? [snip] I would also suggest
that your poll or inquiry be conducted without your usual propaganda
about the limitations and uncertainty of such tests.



I suppose it should also ask if the subscribers are willing to foot the
additional costs incurred by such a cumbersome process. Consider the
fact that most of the equipment is simply delivered to reviewers (most
of whom make their living outside of working for Stereophile) for them
to use as they would if they were merely a purchaser of that piece of
equipment. The logistics involved in keeping the reviewers blind to the
identity of a component would involve a substantial amount of man-hours,
along with considerable intrusion on the reviewer's home life.








Scott Wheeler


  #11

I have bookmarked the page and will read it in detail later. I did note
one item of interest. It is said one participant was once someone who
accepted the validity of blind testing, but changed his mind when
long-term listening to a piece of gear seemed to reveal a difference
after all where none was first noticed. May I suggest this is perfectly
in line with the testing school of thought, and is in fact confirmation
of one of its conclusions. As the gear was heard over time, its identity
was known, and the "test", if we can call it such, had long since
stopped being blind. Having first been a believer that two amps of
different type sound different, and having then concluded otherwise
during a blind test, all the cognitive and perceptual framework for
resuming that first conclusion was in place when the longer, non-blind
"test" commenced. The controls for perceptual flaws were removed, and
the results are not surprising in the least. It would be more
interesting to have done the AB test again blind, to see if the
additional sighted exposure now makes a difference; or to have made the
longer period one where the amp types were switched without the
listener's knowledge, and their identities then tested in a blind
session. To repeat: the testimonial report is specific confirmation of
the listening-alone school of testing to determine differences in
audibility, and not support in the other direction.
  #12   Harry Lavo

"Jim Cate" wrote in message
...
Reading the above excerpts from the DBT debate, I have the following (to
me, obvious) questions:

1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method instead of deriding the whole
concept of blind testing? For example, if the listening times are too
short, in your opinion, wouldn't extending the listening times be the
logical response? If instruments for converting the signal are deemed
questionable or unreliable, wouldn't modifing the instruments, or
removing them and using simple switching circuitry and
level balancing, be a logical response? - If, that is, one truly wants
such tests to succeed.

2. The complaint is made that such tests are typically or often
"inconclusive," and therefore not of consequence or value. But isn't
that missing the whole point. - The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles. Among other factors, including one's own
listening experience, it can reveal which components, or upgrades, may
be likely to produce the most audible improvements, and at what price.
(And, such reviews and reports by others are of importance to many of us
who don't live in a major metropolitan area or have the budget and time
to travel to varous dealers and shows. - Although I do listen to major
components, particularly speakers, before making a purchase, the usual
recommendation that everyone should listen carefully to every component
of interest is often impracticable.)

3. The suggestion that the performance of audio components is really a
matter of personal, subjective taste, much like the difference between
orchestras, musicians, etc., is highly misleading. If we are talking
about high fidelity audio, that is. Music, and our preferences therein,
are subjective, but the wires, transistors,resistors, magnets, and
acoustics entailed in reproducing music are governed by the laws of

physics.

4. As a long-time Sterephile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or it's price? Note that I am not suggesting that the
tests have to be totally DBT a la Arnie's system, or the like, but
merely that they be listening tests in which the reviewer isn't told
what component he or she is listening to at least part of the time.) To
sidestep one of the usual objections, I would suggest that you add a
question inquiring whether the listener would be willing to pay a few
dollars more to cover the costs of such tests. I would also suggest
that your poll or inquiry be conducted without your usual propoganda
about the limitations and uncertainty of such tests.


Jim, those are absolutely good questions. The problem is, even with
modifications it is easy to construct a theoretical model suggesting that
the testing process itself destroys the ability to measure musical
response. The way around this is to use testing which simulates normal,
relaxed listening as closely as possible. Listen, relax, enjoy (or not).
Then evaluate. Period. Ideally, you should not even know what is being
tested. However, this approach requires multiple testees (dozens, better
yet hundreds) and the time and location to do such testing. Let me
describe an actual example.

Some researchers in Japan used such an approach to measure the impact of
ultrasonic response on listeners' ratings of reproduced music. They
constructed a testing room with an armchair, soft lighting, a soothing
outdoor view, and a very carefully constructed audio system employing
separate amps and supertweeters for the ultrasonics. The testees knew
only that they were to listen to the music and afterward fill out a
simple questionnaire.

Employing gamelan music (chosen for its abundance of overtones), they
found statistical significance at the 95% level between music reproduced
with a 20kHz cutoff and that reproduced with frequencies extending up to
80kHz. They measured not only overall quality-of-sound ratings, but also
specific attributes...some also statistically significant. When they
presented the paper to the AES, the skepticism was so severe that they
went back and repeated the test...this time they wired the subjects and
monitored their brains, but otherwise the subjects were just told to
listen to the music while various aspects of brain activity were
recorded. They found that the pleasure centers of the brain were
activated when the overtones were used, and were not activated when the
20kHz cutoff was used. They also were not activated when listening to
silence, used as a control. Moreover, the correlation with the earlier
test was statistically significant (about half the subjects were
repeaters).
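
As an aside, here is a minimal sketch of the two analyses just
described: a 95%-level comparison of mean ratings between the
20kHz-cutoff and full-range conditions, and a correlation of ratings
with a physiological measure. All numbers below are invented for
illustration; only the shape of the analysis is taken from the thread.

    # Invented data standing in for the two analyses described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 24                                  # hypothetical panel size
    ratings_full = rng.normal(7.0, 1.0, n)  # ratings, ultrasonics present
    ratings_cut = rng.normal(6.2, 1.0, n)   # ratings, 20kHz cutoff
    # Stand-in for a physiological measure tied to the ratings:
    brain_index = 0.5 * ratings_full + rng.normal(0.0, 0.4, n)

    # (1) Did mean ratings differ at the 95% level? Paired test, since
    # the same listeners heard both conditions.
    t, p = stats.ttest_rel(ratings_full, ratings_cut)
    print(f"paired t = {t:.2f}, p = {p:.4f}")   # significant if p < 0.05

    # (2) Do stated ratings correlate with the physiological measure?
    r, p_r = stats.pearsonr(ratings_full, brain_index)
    print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")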

When I presented the data here, Arny Krueger, who was posting here at the
time and is the main champion of ABX testing on the web, became
defensive. At first he tried to dismiss the test as "old news". Then he
claimed he found evidence that the ultrasonic frequencies affected the
upper regions of the hearing range (despite the researchers' specific
attempts to defeat this possibility). Then he dismissed the whole thing
as worthless because it hadn't been corroborated (this was only a few
months after it was published).

Perhaps Arny's reaction was typically human when strongly held beliefs
and conventional wisdom are challenged. But Arny missed the main point.
That point was that monadic testing, under relaxed conditions and with
*no* comparison or even "rating" during the test, gave statistically
significant results. And these results were not a statistical
aberration, but were repeated and correlated with a physiological
response to music. So whether Arny's belief in sub-ultrasonic corruption
is true or not, the fact is the testing yielded differences to a
stimulus that was supposedly inaudible, and if audible, subtle in the
extreme.

I and a few others have been arguing that some similar test protocol
would be more likely to correlate with in-home experience. The problem
is, even if we are right, such testing is too cumbersome to be of any
real-world use except in special showcase scenarios...it is not
practical for reviewing, or for choosing audio equipment in the home.
However, it does certainly suggest caution in substituting AB or ABX
testing. Such testing is radically different in its underlying
conditions, and since the musical response of the ear/brain complex is
so subtle, unpredictable, and mis- or un-understood, it is simply too
simplistic to assert that what works for testing white noise or audio
codecs works for overall open-ended musical evaluation of equipment.
That is why some of us prefer to stay with conventional audio
evaluation, given the Hobson's choice.

I hope this helps you understand that I have a reason for being skeptical
of DBTs. Even more important, it is why I believe it is intellectually
dishonest to promote them as the be-all and end-all for determining audio
"truth", as is done here on RAHE by some. They are a tool...useful in
some cases...unproven in others. Until that latter qualifier is removed,
I think overselling them does a disservice and can be classified as
"brainwashing".

  #13

Jim Cate wrote:
Reading the above excerpts from the DBT debate, I have the following (to
me, obvious) questions:

1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method instead of deriding the whole
concept of blind testing?


Let's put it this way: John Atkinson HAS to believe as he does in order to
hold his job as the editor of a high-end stereo magazine. It's easier to
change your views from objective to subjective than it is to find another
decent job. People believe what they must believe to hold their position in
society. You can't be a narc without supporting the laws proscribing
narcotics.

Norm Strong

  #14

"Jim, those are absolutely good questions. The problem is, even with
modifications it is easy to construct a theoretical model suggesting that
the testing process itself destroys the ability to measure musical
response."

That is a theory, and merely posing it gains no support for its
validity. As a theory it can be tested. While not a test of it per se,
the fact that such testing is universal and unchallenged as to validity
in 99 percent of all other situations where humans are involved would
lead one to think that testing it is not a meaningful gesture. The
theory is also familiar from 99 percent of the anti- or un-scientific
theories of astrology, ESP, etc., wherein testing has delivered results
undercutting closely held folk models of reality.
  #15

Harry Lavo wrote:
Some researchers in Japan used such an approach to measure the impact of
ultrasonic response on listeners' ratings of reproduced music. [snip]
When they presented the paper to the AES, the skepticism was so severe
that they went back and repeated the test...this time they wired the
subjects and monitored their brains, but otherwise the subjects were
just told to listen to the music while various aspects of brain activity
were recorded.


I haven't read this article in a while, but I think you are
misremembering it. I don't believe the listening tests and the brain
scans were done simultaneously. And what is the basis for your
statement that this was met with skepticism at AES? (Not that it
shouldn't have been.)

They found that the pleasure centers of the brain were activated when
the overtones were used, and were not activated when the 20kHz cutoff
was used. They also were not activated when listening to silence, used
as a control. Moreover, the correlation with the earlier test was
statistically significant (about half the subjects were repeaters).

When I presented the data here, Arny Krueger, who was posting here at
the time and is the main champion of ABX testing on the web, became
defensive. At first he tried to dismiss the test as "old news". Then he
claimed he found evidence that the ultrasonic frequencies affected the
upper regions of the hearing range (despite the researchers' specific
attempts to defeat this possibility).


Well, as long as they tried! Seriously Harry, how hard is it to get two
graphs to look alike, if you want them to look alike?

Then he dismissed the whole thing as worthless because it
hadn't been corroborated (this was only a few months after it was
published).


In case you hadn't noticed, Oohashi & Co. have not exactly electrified
the psychoacoustics field in the intervening years.

Perhaps Arny's reaction was typically human when strongly held beliefs
and conventional wisdom are challenged. But Arny missed the main point.


Surely not the main point. It's not even mentioned in the abstract.

That point was that monadic testing, under relaxed conditions and with
*no* comparison or even "rating" during the test, gave statistically
significant results.


Yes (assuming anyone can replicate this), but only for ultrasonic sound
(signal and/or noise, and there must have been a lot of noise in a
system like that). For that matter, Oohashi has basically admitted that
he couldn't replicate his result on any other audio system except the
one he specifically designed for that experiment. (A system he'd be
glad to sell you if you care to do the replication yourself!) And,
while we're at it, a system he has described differently in different
papers. Makes you wonder.

In fact, there are so many holes in this research that I'd be skeptical
of its statistical claims without seeing the raw data. I'd also like to
see some better analysis of what exactly that unique system was putting
out. And I'd like to see replication, but that appears to be a vain
hope.

And that these results were not a statistical aberration, but were
repeated


This IS news. That is, if you have any basis for the assertion.

and correlated with a physiological response to music.


Not exactly. It excited certain pleasure sensors of the brain. But he
provides no evidence that such pleasure sensors are excited when
listening to live gamelan music (or any other kind). You're stretching
here.

So whether Arny's belief in sub-ultrasonic corruption is true or not,
the fact is the testing yielded differences to a stimulus that was
supposedly inaudible,

That the authors themselves called inaudible. They couldn't explain
their own results, but they pretty much admitted that the subjects were
not "hearing" anything as hearing is generally understood.

and
if audible, subtle in the extreme.

I and a few others have been arguing that some similar test protocol
would be more likely to correlate with in-home experience. The problem
is, even if we are right, such testing is too cumbersome to be of any
real-world use except in special showcase scenarios...it is not
practical for reviewing, or for choosing audio equipment in the home.
However, it does certainly suggest caution in substituting AB or ABX
testing.


I've seen no evidence that anyone in the psychoacoustics field thinks
this. Have you?

Such testing is radically different in its underlying conditions, and
since the musical response of the ear/brain complex is so subtle,
unpredictable, and mis- or un-understood, it is simply too simplistic to
assert that what works for testing white noise or audio codecs works for
overall open-ended musical evaluation of equipment. That is why some of
us prefer to stay with conventional audio evaluation, given the Hobson's
choice.

I hope this helps you understand that I have a reason for being skeptical
of DBTs.


Odd, given that the experiment you've just described WAS a DBT.

Even more important, it is why I believe it is intellectually dishonest
to promote them as the be-all and end-all for determining audio "truth",
as is done here on RAHE by some. They are a tool...useful in some
cases...unproven in others. Until that latter qualifier is removed, I
think overselling them does a disservice and can be classified as
"brainwashing".

That's absurd.

bob


  #16   Harry Lavo

bob wrote:
I haven't read this article in a while, but I think you are
misremembering it. I don't believe the listening tests and the brain
scans were done simultaneously. And what is the basis for your
statement that this was met with skepticism at AES? (Not that it
shouldn't have been.)


I didn't say they listened simultaneously...I said they repeated the
test with brain scans instead of stated evaluations. Then the brain
scans were correlated with the stated responses from the earlier test,
and an extremely strong correlation was found.

As to the skepticism at AES, Arny and others (but particularly Arny)
were the ones emphasizing it.


[snip]

Well, as long as they tried! Seriously Harry, how hard is it to get two
graphs to look alike, if you want them to look alike?


Huh? Explain please.

Then he dismissed the whole thing as worthless because it
hadn't been corroborated (this was only a few months after it was
published).


In case you hadn't noticed, Oohashi & Co. have not exactly electrified
the psychoacoustics field in the intervening years.


No, but research into transient response, brought on by the age of high
sampling rates, does confirm that there is enough transient distortion
when musical signals are arbitrarily cut off below about 70kHz to affect
perception. In other words, the transient smear (not to mention
pre-echo, if digital) lasts well into the range at which the ear can be
sensitive to it. This might provide a perfectly rational and
"scientific" reason why ultrasonic signals could help the brain
interpret a sound as "real" and "pleasant" vs. unreal and unable to
solicit an emotional response.


[snip]

Surely not the main point. It's not even mentioned in the abstract.


The main point that I was trying to emphasize to the group at the time.


[snip]

Yes (assuming anyone can replicate this), but only for ultrasonic sound
(signal and/or noise, and there must have been a lot of noise in a
system like that). For that matter, Oohashi has basically admitted that
he couldn't replicate his result on any other audio system except the
one he specifically designed for that experiment. (A system he'd be
glad to sell you if you care to do the replication yourself!) And,
while we're at it, a system he has described differently in different
papers. Makes you wonder.


Could you please cite the publication where he said this, and point out
the discrepancies? Keep in mind also that he worked with others as a
team, and certainly there would be team editing of any articles, perhaps
not always by exactly the same person. That doesn't require a conspiracy
theory.

In fact, there are so many holes in this research that I'd be skeptical
of its statistical claims without seeing the raw data. I'd also like to
see some better analysis of what exactly that unique system was putting
out. And I'd like to see replication, but that appears to be a vain
hope.


Well, I suggested to you and Arny back then that you write to them,
requesting same. Have you?


And that these results were not a statistical aberration, but were
repeated


This IS news. That is, if you have any basis for the assertion.

and correlated with a physiological response to music.


Not exactly. It excited certain pleasure sensors of the brain. But he
provides no evidence that such pleasure sensors are excited when
listening to live gamelan music. (or any other kind). You're stretching
here.


Sorry, Charlie. It excited the pleasure senses *only* in response to the
ultrasonic cell, *and* the results were highly correlated with the
respondents' ratings.


So whether Arny's belief in sub-ultrasonic corruption is true or not,
the fact is the testing yielded differences to a stimulus that was
supposedly inaudible,

That the authors themselves called inaudible. They couldn't explain
their own results, but they pretty much admitted that the subjects were
not "hearing" anything as hearing is generally understood.


That doesn't make the test or the results invalid. They simply said they
didn't know the mechanism. Often the case early in the life of a new
discovery.

[snip]

I've seen no evidence that anyone in the psychoacoustics field thinks
this. Have you?


I don't know any psychoacousticians who are also into audio. I asked you
before: if you do, please cite three, and their published work on
open-ended evaluative testing of audio gear.

[snip]

Odd, given that the experiment you've just described WAS a DBT.


Sorry, I did substitute "DBT" for "comparative quick-switch DBT", a
distinction I usually make. It wasn't double-blind, it was blind. And it
wasn't comparative. It was monadic: *EXACTLY* the kind of test I have
been promoting here as an alternative to short-term comparative testing
a la ABX, which most objectivists seem to prefer.

[snip]

That's absurd.


You are welcome to your opinion. I have mine. I've just stated it.

  #17

"Employing Gamelan music (chosen for its abundance of overtones), they
found statistical significance at the 95% level between music reproduced
with a 20khz cutoff and that reproduced with frequencies extending up to
80khz. They measured not only overall quality of the sound ratings, but
also specific attributes...some also statistically significant. When they
presented the paper to the AES, the skepticism was so severe they went
back and repeated the test...this time they wired the subjects and
monitored their brains but otherwise they were just told to listen to the
music and various aspects of the brain were recorded. They found that the
pleasure centers of the brain were activated when the overtones were used,
and were not activated when the 20khz cutoff was used. They also were not
activated when listening to silence, used as a control. Moreover, the
correlation"


Assuming validity, this is a perfect example in favor of testing using
listening alone. The test could have been simplified. Leaving aside
"quality" etc., which is in many ways irrelevant to the validity of
listening-alone testing, they need only have tested for having heard a
difference, any difference. If one was found, the source of the
difference could have been explored. Having shown a brain reaction to
ultrasonic signals, the next step is to exclude the very real
possibility that resonances within the nasal cavity, skull, etc. were
excited. This is needed because there are firm grounds, based on
testing, to hold that hearing via the eardrum cuts off around 20kHz.
This test should have every subjectivist advocate nodding their heads
in agreement, and it doesn't undermine the listening-alone benchmark of
testing showing no difference in amps, wire, CD players, etc. in the
least; it supports it. I will never be afraid of such testing, and we
should have more of it.
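
As an aside, the simplified "heard a difference, any difference" test
suggested above is scored against the 50% guessing rate with a
one-sided binomial test. A minimal sketch, with invented counts:

    # One-sided binomial test: did "same or different?" answers beat
    # the 50% guessing rate? Counts are invented for illustration.
    from scipy.stats import binomtest

    correct, trials = 14, 16
    result = binomtest(correct, trials, p=0.5, alternative="greater")
    print(f"{correct}/{trials} correct, p = {result.pvalue:.4f}")
    # p < 0.05 would indicate the panel heard *some* difference.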
  #18   Ernst Raedecker

On 13 May 2005 15:55:32 GMT, Jim Cate wrote:

The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles.


The point is that virtually all listeners DO hear the quality
differences between a $400 set and a $4000 set. If a certain person
belongs to the tiny minority who doesn't hear the difference, then he
should buy the simple piece of equipment. Unless of course he wants to
buy a B & O from Denmark. People buy B & O for the looks; it's a
sculpture, an objet d'art that happens to emanate sound.

but the wires, transistors,resistors, magnets, and
acoustics entailed in reproducing music are governed by the laws of physics.


Yes. And as those parts are not ideal types in the scientific meaning of
the notion "ideal type", but real things, they sound different when
constructed differently. A cap is never just "a cap" with this or that
value.

Everything in nature is governed by the laws of physics; there is no
magic about it. However, many things cannot be measured and computed
readily. Simple things can, complex things cannot.

Fortunately the behaviour of an audio system can be registered, so
that the result can be replayed ad libitum, by designing a careful
setup with specially designed measurement microphones. This is the way
some speaker builders do their research.

Curiously this is never done in the magazines. Curiously this is never
done by one Arny Krueger or other objectivist advocates.

It seems to me that nobody cares for the physical data, for objective
empirical research.

Ernesto.

"You don't have to learn science if you don't feel
like it. So you can forget the whole business if
it is too much mental strain, which it usually is."

Richard Feynman
  #19   Ernst Raedecker

On 14 May 2005 17:24:58 GMT, Norm Strong wrote:

Let's put it this way: John Atkinson HAS to believe as he does in order to
hold his job as the editor of a high-end stereo magazine. It's easier to
change your views from objective to subjective than it is to find another
decent job. People believe what they must believe to hold their position in
society. You can't be a narc without supporting the laws proscribing
narcotics.


I personally believe that you can only write for an audio magazine in an
anecdotal way. When they test equipment, it's not a rigorous, scientific
test. But why should it be? We never ask Gardener's World to do rigorous
tests. We never ask car magazines to do rigorous tests. We never ask
yachting magazines to do rigorous tests.

My problem is that NOBODY is doing rigorous tests. Not even at the
universities. The humble science of measurement and computation is not
applied to audio, it seems.

Ernesto.

"You don't have to learn science if you don't feel
like it. So you can forget the whole business if
it is too much mental strain, which it usually is."

Richard Feynman
  #20

Harry Lavo wrote:
[snip]
I didn't say they listened simultaneously...I said they repeated the
test with brain scans instead of stated evaluations.


That's not repeating a test. That's doing an entirely different
experiment. They've never repeated the listening test with any other
system, or any other music, so far as I know. Which makes it difficult
to extrapolate this result to cover anything reported by common
audiophiles. In particular, we can state categorically that this
research is utterly irrelevant to any comparison involving an audio
system including a CD player. It *could* be relevant to SACD or hi-rez
DVD, but that's not what we're talking about here, so I'm puzzled that
you brought it up.

Then the brain scans were correlated with the stated responses from the
earlier test, and an extremely strong correlation was found.


So they say.

As to the skepticism at AES, Arny and others (but particularly Arny)
were the ones emphasizing it.


Arny's NOT the AES.

[snip]

Well, as long as they tried! Seriously Harry, how hard is it to get two
graphs to look alike, if you want them to look alike?

Huh? Explain please.


As I recall, Arny examined two graphs published in one of their
articles. One showed the output of the low-pass sample, the other the
full-range sample. Arny found differences between the two in the
audible range. Shouldn't have happened the way they claimed to have set
it up. Might be a graphics problem, but it's not something researchers
should let slip.

[snip]

No, but research into transient response, brought on by the age of high
sampling rates, does confirm that there is enough transient distortion
when musical signals are arbitrarily cut off below about 70kHz to affect
perception. In other words, the transient smear (not to mention
pre-echo, if digital) lasts well into the range at which the ear can be
sensitive to it. This might provide a perfectly rational and
"scientific" reason why ultrasonic signals could help the brain
interpret a sound as "real" and "pleasant" vs. unreal and unable to
solicit an emotional response.


Couldn't say. What I could say is that, Oohashi's single case excepted,
no one has ever to my knowledge published an article finding that
ultrasonic signals were detectable by humans in ANY form of listening
test. I'm also unaware of any clamor within the psychoacoustic
community to try out Oohashi's technique. I'd be interested to know
why not, and I don't think "tradition" would fully explain it.

[snip]

Surely not the main point. It's not even mentioned in the abstract.

The main point that I was trying to emphasize to the group at the time.

Ah, YOUR main point.

[snip]

Could you please cite the publication where he said this, and point out
the discrepancies?


This would take more research than I am capable of on a Saturday night
after X (none of your business) glasses of Bordeaux. But I will try to
get back to you on this when my head is back together.

Keep in mind also that he worked with others as a team, and certainly
there would be team editing of any articles, perhaps not always by
exactly the same person. That doesn't require a conspiracy theory.


I don't see a conspiracy. I see sloppiness. Could be a simple mistake,
but again, it cries out for explanation.

[snip]

Well, I suggested to you and Arny back then that you write to them,
requesting same. Have you?


I meant replication by someone not invested in the result, which
Oohashi now is. (I'm not saying he was invested in the result when he
did the initial experiment. But I'm not saying he wasn't, either.)

[snip]

Sorry, Charlie. It excited the pleasure senses *only* in response to the
ultrasonic cell, *and* the results were highly correlated with the
respondents' ratings.


That's not precisely accurate, but it is true that both the listening
test results and the brain scans correlated with the presence of
high-frequency content (assuming the numbers are right). My point,
however, is that Oohashi never actually did brain scans with people
listening to LIVE gamelan music, so we don't really know whether THAT
would have the same effect on the brain. It might be that those
particular pleasure sensors react to the presence of high-frequency
noise. We just don't know.

[snip]

That doesn't make the test or the results invalid. They simply said they
didn't know the mechanism. Often the case early in the life of a new
discovery.


Point taken. But even AFTER this experiment, Oohashi & Co. still
described those signals as inaudible. Don't try to get ahead of him.

[snip]

I've seen no evidence that anyone in the psychoacoustics field thinks
this. Have you?

I don't know any psychoacousticians who are also into audio. I asked you
before: if you do, please cite three, and their published work on
open-ended evaluative testing of audio gear.


No psychoacoustician would waste his time on "open-ended evaluative
testing of audio gear," because he sees no evidence that it would be
fruitful. You'd like to believe that it would be fruitful, but you have
no evidence that would convince a trained psychoacoustician to try it.
So far as I can tell, Oohashi's one-off effort hasn't done the trick.

Such testing is radically different in the underlying conditions, and
since the musical response of the ear/brain complex is so subtle,
unpredictable, and poorly understood, it is simply too simplistic to
assert that what works for testing using white noise or audio codecs
works for overall open-ended musical evaluation of equipment. That is
why some of us prefer to stay with conventional audio evaluation given
the Hobson's choice.

I hope this helps you understand that I have a reason for being
skeptical of DBTs.


Odd, given that the experiment you've just described WAS a DBT.


Sorry that I substituted DBT for comparative quick-switch DBT, a
distinction I usually make. It wasn't double blind, it was blind.


Gee, I thought it was DB. Well, SB might explain the unexpected result!
Perhaps that's why JAES wouldn't publish it.

And it wasn't comparative. It was monadic. *EXACTLY* the kind of test
I have been promoting here as an alternative to short-term comparative
testing a la ABX, which most objectivists seem to prefer.


We prefer it because we know it to be effective, and relatively easy to
do. But we would accept any evidence based on double-blind,
level-matched, forced choice listening tests--that can be replicated.

Even more important, why I believe it is intellectually dishonest to
promote them as the be-all and end-all for determining audio "truth",
as is done here on RAHE by some. They are a tool...useful in some
cases...unproven in others. Until that latter qualifier is removed, I
think overselling them does a disservice and can be classified as
"brainwashing".

That's absurd.


You are welcome to your opinion. I have mine. I've just stated it.


Well, if we're so good at brainwashing, why do you still beset us,
Harry?

bob


  #21   Report Post  
 
Posts: n/a
Default

wrote:
wrote in message
...
Jim Cate wrote:
Reading the above excerpts from the DBT debate, I have the following
(to me, obvious) questions:

1. If there are issues with the methodology of a particular
DB test, shouldn't the proper response be to devise
improvements to or modifications of the testing method
instead of deriding the whole concept of blind testing?


Mr. Cate should note that I didn't _deride_ the whole concept
of blind testing at the debate. Instead I stated, correctly,
that the vast majority of published blind tests that have been
cited as "proving" that, for example, amplifiers under normal
conditions of use do not sound different from one another
have had methodological and/or organizational problems.

Please note that I published the recording of the debate to
prevent people from being able to misrepresent what Arny Krueger
and I actually said.

Let's put it this way: John Atkinson HAS to believe as he does
in order to hold his job as the editor of a high-end stereo
magazine. It's easier to change your views from objective to
subjective than it is to find another decent job.


Norm, I have always responded to your questions with respect
and truthfulness, so I am taken aback by the fact that you now
appear to be accusing me of dishonesty. The change from
"objectivist" to "subjectivist" I described in the debate
was true. As I mentioned, due both to the arrogance of
youth and my career as a research scientist, I was a hard-line
objectivist who was certain back in the 1970s that no audible
differences existed between amplifiers. Having to admit that
my opinion was incorrect as a result of the experience I
described at the debate neither happened overnight nor
happened without a great deal of soul-searching. It also
happened years before I became the editor of a hi-fi magazine.

John Atkinson
Editor, Stereophile
  #22   Report Post  
Rui Pedro Mendes Salgueiro
 
Posts: n/a
Default

Ernst Raedecker wrote:
I personally believe that you can only write for an audio magazine in
an anecdotal way.


When they test equipment, it's not a rigorous, scientific test.


That must be the reason why Noel Keywood, who is the guy who does the
measurements for hi-fi World wrote in the March 2004 issue (page 81)
that he wrote "a cheque large enough for a good car" to buy a "new
Rohde & Schwarz UPL analyser [which] can resolve down to 0.0002%".

http://www.hi-fiworld.co.uk/

Probably Noel Keywood was talking about this item:
http://www.rohde-schwarz.com/www/dev...f/html/1116118

Now, in an amplifier or DVD player review, the part written by Noel
Keywood is a small box at the end of a larger subjective text written
by someone else. Like Stereophile but worse (fewer graphs), especially
because he seems to try to make graphs that convey as little
information as possible.

We never ask car magazines to do rigorous tests.


The hell we don't! Every serious car magazine brags (a bit) about the
sophistication of their measuring devices (some even brag about being
ISO 9002 certified).

Car magazines take cars to test tracks and measure performance and
consumption with sophisticated timing equipment. I still remember in
1983, when L'Automobile Magazine took a Lamborghini Countach to the
Nardo test track (http://www.prototipo.org/ 12km long circle). Lots
of computer printouts in that article.

If you pick up the latest issue of Sport-Auto (the French magazine,
not the German one with the same name) you will find a very complete
article measuring the latest Porsche 911 with normal steel/iron brakes
and the very expensive (+8000 euros) ceramic brakes. They took the two
cars to the Montlhery speed ring / test track
(http://montlhery.com/autodrom.htm) and ran a series of measurements.

The magazine Echappement (also French) does an annual election of the
sports car of the year. That test, apart from the usual performance
measurements, includes inviting two race drivers to do timed laps on a
circuit and a rally stage.

Computer magazines (c't, for instance), photo magazines (Chasseur
d'Images, for instance), etc. all do tests as rigorous as they can
afford. And note that, if they don't want to be successfully sued,
they had better do rigorous tests before criticising anything. Of
course, the kind of magazine which never criticises any product (and
doesn't do comparisons) doesn't have that problem.

--
http://www.mat.uc.pt/~rps/

..pt is Portugal| `Whom the gods love die young'-Menander (342-292 BC)
Europe | Villeneuve 50-82, Toivonen 56-86, Senna 60-94
  #23   Report Post  
Harry Lavo
 
Posts: n/a
Default

wrote in message ...
Harry Lavo wrote:
wrote in message

...
Harry Lavo wrote:
Some researchers in Japan used such an approach to measure the impact
of ultrasonic response on listeners' ratings of reproduced music. They
constructed a testing room with an armchair, soft lighting, a soothing
outdoor view, and a very carefully constructed audio system employing
separate amps and supertweeters for the ultrasonics. The testees knew
only that they were to listen to the music, and afterward fill out a
simple questionnaire.

Employing Gamelan music (chosen for its abundance of overtones), they
found statistical significance at the 95% level between music
reproduced with a 20kHz cutoff and that reproduced with frequencies
extending up to 80kHz. They measured not only overall quality of the
sound ratings, but also specific attributes...some also statistically
significant. When they presented the paper to the AES, the skepticism
was so severe they went back and repeated the test...this time they
wired the subjects and monitored their brains but otherwise they were
just told to listen to the music and various aspects of the brain were
recorded.

I haven't read this article in a while, but I think you are
misremembering it. I don't believe the listening tests and the brain
scans were done simultaneously. And what is the basis for your
statement that this was met with skepticism at AES? (Not that it
shouldn't have been.)


I didn't say they listened simultaneously...I said they repeated the
test with brain scans instead of stated evaluations.


That's not repeating a test. That's doing an entirely different
experiment. They've never repeated the listening test with any other
system, or any other music, so far as I know. Which makes it difficult
to extrapolate this result to cover anything reported by common
audiophiles. In particular, we can state categorically that this
research is utterly irrelevant to any comparison involving an audio
system including a CD player. It *could* be relevant to SACD or hi-rez
DVD, but that's not what we're talking about here, so I'm puzzled that
you brought it up.


I brought it up to show a superior form (IMO) of audio testing. That's why.
And since the test was challenged on "listening" grounds, I brought up the
fact that it had been confirmed on physiological grounds.

Then the brain scans were correlated with the stated responses from
the earlier test and an extremely strong correlation found.


So they say.


They had statistical specialists as part of the team. The article was
peer-reviewed and found acceptable by a panel of neurophysicists, who
presumably know statistics pretty well themselves. What more do you
want?
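
To make concrete the kind of correlation being claimed, here is a
minimal sketch (Python with scipy assumed; every number below is
invented for illustration, NOT Oohashi's data) of checking whether a
brain-response index tracks the stated ratings:

  from scipy.stats import pearsonr

  # Hypothetical per-subject values -- stand-ins, not published data:
  ratings   = [6.1, 7.3, 5.8, 8.0, 6.9, 7.5]       # questionnaire scores
  brain_idx = [0.42, 0.55, 0.38, 0.61, 0.50, 0.57]  # e.g. an EEG measure

  r, p = pearsonr(ratings, brain_idx)
  print("r = %.2f, p = %.3f" % (r, p))  # "significant" here means p < 0.05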

This is the age-old tactic of "when you can't think of any
counter-argument, assert invalidity", applied to almost any new
findings that gore sacred cows.



As to the skepticism at AES, Arny and others (but particularly Arny)
were the ones emphasizing it.


Arny's NOT the AES.


He asserted it here, with support from a few others (I can't remember
who) and no dissent from anybody. There are certainly a fair number of
people here who do belong to the AES and pop up from time to time, so
it is reasonable to assume they too concurred.


They found that the pleasure centers of the brain were activated when
the overtones were used, and were not activated when the 20kHz cutoff
was used. They also were not activated when listening to silence, used
as a control. Moreover, the correlation with the earlier test was
statistically significant (about half the subjects were repeaters).

When I presented the data here, Arny Krueger, who was posting here at
the time and is the main champion of ABX testing on the web, became
defensive. At first he tried to dismiss the test as "old news". Then
he claimed he found evidence that the ultrasonic frequencies affected
the upper regions of the hearing range (despite the researchers'
specific attempts to defeat this possibility).

Well, as long as they tried! Seriously, Harry, how hard is it to get
two graphs to look alike, if you want them to look alike?


Huh? Explain please.


As I recall, Arny examined two graphs published in one of their
articles. One showed the output of the low-pass sample, the other the
full-range sample. Arny found differences between the two in the
audible range. Shouldn't have happened the way they claimed to have set
it up. Might be a graphics problem, but it's not something researchers
should let slip.


That's why I'm giving Arny's theory the benefit of the doubt, although
I personally have trouble seeing it. But even if true, it was so high
up in the frequency range and so small in magnitude that it can only
be classified as "subtle at best", as I have done.


Then he dismissed the whole thing as worthless because it
hadn't been corroborated (this was only a few months after it was
published).

In case you hadn't noticed, Oohashi & Co. have not exactly electrified
the psychoacoustics field in the intervening years.


By the way, what do you mean by this? That he should have become a Rock
Star? Been interviewed on U.S. TV (who knows, he may not even speak
English)? Written articles for the popular press?

Seems to me this is just a case of innuendo in the absence of any factual
information.


No, but research into transient response brought on by the age of high
sampling rates does confirm that there is enough transient distortion
when musical signals are arbitrarily cut off below about 70kHz to
affect perception; in other words, the transient smear (not to mention
pre-echo if digital) lasts well into the range at which the ear can be
sensitive to it. This might provide a perfectly rational and
"scientific" reason why ultrasonic signals could help the brain
interpret a sound as "real" and "pleasant" vs. unreal and failing to
elicit an emotional response.

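To illustrate the smear claim, here is a minimal sketch (Python with
numpy/scipy assumed; the filter length and rates are invented for
illustration) showing that a steep linear-phase low-pass rings on both
sides of an impulse -- the pre- and post-echo being discussed:

  import numpy as np
  from scipy import signal

  fs = 192000                                # sample rate, Hz
  taps = signal.firwin(511, 20000, fs=fs)    # steep 20kHz low-pass
  impulse = np.zeros(1024)
  impulse[512] = 1.0
  out = signal.lfilter(taps, 1.0, impulse)

  # How much of the response stays above -80 dB?
  ring_ms = 1000.0 * np.count_nonzero(np.abs(out) > 1e-4) / fs
  print("response above -80 dB spans roughly %.2f ms" % ring_ms)

Whether that ringing is audible is exactly what is in dispute; the
sketch only shows that it exists and scales with filter steepness.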

Couldn't say. What I could say is that, Oohashi's single case excepted,
no one has ever to my knowledge published an article finding that
ultrasonic signals were detectable by humans in ANY form of listening
test. I'm also unaware of any clamor within the psychoacoustic
community to try out Oohashi's technique. I'd be interested to know
why not, and I don't think "tradition" would fully explain it.


You are not part of that community and have absolutely no way of knowing
what is going on that may emerge in future years. Oohashi's test took a
long time to set up and execute...presumably duplicating or confirming it
(and hopefully improving it) would as well. Not to mention that funding
itself might take years to obtain....


Perhaps Arny's reaction was typically human when strongly held beliefs
and conventional wisdom are challenged. But Arny missed the main point.

Surely not the main point. It's not even mentioned in the abstract.


The main point that I was trying to emphasize to the group at the time.

Ah, YOUR main point.


Yep, my main point.


That point was that monadic testing, under relaxed conditions and with
*no* comparison or even "rating" during the test, gave statistically
significant results.

Yes (assuming anyone can replicate this), but only for ultrasonic
sound (signal and/or noise, and there must have been a lot of noise in
a system like that). For that matter, Oohashi has basically admitted
that he couldn't replicate his result on any other audio system except
the one he specifically designed for that experiment. (A system he'd
be glad to sell you if you care to do the replication yourself!) And,
while we're at it, a system he has described differently in different
papers. Makes you wonder.


Could you please cite the publication where he said this, and point
out the discrepancies.


This would take more research than I am capable of on a Saturday night
after X (none of your business) glasses of Bordeaux. But I will try to
get back to you on this when my head is back together.


Good, thanks. I'll wait for it.

Keep in mind also that he worked with others as a team, and
certainly there would be team editing of any articles, perhaps not

always by
exactly the same person. Doesn't require a conspiracy theory.


I don't see a conspiracy. I see sloppiness. Could be a simple mistake,
but again, it cries out for explanation.


Again, I'd suggest you write to him.


In fact, there are so many holes in this research that I'd be

skeptical
of its statistical claims without seeing the raw data. I'd also

like to
see some better analysis of what exactly that unique system was

putting
out. And I'd like to see replication, but that appears to be a vain
hope.


Well I suggested to you an Arny back then that you write to them,

requesting
same. Have you?


I meant replication by someone not invested in the result, which
Oohashi now is. (I'm not saying he was invested in the result when he
did the initial experiment. But I'm not saying he wasn't, either.)


He might be able to tell you of someone, or some group or
organization, that is doing further work. And he may have answers for
some of your concerns.



And that these results were not a statistical aberration, but were
repeated

This IS news. That is, if you have any basis for the assertion.

and correlated with a physiological response to music.

Not exactly. It excited certain pleasure sensors of the brain. But he
provides no evidence that such pleasure sensors are excited when
listening to live gamelan music (or any other kind). You're stretching
here.


Sorry, Charlie. It excited the pleasure senses *only* in response to
the ultrasonic cell *and* the results were highly correlated with the
respondents' ratings.


That's not precisely accurate, but it is true that both the listening
test results and the brain scans correlated with the presence of
high-frequency content (assuming the numbers are right). My point,
however, is that Oohashi never actually did brain scans with people
listening to LIVE gamelan music, so we don't really know whether THAT
would have the same effect on the brain. It might be that those
particular pleasure sensors react to the presence of high-frequency
noise. We just don't know.


A valid point, but one that would be extremely difficult to include in
the test. All tests involve some tradeoffs...I'm sure he would consider
the absence of live music an acceptable one, since the test had to be
done via a recording.


So whether Arny's belief in sub-ultrasonic corruption is true or not,
the fact is the testing yielded differences to a stimulus that was
supposedly inaudible,

That the authors themselves called inaudible. They couldn't explain
their own results, but they pretty much admitted that the subjects were
not "hearing" anything as hearing is generally understood.


That doesn't make the test or the results invalid. They simply said
they didn't know the mechanism. Often the case early in the life of a
new discovery.


Point taken. But even AFTER this experiment, Oohashi & Co. still
described those signals as inaudible. Don't try to get ahead of him.


I'm not.

and if audible, subtle in the extreme.

I and a few others have been arguing that some similar test protocol
was more likely to correlate with in-home experience. The problem is,
even if we are right, such testing is too cumbersome to be of any
real-world use except in special showcase scenarios...it is not
practical for reviewing, or for choosing audio equipment in the home.
However, it does certainly suggest caution in substituting AB or ABX
testing.

I've seen no evidence that anyone in the psychoacoustics field thinks
this. Have you?


I don't know any psychoacousticians who are also into audio. I asked
you before: if you do, please cite three and their published work on
open-ended evaluative testing of audio gear.


No psychoacoustician would waste his time on "open-ended evaluative
testing of audio gear," because he sees no evidence that it would be
fruitful. You'd like to believe that it would be fruitful, but you have
no evidence that would convince a trained psychoacoustician to try it.
So far as I can tell, Oohashi's one-off effort hasn't done the trick.


That's why I suggested that they needed to be audiophiles as well. From
that would come the motivation, perhaps, to undertake such a test, or to
worry about its applicability.


Such testing is radically different in the underlying conditions, and
since the musical response of the ear/brain complex is so subtle,
unpredictable, and poorly understood, it is simply too simplistic to
assert that what works for testing using white noise or audio codecs
works for overall open-ended musical evaluation of equipment. That is
why some of us prefer to stay with conventional audio evaluation given
the Hobson's choice.

I hope this helps you understand that I have a reason for being
skeptical of DBTs.

Odd, given that the experiment you've just described WAS a DBT.


Sorry that I substituted DBT for comparative quick-switch DBT, a
distinction I usually make. It wasn't double blind, it was blind.


Gee, I thought it was DB. Well, SB might explain the unexpected result!
Perhaps that's why JAES wouldn't publish it.


I don't recall whether or not it was single or double blind, but it was
blind. That's a more accurate statement than the one just above. Sorry.

No evidence they wouldn't publish it in its final form. No evidence it was
submitted. On what do you base your conclusion?


And it wasn't comparative. It was monadic. *EXACTLY* the kind of test
I have been promoting here as an alternative to short-term comparative
testing a la ABX, which most objectivists seem to prefer.


We prefer it because we know it to be effective, and relatively easy to
do. But we would accept any evidence based on double-blind,
level-matched, forced choice listening tests--that can be replicated.


Monadic is not a forced-choice listening test. It can (and preferably
should) be blind. It should be level matched. But by its very definition,
it is a single evaluation (thus "monadic"). And in practice, the "rating"
always follows the stimulus so as to not interfere with what is under test.
That's true in almost any field it is used in.
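
For anyone who wants the statistical distinction spelled out, here is
a minimal sketch (Python with a recent scipy assumed; all numbers
invented): monadic ratings from two independent groups get something
like a t-test, while a forced-choice ABX run gets a binomial test
against chance.

  from scipy.stats import ttest_ind, binomtest

  # Monadic: each listener hears ONE version and rates it afterward.
  full_range = [7.8, 8.1, 7.5, 8.4, 7.9]   # hypothetical ratings
  cutoff_20k = [6.9, 7.2, 7.0, 7.4, 6.8]
  t, p = ttest_ind(full_range, cutoff_20k)
  print("monadic: p = %.3f" % p)

  # ABX: forced choice, chance = 50%; say 13 correct out of 16 trials.
  res = binomtest(13, 16, 0.5, alternative='greater')
  print("ABX: p = %.3f" % res.pvalue)

Same 95% criterion in both cases; what differs is what the listener is
asked to do, which is the whole argument here.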


Even more important, why I believe it is intellectually dishonest to
promote them as the be-all and end-all for determining audio "truth",
as is done here on RAHE by some. They are a tool...useful in some
cases...unproven in others. Until that latter qualifier is removed, I
think overselling them does a disservice and can be classified as
"brainwashing".

That's absurd.


You are welcome to your opinion. I have mine. I've just stated it.


Well, if we're so good at brainwashing, why do you still beset us,
Harry?


Dunno. Perhaps my skull is too thick to be "washed"? :-)

  #24   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default

On 15 May 2005 16:11:55 GMT, (Ernst Raedecker) wrote:

On 13 May 2005 15:55:32 GMT, Jim Cate wrote:

The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles.


The point is that virtually all listeners DO hear the quality
differences between a $400 set and a $4000 set.


If you're talking about amps and CD players, then that statement is
simply not true - leaving aside overpriced rubbish such as Audio Note
which is deliberately broken.

If a certain person belongs to the tiny minority who doesn't hear the
difference, then he should buy the simple piece of equipment. Unless of
course he wants to buy a B & O from Denmark. People buy B & O for the
looks; it's a sculpture, an objet d'art that happens to emanate sound.

but the wires, transistors, resistors, magnets, and acoustics entailed
in reproducing music are governed by the laws of physics.


Yes. And as those parts are not ideal types in the scientific meaning
of the notion "ideal type", but real things, they sound different when
constructed differently. A cap is never simply a cap with this or that
value.


No, caps and especially cables do *not* sound different, whatever you
may care to claim. There remains a considerable pool of money waiting
for anyone who can prove otherwise.

Everything in nature is governed by the laws of physics; there is no
magic about it. However, many things cannot be measured and computed
readily. Simple things can, complex things cannot.


Audio equipment is a simple thing in this regard. I have *never* heard
an audible difference which could not be traced to some easily
measurable parameter.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #26   Report Post  
Robert Trosper
 
Posts: n/a
Default

On 14 May 2005 17:24:58 GMT, wrote:



Let's put it this way: John Atkinson HAS to believe as he does in order to
hold his job as the editor of a high-end stereo magazine. It's easier to
change your views from objective to subjective than it is to find another
decent job. People believe what they must believe to hold their position in
society. You can't be a narc without supporting the laws proscribing
narcotics.


Why not, Norman? Based on the number of narcs prosecuted and convicted
for corruption you certainly can do things you don't believe in. I
would venture that if what you say is true the entire corporate world
and the legal system would freeze up overnight - perhaps most of the
modern world. What a statement - "people believe what they must
believe in order to hold their position in society." Now, if you'd
said people must APPEAR to believe what they must believe ... I could
go along.


  #28   Report Post  
Jim Cate
 
Posts: n/a
Default

wrote:

wrote:

wrote in message
...

Jim Cate wrote:

Reading the above excerpts from the DBT debate, I have the following
(to me, obvious) questions:

1. If there are issues with the methodology of a particular
DB test, shouldn't the proper response be to devise
improvements to or modifications of the testing method
instead of deriding the whole concept of blind testing?



Mr. Cate should note that I didn't _deride_ the whole concept
of blind testing at the debate. Instead I stated, correctly,
that the vast majority of published blind tests that have been
cited as "proving" that, for example, amplifiers under normal
conditions of use do not sound different from one another
have had methodological and/or organizational problems.

Please note that I published the recording of the debate to
prevent people from being able to misrepresent what Arny Krueger
and I actually said.



And Mr. Atkinson should note that I did not, in fact, state that he had
derided the whole concept of blind testing at the debate. However, the
policies and practices of Stereophile over the years have substantially
done so. (Maybe I missed them, but how many Stereophile reviews in the
past 10 years have published the results of blind tests with methodology
generally approved by Stereophile? - Thirty? Fifteen maybe? Perhaps 10
or so? Five?) And how many articles have been published explaining the
potential benefits to the readers of blind testing under at least some
circumstances?

Jim
  #29   Report Post  
gofab.com
 
Posts: n/a
Default

On 13 May 2005 15:55:32 GMT, in article , Jim Cate
stated:

Reading the above excerpts from the DBT debate, I have the following (to
me, obvious) questions:

1. If there are issues with the methodology of a particular DB test,
shouldn't the proper response be to devise improvements to or
modifications of the testing method instead of deriding the whole
concept of blind testing? For example, if the listening times are too
short, in your opinion, wouldn't extending the listening times be the
logical response? If instruments for converting the signal are deemed
questionable or unreliable, wouldn't modifying the instruments, or
removing them and using simple switching circuitry and
level balancing, be a logical response? - If, that is, one truly wants
such tests to succeed.
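
The level-balancing step, at least, is easy to make precise. A minimal
sketch (Python with numpy assumed; the signals below are random
stand-ins, not captured device outputs) of matching two devices by RMS
level before any blind comparison:

  import numpy as np

  def rms_db(x):
      # RMS level of a signal, in dB (relative to full scale = 1.0)
      return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

  a = 0.5 * np.random.randn(48000)   # stand-in for device A's output
  b = 0.7 * np.random.randn(48000)   # stand-in for device B's output

  trim_db = rms_db(a) - rms_db(b)    # gain trim to apply to B
  b_matched = b * 10.0 ** (trim_db / 20.0)
  print("trim B by %+.2f dB" % trim_db)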


I think the point that double blind tests are too time consuming is a canard.

Not every product review need include a double blind test.

Instead, double blind tests should be conducted every so often. In that way,
the time required can be devoted to them. Even though every product won't be
tested that way, the results of those tests can help consumers of product
reviews evaluate the likely reliability of the subjective reviews. If double
blind tests consistently show that listeners can't distinguish certain products,
subjective claims can be evaluated in that light. If they show otherwise, so be
it.

2. The complaint is made that such tests are typically or often
"inconclusive," and therefore not of consequence or value. But isn't
that missing the whole point. - The fact that listeners have difficulty
in distinguishing one unit from another, particularly if one unit sells
for $400 and the other sells for $4,000, can be of substantial interest
and value to many audiophiles.


I agree. The fact that no difference can be detected is not inconclusive -- it
is a conclusion, and it is valuable information.

3. The suggestion that the performance of audio components is really a
matter of personal, subjective taste, much like the difference between
orchestras, musicians, etc., is highly misleading. If we are talking
about high fidelity audio, that is. Music, and our preferences therein,
are subjective, but the wires, transistors, resistors, magnets, and
acoustics entailed in reproducing music are governed by the laws of physics.


That's true. And since there are major financial ramifications to judgments
about wires and transistors, we should know if there are actual differences. If
in lieu of spending an extra $10,000 I should rather drink a couple of glasses
of red wine before listening, I'd like to know that.

4. As a long-time Stereophile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or its price?


That's an absolutely great idea.
  #30   Report Post  
gofab.com
 
Posts: n/a
Default

On 13 May 2005 18:59:14 GMT, in article , Steven
Sullivan stated:

4. As a long-time Stereophile subscriber, I would challenge the editors
to submit the following question to their subscribers: Would you like to
see more tests of at least some audio components in which at least
portions of the listening tests and evaluations were performed under
conditions in which the reviewers didn't know what component they were
listening to, or its price? Note that I am not suggesting that the
tests have to be totally DBT a la Arny's system, or the like, but
merely that they be listening tests in which the reviewer isn't told
what component he or she is listening to at least part of the time. To
sidestep one of the usual objections, I would suggest that you add a
question inquiring whether the listener would be willing to pay a few
dollars more to cover the costs of such tests. I would also suggest
that your poll or inquiry be conducted without your usual propaganda
about the limitations and uncertainty of such tests.


Consider the fallout -- from subscribers with rigs costing upwards of
$10K, and more importantly, from advertisers who make and market the
stuff -- when there turns out to be little correlation between price
and performance in such evaluations.



I find it shocking that you would raise that as a reason not to put that
question out there -- "We can't have a search for the truth because some people
are not going to like the truth." But I appreciate your honesty in being so
open about what the rub is!

I guess you've raised another point -- before we answer the question of
what Stereophile should do, we need to answer the question of what is
Stereophile's mission and to whom does it owe its allegiances.
Manufacturers or consumers? Is Stereophile an industry publication and
thus, essentially, propaganda? Or is it journalism?

Most likely the answer is somewhere in the middle. Maybe what Stereophile is,
is a compromise, finely honed over the years, between the competing demands
posed by these poles.


  #31   Report Post  
 
Posts: n/a
Default

gofab.com wrote:
I guess you've raised another point -- before we answer the question of
what Stereophile should do, we need to answer the question of what is
Stereophile's mission and to whom does it owe its allegiances.
Manufacturers or consumers? Is Stereophile an industry publication and
thus, essentially, propaganda? Or is it journalism?

Most likely the answer is somewhere in the middle. Maybe what Stereophile is,
is a compromise, finely honed over the years, between the competing demands
posed by these poles.


Oh, no, it's not a compromise at all. Like all for-profit publications,
Stereophile seeks to provide content that will appeal to large numbers
of readers considered desirable by the advertisers. In other words, it
needs both readers who believe in snake oil and advertisers who sell
it.

bob
____________

"Further carefully-conducted blind tests will be necessary
if these conclusions are felt to be in error."
--Stanley P. Lipshitz
  #32   Report Post  
 
Posts: n/a
Default

wrote:
gofab.com wrote:
I guess you've raised another point -- before we answer the question of
what Stereophile should do, we need to answer the question of what is
Stereophile's mission and to whom does it owe its allegiances.
Manufacturers or consumers? Is Stereophile an industry publication and
thus, essentially, propaganda? Or is it journalism?

Most likely the answer is somewhere in the middle. Maybe what Stereophile is,
is a compromise, finely honed over the years, between the competing demands
posed by these poles.


Oh, no, it's not a compromise at all. Like all for-profit publications,
Stereophile seeks to provide content that will appeal to large numbers
of readers considered desirable by the advertisers. In other words, it
needs both readers who believe in snake oil and advertisers who sell
it.



Sorry, but that is just a bunch of baloney. The history of Stereophile
is well documented. It was founded by J. Gordon Holt in reaction to the
hi-fi magazines of the time (objectivist magazines) to provide an
alternative approach to audio reviewing in which the reviews were based
on actual usage. Stereophile grew over the years using this approach to
audio review. Obviously the subscribers to Stereophile were audiophiles
who shared J. Gordon Holt's lack of satisfaction with established
objectivist audio publications. There were no advertisements in
Stereophile for years, so obviously the readership was established
before any advertisers were in place. Just because someone feels that
the established MO of hi-fi review from the objectivist magazines was
not satisfactory does not mean they believe in snake oil.





Scott Wheeler