#1  George M. Middius  (Note to the Idiot)



I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.



#2  ScottW


"George M. Middius" wrote in message
...


I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.


If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW


#3  George M. Middius



Yappity-yappity-yap.

I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.


If you accept anything Sanders says at face value you're much stupider
than I thought.


You don't really "think", anyway, so that's not much of an insult.

But explain this contradiction in Stereophile.


The weight of evidence tilts the scale toward Sanders' version.



#4  ScottW


"George M. Middius" wrote in message
...


Yappity-yappity-yap.

I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.


If you accept anything Sanders says at face value you're much stupider
than I thought.


You don't really "think", anyway, so that's not much of an insult.

But explain this contradiction in Stereophile.


The weight of evidence tilts the scale toward Sanders' version.


He spent at least 10x the time on his "joke" as I did on
stating my view on Stereophile reviews.
You can let him spin that into a rant if it suits your
purpose. Truth does not often suit your purpose.

ScottW


#5  Sockpuppet Yustabe


"ScottW" wrote in message
news:p0pGb.37791$m83.36994@fed1read01...

"George M. Middius" wrote in message
...


I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.


If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW



The measurements do not adequately describe the perceptions of the sound of
the music.




----== Posted via Newsfeed.Com - Unlimited-Uncensored-Secure Usenet News==----
http://www.newsfeed.com The #1 Newsgroup Service in the World! 100,000 Newsgroups
---= 19 East/West-Coast Specialized Servers - Total Privacy via Encryption =---


#6  ScottW


"Sockpuppet Yustabe" wrote in message
...

"ScottW" wrote in message
news:p0pGb.37791$m83.36994@fed1read01...

"George M. Middius" wrote in message
...


I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.


If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW



The measurements do not adequately describe the perceptions of the sound of
the music.


Possibly, and occasionally the perceptions only exist in the ear of the
beholder.

ScottW


#7  George M. Middius



The moribund M&M "life"style gets an infusion of doggie breath.

The measurements do not adequately describe the perceptions of the sound of
the music.


Possibly, and occasionally the perceptions only exist in the ear of the
beholder.


Nobody's buying your antihuman propaganda, little 'borg. Go suck a
bone.





This post reformatted by the Resistance,
laboring tirelessly to de-Kroogerize Usenet.
#8  Sockpuppet Yustabe


"ScottW" wrote in message
news:hGsGb.37833$m83.16466@fed1read01...

"Sockpuppet Yustabe" wrote in message
...

"ScottW" wrote in message
news:p0pGb.37791$m83.36994@fed1read01...

"George M. Middius" wrote in message
...


I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.

If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW



The measurements do not adequately describe the perceptions of the sound of
the music.


Possibly, and occasionally the perceptions only exist in the ear of the
beholder.


Then you must worship at the feet of the Gods of Accuracy
and listen to music that 'tests' perfectly, no matter whether
it is perceived to sound good, or not.

I will listen to what I perceive as sounding good.

The interesting point is whether or not one would
change one's perception of the sound of the music,
after learning how accurate or not the equipment tests.
A reverse expectation effect.





#9  George M. Middius



Sockpuppet Yustabe said:

The interesting point is whether or not one would
change one's perception of the sound of the music,
after learning how accurate or not the equipment tests.



If you ask the Krooborg, it will tell you that music is "irrelevant"
for evaluating audio equipment.




#10  ScottW


"Sockpuppet Yustabe" wrote in message
...

The measurements do not adequately describe the perceptions of the sound of
the music.


Possibly, and occasionally the perceptions only exist in the ear of the
beholder.


Then you must worship at the feet of the Gods of Accuracy
and listen to music that 'tests' perfectly, no matter whether
it is perceived to sound good, or not.


Don't go and stick words in my mouth. You didn't hear
me profess the need for absolute accuracy or even realism.

What I am referring to are the reviews where different units
are compared and perceptions of differences in sonic
performances are claimed which can't
be validated through differences in measured performance.
Accuracy or lack thereof is irrelevant. I would like to see
these subjective perceptions of difference validated through
DBTs. I don't think that is too much to ask of the
professionals performing these reviews.
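Whether reported differences survive a DBT is a simple statistical question: a listener who merely guesses in an ABX comparison follows a binomial distribution, so the chance of scoring some number of correct answers or better is easy to compute. A minimal sketch (the 12-of-16 trial count is an invented example, not a figure from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or better out of `trials`
    ABX presentations by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical example: 12 correct out of 16 trials
p = abx_p_value(12, 16)  # ~0.038, below the usual 0.05 threshold
```

If the reviewer really hears a difference, a result like this would validate it; a score near 8 of 16 would not.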


I will listen to what I perceive as sounding good.


As do I. I am not talking about listening. I am talking
about reading, actually paying for a professional's opinion
on the sonic characteristics of equipment.

The interesting point is whether or not one would
change one's perception of the sound of the music,
after learning how accurate or not the equipment tests.
A reverse expectation effect.


I have heard systems which are supposedly far more accurate
than mine which weren't as pleasing to me. I do realize that
we get accustomed to things. I still enjoy my old Large Advents.
Every time I play Selling England I long for those speakers just
because of the unique way they'd nearly explode on that low
organ note on Firth of Fifth. Nothing accurate about it, but I still like
it.

BTW, Merry Christmas. I hope you're recovering from your flood.

ScottW




#11  John Atkinson

"ScottW" wrote in message
news:p0pGb.37791$m83.36994@fed1read01...
explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?


Hi Scott, I have written at length about this occasional lack of
correlation in the magazine, most recently in the current (January)
issue. I don't see it as an indictment of my policy, merely a
byproduct of my trying to be open about the subject with my readers and
of giving them as much information about a product as I can.

Happy holidays.

John Atkinson
Editor, Stereophile
#12  S888Wheel


But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW







Interesting question. Could it be that in some cases the measured performance
doesn't really say much about the subjective performance? Maybe in some cases
there were other influences including component synergy. Maybe in some cases
the reviewer was just off the mark.
#13  George M. Middius



S888Wheel said:

Sometimes the two don't fully concur[sic] with one another. Why?


Interesting question. Could it be that in some cases the measured performance
doesn't really say much about the subjective performance? Maybe in some cases
there were other influences including component synergy. Maybe in some cases
the reviewer was just off the mark.



Maybe measurements are meaningless for consumers.


#14  S888Wheel



Maybe measurements are meaningless for consumers.


I think some are and some are not.
#15  George M. Middius



S888Wheel said:

Maybe measurements are meaningless for consumers.


I think some are and some are not.


You can have the ones allocated for me. My Xmas present to you.




#16  ScottW


"S888Wheel" wrote in message
...

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW







Interesting question. Could it be that in some cases the measured
performance doesn't really say much about the subjective performance?


There is almost infinite depth of detail one can explore in
measurements. Occasionally my work is to conduct a
detailed performance evaluation of a cellular data modem
in harsh environments.
I can almost guarantee you I can find a deficiency if you let
me test long enough. The last one dropped 10 dB in receive
sensitivity only after a channel handoff at cold temps.
Dumb luck we found it.

Maybe in some cases
there were other influences including component synergy.


Still, that should be measurable.

Maybe in some cases
the reviewer was just off the mark.


I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta.

ScottW


#17  S888Wheel


I said



There is almost infinite depth of detail one can explore
measurements. Occasionally my work is to conduct a
detailed performance evaluation of a cellular data modem
in harsh environments.
I can almost guarantee you I can find a deficiency if you let
me test long enough. Last one dropped 10 db in receive
sensitivity only after a channel handoff at cold temps.
Dumb luck we found it.


Maybe we are simply being too general in this discussion. You seem to think
there have been specific measurements that would suggest audible performance
that is in conflict with the subjective report of specific gear. If that is an
accurate assessment then it might be better to discuss such specific reports.


I said



Maybe in some cases
there were other influences including component synergy.


Scott said



Still, that should be measurable.



But they have to be measured. Are you suggesting that maybe Stereophile is not
making measurements they should be making?

I said


Maybe in some cases
the reviewer was just off the mark.



Scott said



I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta.


I am not against it but I think you are suggesting that Stereophile should
conduct some very challenging research to correlate subjective impressions with
measured performance.
#18  George M. Middius



S888Wheel said to The Idiot:

I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta.


I am not against it but I think you are suggesting that Stereophile should
conduct some very challenging research to correlate subjective impressions with
measured performance.


From the 'borg viewpoint, no expense is too great, no undertaking too
complex, if there's the tiniest chance that the E.H.E.E. will be
"exposed" as the "scam operation" the 'borgs know it to be.




#19  ScottW


"S888Wheel" wrote in message
...

I said



There is almost infinite depth of detail one can explore in
measurements. Occasionally my work is to conduct a
detailed performance evaluation of a cellular data modem
in harsh environments.
I can almost guarantee you I can find a deficiency if you let
me test long enough. The last one dropped 10 dB in receive
sensitivity only after a channel handoff at cold temps.
Dumb luck we found it.


Maybe we are simply being too general in this discussion. You seem to think
there have been specific measurements that would suggest audible performance
that is in conflict with the subjective report of specific gear. If that is an
accurate assessment then it might be better to discuss such specific reports.

I'll have to browse the archives. I'm sure a good example shouldn't be
hard to find. I'm also sure avid Stereophile readers could point out a few
examples with ease. I've been a casual reader at best.


I said



Maybe in some cases
there were other influences including component synergy.


Scott said



Still, that should be measurable.



But they have to be measured. Are you suggesting that maybe Stereophile is not
making measurements they should be making?


No, not until the measurements say there isn't an audible
difference yet a DBT confirms there is.


I said


Maybe in some cases
the reviewer was just off the mark.



Scott said



I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta.


I am not against it but I think you are suggesting that Stereophile should
conduct some very challenging research to correlate subjective impressions with
measured performance.


Well, let's first remove the subjectivity and simply
confirm audible differences.

ScottW


#20  JBorg

ScottW wrote:


...


If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.




Sometimes the two don't fully concur with one another. Why?




Why? It's because you're attempting to compare and then collate
the results from two incongruent sources.


Merry Christmas!


ScottW



#21  Arny Krueger

"JBorg" wrote in message


Why? It's because you're attempting to compare and then collate
the results from two incongruent sources.


Here's another idiot who obviously doesn't know the difference between collate
and correlate. Probably due to a lifetime of dead-end clerical jobs.


#22  JBorg

Arny Krueger wrote:
JBorg wrote in message





Why? It's because you're attempting to compare and then collate
the results from two incongruent sources.




Here's another idiot who obviously doesn't know the difference between
collate and correlate. Probably due to a lifetime of dead-end clerical
jobs.



Shooooooo... not you. Go awayyy.


To correlate is to bring into causal, complementary, parallel, or
reciprocal relation. That is by way of saying-- to bring the
reviewer's perception into causal relation with the detailed test
measurements.

To collate is to examine and compare carefully in order to note
points of disagreement. That is, to establish and to verify the
point of differences between the reviewer's perception against the
results of the detailed test measurements. Here lies the original
poster's curiosity.

To wit: The eloquent subjective appraisals of the reviewers do
not concur with test measurements.
#23  Arny Krueger

"ScottW" wrote in message
news:p0pGb.37791$m83.36994@fed1read01

But explain this contradiction in Stereophile.


They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.


They also provide detailed test measurements.


Sometimes the two don't fully concur with one another. Why?


First off, Stereophile doesn't always do appropriate kinds of listening
tests. Their dogmatic adherence to sighted, level-matched, single-presentation
listening techniques minimizes real listener sensitivity and maximizes the
possibility of imaginary results. The only thing they do right is the
level-matching, and I suspect that their reviewers don't always adhere to
that.

Stereophile goes out of its way to avoid time-synchronization and formal
bias controls, despite all the evidence that these are critical if
sensitive, reliable results are desired. I've concluded that Stereophile
does not want to do listening tests that are sensitive and reliable, because
they are afraid of the results. Science can be very unpredictable and the
results could easily go against years of grotesquely-flawed editorial
policies such as the RCL, and embarrass many advertisers.

So, any Stereophile comparison of ear versus gear can easily be garbage-in,
garbage-out on the ear side of the equation.

Secondly, Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but John
Atkinson appears to be above all authority but the voices that only he
hears. He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear that
this is not because nobody has tried to find correlation. It's just that the
measurement methodology is flawed, or at best has no practical advantages
over simpler methodologies that correlate better with actual use.

Thirdly, there are whole classes of equipment, mostly relating to snake oil
toys and vinyl, for which Stereophile doesn't perform any relevant technical
tests at all. No test gear is used, so there is no possibility of a
valid ear versus gear comparison.

Finally, Stereophile seems to bend over backward to avoid mentioning an
increasingly-common situation where the equipment is so accurate that it has
no sonic character at all, or very little sonic character. In these cases
Stereophile's measurements are effectively meaningless when it comes to
describing sonic character, because there is precious little or no sonic
character to describe.


#24  Scott Gardner

On Thu, 25 Dec 2003 07:49:29 -0500, "Arny Krueger"
wrote:
snip
Finally, Stereophile seems to bend over backward to avoid mentioning an
increasingly-common situation where the equipment is so accurate that it has
no sonic character at all, or very little sonic character. In these cases
Stereophile's measurements are effectively meaningless when it comes to
describing sonic character, because there is precious little or no sonic
character to describe.


Along these lines, who was it back in the sixties that first said "All
sonically-accurate equipment must, by definition, sound alike"? (I'm
paraphrasing, but that's the gist of the statement.)

Scott Gardner

#25  Arny Krueger

"Scott Gardner" wrote in message

On Thu, 25 Dec 2003 07:49:29 -0500, "Arny Krueger"
wrote:
snip
Finally, Stereophile seems to bend over backward to avoid mentioning
an increasingly-common situation where the equipment is so accurate
that it has no sonic character at all, or very little sonic
character. In these cases Stereophile's measurements are effectively
meaningless when it comes to describing sonic character, because
there is precious little or no sonic character to describe.


Along these lines, who was it back in the sixties that first said "All
sonically-accurate equipment must, by definition, sound alike"? (I'm
paraphrasing, but that's the gist of the statement.)


Sounds like the sort of thing that the late Julian Hirsch would say. I
don't know if he said this in the 60s or 70s, but it was about then that at
least a modest amount of sonically-accurate or nearly-sonically-accurate
equipment started showing up on the market.




#26  Scott Gardner

On Thu, 25 Dec 2003 16:58:23 -0500, "Arny Krueger"
wrote:

"Scott Gardner" wrote in message

On Thu, 25 Dec 2003 07:49:29 -0500, "Arny Krueger"
wrote:
snip
Finally, Stereophile seems to bend over backward to avoid mentioning
an increasingly-common situation where the equipment is so accurate
that it has no sonic character at all, or very little sonic
character. In these cases Stereophile's measurements are effectively
meaningless when it comes to describing sonic character, because
there is precious little or no sonic character to describe.


Along these lines, who was it back in the sixties that first said "All
sonically-accurate equipment must, by definition, sound alike"? (I'm
paraphrasing, but that's the gist of the statement.)


Sounds like the sort of thing that the late Julian Hirsch would say. I
don't know if he said this in the 60s or 70s, but it was about then that at
least a modest amount of sonically-accurate or nearly-sonically-accurate
equipment started showing up on the market.


I came across the quote when I was reading about Richard
Clark's "Amplifier Challenge". The statement seems pretty obvious to
me, but the author of the article I was reading implied that it was a
pretty ground-breaking assertion at the time it was originally made.
The idea that audible differences between two high-end pieces
of equipment are proof that one (or both) of them is noticeably
inaccurate is a powerful statement, and one that doesn't seem to get
much mention in the literature these days.

Scott Gardner


#27  John Atkinson

In message, Arny Krueger wrote:
The only thing [Stereophile does] right is the level-matching and
I suspect that their reviewers don't always adhere to that.


Amazing! I never suspected that when I perform listening tests you
are right there in the room observing me, Mr. Krueger. Nevertheless,
whatever you "suspect," Mr. Krueger, I match levels to within less
than 0.1dB whenever I directly compare components. See my review
of the Sony SCD-XA9000ES SACD player in the December issue for an
example (http://www.stereophile.com/digitalso...views/1203sony).
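For context on the 0.1 dB figure: level differences in dB are 20·log10 of the amplitude ratio, so a 0.1 dB mismatch is only about 1.2% in amplitude. A minimal sketch of the arithmetic (illustrative values, not from the review):

```python
from math import log10

def db_difference(v_a: float, v_b: float) -> float:
    """Level difference in dB between two voltages/amplitudes."""
    return 20 * log10(v_a / v_b)

# A 0.1 dB mismatch corresponds to only ~1.16% in amplitude:
ratio = 10 ** (0.1 / 20)  # ~1.0116
```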

Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):

Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."

I imagine this is what Mr. Krueger means when he wrote "The AES says don't
do it." But unfortunately for Mr. Krueger, the very same AES standard
goes on to say in the very next section (4.2.5.3):

"The dither may be omitted in special cases for investigative purposes.
One example of when this is desirable is when viewing bit weights on an
oscilloscope with ramp signals. In these circumstances the dither signal
can obscure the bit variations being viewed."

As the first specific test I use an undithered signal for is indeed for
investigative purposes -- looking at how the error in a DAC's MSBs
compare to the LSB, in other words, the "bit weights" -- it looks as if
Mr. Krueger's "The AES says don't do it" is just plain wrong.
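Background for readers on what dithering a test signal means (this sketch is mine, not either poster's; the 16-bit depth, 997 Hz tone, and ±1 LSB TPDF dither amplitude are illustrative assumptions): adding small random noise before quantization turns the otherwise signal-correlated quantization error into benign, bounded noise.

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0 / 32768                                # one LSB at 16-bit depth
n = np.arange(48000)
x = 0.25 * np.sin(2 * np.pi * 997 * n / 48000)   # arbitrary test tone

# TPDF dither: difference of two uniform variables, spanning +/-1 LSB
tpdf = (rng.random(x.size) - rng.random(x.size)) * lsb
q = np.round((x + tpdf) / lsb) * lsb             # dithered quantization

err = q - x  # bounded by 1.5 LSB and uncorrelated with the signal
```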

Mr. Krueger is also incorrect about the second undithered test signal
I use, which is to examine a DAC's or CD player's rejection of
word-clock jitter, to which he refers in his next paragraph:

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and implemented as
a commercially available piece of test equipment by Paul Miller -- needs
to use an undithered signal reveals that he still does not grasp the
significance of the J-Test or perhaps even the philosophy of measurement
in general. To perform a measurement to examine a specific aspect of
component behavior, you need to use a diagnostic signal. The J-Test
signal is diagnostic for the assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough frequency
that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these characteristics,
rendering it no longer diagnostic in nature. As an example of a
_non-diagnostic_ test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.
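Point 1 can be checked numerically. The sketch below (mine, not Mr. Atkinson's; 48 kHz and 16 bits are assumed values, and only the Fs/4 component of the J-Test is shown) demonstrates that an undithered Fs/4 sine hits only the exactly representable sample values 0, +1, 0, -1, so quantizing it introduces essentially no error, while an arbitrary 997 Hz tone shows the expected half-LSB error:

```python
import numpy as np

fs, full_scale = 48000, 32767          # assumed sample rate and 16-bit peak
n = np.arange(fs)

def quant_error(freq_hz: float) -> float:
    """Peak 16-bit quantization error for an undithered full-scale sine."""
    x = np.sin(2 * np.pi * freq_hz * n / fs)
    q = np.round(x * full_scale) / full_scale
    return float(np.max(np.abs(q - x)))

e_fs4 = quant_error(fs / 4)   # integer fraction of fs: error ~0
e_997 = quant_error(997)      # arbitrary tone: error ~half an LSB
```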

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable. These, of course, are desirable in a measurement technique.
And perhaps it is worth noting that, as I have pointed out before,
consistency is something lacking from Mr. Krueger's own published
measurements of digital components on his website, with different
measurement bandwidths, word lengths, and FFT sizes making comparisons
very difficult, if not impossible.

I have snipped the rest of Mr. Krueger's comments as they reveal merely
that he doesn't actually read the magazine he so loves to criticize. :-)

John Atkinson
Editor, Stereophile
#28  Arny Krueger

"John Atkinson" wrote in message
In message, Arny Krueger wrote:
The only thing [Stereophile does] right is the level-matching and
I suspect that their reviewers don't always adhere to that.


Amazing! I never suspected that when I perform listening tests you
are right there in the room observing me, Mr. Krueger. Nevertheless,
whatever you "suspect," Mr. Krueger, I match levels to within less
than 0.1dB whenever I directly compare components. See my review
of the Sony SCD-XA9000ES SACD player in the December issue for an
example (http://www.stereophile.com/digitalso...views/1203sony).


I guess that Atkinson wants us to believe that when one speaks of "all their
reviewers", one speaks only of him.

Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion.


I mention Atkinson's delusions, and he gifts us with another one - that he's
bested me in a technical discussion.

My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)


So, as Atkinson's version of the story evolves, it wasn't him alone that
bested me, but the dynamic trio of Atkinson, Bamborough, and Zelniker.
Notice how the story is changing right before our very eyes! In fact
Bamborough and Zelniker use the same hit-and-run confuse-not-convince
"debating trade" tactics that Atkinson uses here.

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):


Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."




I imagine this is what Mr. Krueger means when he wrote "The AES says
don't do it." But unfortunately for Mr. Krueger, the very same AES
standard goes on to say in the very next section (4.2.5.3):


"The dither may be omitted in special cases for investigative
purposes. One example of when this is desirable is when viewing bit
weights on an oscilloscope with ramp signals. In these circumstances
the dither signal can obscure the bit variations being viewed."
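The distinction the standard draws can be illustrated with a short sketch (an illustration only, not code from either poster): quantizing a low-level 16-bit sine with and without non-subtractive TPDF dither. Without dither the error is correlated with the signal and concentrates at harmonics; with dither it is spread out as benign wideband noise.

```python
import numpy as np

fs = 48000
n = 48000                     # one second; a 1kHz tone lands exactly on an FFT bin
lsb = 1 / 32768               # 16-bit quantization step size
t = np.arange(n) / fs
x = 4 * lsb * np.sin(2 * np.pi * 1000 * t)   # 1kHz tone, only ~4 LSB peak

def quantize(sig, dither):
    d = 0.0
    if dither:
        # non-subtractive TPDF dither: sum of two uniforms, +/-1 LSB peak
        rng = np.random.default_rng(0)
        d = (rng.uniform(-0.5, 0.5, len(sig)) +
             rng.uniform(-0.5, 0.5, len(sig))) * lsb
    return np.round((sig + d) / lsb) * lsb

err_u = quantize(x, False) - x    # undithered error: deterministic, periodic
err_d = quantize(x, True) - x     # dithered error: noise-like

# The undithered error spectrum piles up in a few harmonic bins of 1kHz,
# so its largest single bin is far higher than any bin of the dithered,
# evenly spread error spectrum.
peak_u = np.max(np.abs(np.fft.rfft(err_u)[1:]))
peak_d = np.max(np.abs(np.fft.rfft(err_d)[1:]))
print(peak_u > peak_d)
```

This is the trade the standard is describing: dither costs a little total noise power but removes the signal-correlated distortion, which is why it is required for general performance measurements yet deliberately omitted when one wants to see the raw bit behavior.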


At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews." Of course these are
two very different things, but in the spirit of his shifting claims in the
matter already demonstrated once above, let's see where this goes...

As the first specific test I use an undithered signal for is indeed
for investigative purposes -- looking at how the error in a DAC's MSBs
compare to the LSB, in other words, the "bit weights" -- it looks as
if Mr. Krueger's "The AES says don't do it" is just plain wrong.


The problem here is that again Atkinson has confused detailed investigations
into how individual subcomponents of chips in the player work (i.e.,
"investigation") with the business of characterizing how it will satisfy
consumers. Consumers don't care about whether one individual bit of the
approximately 65,000 levels supported by the CD format works; they want to
know how the device will sound. It's a simple matter to show that nobody,
not even John Atkinson, can hear a single one of those bits working or not
working. Yet he deems it appropriate to confuse consumers with this sort of
minutiae, perhaps so that they won't notice his egregiously-flawed subjective
tests.

Mr. Krueger is also incorrect about the second undithered test signal
I use, which is to examine a DAC's or CD player's rejection of
word-clock jitter, to which he refers in his next paragraph:


He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and
implemented as a commercially available piece of test equipment by
Paul Miller -- needs to use an undithered signal reveals that he
still does not grasp the significance of the J-Test or perhaps even
the philosophy of measurement in general. To perform a measurement to
examine a specific aspect of component behavior, you need to use a
diagnostic signal. The J-Test signal is diagnostic for the assessment
of word-clock jitter because:


1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough
frequency that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these
characteristics, rendering it no longer diagnostic in nature. As an
example of a _non_-diagnostic test signal, see Arny Krueger's use of
a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.



Notice that *none* of the above minutiae and fine detail addresses my opening
critical comment:

"He does other tests, relating to jitter, for which there is no independent
confirmation of reliable relevance to audibility".

Now did you see anything in Atkinson's two numbered paragraphs above and the
subsequent unnumbered paragraph that addresses my comment about listening
tests and independent confirmation of audibility? No, you didn't!

What you saw is the same-old, same-old Atkinson song-and-dance, which
reminds many knowledgeable people of that old carny's advice: "If you can't
convince them, confuse them!"

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable.


A repeatable test with no real-world confirmation (i.e., audibility) is just
a reliable producer of meaningless garbage. Is a reliable producer of
irrelevant garbage numbers better or worse than an unreliable producer of
irrelevant garbage numbers?

These, of course, are desirable in a measurement
technique. And perhaps it is worth noting that, as I have pointed out
before, consistency is something lacking from Mr. Krueger's own
published measurements of digital components on his website, with
different measurement bandwidths, word lengths, and FFT sizes making
comparisons very difficult, if not impossible.


This is just more of Atkinson's "confuse 'em if you can't convince 'em"
schtick. My web sites test a wide range of equipment, in virtually every
performance category from the sublime to the ridiculous. Of course I pick
testing parameters that are appropriate to the general quality level and
characteristics of the equipment I test. I've also evolved my testing
techniques as I learned more about how audio equipment works.

BTW, note that Atkinson complains that I use different measurement
bandwidths, word lengths and FFT sizes. Atkinson doesn't test the wide range
of equipment I test, and he doesn't test it as thoroughly. For example,
compare his test of the Card Deluxe to mine. The relevant URLs are

http://www.pcavtech.com/soundcards/C...luxe/index.htm


and

http://www.stereophile.com/digitalso...80/index4.html


Compare Atkinson's Figure 2 to my:

http://www.pcavtech.com/soundcards/C..._2496-a_FS.gif

http://www.pcavtech.com/soundcards/C..._2444-a_FS.gif

http://www.pcavtech.com/soundcards/C..._1644-a_FS.gif


First off, you will notice that Atkinson shows the results of his 1kHz
performance test under just one operational mode, while I provided data
for three different and highly relevant operational modes.

Note that my plots document measurement bandwidths, word lengths and FFT
sizes, while Atkinson's figure 2 and supporting text don't document this
very information that Atkinson complained about. So, he's complaining about
information that I publish with every test as a matter of course, while he
doesn't put the same information into his own reports as they are published
in his magazine and on his web site.

Note that my plots provide high-resolution information down to below 20Hz,
while Atkinson's plot squishes all data below 1kHz into a tiny strip along
the left edge of the plot, where it is difficult or impossible to analyze. My
plots allow people to determine if there are low-frequency spurious
responses, hum, or significant amounts of 1/f noise. Atkinson's don't.
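The resolution point being argued here comes down to simple arithmetic (a sketch with illustrative FFT sizes, not either site's actual settings): an FFT analyzer's bin width is fs/N, so the FFT size determines how many bins describe the region below 1kHz, and on a linear frequency axis that whole region occupies only a small fraction of the plot width.

```python
# Bin width of an FFT-based spectrum analyzer is fs / N (Hz per bin).
fs = 44100

for n_fft in (2048, 16384, 65536):       # illustrative FFT sizes
    bin_width = fs / n_fft               # Hz per FFT bin
    bins_below_1k = int(1000 / bin_width)
    print(f"N={n_fft:6d}: {bin_width:6.2f} Hz/bin, "
          f"{bins_below_1k} bins below 1 kHz")

# On a linear axis spanning 0 Hz to Nyquist, everything below 1 kHz
# occupies this fraction of the plot width:
linear_fraction = 1000 / (fs / 2)
print(f"below-1kHz share of a linear axis: {linear_fraction:.1%}")
```

This is why both the FFT size and the axis scaling matter when comparing low-frequency detail between two sets of published plots.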

I have snipped the rest of Mr. Krueger's comments as they reveal
merely that he doesn't actually read the magazine he so loves to
criticize. :-)


It appears that Atkinson is up to tricks as usual. If you analyze his
technical critique of my published tests you find that he's basically
faulting me for testing equipment in more operational modes than he does,
and providing more documentation about test conditions than he provides.

Let me add that I am fully aware of the effects of testing equipment with
different measurement bandwidths, word lengths and FFT sizes. I take steps
to ensure that any variations in test conditions don't adversely affect my
summary evaluation of equipment performance.

Furthermore, while Atkinson asks you to trust his poorly-contrived listening
tests, I provide the means for people to audition the Card Deluxe with their
own speakers and ears at PCAVTech's sister www.pcabx.com web site.



  #29   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Arny Krueger" wrote in message:
"John Atkinson" wrote in message:
In message, Arny Krueger wrote:
Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)


So, as Atkinson's version of the story evolves, it wasn't him alone
that bested me, but the dynamic trio of Atkinson, Bamborough, and
Zelniker. Notice how the story is changing right before our very eyes!


"Our?" Do you have a frog in your pocket, Mr. Krueger? No, Mr. Krueger.
The story hasn't changed. I was merely pointing out that Paul Bamborough
and Glenn Zelniker, both digital engineers with enviable reputations,
posted agreement with the case I made, and as I said, joined me in
pointing out the flaws in your argument.

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):
Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."

I imagine this is what Mr. Krueger means when he wrote "The AES says
don't do it." But unfortunately for Mr. Krueger, the very same AES
standard goes on to say in the very next section (4.2.5.3):
"The dither may be omitted in special cases for investigative
purposes. One example of when this is desirable is when viewing bit
weights on an oscilloscope with ramp signals. In these circumstances
the dither signal can obscure the bit variations being viewed."


At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews." Of course these
are two very different things...


Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being viewed."

It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, that was
prepared under the aegis of the AES.
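The figure of -90.31dBFS is not arbitrary: it is the level of a sine whose peak is exactly one 16-bit LSB, so such a tone exercises only the bottom bit of the converter. A one-line calculation confirms it:

```python
# The level of a +/-1 LSB sine in a 16-bit system, expressed in dBFS.
import math

full_scale = 32768                        # 16-bit signed full scale (2**15)
one_lsb_dbfs = 20 * math.log10(1 / full_scale)
print(round(one_lsb_dbfs, 2))             # -90.31
```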

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?

As the first specific test I use an undithered signal for is indeed
for investigative purposes -- looking at how the error in a DAC's MSBs
compare to the LSB, in other words, the "bit weights" -- it looks as
if Mr. Krueger's "The AES says don't do it" is just plain wrong.


The problem here is that again Atkinson has confused detailed
investigations into how individual subcomponents of chips in the player
work (i.e., "[investigation]") with the business of characterizing how
it will satisfy consumers.


The AES standard concerns the measured assessment of "Compact-Disc (CD)
players, the reproduce sections of record/replay devices, and
digital-to-analog converters." As I pointed out, it makes an exception
for "investigative purposes" and makes no mention of such "purposes"
being limited to the "subcomponents of chips." The examination of "bit
weights" is fundamental to good sound from a digital component, because
if each one of the 65,535 integral step changes in the digital word
describing the signal produces a different-sized change in the
reconstructed analog signal, the result is measurable and audible
distortion.
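The effect being described can be shown with a toy model (a hypothetical 4-bit converter for clarity, not either author's test code): every step of an ideally weighted converter is exactly 1 LSB, while a converter with a mis-weighted top bit produces an unequal step at the major bit transition — exactly the kind of error a ramp or bit-weight test exposes.

```python
import numpy as np

def dac(code, msb_error=0.0):
    # Toy 4-bit two's-complement DAC: bottom three bits have weights
    # 1, 2, 4; the top (sign) bit has nominal weight -8, optionally skewed.
    low = code & 0x7
    sign = (code >> 3) & 1
    return low - sign * 8 * (1 + msb_error)

codes = np.arange(-8, 8)                          # full-scale digital ramp
ideal = np.array([dac(c & 0xF) for c in codes])
skewed = np.array([dac(c & 0xF, msb_error=0.05) for c in codes])

# Ideal steps are all exactly 1 LSB; the skewed DAC shows a 1.4 LSB step
# where the top bit changes state, i.e. a bit-weight (DNL) error.
print(np.diff(ideal))     # all ones
print(np.diff(skewed))    # one step of 1.4 at the -1 -> 0 transition
```

Fed with an analog-side-invisible signal like a low-level ramp, that unequal step turns directly into the distortion the test is designed to reveal.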

Consumers don't care about whether one individual bit of the
approximately 65,000 levels supported by the CD format works, they
want to know how the device will sound.


Of course. And being able to pass a "bit weight" test is fundamental to
a digital component being able to sound good. This is why I publish the
results of this test for every digital product reviewed in Stereophile.
I am pleased to report that the bad old days, when very few DACs could
pass this test, are behind us.

It's a simple matter to show that nobody, not even John Atkinson can
hear a single one of those bits working or not working.


I am not sure what this means. If a player fails the test I am describing,
both audible distortion and sometimes even more audible changes in pitch
can result. I would have thought it important for consumers to learn
of such departures from ideal performance.

Yet he deems it appropriate to confuse consumers with this sort of
[minutiae], perhaps so that they won't notice his egregiously-flawed
subjective tests.


In your opinion, Mr. Krueger, and I have no need to argue with you
about opinions, only when you misstate facts. As you have done in this
instance. To recap:

I use just two undithered test signals as part of the battery of tests
I perform on digital components for Stereophile. Mr. Krueger has
characterized my use of these test signals as "really weird" and has
claimed that their use is forbidden by the Audio Engineering
Society. Yet, as I have shown by quoting the complete text of the
relevant paragraphs from the AES standard on the subject, one of the
tests I use is specifically mentioned as an example of the kind of test
where dither would interfere with the results and where an undithered
signal is recommended.

As my position on this subject has been supported by two widely
respected experts on digital audio, I don't think that anything more
needs to be said about it.

And as I said, Mr. Krueger is also incorrect about the second
undithered test signal I use, which is to examine a DAC's or CD player's
rejection of word-clock jitter. My use is neither "really weird," nor
is it specifically forbidden by the Audio Engineering Society.

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and
implemented as a commercially available piece of test equipment by
Paul Miller -- needs to use an undithered signal reveals that he
still does not grasp the significance of the J-Test or perhaps even
the philosophy of measurement in general. To perform a measurement to
examine a specific aspect of component behavior, you need to use a
diagnostic signal. The J-Test signal is diagnostic for the assessment
of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough
frequency that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these
characteristics, rendering it no longer diagnostic in nature. As an
example of a _non_-diagnostic test signal, see Arny Krueger's use of
a dithered 11.025kHz tone in his website tests of soundcards at a
96kHz sample rate. This meets none of the criteria I have just
outlined.
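Point 1 above can be demonstrated numerically (a sketch with illustrative values, not Dunn's exact J-Test specification): a tone at exactly fs/4 with an integer code amplitude lands on integer sample values every period, so quantization contributes essentially no error even without dither, whereas a tone at an arbitrary frequency cannot avoid rounding error.

```python
import numpy as np

fs = 48000
n = 4800
t = np.arange(n)
a = 8192                                   # an exact 16-bit integer amplitude

# fs/4 tone: each period is four samples, hitting 0, +a, 0, -a exactly
x = a * np.sin(2 * np.pi * (fs / 4) * t / fs)
err_fs4 = np.max(np.abs(np.round(x) - x))
print(err_fs4)          # float rounding only, vastly below 1 LSB

# An arbitrary frequency (997 Hz here) cannot hit integer codes,
# so rounding error approaches its half-LSB worst case somewhere.
y = a * np.sin(2 * np.pi * 997 * t / fs)
err_997 = np.max(np.abs(np.round(y) - y))
print(err_997)          # close to 0.5
```

That zero-quantization-error property is what lets any spuriae seen at the analog output be attributed to the device under test rather than to the test signal itself.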


Notice that *none* of the above minutiae and fine detail addresses my
opening critical comment: "He does other tests, relating to jitter, for
which there is no independent confirmation of reliable relevance to
audibility".

Now did you see anything in Atkinson's two numbered paragraphs above and
the subsequent unnumbered paragraph that addresses my comment about
listening tests and independent confirmation of audibility? No, you
didn't!


You are absolutely correct, Mr. Krueger. There is nothing about the
audibility of jitter in these paragraphs. This is because I was
addressing your statements that this test, like the one examining bit
weights, was "really weird" and that "The AES says don't do it, but
John Atkinson appears to be above all authority but the voices that
only he hears."

Regarding audibility, I then specifically said, in my next paragraph,
that "One can argue about the audibility of jitter..." As you _don't_
think it is audible but my experience leads me to believe that it _can_
be, depending on level and spectrum, again I don't see any point in
arguing this subject with you, Mr. Krueger. All I am doing is
specifically addressing the point you made in your original posting
and showing that it was incorrect. Which I have done.

Finally, you recently claimed in another posting that your attacking me
was "highly appropriate," given that my views about you "are totally
fallacious, libelous and despicable." I suggest to those reading this
thread that they note that Arny Krueger has indeed made this discussion
highly personal, using phrases such as "the voices that only [John
Atkinson] hears"; "Notice how [John Atkinson's] story is changing right
before our very eyes!"; "the same-old, same-old Atkinson
song-and-dance which reminds many knowledgeable people of that old
carny's advice 'If you can't convince them, confuse them!'"; "This is
just more of Atkinson's 'confuse 'em if you can't convince 'em'
schtick"; "Atkinson is up to tricks as usual."

I suggest people think for themselves about how appropriate Mr.
Krueger's attacks are, and how relevant they are to a subject where
it is perfectly acceptable for people to hold different views.

John Atkinson
Editor, Stereophile
  #30   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"John Atkinson" wrote in message:

And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and implemented as
a commercially available piece of test equipment by Paul Miller -- needs
to use an undithered signal reveals that he still does not grasp the
significance of the J-Test or perhaps even the philosophy of measurement
in general. To perform a measurement to examine a specific aspect of
component behavior, you need to use a diagnostic signal. The J-Test
signal is diagnostic for the assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough frequency
that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these characteristics,
rendering it no longer diagnostic in nature. As an example of a
_non_-diagnostic test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable. These, of course, are desirable in a measurement technique.
And perhaps it is worth noting that, as I have pointed out before,
consistency is something lacking from Mr. Krueger's own published
measurements of digital components on his website, with different
measurement bandwidths, word lengths, and FFT sizes making comparisons
very difficult, if not impossible.


Isn't there a question about the validity of applying this test to CD
players which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?

ScottW




  #31   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
Isn't there a question about the validity of applying this test to CD
players which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?


Hi ScottW, yes, the J-Test was originally intended to examine devices where
the clock was embedded in serial data. What I find interesting is that
CD players do differ quite considerably in how they handle this signal,
meaning that there are other mechanisms going on producing the same effect.
(Meitner and Gendron's LIM, for example, which they discussed in an AES paper
about 10 years ago.) And of course, those CD players that use an internal
S/PDIF link stand revealed for what they are on the J-Test.

BTW, you might care to look at the results on the J-Test for the Burmester
CD player in our December issue (available in our on-line archives). It did
extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding
this out.

John Atkinson
Editor, Stereophile
  #32   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
I thought it was generally applied to HT receivers with DACs
and external DACs?


Oh, and one more thing. I have no problem with people not thinking this
test is useful, or is not being applied appropriately, or offers no
proven correlation with audible problems. If those are your opinions, I
have no intention of arguing with you. They just don't happen to be _my_
opinions. I see nothing wrong in us agreeing to disagree. What I _am_
objecting to is Arny Krueger's trying to disseminate something that is
not true, which is his statement that tests that don't use dither are
forbidden by an AES standard. For him to keep repeating this falsehood
is dirty pool.

John Atkinson
Editor, Stereophile