  #41   S888Wheel
Note to the Idiot


Well, let's first remove the subjectivity and simply
confirm audible differences.

ScottW


I don't have a problem with that. But if we are going to expect Stereophile to hang their hat on the results, the protocols have to be up to valid scientific standards, IMO. I think this is where DBTs stop being a snap.
  #42   Arny Krueger

"S888Wheel" wrote in message


Arny said


JJ was a free agent for a while after Lucent fired him, and before
Microsoft hired him. However, JJ seems to be too much of a closet
golden ear to be as aggressive and pragmatic as scientific
objectivity demands.


That's a load of crap.


Prove it.

Unlike you, he made his living at it.


Not proof. The basic fallacy is that anybody who does something for money is perfect.

Arny said

This allows him
to curry favor with the golden ear press which he actively did for a
while.


Nonsense. It is his professional pedagree that gives him credibility.


I'm quite sure that JJ has no "pedagree". Sockpuppet Wheel, why don't you learn to spell at the 6th grade level and work up to the adult level from there?

Arny said

Yet he talks the talk, maintaining a veneer of scientific
respectability.


No, he simply is respectable scientifically.


In your mind all kinds of charlatans seem to be credible, and people who do
work to extend scientific objectivity are fools.

Arny said

Hey, it's what he seems to need to be comfortable.



No, it was what he needed to do his job all those years.


Didn't work in the long run, did it?

Arny said


It's not that tough to DBT just about any audio component if you are
pragmatic enough. JJ's incessant public mindless and evidenceless
criticism of PCABX convinced me that he's simply not pragmatic enough
to be worth much trouble.


So said the novice about the pro.


Not proof. The basic fallacy is that anybody who does something for a money
is perfect.

I said


I think DBT with speakers and source components are
quite a bit more difficult.



Arny said


Shows how little you know, sockpuppet wheel.



Nonsense.


I said


Would you limit such tests to
verification of actual audible differences? Personally, I like blind
comparisons for preferences. They are more difficult than sighted
comparisons for obvious reasons.



Arny said



Preference comparisons make no sense if there are no audible
differences. There are two major DBT protocols:


No **** Sherlock. No one said otherwise.


Defensive little turd, aren't you?

Arny said


ABC/hr for determining degree of impairment or degradation, which
roughly equates to preferences if you presume that audiophiles
naturally prefer undegraded sound or sound that is less degraded or
less impaired. Since there are so-called audiophiles who prefer the
sound of tubes and vinyl, which can be rife with audible degradations, it's not clear that one can blithely presume that all audiophiles prefer sound that has less impairment.


One can do preference tests blind the same way they are done sighted: by comparing A to B and forming a preference, only without knowing which is A and which is B. One can form a preference regardless of your hangups, and do it without the effects of sighted bias.


I never said otherwise, did I? ABC/hr just happens to be a recognized,
standardized means for doing that.
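
For readers who have not run into the protocol being named: a minimal sketch of a single ABC/hr trial, in Python, is below. The function name, the grading callback, and the scale handling are illustrative assumptions, not a description of any standardized implementation; the general shape (a known reference A, a hidden copy of the reference randomly assigned to B or C, and an impairment grade for each) follows the ABC/hr idea described above.

import random

def abchr_trial(grade_fn):
    # One ABC/hr trial (illustrative sketch only).  "A" is the known
    # reference.  One of "B"/"C" is a hidden copy of the reference; the
    # other is the device or codec under test.  The listener grades B and
    # C on the usual impairment scale (5.0 = imperceptible, 1.0 = very
    # annoying).  The difference grade is the grade given to the item
    # under test minus the grade given to the hidden reference.
    hidden_ref = random.choice(["B", "C"])
    under_test = "C" if hidden_ref == "B" else "B"
    grades = {name: grade_fn(name) for name in ("B", "C")}
    return grades[under_test] - grades[hidden_ref]  # <= 0 suggests audible impairment

A difference grade near zero across many trials is consistent with "no audible impairment"; a consistently negative one is the blind evidence of degradation that ABC/hr is designed to collect (whether degradation maps onto preference is the caveat raised above).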

Arny said



The tools for doing DBTs of just about *everything* are readily
available, presuming that the investigator is sufficiently pragmatic.
Since we're talking religious beliefs, we can't presume pragmatic
investigators in every case.


We are not talking about religious beliefs here unless you insist on
inserting your religious beliefs.


You may be too naive to recognize religious beliefs about audio when you see
them, sockpuppet Yustabe.

We were talking about the practice
of subjective review by a particular publication


That publication seems to have a lengthy track record for forming unfounded
and therefore irrational beliefs in its readers' minds. These kinds of
beliefs are often called "religious". Since you can't spell worth a hill of
beans, and are too arrogant to use a proper spell-checker, I thought I'd try
to bring you up-to-date, sockpuppet wheel.

Arny said


In the case of Stereophile, the use of DBTs would no doubt embarrass
the management and many of the advertisers. Therefore, Stereophile
has maximal incentive to be as non-pragmatic as possible. They simply
behave predictably.


So says the novice who thinks he is objective.


Shows how little you understand. I favor bias controls BECAUSE I believe
that I am biased. If I thought that any listener, including myself, could be perfectly objective, I wouldn't favor the use of bias controls, now would I?

You wear your prejudices like a badge.


I simply know a little something about myself and everybody else in the
world. We have a hard time behaving in perfectly objective ways. Every human
has biases. Therefore listening tests that are at all difficult or
controversial, need bias controls.


  #43   Arny Krueger

"ScottW" wrote in message
news:kOGb.41378$m83.22809@fed1read01
"S888Wheel" wrote in message
...

I don't see the big deal. Let's have Arny create a PC-controlled
switch box which stores results over the net in a secure server.
All the reviewer has to do is hook it up and make his
selections. Results tallied and bingo.


I don't see Arny working with Stereophile.


The point is that the creation of a tool that would minimize the labor involved in a DBT is no great endeavour.


It's been done.


Performing the DBTs would be a snap if Atkinson set 'em up
with the tools to do it.


I am not so convinced it is a snap to do them well.


I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper.


Having done dozens of DBTs, probably 100's by now, I think that doing DBTs
takes more work than the schlock procedures that the Stereophile reviewers
use.

I think if Stereophile were
to do something like this it would be wise for them to consult
someone like JJ who conducted such tests for a living. Would you
suggest that such DBTs be limited to comparisons of cables amps and
preamps?


Those are certainly the easiest components.


Digital players are just as easy.

Digital sources being next with a challenge to sync them
such that the subject isn't tipped off.


The same requirements for synchronization apply to analog sources as well.

I think DBT with speakers
and source components are quite a bit more difficult.


Speakers are definitely out. It could be done but not without
significant difficulty.


The biggest problem is not with DBTs, but a factor that affects sighted
evaluations as well. Speakers are profoundly affected by their location in
the room, and two speakers can't occupy the same location at the same time.

Would you limit such
tests to verification of actual audible differences?


Yes, if that fails then the preference test is really
kind of pointless.


Personally, I like blind
comparisons for preferences. They are more difficult than sighted
comparisons for obvious reasons.


The fact that they don't even create
the tools to do it is telling to me.



How so?


I think they are afraid of the possible (or even probable)
outcome.


I'm quite sure of it. Atkinson has tried to slip hidden sources of bias into
his alleged DBTs. He seems to have this need to control the outcome of the
listening tests that are done for his ragazine.



  #44   Arny Krueger

"S888Wheel" wrote in message


Scott said


I think they are afraid of the possible (or even probable)
outcome.


Maybe, but I am skeptical of this. It didn't seem to hurt Stereo
Review to take the position that all amps, preamps and cables sounded
the same.


Stereo Review works in a different, more pragmatic market than the high end
ragazines.

Stereophile did take the Carver challenge.


Really? What Stereophile issue describes that?

They weren't afraid of the outcome of that.


Awaiting details of this test.


  #45   Arny Krueger

"S888Wheel" wrote in message




I don't have a problem with that. But if we are going to expect
Stereophile to hang their hats on the results the protocols have to
be up to valid scientific standards IMO. I think this is where DBTs
stop being a snap.


Shows how little you know about the existing scientific standards for doing
blind tests, sockpuppet wheel.




  #46   John Atkinson

Arny Krueger wrote:
The only thing [Stereophile does] right is the level-matching and
I suspect that their reviewers don't always adhere to that.


Amazing! I never suspected that when I perform listening tests you
are right there in the room observing me, Mr. Krueger. Nevertheless,
whatever you "suspect," Mr. Krueger, I match levels to within less
than 0.1dB whenever I directly compare components. See my review
of the Sony SCD-XA9000ES SACD player in the December issue for an
example (http://www.stereophile.com/digitalso...views/1203sony).
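
For reference, the arithmetic behind a 0.1 dB match is simple and worth keeping in mind when reading either side of this exchange:

$$ 20\log_{10}\frac{V_A}{V_B} \le 0.1\ \mathrm{dB} \;\Longrightarrow\; \frac{V_A}{V_B} \le 10^{0.1/20} \approx 1.0116, $$

that is, the two components' output levels agree to within roughly 1.2 percent, well below the level differences usually cited as audible on program material.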

Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):

Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."

I imagine this is what Mr. Krueger means when he wrote "The AES says don't
do it." But unfortunately for Mr. Krueger, the very same AES standard
goes on to say in the very next section (4.2.5.3):

"The dither may be omitted in special cases for investigative purposes.
One example of when this is desirable is when viewing bit weights on an
oscilloscope with ramp signals. In these circumstances the dither signal
can obscure the bit variations being viewed."

As the first specific test I use an undithered signal for is indeed for
investigative purposes -- looking at how the error in a DAC's MSBs
compares to the LSB, in other words, the "bit weights" -- it looks as if
Mr. Krueger's "The AES says don't do it" is just plain wrong.

Mr. Krueger is also incorrect about the second undithered test signal
I use, which is to examine a DAC's or CD player's rejection of
word-clock jitter, to which he refers in his next paragraph:

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and implemented as
a commercially available piece of test equipment by Paul Miller -- needs
to use an undithered signal reveals that he still does not grasp the
significance of the J-Test or perhaps even the philosophy of measurement
in general. To perform a measurement to examine a specific aspect of
component behavior, you need to use a diagnostic signal. The J-Test
signal is diagnostic for the assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough frequency
that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these characteristics,
rendering it no longer diagnostic in nature. As an example of a
_non-diagnostic_ test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.
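
To make the two numbered properties concrete, here is a rough Python sketch of a J-Test-style stimulus. The sample rate, word length, and the roughly -6 dBFS amplitude are illustrative assumptions, as is the fs/192 rate for the LSB square wave (the commonly cited choice); none of this is the published specification.

import numpy as np

def jtest_like(fs=48000, bits=16, seconds=1.0):
    # Component 1: a sine at fs/4 whose samples fall exactly on 0, +A, 0, -A,
    # i.e. exact integer codes, so there is no quantization error to dither.
    # Component 2: a square wave at fs/192 that toggles only the least
    # significant bit -- the low-frequency "sequence of 1s and 0s" that
    # stresses the DUT below its jitter-cutoff frequency.
    n = np.arange(int(fs * seconds))
    full_scale = 2 ** (bits - 1) - 1
    amp = full_scale // 2                        # ~ -6 dBFS, an exact integer
    tone = amp * np.round(np.sin(2 * np.pi * n / 4)).astype(np.int32)
    lsb_square = (n // 96) % 2                   # period = 192 samples
    return (tone + lsb_square).astype(np.int32)  # every sample an exact code

Because the digital spectrum of this signal is exactly known, any spuriae in the DUT's analog output that are not part of it can be attributed to the DUT itself, which is the sense in which the signal is "diagnostic."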

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable. These, of course, are desirable in a measurement technique.
And perhaps it is worth noting that, as I have pointed out before,
consistency is something lacking from Mr. Krueger's own published
measurements of digital components on his website, with different
measurement bandwidths, word lengths, and FFT sizes making comparisons
very difficult, if not impossible.

I have snipped the rest of Mr. Krueger's comments as they reveal merely
that he doesn't actually read the magazine he so loves to criticize. :-)

John Atkinson
Editor, Stereophile
  #47   S888Wheel


Arny said


JJ was a free agent for a while after Lucent fired him, and before
Microsoft hired him. However, JJ seems to be too much of a closet
golden ear to be as aggressive and pragmatic as scientific
objectivity demands.



I said


That's a load of crap.



Arny said


Prove it.



His career is proof. You are the one challenging his objectivity despite the
fact that his objectivity was an important factor in his ability to do his job
correctly. You made the attack on JJ; it is up to you to prove it is true.

I said


Unlike you, he made his living at it.



Arny said


Not proof. The basic fallacy is that anybody who does something for money is perfect.



Yes, it is proof. I made no such premise that pros are inherently perfect. My premise is that pros are more likely to do a given job better than hobbyists would.


Arny said

This allows him
to curry favor with the golden ear press which he actively did for a
while.



I said



Nonsense. It is his professional pedagree that gives him credibility.



Arny said


I'm quite sure that JJ has no "pedagree". Sockpuppet Wheel,


"Why note?"

Arny said

why don't you
learn to spell at the 6th grade level and work up to the adult level from
there?


Of course this is typical of you, Arny. Attack the spelling once you've been beaten by the logic of a post. You are quite the "characture" and hypocrite.



Arny said

Yet he talks the talk, maintaining a veneer of scientific
respectability.



I said



No, he simply is respectable scientifically.




  #48   S888Wheel

Arny said


In your mind all kinds of charlatans seem to be credible, and people who do
work to extend scientific objectivity are fools.



So Arny considers legitimate scientists and accomplished industry pros to be
charlatans. That figures. The creationists consider the body of scientists who
believe life evolved to be charlatans. You fit in quite well with the
creationists with your audio religion and facade of science. No wonder you
would attack a real scientist like this.


Arny said

Hey, it's what he seems to need to be comfortable.





I said


No, it was what he needed to do his job all those years.



Arny said


Didn't work in the long run, did it?



Compared to you? LOL


Arny said


It's not that tough to DBT just about any audio component if you are
pragmatic enough. JJ's incessant public mindless and evidenceless
criticism of PCABX convinced me that he's simply not pragmatic enough
to be worth much trouble.



I said


So said the novice about the pro.



Arny said


Not proof. The basic fallacy is that anybody who does something for money is perfect.


It's a matter of credibility. JJ has it and you don't.



I said


I think DBT with speakers and source components are
quite a bit more difficult.






Arny said


Shows how little you know, sockpuppet wheel.



I said


Nonsense



No response? Did you figure out how stupid your comment was?


I said


Would you limit such tests to
verification of actual audible differences? Personally, I like blind
comparisons for preferences. They are more difficult than sighted
comparisons for obvious reasons.




Arny said



Preference comparisons make no sense if there are no audible
differences. There are two major DBT protocols:



I said


No **** Sherlock. No one said otherwise.



Arny said


Defensive little turd, aren't you?



One must be defensive when dealing with one who is so offensive.


Arny said


ABC/hr for determining degree of impairment or degradation, which
roughly equates to preferences if you presume that audiophiles
naturally prefer undegraded sound or sound that is less degraded or
less impaired. Since there are so-called audiophiles who prefer the
sound of tubes and vinyl, which can be rife with audible degradations, it's not clear that one can blithely presume that all audiophiles prefer sound that has less impairment.



I said


One can do preference tests blind the same way they are done sighted: by comparing A to B and forming a preference, only without knowing which is A and which is B. One can form a preference regardless of your hangups, and do it without the effects of sighted bias.



Arny said


I never said otherwise, did I?


Who is being defensive now?

Arny said

ABC/hr just happens to be a recognized,
standardized means for doing that.



It is different, though, which is why I pointed this out. A/B comparisons do not
need a reference. ABC/hr presumes the reference is the ideal. One does not
always have access to the ideal when judging audio.


Arny said



The tools for doing DBTs of just about *everything* are readily
available, presuming that the investigator is sufficiently pragmatic.
Since we're talking religious beliefs, we can't presume pragmatic
investigators in every case.



I said



We are not talking about religious beliefs here unless you insist on
inserting your religious beliefs.



Arny said



You may be too naive to recognize religious beliefs about audio when you see
them, sockpuppet Yustabe.



And you may be too stupid to keep track of who you are talking to in a single post. I recognize your religious beliefs about audio, Arny. I'm one up on you
there. You clearly don't recognize your religious beliefs on audio for what
they are.

I said


We were talking about the practice
of subjective review by a particular publication



Arny said


That publication seems to have a lengthy track record for forming unfounded
and therefore irrational beliefs in its readers' minds. These kinds of
beliefs are often called "religious". Since you can't spell worth a hill of
beans, and are too arrogant to use a proper spell-checker, I thought I'd try
to bring you up-to-date, sockpuppet wheel.


"Religious" is the correct spelling fool. You remain quite the hypocrite for
making poor spelling an issue. You remain quite the "characture." You also
managed to fall on your face in your feeble attempt to actually get back onto
the thread subject. Good job.


Arny said


In the case of Stereophile, the use of DBTs would no doubt embarrass
the management and many of the advertisers. Therefore, Stereophile
has maximal incentive to be as non-pragmatic as possible. They simply
behave predictably.



I said


So says the novice who thinks he is objective.



Arny said


Shows how little you understand. I favor bias controls BECAUSE I believe
that I am biased. If I thought that any listener, including myself, could be perfectly objective, I wouldn't favor the use of bias controls, now would I?



Shows how lacking you are in self-awareness. You were the one attacking
legitimate researchers like JJ who not only believe in bias control but
actually made a living using it.

I said


You wear your prejudices like a badge.



Arny said


I simply know a little something about myself and everybody else in the
world.


If that were really true you would never stop vomiting.

Arny said

We have a hard time behaving in perfectly objective ways. Every human
has biases. Therefore listening tests that are at all difficult or
controversial, need bias controls.





Yep. And it gets worse when people with an agenda hide behind a facade of
science to try to give their biases more credibility. I'll take honest biases
over your agenda any day.
  #49   S888Wheel


Stereo Review works in a different, more pragmatic market than the high end
ragazines.


They reviewed some very expensive equipment, including high-end tubed
electronics. Stereophile OTOH has reviewed some very inexpensive equipment.


I said


Stereophile did take the Carver challenge.



Arny said


Really? What Stereophile issue describes that?


I don't remember. You could always check their archives.

I said



They weren't afraid of the outcome of that.



Arny said



Awaiting details of this test.




Look 'em up yourself.
  #50   S888Wheel

I said



I don't have a problem with that. But if we are going to expect
Stereophile to hang their hats on the results the protocols have to
be up to valid scientific standards IMO. I think this is where DBTs
stop being a snap.



Arny said


Shows how little you know about the existing scientific standards for doing
blind tests, sockpuppet wheel


That is ironic, given your comment about it in a previous post:

"Having done dozens of DBTs, probably 100's by now, I think that doing DBTs
takes more work than the schlock procedures that the Stereophile reviewers
use."

You are clearly talking out both sides of your mouth. Business as usual.


  #51   Arny Krueger

"John Atkinson" wrote in message
Arny Krueger wrote:
The only thing [Stereophile does] right is the level-matching and
I suspect that their reviewers don't always adhere to that.


Amazing! I never suspected that when I perform listening tests you
are right there in the room observing me, Mr. Krueger. Nevertheless,
whatever you "suspect," Mr. Krueger, I match levels to within less
than 0.1dB whenever I directly compare components. See my review
of the Sony SCD-XA9000ES SACD player in the December issue for an
example (http://www.stereophile.com/digitalso...views/1203sony).


I guess that Atkinson wants us to believe that when one speaks of "all their
reviewers", one speaks only of him.

Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion.


I mention Atkinson's delusions, and he gifts us with another one - that he's
bested me in a technical discussion.

My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)


So, as Atkinson's version of the story evolves, it wasn't him alone that
bested me, but the dynamic trio of Atkinson, Bamborough, and Zelniker.
Notice how the story is changing right before our very eyes! In fact
Bamborough and Zelniker use the same hit-and-run confuse-not-convince
"debating trade" tactics that Atkinson uses here.

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):


Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."




I imagine this is what Mr. Krueger means when he wrote "The AES says
don't do it." But unfortunately for Mr. Krueger, the very same AES
standard goes on to say in the very next section (4.2.5.3):


"The dither may be omitted in special cases for investigative
purposes. One example of when this is desirable is when viewing bit
weights on an oscilloscope with ramp signals. In these circumstances
the dither signal can obscure the bit variations being viewed."


At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews." Of course these are
two very different things, but in the spirit of his shifting claims in the
matter already demonstrated once above, let's see where this goes...

As the first specific test I use an undithered signal for is indeed
for investigative purposes -- looking at how the error in a DAC's MSBs
compares to the LSB, in other words, the "bit weights" -- it looks as
if Mr. Krueger's "The AES says don't do it" is just plain wrong.


The problem here is that again Atkinson has confused detailed investigations into how individual subcomponents of chips in the player work (i.e., "investigation") with the business of characterizing how it will satisfy consumers. Consumers don't care about whether one individual bit of the approximately 65,000 levels (2^16 = 65,536) supported by the CD format works; they want to know how the device will sound. It's a simple matter to show that nobody, not even John Atkinson, can hear a single one of those bits working or not working. Yet he deems it appropriate to confuse consumers with this sort of minutiae, perhaps so that they won't notice his egregiously flawed subjective tests.

Mr. Krueger is also incorrect about the second undithered test signal
I use, which is to examine a DAC's or CD player's rejection of
word-clock jitter, to which he refers in his next paragraph:


He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and
implemented as a commercially available piece of test equipment by
Paul Miller -- needs to use an undithered signal reveals that he
still does not grasp the significance of the J-Test or perhaps even
the philosophy of measurement in general. To perform a measurement to
examine a specific aspect of component behavior, you need to use a
diagnostic signal. The J-Test signal is diagnostic for the assessment
of word-clock jitter because:


1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough
frequency that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these
characteristics, rendering it no longer diagnostic in nature. As an
example of a _non-diagnostic_ test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.



Notice that *none* of the above minutiae and fine detail addresses my opening
critical comment:

"He does other tests, relating to jitter for which there is no independent
confirmation of reliable relevance to audibility".

Now did you see anything in Atkinson's two numbered paragraphs above and the subsequent unnumbered paragraph that addresses my comment about listening tests and independent confirmation of audibility? No, you didn't!

What you saw is the same old Atkinson song-and-dance, which reminds many knowledgeable people of that old carny's advice: "If you can't convince them, confuse them!"

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable.


A repeatable test with no real-world confirmation (i.e., audibility) is just
a reliable producer of meaningless garbage. Is a reliable producer of
irrelevant garbage numbers better or worse than an unreliable producer of
irrelevant garbage numbers?

These, of course, are desirable in a measurement
technique. And perhaps it is worth noting that, as I have pointed out
before, consistency is something lacking from Mr. Krueger's own
published measurements of digital components on his website, with
different measurement bandwidths, word lengths, and FFT sizes making
comparisons very difficult, if not impossible.


This is just more of Atkinson's "confuse 'em if you can't convince 'em"
schtick. My web sites test a wide range of equipment, in virtually every
performance category from the sublime to the ridiculous. Of course I pick
testing parameters that are appropriate to the general quality level and
characteristics of the equipment I test. I've also evolved my testing
techniques as I learned more about how audio equipment works.

BTW, note that Atkinson complains that I use different measurement
bandwidths, word lengths and FFT sizes. Atkinson doesn't test the wide range
of equipment I test, and he doesn't test it as thoroughly. For example,
compare his test of the Card Deluxe to mine. The relevant URLs are

http://www.pcavtech.com/soundcards/C...luxe/index.htm


and

http://www.stereophile.com/digitalso...80/index4.html


Compare Atkinson's Figure 2 to my

http://www.pcavtech.com/soundcards/C..._2496-a_FS.gif

http://www.pcavtech.com/soundcards/C..._2444-a_FS.gif

http://www.pcavtech.com/soundcards/C..._1644-a_FS.gif


First off, you will notice that Atkinson's figure shows the results of his 1 kHz
performance test under just one operational mode, while I provided data
about three different and highly relevant operational modes.

Note that my plots document measurement bandwidths, word lengths and FFT
sizes, while Atkinson's figure 2 and supporting text don't document the
very information that Atkinson complained about. So, he's complaining about
information that I publish with every test as a matter of course, while he
doesn't put the same information into his own reports as they are published
in his magazine and on his web site.

Note that my plots provide high resolution information down to below 20 Hz
while Atkinson's plot squishes all data below 1 kHz into a tiny strip along
the left edge of the plot where it is difficult or impossible to analyze. My
plots allow people to determine if there are low frequency spurious
responses, hum or significant amounts of 1/F noise. Atkinson's don't.

I have snipped the rest of Mr. Krueger's comments as they reveal
merely that he doesn't actually read the magazine he so loves to
criticize. :-)


It appears that Atkinson is up to tricks as usual. If you analyze his
technical critique of my published tests you find that he's basically
faulting me for testing equipment in more operational modes than he does,
and providing more documentation about test conditions than he provides.

Let me add that I am fully aware of the effects of testing equipment with
different measurement bandwidths, word lengths and FFT sizes. I take steps
to ensure that any variations in test conditions don't adversely affect my
summary evaluation of equipment performance.
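
One piece of general FFT arithmetic helps when comparing any two noise plots made with different settings, whoever made them: with a noise-like signal the level displayed in each FFT bin falls as the bins get narrower, so the apparent noise floor depends on the analysis length even when the device's noise does not:

$$ \Delta_{\mathrm{floor}} \approx 10\log_{10}\frac{N_2}{N_1}\ \mathrm{dB}. $$

Moving from a 2,048-point to a 32,768-point FFT at the same sample rate, for example, lowers the plotted floor by about 10 log10(16), roughly 12 dB, which is why spectra taken with different FFT sizes or measurement bandwidths cannot be compared bin-for-bin without correction.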

Furthermore, while Atkinson asks you to trust his poorly-contrived listening
tests, I provide the means for people to audition the Card Deluxe with their
own speakers and ears at PCAVTech's sister site, www.pcabx.com.



  #52   Arny Krueger

"S888Wheel" wrote in message

Arny said


In your mind all kinds of charlatans seem to be credible, and people
who do work to extend scientific objectivity are fools.



So Arny considers legitimate scientists and accomplished industry
pros to be charlatans.


Obviously, if they are legitimate scientists then they aren't charlatans and
if they are charlatans, they aren't legitimate scientists, so on the face of
it, this claim is inherently false.

The rest of this post is full of similar non sequiturs, not to mention a lot of bad writing and misformatted text that would be more trouble to respond to than it's worth.


  #53   S888Wheel

Arny said


In your mind all kinds of charlatans seem to be credible, and people
who do work to extend scientific objectivity are fools.




I said



So Arny considers legitimate scientists and accomplished industry
pros to be charlatans.



Arny said


Obviously, if they are legitimate scientists then they aren't charlatans and
if they are charlatans, they aren't legitimate scientists, so on the face of
it, this claim is inherently false.



It seems false until we start naming names. Quiz time: which of the following are and are not legitimate scientists, 1. Arny Krueger or 2. JJ?

Arny said


The rest of this post is full of similar non sequiturs,


Similar non sequiturs? How can you claim similar non sequiturs when you have failed to establish the first non sequitur? Obviously you are burning a straw man. You lost the argument and now you are running away while tossing out jargon as excuses.

Arny said

not to mention a lot
of bad writing and misformatted text that would be more trouble to respond
to than it's worth.



Prove the writing was bad. You actually accused me of misspelling words that
weren't misspelled. You are in over your head as usual. Run away.
  #54   Powell


"S888Wheel" wrote

Stereophile did take the Carver challenge.

Really? What Stereophile issue describes that?


I don't remember. you could always check their archives.


They weren't afraid of the outcome of that.


Awaiting details of this test.

The Carver Challenge. Bob Carver made statements
that he could replicate any amplifier design using a
technique called "transfer function." Stereophile
took up his challenge wanting Bob to replicate the
sound of the Conrad-Johnson Premier 5 mono-blocks.
I think that over a two-day period he accomplished
that task to Holt's and Archibald's satisfaction. From
there the Carver M1.0t was born.

Based on this experience (Carver Challenge) Bob
set about to refine the "transfer function." He next
built a reference vacuum-tube amp of his own; this later became the Silver Seven. Based on this amp he
developed a technique called "Vacuum Tube
Transfer." The TMF line of amps was created from
this experience, the hallmark being the Silver Seven-t.

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.







  #55   George M. Middius



S888Wheel said:

So Arny considers legitimate scientists and accomplished industry pros to be
charlatans.


Shall we compile a list? ;-)

• Phoebe Johnston
• John Atkinson
• Paul Frindle
• Earl Geddes
• Glenn Zelniker
• Dan Lavry
• Paul Bamborough

And those are just some of the Real Audio Guys whom Krooger has
trashed on RAO. I'm sure there are others. Plus the equally
accomplished people whom Krooger has "deconstructed" snicker on
RAPro and probably other forums.

All of these guys appear to be successful, knowledgeable, talented,
and productive, but according to Krooger, they are time-wasters,
blowhards, ignorant twerps, etc. What would we do without
Mr. ****'s fabulous enlightenment to set us right? ;-)






  #56   Bruce J. Richman

George M. Middius wrote:


S888Wheel said:

So Arny considers legitimate scientists and accomplished industry pros to be charlatans.


Shall we compile a list? ;-)

• Phoebe Johnston
• John Atkinson
• Paul Frindle
• Earl Geddes
• Glenn Zelniker
• Dan Lavry
• Paul Bamborough

And those are just some of the Real Audio Guys whom Krooger has
trashed on RAO. I'm sure there are others. Plus the equally
accomplished people whom Krooger has "deconstructed" snicker on
RAPro and probably other forums.

All of these guys appear to be successful, knowledgeable, talented,
and productive, but according to Krooger, they are time-wasters,
blowhards, ignorant twerps, etc. What would we do without
Mr. ****'s fabulous enlightenment to set us right? ;-)












If memory serves, we could also add Pete Goudreau to the list. Yes, indeedy.
Krueger's omniscient "exposure" of all these evil conspirators proves that we
all seriously erred in thinking that he was perhaps a little bit (or more)
paranoid and delusional. Perhaps some day those twin bastions of authenticity,
McDonald's and Circuit City, will give him his long overdue medals for
accomplishment in the fields of statistical analysis and objective audio
reviewing.





Bruce J. Richman



  #57   ScottW


"S888Wheel" wrote in message
...
Scott said


Performing the DBTs would be a snap if Atkinson set 'em up
with the tools to do it.



I said


I am not so convinced it is a snap to do them well.



Scott said



I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper.


Well would be within the bounds of rigor that would be scientifically
acceptable. I see no point in half-assing an attempt to bring greater
reliability to the process of subjective review. Let's just say Howard fell way short in his endeavours, and the results spoke to that fact.


I am talking about reviews in a magazine. Who said it
had to be "scientifically acceptable"? That would require
independent witnesses which would make it way beyond the
scope of what I am talking about.
A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.
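
As a sketch of the minimum such a tool has to do (the function name, the interactive prompt, and the 16-trial count are illustrative assumptions, not a description of any existing ABX box or of PCABX): hide the A/B assignment from the listener, log each answer, and report how unlikely the score would be under pure guessing.

import random
from math import comb

def abx_session(trials=16):
    # Minimal ABX run: the software, not the listener, assigns X on each
    # trial, so the identity of X is hidden from everyone in the room.
    correct = 0
    for _ in range(trials):
        x_is_a = random.choice([True, False])          # hidden assignment
        answer = input("Is X the same as A or B? [a/b] ").strip().lower()
        correct += int((answer == "a") == x_is_a)
    # One-sided binomial p-value: the probability of scoring at least this
    # well by guessing.
    p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2.0 ** trials
    return correct, p_value

With 16 trials, 12 or more correct corresponds to a guessing probability of about 0.038, the kind of statistically valid result a single reviewer could report; storing the per-trial log on a server, as suggested earlier in the thread, only adds bookkeeping.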


I said



I think if Stereophile were to do something like this it would be wise for them to consult someone like JJ who conducted such tests for a living. Would you suggest that such DBTs be limited to comparisons of cables, amps and preamps?



Scott said


Those are certainly the easiest components.

Digital sources being next with a challenge to sync them
such that the subject isn't tipped off.


I said



I think DBT with speakers
and source components are quite a bit more difficult.



Scott said



Speakers are definitely out. It could be done but not without
significant difficulty.


I did do some single-blind comparisons. The dealer was very nice about it.

Problem is speaker location. What if both speakers' optimal locations are within the same space? Even if they are not, the spacing should be relatively easy to differentiate. It is quite difficult to conduct such a test truly blind.

I said


Would you limit such tests to verification of actual audible differences?



Scott said



Yes, if that fails then the preference test is really
kind of pointless.



I don't think so. It has been shown that with components that are agreed to sound different, sighted bias can still have an effect on preference.


I did say if the difference test fails. Obviously if the difference test
is passed
then a preference test could be undertaken. I'm not that concerned about
people being influenced on preference. Preference can be learned.

Scott said



The fact that they don't even create
the tools to do it is telling to me.



I said



How so?



Scott said


I think they are afraid of the possible (or even probable)
outcome.


Maybe, but I am skeptical of this. It didn't seem to hurt Stereo Review to take the position that all amps, preamps and cables sounded the same. Stereophile did take the Carver challenge. They weren't afraid of the outcome of that.

Then why not? A chance to cater to both objectivists and subjectivists.
Sounds like a win-win.

ScottW


  #58   George M. Middius



Terriers are bothered by fleas. The RAO Terrierborg appears seriously
bothered. Does it follow that he must, perforce, have fleas?

Performing the DBTs would be a snap if Atkinson set 'em up
with the tools to do it.


I am not so convinced it is a snap to do them well.


I guess I need your definition of well.


Well would be within the bounds of rigor that would be scientifically
acceptable.


I am talking about reviews in a magazine. Who said it
had to be "scientifically acceptable"?


Are you always this stupid? Wait, I know that one......



  #59   ScottW


"S888Wheel" wrote in message
...

Well, let's first remove the subjectivity and simply
confirm audible differences.

ScottW


I don't have a problem with that. But if we are going to expect Stereophile to hang their hat on the results, the protocols have to be up to valid scientific standards, IMO. I think this is where DBTs stop being a snap.


OK, then let's not impose such a level of rigor. Nothing else reported in Stereophile has to meet this criterion, so why impose it on DBTs?

ScottW


  #60   ScottW


"John Atkinson" wrote in message
om...

And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and implemented as
a commercially available piece of test equipment by Paul Miller -- needs
to use an undithered signal reveals that he still does not grasp the
significance of the J-Test or perhaps even the philosophy of measurement
in general. To perform a measurement to examine a specific aspect of
component behavior, you need to use a diagnostic signal. The J-Test
signal is diagnostic for the assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough frequency
that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these characteristics,
rendering it no longer diagnostic in nature. As an example of a
_non-diagnostic_ test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility.


One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable. These, of course, are desirable in a measurement technique.
And perhaps it is worth noting that, as I have pointed out before,
consistency is something lacking from Mr. Krueger's own published
measurements of digital components on his website, with different
measurement bandwidths, word lengths, and FFT sizes making comparisons
very difficult, if not impossible.


Isn't there a question about the validity of applying this test to CD
players which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?

ScottW




  #61   S888Wheel

Scott said



I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper.


I said


Well would be within the bounds of rigor that would be scientifically
acceptable. I see no point in half-assing an attempt to bring greater
reliability to the process of subjective review. Let's just say Howard fell way short in his endeavours, and the results spoke to that fact.


Scott said



I am talking about reviews in a magazine. Who said it
had to be "scientifically acceptable"?


You asked for my definition of well done DBTs.

Scott said

That would require
independent witnesses which would make it way beyond the
scope of what I am talking about.


I don't think it would require independent witnesses but it would require
Stereophile to establish their own formal peer review group. But we are talking
about Stereophile dealing with the current level of uncertainty that now exists
with the current protocols. I think to do standard DBTs right would be a major
pain in the ass for them. Even the magazines which make a big issue out of such
tests don't often actually do such tests and when they do they often do a crap
job of it.

Scott said

A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.



Is it ever easy? Look what Howard did with such a tool.

I said


I did do some single blind comparisons. The dealer was very nice about

it.


Scott said


Problem is speaker location. What if both speakers optimal location is
within the
same space? Even if it is not, the spacing should be relatively easy to
differentiate.
It is quite difficult to conduct such a test truly blind.


It was difficult. The speakers had to be moved between each listening
session. It was blind because I had my eyes closed. It all felt a bit ridiculous, but it worked. I didn't know which speaker was which on the first samples. It didn't take long for me to figure out which was which just by listening, though. At that point I didn't bother with closing my eyes.

Scott said


I did say if the difference test fails. Obviously if the difference test
is passed
then a preference test could be undertaken. I'm not that concerned about
people being influenced on preference. Preference can be learned.


Yes you did. My mistake. But sighted bias does affect preference. That has been
proven. I wanted to compare the Martin Logans to the Apogees blind for that
very reason. I knew I liked the looks of the Martin Logans.

I said


Maybe, but I am skeptical of this. It didn't seem to hurt Stereo Review to take the position that all amps, preamps and cables sounded the same. Stereophile did take the Carver challenge. They weren't afraid of the outcome of that.


Scott said


Then why not? A chance to cater to both objectivists and subjectivists.
Sounds like a win-win.


I cannot speak for Stereophile and I cannot rule out your hunch. But you cannot rule out the possibility that the cost and inconvenience of proper implementation of such protocols for a staff composed largely of hobbyists is a factor.
  #62   S888Wheel


OK, then let's not impose such a level of rigor. Nothing else reported in Stereophile has to meet this criterion, so why impose it on DBTs?


For the sake of improving protocols to improve reliability of subjective
reports. When it comes to such protocols I think quality is more important than
quantity. If DBTs aren't done well they will not improve the state of reviews
published by Stereophile. The source of your beef with Stereophile is that it lacks reliability now, is it not?
  #63   John Atkinson

"Powell" wrote in
message ...
On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


This is simply not true, Mr. Powell. You can retrieve previous
discussions of this subject from Google, but I will dig up the story
from my archives and post it to r.a.o.

John Atkinson
Editor, Stereophile
  #64   ScottW


"S888Wheel" wrote in message
...
Scott said



I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper.


I said


Well would be within the bounds of rigor that would be scientifically
acceptable. I see no point in half-assing an attempt to bring greater
reliability to the process of subjective review. Let's just say Howard fell way short in his endeavours, and the results spoke to that fact.


Scott said



I am talking about reviews in a magazine. Who said it
had to be "scientifically acceptable"?


You asked for my definition of well done DBTs.


But I don't agree that is necessary for Stereophile to
implement.


Scott said

That would require
independent witnesses which would make it way beyond the
scope of what I am talking about.


I don't think it would require independent witnesses but it would require
Stereophile to establish their own formal peer review group.


Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.



But we are talking about Stereophile dealing with the current level of uncertainty that now exists with the current protocols.


? Stereophile has conducted elaborate DBTs with more rigorous protocols than I call for. Let them establish a protocol that is workable for their reviewers to conduct. Publish it for comment; it should be very interesting.

I think to do standard DBTs right would be a major pain in the ass for them. Even the magazines which make a big issue out of such tests don't often actually do such tests, and when they do they often do a crap job of it.


If they have a tool which controls switching and tabulates results,
I really don't see what the problem is.
What needs to happen is that a level of automation is provided
to match the skill level of the tester. That wouldn't be that difficult.

Scott said

A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.



Is it ever easy? Look what Howard did with such a tool.


I don't believe Howard's ABX box provided the level of automation
I am talking about.

ScottW


  #65   George M. Middius



S888Wheel said to the Terrierborg:

The source of your beef with Stereophile is that it
lacks reliability now, is it not?


Of course not. Woofies hates Stereophile because it describes,
evaluates, and ultimately endorses luxury goods. Communists don't need
luxury goods.




  #66   George M. Middius



John Atkinson said:

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


This is simply not true, Mr. Powell.


Powell has told us he prefers to be addressed as "No Stick Um". No
honorific, of course.



  #67   Ernst Raedecker

On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW"
wrote:

What I am referring to are the reviews where different units
are compared and perceptions of differences in sonic
performances are claimed which can't
be validated through differences in measured performance.


This is a very valid point. Many times what we hear is not what we
measure, and what we measure is not confirmed by our hearing. How is
it possible that our hearing and our measurements many times do NOT
correlate?

The answer is that there are problems with:
(1) our measurements: they are NOT really objective.
(2) our hearing: this depends MORE on our signal processing ability in
the brain than on our data-collecting ability of the ear.

(1) Let's discuss the "measured performance". The hearing stuff will
have to wait.

Contrary to common belief there is NOT an objective standard for
"measured performance" or "THE measured performance". What you choose
to measure is subjective, and the weight you give to certain elements
of your measurements is also subjective. Of course statisticians have
known all this for 70 years and more. Unfortunately very few audio
testers have a thorough knowledge of statistics and the fallacies of
statistics.

So if you claim that certain things we hear, or think we hear, or
Stereophile has heard, cannot "...be validated through differences in
measured performance" then you should first try to establish which
measurements under which conditions are relevant and which processing
of the results is relevant.

Recently there has been renewed interest in the old question of HOW we
should measure audio equipment, and WHICH measurements are relevant
and give us results that correlate with what we hear.

I would like to remind you of the work of Richard Cabot, for example
his "Fundamentals of modern audio measurement", first presented at the
103rd convention of the AES, 1997, available in pdf format on the
internet. In another paper, "COMPARISON OF NONLINEAR DISTORTION
MEASUREMENT METHODS", also on the internet in pdf format, he
introduces his famous FastTest methodology.

Reading these two papers alone will make it clear to you that there is
so much more to say about measurements that it is far too simple to speak of "measured performance" as such.

But there is more.

Not so long ago Daniel Cheever wrote a nice paper presenting new
measurements that SHOW that Single Ended Triode amps without negative
feedback do distort LESS, FAR LESS than comparable transistor
amplifiers. Would you believe that?
All those years the Objectivist League has told us that transistor
amps **measure** "objectively" much better than SETs, and that SETs AT
BEST add "euphonic distortions" that are pleasing to the ear, and now
this guy tells us that SETs "objectively" MEASURE BETTER than
transistor amps!!!!

So the Hard Line Objectivists were wrong all the time, not only
soundwise but also measurementwise. What Subjectivists had heard all
the time, namely that SETs sound better, HAS NOW BEEN VALIDATED
through differences in measured performance!!!!!

(See: Daniel Cheever, "A NEW METHODOLOGY FOR AUDIO FREQUENCY
POWER AMPLIFIER TESTING BASED ON PSYCHOACOUSTIC
DATA THAT BETTER CORRELATES WITH SOUND QUALITY", dec 2001, also in pdf
format on the internet)

As it is, I believe there are some qualifications to be made on
Cheever's paper, but I won't make them. I leave it up to you to look
his paper up on the internet and read HOW he construes his set of
measurements and processing methods. After all, you show an interest
in discussing the validity of measurements, so you are allowed to do
some homework.

Well, I will help you out a bit. Cheever's basic tenet is that the
supposedly "objective" measure of THD is not objective at all. THD is
measured as the root mean square of all the harmonics of a fundamental
in the audible range. This leads to an unweighted sum of harmonics
relative to the fundamental as a distortion percentage.
His point is that the SUM of the distortion is not really important,
but that the STRUCTURE of the produced harmonics is important. The
more this structure deviates from the natural nonlinearities produced
in the ear itself, the more audible the distortions become.

You see, whether he is completely correct in his stressing of aural
harmonics as the basis of distortion measurements (I believe there is
more to say than he does), is not the point.

The point is that he makes clear that the so-called "objective"
measurement of THD is not at all objective. There is no reason at all
why an unweighted summation of (the energy in) harmonics relative to
the fundamental would be an "objective" or a relevant measurement. It
is weighting with a value of 1 for each harmonic. Why not diminishing
weights? Why not increasing weights?
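To make the arithmetic concrete, here is a minimal Python sketch (my own
illustration, not Cheever's code) that computes the classic unweighted THD
from a set of harmonic amplitudes and then recomputes it with two arbitrary
alternative weightings, just to show that the weighting really is a choice:

import math

def thd_percent(fund, harmonics, weights=None):
    # harmonics: amplitudes of the 2nd, 3rd, 4th, ... harmonics
    # weights:   one weight per harmonic; None means the usual weight of 1
    if weights is None:
        weights = [1.0] * len(harmonics)
    rss = math.sqrt(sum((w * h) ** 2 for w, h in zip(weights, harmonics)))
    return 100.0 * rss / fund

fund = 1.0                              # made-up fundamental amplitude, volts
harm = [0.01, 0.003, 0.001, 0.0005]     # made-up 2nd..5th harmonic amplitudes

print(thd_percent(fund, harm))                                       # classic unweighted THD
print(thd_percent(fund, harm, [n * n / 4.0 for n in range(2, 6)]))   # rising weights
print(thd_percent(fund, harm, [2.0 / n for n in range(2, 6)]))       # falling weights

Same spectrum, three different "objective" numbers. Which of them tracks
what we actually hear is exactly the open question.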

By the way, I **personally** do not think that SETs are really the
excellent amplifiers that Subjectivists and Cheever take them to be,
but that is not the point either.

The point is that it IS possible, and it IS done, to construct a set of
serious measurements that DO show that SETs measure "objectively"
BETTER than transistor amps, while the whole Objectivist community
lives in the mind-set that this cannot "objectively" be done.

In short, measurements, and especially the processing of measurements,
are NOT objective. If they correlate with what we hear, we may
consider them relevant. If they don't, then they are not so relevant.

You are also advised to take notice of the recent work of Earl and
Lidia Geddes on sound quality and the perception and measurement of
distortion, also presented recently at the AES. See their website at:

http://www.gedlee.com/distortion_perception.htm

You are also advised to take a look at the newest issue of the Journal
of the AES (nov 2003, vol 51, no 11). As you know, these guys of the
AES are not really soft-in-the-head Subjectivists. Let's look at the
contents and quote the abstract of the main paper in this issue:

[quote]
The Effect of Nonlinear Distortion on Perceived Quality of Music and
Speech Signals
Chin-Tuan Tan, Brian C. J. Moore, and Nick Zacharov 1012

The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures because the choice of distortion
metrics is not obvious. In reexamining this subject, the authors
validated a metric based on the change in the spectrum in a series of
spectral bins, which when combined leads to a single distortion
metric. Distortion was evaluated both objectively and subjectively
using speech and music. Robust results support the hypothesis for this
approach.
[unquote]

So you are not the only one asking himself why ...
"The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures..."

It is, as I have made now ABUNDANTLY clear, ...
"because the choice of distortion metrics is not obvious."

Yeah.
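Just to give a flavor of what a bin-based comparison looks like, here is a
little Python sketch. It is NOT the Tan/Moore/Zacharov metric -- their paper
defines the bins, the weighting and the combination rule -- it only
illustrates the general idea of comparing the spectra of a clean and a
distorted signal bin by bin and collapsing the changes into one number:

import numpy as np

def bin_change_metric(reference, distorted, n_bins=32):
    # Compare the magnitude spectra of a clean and a distorted signal in a
    # series of frequency bins and combine the per-bin changes into a single
    # number. The bin edges, the dB conversion and the plain average used
    # here are placeholders, not the published metric.
    ref = np.abs(np.fft.rfft(reference))
    dis = np.abs(np.fft.rfft(distorted))
    edges = np.linspace(0, len(ref), n_bins + 1).astype(int)
    changes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        r = np.sum(ref[lo:hi] ** 2) + 1e-12
        d = np.sum(dis[lo:hi] ** 2) + 1e-12
        changes.append(abs(10.0 * np.log10(d / r)))   # per-bin energy change, dB
    return float(np.mean(changes))

fs = 44100
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 1000 * t)
clipped = np.clip(clean, -0.7, 0.7)   # a crude stand-in for a distorting device
print(bin_change_metric(clean, clipped))

Change the bin edges or the averaging rule and the same data gives a
different number, which is precisely why the choice of metric matters.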

There is another very interesting article in the same issue of the
JAES, one which will make the Hard Line Objectivists puke, as it makes
clear that even the tough AES boys roll over to the soft-in-the-head
camp:

[quote]
Large-Signal Analysis of Triode Vacuum-Tube Amplifiers
Muhammad Taher Abuelma'atti 1046

With the renewed interest in vacuum tubes, the issue of intrinsic
distortion mechanisms becomes relevant again. The author demonstrates
a nonlinear model of triodes and pentodes that leads to a closed-form
solution when the nonlinearity is represented by a Fourier expansion
rather than the conventional Taylor series. When applied to a two-tone
sine wave, the analysis shows that the distortion in tube amplifiers
is similar to that of the equivalent transistor amplifier. A SPICE
analysis confirms the approach.
[unquote]

Yeah, even with simple two-tone sine waves it is now ESTABLISHED
OBJECTIVELY that tube amps do NOT distort more than transistor amps.
So it says.
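If you want to play with the two-tone idea yourself, a numerical version is
easy to set up. The two transfer curves below are caricatures I made up for
illustration -- a soft tanh curve standing in for a "tube-ish" stage and a
hard clipper standing in for a crude "transistor-ish" one -- they are NOT
Abuelma'atti's closed-form model, but they show how one inspects the
spurious products a nonlinearity adds to a two-tone signal:

import numpy as np

fs = 48000
t = np.arange(fs) / fs          # exactly one second, so 1 Hz per FFT bin
two_tone = 0.4 * np.sin(2 * np.pi * 1000 * t) + 0.4 * np.sin(2 * np.pi * 1300 * t)

def spuriae_db(nonlinearity, signal):
    # Everything the nonlinearity adds (harmonics plus intermodulation),
    # relative to the two wanted tones. Both tones sit on exact integer-Hz
    # bins, so no window is needed and nothing leaks.
    spec = np.abs(np.fft.rfft(nonlinearity(signal)))
    p_want = spec[1000] ** 2 + spec[1300] ** 2
    p_rest = np.sum(spec[1:] ** 2) - p_want    # skip the DC bin
    return 10.0 * np.log10(p_rest / p_want)

soft_tube_ish = lambda x: np.tanh(1.5 * x)            # made-up "tube-ish" curve
hard_ss_ish = lambda x: np.clip(1.2 * x, -0.5, 0.5)   # made-up "transistor-ish" curve

print(round(spuriae_db(soft_tube_ish, two_tone), 1))
print(round(spuriae_db(hard_ss_ish, two_tone), 1))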

===========

Oh, WHAT a field day for the Subjectivists today.

Oh, WHAT a dismal day for the HLOs like Pinkerton, Krueger, Ferstler
and all the rest of them.

All those years they have thought that they have at least the AES on
their side, and now the AES deserts them. It must be an annus
horribilis for them.

Merry Xmas.

Ernesto.


"You don't have to learn science if you don't feel
like it. So you can forget the whole business if
it is too much mental strain, which it usually is."

Richard Feynman
  #68   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"S888Wheel" wrote in message
...

Ok, then let's not impose such a level of rigor.
Nothing else reported in Stereophile has to meet this criterion,
so why impose it on DBTs?


For the sake of improving protocols to improve reliability of subjective
reports.


I don't agree. Sufficient DBT protocols exist. Stereophile has used them.
No "improvement" in DBT protocols is required. Applying existing DBT
protocols would be sufficient to confirm or deny that audible differences
exist.
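For what it's worth, the statistics behind an existing ABX-style protocol
are not exotic either. A minimal Python sketch (plain binomial arithmetic,
not tied to any one published protocol): given a number of forced-choice
trials and the number answered correctly, the chance of doing at least that
well by guessing is just the binomial tail probability:

from math import comb

def guess_probability(trials, correct):
    # P(getting at least `correct` right out of `trials` by pure guessing)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(guess_probability(16, 12))   # about 0.038: unlikely to be guessing
print(guess_probability(16, 10))   # about 0.23: entirely consistent with guessing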

When it comes to such protocols I think quality is more important than
quantity.


Unfortunately this opens a major loophole: cherry-picking the
units to be tested such that audible differences are assured.
I would like to see DBTs become part of the standard review
protocol for select categories of equipment.
Most reviewers like to compare equipment under review to
their personal reference systems anyway.

If DBTs aren't done well they will not improve the state of reviews
published by Stereophile.


We differ on how well done they need to be to add
credibility to audible difference claims.

The source of your beef with Stereophile is that it
lacks reliability now is it not?


Not exactly. I find the subjective perceptions
portion of some reviews to lack credibility.

ScottW


  #70   Report Post  
Lionel
 
Posts: n/a
Default Note to the Idiot

George M. Middius wrote:


S888Wheel said to the Terrierborg:


The source of your beef with Stereophile is that it
lacks reliability now is it not?



Of course not. Woofies hates Stereophile because it describes,
evaluates, and ultimately endorses luxury goods. Communists don't need
luxury goods.


George my pooooooooor little RAO's baby...
When will you stop writing such naive absurdities?



  #71   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"Ernst Raedecker" wrote in message
...
On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW"
wrote:

What I am referring to are the reviews where different units
are compared and perceptions of differences in sonic
performances are claimed which can't
be validated through differences in measured performance.


This is a very valid point. Many times what we hear is not what we
measure, and what we measure is not confirmed by our hearing. How is
it possible that our hearing and our measurements many times do NOT
correlate?


Interesting stuff. You have supported very well one of my original
comments that there is near infinite depth of detail to measurements
which can be explored.

I think it is far easier to first confirm audible differences
and then pursue validating those differences through measurement.

Without the listening tests, there is still no demonstration that
measurement
differences are in fact audible or not.
I think the cart is before the horse.

With regard to the work of the Geddes, it is apparent that distortion
measures that are more applicable to amplifiers don't work
particularly well for speakers. This does not in any
way invalidate those measures for amplifier performance.

ScottW


  #72   Report Post  
Sockpuppet Yustabe
 
Posts: n/a
Default Note to the Idiot


"George M. Middius" wrote in message
...


S888Wheel said to the Terrierborg:

The source of your beef with Stereophile is that it
lacks reliability now is it not?


Of course not. Woofies hates Stereophile because it describes,
evaluates, and ultimately endorses luxury goods. Communists don't need
luxury goods.



It depends if they are Communist leaders or Communist followers




  #73   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



The Terrierborg must've had a very disappointing Xmas.

This is a very valid point. Many times what we hear is not what we
measure, and what we measure is not confirmed by our hearing. How is
it possible that our hearing and our measurements many times do NOT
correlate?


Interesting stuff.


No, it's exceedingly boring, trivial, and pointless. Oh wait -- you
were masturbating again, weren't you?


You have supported very well one of my original
comments that there is near infinite depth of detail to measurements
which can be explored.


Or you could just listen to some music.

I think it is far easier to first confirm audible differences
and then pursue validating those differences through measurement.


"Far easier" than gouging out your eyeballs?

Without the listening tests, there is still no demonstration that
measurement differences are in fact audible or not.
I think the cart is before the horse.


I'll bet you're an expert at ****ing yourself with a garden hose.


  #74   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



Sockpuppet Yustabe said:

The source of your beef with Stereophile is that it
lacks reliability now is it not?


Of course not. Woofies hates Stereophile because it describes,
evaluates, and ultimately endorses luxury goods. Communists don't need
luxury goods.


It depends if they are Communist leaders or Communist followers


The Terrierborg is your buddy. You tell us.



  #75   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Arny Krueger" wrote in message
...
"John Atkinson" wrote in message
. com
In message
Arny Krueger ) wrote:
Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.


It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)


So, as Atkinson's version of the story evolves, it wasn't him alone
that bested me, but the dynamic trio of Atkinson, Bamborough, and
Zelniker. Notice how the story is changing right before our very eyes!


"Our?" Do you have a frog in your pocket, Mr. Krueger? No, Mr. Krueger.
The story hasn't changed. I was merely pointing out that Paul Bamborough
and Glenn Zelniker, both digital engineers with enviable reputations,
posted agreement with the case I made, and as I said, joined me in
pointing out the flaws in your argument.

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):
Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."

I imagine this is what Mr. Krueger means when wrote "The AES says
don't do it." But unfortunately for Mr. Krueger, the very same AES
standard goes on to say in the very next section (4.2.5.3):
"The dither may be omitted in special cases for investigative
purposes. One example of when this is desirable is when viewing bit
weights on an oscilloscope with ramp signals. In these circumstances
the dither signal can obscure the bit variations being viewed."
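For readers without the standard to hand, the point of 4.2.5.3 is easy to
demonstrate. A minimal Python sketch (my own, not anything from AES17):
quantize a slow ramp spanning a few LSBs with and without TPDF dither and
simply look at the samples. Undithered, the individual code steps -- the
"bit weights" -- are plainly visible; dithered, they are smeared, by design:

import numpy as np

rng = np.random.default_rng(0)

def quantize16(x, dither=False):
    # 16-bit quantizer; optional TPDF dither of +/-1 LSB peak amplitude.
    codes = x * 32767.0
    if dither:
        codes += rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))
    return np.round(codes).astype(np.int16)

ramp = np.linspace(0.0, 4.5 / 32767.0, 40)   # a ramp spanning just a few LSBs

print(quantize16(ramp))                 # clean staircase: every code step visible
print(quantize16(ramp, dither=True))    # the steps are deliberately obscured

Dither is exactly right when you want to characterize noise and distortion;
it is exactly wrong when the object of the exercise is to see the steps
themselves.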


At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews". Of course these
are two very different things...


Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being viewed."

It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, that was
prepared under the aegis of the AES.
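(In case anyone wonders where the odd-looking -90.31dBFS figure comes from,
it is simply the level of a sinewave whose peak amplitude is one 16-bit LSB:

from math import log10
print(20 * log10(1 / 32768))   # -90.3089... dB, the familiar -90.31dBFS figure

An undithered sinewave at that level toggles only between the codes -1, 0,
and +1, which is what makes it useful for looking at the bottom bits.)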

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?

As the first specific test I use an undithered signal for is indeed
for investigative purposes -- looking at how the error in a DAC's MSBs
compare to the LSB, in other words, the "bit weights" -- it looks as
if Mr. Krueger's "The AES says don't do it" is just plain wrong.


The problem here is that again Atkinson has confused detailed
investigations into how individual subcomponents of chips in the player
work (i.e., "[investigation]") with the business of characterizing how
it will satisfy consumers.


The AES standard concerns the measured assessment of "Compact-Disc (CD)
players, the reproduce sections of record/replay devices, and
digital-to-analog converters." As I pointed out, it makes an exception
for "investigative purposes" and makes no mention of such "purposes"
being limited to the "subcomponents of chips." The examination of "bit
weights" is fundamental to good sound from a digital component, because
if each one of the 65,535 integral step changes in the digital word
describing the signal produces a different-sized change in the
reconstructed analog signal, the result is measurable and audible
distortion.

Consumers don't care about whether one individual bit of the
approximately 65,000 levels supported by the CD format works; they
want to know how the device will sound.


Of course. And being able to pass a "bit weight" test is fundamental to
a digital component being able to sound good. This is why I publish the
results of this test for every digital product reviewed in Stereophile.
I am pleased to report that the bad old days, when very few DACs could
pass this test, are behind us.

It's a simple matter to show that nobody, not even John Atkinson can
hear a single one of those bits working or not working.


I am not sure what this means. If a player fails the test I am describing,
both audible distortion and sometimes even more audible changes in pitch
can result. I would have thought it important for consumers to learn
of such departures from ideal performance.

Yet he deems it appropriate to confuse consumers with this sort of
[minutiae], perhaps so that they won't notice his egregiously-flawed
subjective tests.


In your opinion, Mr. Krueger, and I have no need to argue with you
about opinions, only when you misstate facts. As you have done in this
instance. To recap:

I use just two undithered test signals as part of the battery of tests
I perform on digital components for Stereophile. Mr. Krueger has
characterized my use of these test signals as "really weird" and has
claimed that their use is forbidden by the Audio Engineering
Society. Yet, as I have shown by quoting the complete text of the
relevant paragraphs from the AES standard on the subject, one of the
tests I use is specifically mentioned as an example as the kind of test
where dither would interfere with the results and where an undithered
signal is recommended.

As my position on this subject has been supported by two widely
respected experts on digital audio, I don't think that anything more
needs to be said about it.

And as I said, Mr. Krueger is also incorrect about the second
undithered test signal I use, which is to examine a DAC's or CD player's
rejection of word-clock jitter. My use is neither "really weird," nor
is it specifically forbidden by the Audio Engineering Society.

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear
that this is not because nobody has tried to find correlation. It's
just that the measurement methodology is flawed, or at best has no
practical advantages over simpler methodologies that correlate better
with actual use.


And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and
implemented as a commercially available piece of test equipment by
Paul Miller -- needs to use an undithered signal reveals that he
still does not grasp the significance of the J-Test or perhaps even
the philosophy of measurement in general. To perform a measurement to
examine a specific aspect of component behavior, you need to use a
diagnostic signal. The J-Test signal is diagnostic for the assessment
of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough
frequency that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these
characteristics, rendering it no longer diagnostic in nature. As an
example of a _non_-diagnostic test signal, see Arny Krueger's use of
a dithered 11.025kHz tone in his website tests of soundcards at a
96kHz sample rate. This meets none of the criteria I have just
outlined.
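To make the "no quantization error" point concrete, here is a rough
J-Test-like construction in Python. It is NOT Dunn's exact signal -- the
precise amplitude and bit pattern are specified in his paper and in Miller's
implementation -- but it shows why such a signal needs no dither: every
sample is an exact integer code by construction.

import numpy as np

fs = 48000
n = fs

# Quarter-sample-rate component: a repeating 4-sample pattern of exact
# integer codes. (Placeholder amplitude; Dunn's paper specifies the real one.)
hf = np.tile(np.array([0, 16384, 0, -16384], dtype=np.int32), n // 4)

# Low-rate component: a square wave at fs/192 toggling the bottom bit.
lf = ((np.arange(n) // 96) % 2).astype(np.int32)    # 96 samples = half of 192

jtest_like = (hf + lf).astype(np.int16)

# Every sample is already an exact 16-bit code, so there is no quantization
# error to dither away:
assert np.array_equal(jtest_like, hf + lf)

Add dither and that property, along with the deliberately worst-case data
pattern, is gone; the signal stops being diagnostic.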


Notice that *none* of the above minutiae and fine detail addresses my
opening critical comment: "He does other tests, relating to jitter, for
which there is no independent confirmation of reliable relevance to
audibility".

Now did you see anything in Atkinson's two numbered paragraphs above and
the subsequent unnumbered paragraph that addresses my comment about
listening tests and independent confirmation of audibility? No, you
didn't!


You are absolutely correct, Mr. Krueger. There is nothing about the
audibility of jitter in these paragraphs. This is because I was
addressing your statements that this test, like the one examining bit
weights, was "really weird" and that "The AES says don't do it, but
John Atkinson appears to be above all authority but the voices that
only he hears."

Regarding audibility, I then specifically said, in my next paragraph,
that "One can argue about the audibility of jitter..." As you _don't_
think it is audible but my experience leads me to believe that it _can_
be, depending on level and spectrum, again I don't see any point in
arguing this subject with you, Mr. Krueger. All I am doing is
specifically addressing the point you made in your original posting
and showing that it was incorrect. Which I have done.

Finally, you recently claimed in another posting that your attacking me
was "highly appropriate," given that my views about you "are totally
fallacious, libelous and despicable." I suggest to those reading this
thread that they note that Arny Krueger has indeed made this discussion
highly personal, using phrases such as "the voices that only [John
Atkinson] hears"; "Notice how [John Atkinson's] story is changing right
before our very eyes!"; "the same-old, same-old old Atkinson
song-and-dance which reminds many knowledgeable people of that old
carny's advice 'If you can't convince them, confuse them!'"; "This is
just more of Atkinson's 'confuse 'em if you can't convince 'em'
schtick"; "Atkinson is up to tricks as usual."

I suggest people think for themselves about how appropriate Mr.
Krueger's attacks are, and how relevant they are to a subject where
it is perfectly acceptable for people to hold different views.

John Atkinson
Editor, Stereophile


  #76   Report Post  
Powell
 
Posts: n/a
Default Note to the Idiot


"John Atkinson" wrote

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


This is simply not true, Mr. Powell. You can retrieve
previous discussions of this subject from Google,
but I will dig up the story from my archives and post
it to r.a.o.

"from my archives "... Please do and post TAS's
version, too. I take it you have no problem with
the other facts stated in the post.






  #77   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



John Atkinson said:

I suggest people think for themselves about how appropriate Mr.
Krueger's attacks are, and how relevant they are to a subject where
it is perfectly acceptable for people to hold different views.


So you're saying audio is an intellectual, financial, vocational (or
avocational), and otherwise mundane endeavor? I'm sorry, but this is
where you earn your excommunication. Krooger is clearly high up on his
overflowing toilet, preaching to his choir. This is a matter of faith
for Turdborg, and well you should note that, sir.




  #78   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"S888Wheel" wrote in message


Really? What Stereophile issue describes that?


I don't remember. you could always check their archives.


Bad idea, given how brain-dead their search engine is.

Google yields:

http://www.google.com/groups?selm=2k...1.prod.aol.net

"Okay, the first part involved a challenge that Bob Carver made, that he
could
make one of his amplifiers sound identical to any amplifier selected by the
Stereophile editors. Although the article didn't specifically state this,
they
chose a Conrad-Johnson tube amplifier (as far from a solid state mid-priced
amplifier as you can get), which Bob proceeded to match up against his M1.0
amplifier. The article stated that the Stereophile editors could not hear a
difference between their source amplifier and the modified Carver amplifier.
The modified amplifier was then used as a prototype for a production model,
the
M1.0t. The "t" stands for transfer-function-modified.

"Stereophile later claimed that the production amplifier didn't match the
original tube amplifier. Carver said that it did--and the beat went on.

"The lawsuit was a separate issue. Carver Corporation charging that
Stereophile
had engaged in a campaign to discredit Carver or some such. In a settlement,
they agreed not to mention Carver in their editorial pages, although the
company was free to continue to advertise in the magazine.

"My feeling, based on knowing Carver and talking with him about his
technique,
is that there is no black magic involved in infusing the sonic character of
a
tube amplifier onto a solid state amplifier, so long as the destination
amplifier have equal or superior output/current and distortion
characteristics.
Since I don't believe in audio mysticism yet, I have no reason to believe
that
any knowledgeable audio engineer, given enough time, couldn't do the same
thing.



  #79   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"Ernst Raedecker" wrote in message

On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW"
wrote:


What I am referring to are the reviews where different units
are compared and perceptions of differences in sonic
performances are claimed which can't
be validated through differences in measured performance.


This is a very valid point. Many times what we hear is not what we
measure, and what we measure is not confirmed by our hearing. How is
it possible that our hearing and our measurements many times do NOT
correlate?


The answer is that there are problems with:
(1) our measurements: they are NOT really objective.


They are plenty objective; the real problem is that we can't relate them to
subjective perceptions as well as we might like.

(2) our hearing: this depends MORE on our signal processing ability in
the brain than on the data-collecting ability of the ear.


Not news. Hence generalized listener training and other more specific
procedures for improving listener sensitivity in a particular test. Working
examples of this can be found at the www.pcabx.com web site.

(1) Let's discuss the "measured performance". The hearing stuff will
have to wait.


Contrary to common belief there is NOT an objective standard for
"measured performance" or "THE measured performance". What you choose
to measure is subjective, and the weight you give to certain elements
of your measurements is also subjective. Of course statisticians have
known all this for 70 years and more. Unfortunately very few audio
testers have a thorough knowledge of statistics and the fallacies of
statistics.


There are standards for measured performance. The real problem is that they
aren't generally agreed-upon. It's a complex situation. A major stumbling
block is the radical subjectivist rejection of any and all reliable and
bias-controlled listening procedures that have been proposed to date.

So if you claim that certain things we hear, or think we hear, or
Stereophile has heard, cannot "...be validated through differences in
measured performance" then you should first try to establish which
measurements under which conditions are relevant and which processing
of the results is relevant.


I think that JJ once said that if all artifacts are 100 dB down, no audible
differences will be heard. I think that this is correct as far as it goes,
but in a great many circumstances this is far too rigorous of a standard.

Recently there has been renewed interest in the old question of HOW we
should measure audio equipment, and WHICH measurements are relevant
and give us results that correlate with what we hear.


The interest never stopped except in the minds that stopped thinking
rationally.

I would like to remind you of the work of Richard Cabot, for example
his "Fundamentals of modern audio measurement", first presented at the
103rd convention of the AES, 1997, available in pdf format on the
internet. In another paper, "COMPARISON OF NONLINEAR DISTORTION
MEASUREMENT METHODS", also on the internet in pdf format, he
introduces his famous FastTest methodology.


Good paper, lots of good ideas, but also something that has been arguably
eclipsed by more recent developments. These papers don't really talk about
what levels of distortion are good or bad. They focus on how to measure
common forms of noise and distortion. Therefore, their introduction in the
discussion at this point is actually irrelevant, because we're discussing
how much noise and distortion is audible, not how to measure it.

Reading these two papers alone will make it clear to you that there is
so much more to say about measurements that it is far too simple to speak
of "measured performance" as such.


Not at all. If one understands what these papers are trying to say and what
they don't say, it's just reading material about how to measure noise and
distortion, and they don't say much at all about what the resulting numbers
mean.

But there is more.


Yes, ranging from Zwicker and Fastl to the two Geddes/Lee papers from the
last major AES.

Not so long ago Daniel Cheever wrote a nice paper presenting new
measurements that SHOW that Single Ended Triode amps without negative
feedback do distort LESS, FAR LESS than comparable transistor
amplifiers. Would you believe that?


The whole discussion obviously rests on how you characterize distortion and
what you call "comparable" transistor amplifiers. SETs are basically what
results when you throw away just about every important technical innovation
relating to power amps that was developed between about 1925 and 1965. This
includes biasing, load lines, push-pull operation and the long and bloody
development of a number of different flavors of inverse feedback. I suspect
that if one is equally stupid about designing SS amps, some really horrid
equipment just might result. Garbage in, garbage out.

All those years the Objectivist League has told us that transistor
amps **measure** "objectively" much better than SETs, and that SETs AT
BEST add "euphonic distortions" that are pleasing to the ear, and now
this guy tells us that SETs "objectively" MEASURE BETTER than
transistor amps!!!!


Cheever's paper http://web.mit.edu/cheever/www/cheever_thesis.pdf is a bit
of a joke. He arbitrarily assigns sound quality characterizations to a
number of amps and then attempts to justify his arbitrary choices by
mathematical means. Bad science or weird science?

So the Hard Line Objectivists were wrong all the time, not only
soundwise but also measurementwise. What Subjectivists had heard all
the time, namely that SETs sound better, HAS NOW BEEN VALIDATED
through differences in measured performance!!!!!


No, it's been supported by the usual subjectivist means - arbitrary personal
decisions offered without any reliable, believable support.

(See: Daniel Cheever, "A NEW METHODOLOGY FOR AUDIO FREQUENCY
POWER AMPLIFIER TESTING BASED ON PSYCHOACOUSTIC
DATA THAT BETTER CORRELATES WITH SOUND QUALITY", dec 2001, also in pdf
format on the internet)


Here's the URL again, for people who need a good laugh:

http://web.mit.edu/cheever/www/cheever_thesis.pdf

As it is, I believe there are some qualifications to be made about
Cheever's paper, but I won't make them. I leave it up to you to look
his paper up on the internet and read HOW he constructs his set of
measurements and processing methods. After all, you show an interest
in discussing the validity of measurements, so you are allowed to do
some homework.


Homework which, rather obviously, the author of the post I'm responding to
didn't do.

Well, I will help you out a bit. Cheever's basic tenet is that the
supposedly "objective" measure of THD is not objective at all. THD is
measured as the root mean square of all the harmonics of a fundamental
in the audible range. This leads to an unweighted sum of harmonics
relative to the fundamental as a distortion percentage.


What's really going on is that the means used to weight harmonics in a THD
measurement are arbitrary, and have never been seriously claimed by anybody
to have psychoacoustic justification. The measurement is objective, the
analysis is objective, but the particular analysis is not justified by
modern perceptual research.

His point is that the SUM of the distortion is not really important,
but that the STRUCTURE of the produced harmonics is important.


It's not Cheever's point, it's Crowhurst's point. BTW, Crowhurst's paper is on
the Cheever site at

http://web.mit.edu/cheever/www/crow1.htm

BTW the official info about this paper (date, abstract, etc) is:

Some Defects in Amplifier Performance Not Covered by Standard
Specifications 1039524 bytes (CD aes7)
Author(s): Crowhurst, Norman H.
Publication: Preprint 12; Convention 9; October 1957
Abstract: Physiological research has shown that the very low orders of
distortion represented by the specifications of modern amplifiers should
be inaudible, but the fact remains that considerable distortion can be heard
in cases where the measured distortion, by methods at present standard, is
far below the limits determined to be audible. This paper examines
critically some of the possible forms of distortion that can be audible
under such circumstances. Methods of detecting their presence are described,
with the intention of providing a basis for future forms of specification,
more indicative of significant practical amplifier performance than are the
present standards.


The
more this structure deviates from the natural nonlinearities produced
in the ear itself, the more audible the distortions become.


This is a wonderfully ignorant statement. The nonlinearities of the ear are
strongly SPL-dependent, and by this I mean the SPL at the ear. So, if you
keep the SPL levels down, the ear's nonlinearities are more-or-less under
control. The more significant source of the problem is well-known to
everybody who has seriously studied psychoacoustics since about 1985 -
masking.
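To show the shape of a masking-based check, here's a toy Python sketch. The
numbers in it are placeholders I made up, not anything from Zwicker and
Fastl or from a real perceptual model; the point is only that whether a
distortion product matters depends on where it sits relative to a masking
threshold, not on its unweighted contribution to a THD sum:

import numpy as np

def maybe_audible(masker_freq, masker_db, products):
    # products: list of (frequency_hz, level_db_spl) distortion components.
    # Toy masking rule: the masker hides anything more than ~15 dB below it
    # at its own frequency, with the protection falling off 12 dB per octave
    # of separation. These numbers are placeholders, not a calibrated model.
    audible = []
    for f, level in products:
        octaves = abs(np.log2(f / masker_freq))
        threshold = masker_db - 15.0 - 12.0 * octaves
        if level > threshold:
            audible.append((f, level))
    return audible

# A 1 kHz tone at 90 dB SPL with three hypothetical distortion products:
print(maybe_audible(1000, 90.0, [(2000, 60.0), (3000, 72.0), (7000, 35.0)]))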

You see, whether he is completely correct in his stressing of aural
harmonics as the basis of distortion measurements (I believe there is
more to say than he does), is not the point.


Well, there is certainly more to say than Cheever says, as Geddes and Lee
recently did.

The point is that he makes clear that the so-called "objective"
measurement of THD is not at all objective.


Sure it is. The problem is not that it isn't objective, the problem is that
it is not perceptually-based.

There is no reason at all
why an unweighted summation of (the energy in) harmonics relative to
the fundamental would be an "objective" or a relevant measurement. It
is weighting with a value of 1 for each harmonic. Why not diminishing
weights? Why not increasing weights?


Any consistent means for weighting harmonics is objective, but it just might
be very suboptimal because it is not based on what has developed in terms of
knowledge about human perception since THD was first suggested (the early
1930's, I believe).

By the way, I **personally** do not think that SETs are really the
excellent amplifiers that Subjectivists and Cheever take them to be,
but that is not the point either.


Nice job of building Cheever up and then cutting him down.

The point is that it IS possible, and it IS done, to construct a set of
serious measurements that DO show that SETs measure "objectively"
BETTER than transistor amps, while the whole Objectivist community
lives in the mind-set that this cannot "objectively" be done.


The recent Geddes/Lee AES papers disprove this claim quite thoroughly.

In short, measurements, and especially the processing of measurements,
are NOT objective.


Sure they are, but that doesn't mean that measures dating back to maybe 1925
are the best that we can do.

If they correlate with what we hear, we may
consider them relevant. If they don't, then they are not so relevant.


How about that.

You are also advised to take notice of the recent work of Earl and
Lidia Geddes on sound quality and the perception and measurement of
distortion, also presented recently at the AES. See their website at:


http://www.gedlee.com/distortion_perception.htm


Nice job of self-deconstruction.

You are also advised to take a look at the newest issue of the Journal
of the AES (nov 2003, vol 51, no 11). As you know, these guys of the
AES are not really soft-in-the-head Subjectivists. Let's look at the
contents and quote the abstract of the main paper in this issue:

[quote]
The Effect of Nonlinear Distortion on Perceived Quality of Music and
Speech Signals
Chin-Tuan Tan, Brian C. J. Moore, and Nick Zacharov 1012


The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures because the choice of distortion
metrics is not obvious. In reexamining this subject, the authors
validated a metric based on the change in the spectrum in a series of
spectral bins, which when combined leads to a single distortion
metric. Distortion was evaluated both objectively and subjectively
using speech and music. Robust results support the hypothesis for this
approach.
[unquote]


So you are not the only one asking himself why ...
"The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures..."


No, it's something that those nasty old objectivists have been discussing
quite a bit, and for years.

It is, as I have made now ABUNDANTLY clear, ...
"because the choice of distortion metrics is not obvious."


This is supposed to be news?

Yeah.

There is another very interesting article in the same issue of the
JAES, one which will make the Hard Line Objectivists puke, as it makes
clear that even the tough AES boys roll over to the soft-in-the-head
camp:


[quote]
Large-Signal Analysis of Triode Vacuum-Tube Amplifiers
Muhammad Taher Abuelma'atti 1046


With the renewed interest in vacuum tubes, the issue of intrinsic
distortion mechanisms becomes relevant again. The author demonstrates
a nonlinear model of triodes and pentodes that leads to a closed-form
solution when the nonlinearity is represented by a Fourier expansion
rather than the conventional Taylor series. When applied to a two-tone
sine wave, the analysis shows that the distortion in tube amplifiers
is similar to that of the equivalent transistor amplifier. A SPICE
analysis confirms the approach.
[unquote]


Yeah, even with simple two-tone sine waves it is now ESTABLISHED
OBJECTIVELY that tube amps do NOT distort more than transistor amps.


Actually, that's not what it says, but straightening out Raedecker would be
a full-time job for a larger committee than just me.

So it says.


===========


Oh, WHAT a field day for the Subjectivists today.


Why?

Oh, WHAT a dismal day for the HLOs like Pinkerton, Krueger, Ferstler
and all the rest of them.


Really?

All those years they have thought that they have at least the AES on
their side, and now the AES deserts them. It must be an annus
horribilis for them.


Not at all. This is just more of Raedecker's ignorant, self-contradictory
posturing.



  #80   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
Isn't there a question about the validity of applying this test to CD
players which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?


Hi ScottW, yes, the J-Test was originally intended to examine devices where
the clock was embedded in serial data. What I find interesting is that
CD players do differ quite considerably in how they handle this signal,
meaning that there are other mechanisms going on producing the same effect.
(Meitner and Gendron's LIM, for example, which they discussed in an AES paper
about 10 years ago.) And of course, those CD players that use an internal
S/PDIF link stand revealed for what they are on the J-Test.

BTW, you might care to look at the results on the J-Test for the Burmester
CD player in our December issue (available in our on-line archives). It did
extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding
this out.

John Atkinson
Editor, Stereophile