  #81   John Atkinson -- Note to the Idiot

"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
I thought it was generally applied to HT receivers with DACs
and external DACs?


Oh, and one more thing. I have no problem with people not thinking this
test is useful, or is not being applied appropriately, or offers no
proven correlation with audible problems. If those are your opinions, I
have no intention of arguing with you. They just don't happen to be _my_
opinions. I see nothing wrong in us agreeing to disagree. What I _am_
objecting to is Arny Krueger's trying to disseminate something that is
not true, which is his statement that tests that don't use dither are
forbidden by an AES standard. For him to keep repeating this falsehood
is dirty pool.

John Atkinson
Editor, Stereophile
  #82   Arny Krueger -- Note to the Idiot

"John Atkinson" wrote in message
om
"Arny Krueger" wrote in message
...
"John Atkinson" wrote in message
om


Arny Krueger wrote:


Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but
John Atkinson appears to be above all authority but the voices
that only he hears.

It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he
has done here. (This subject was discussed on r.a.o in May 2002,
with Real Audio Guys Paul Bamborough and Glenn Zelniker joining me
in pointing out the flaws in Mr. Krueger's argument.)


So, as Atkinson's version of the story evolves, it wasn't him alone
that bested me, but the dynamic trio of Atkinson, Bamborough, and
Zelniker. Notice how the story is changing right before our very
eyes!


"Our?" Do you have a frog in your pocket, Mr. Krueger?


It says that it's your mother, Mr. Atkinson. I don't believe it.

No, Mr.
Krueger. The story hasn't changed. I was merely pointing out that
Paul Bamborough and Glenn Zelniker, both digital engineers with
enviable reputations, posted agreement with the case I made, and as I
said, joined me in pointing out the flaws in your argument.


Zelniker and Bamborough have both gone on long and loud about their personal
disputes with me and about how low their personal opinions of me are.
Therefore, they are not unbiased judges of this matter and should be ignored.
If they were honest men, they would recuse themselves, but of course they are
not honest men.

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):
Section 4.2.5.2: "For measurements where the stimulus is generated
in the digital domain, such as when testing Compact-Disc (CD)
players, the reproduce sections of record/replay devices, and
digital-to-analog converters, the test signals shall be dithered."


I imagine this is what Mr. Krueger means when he wrote "The AES says
don't do it." But unfortunately for Mr. Krueger, the very same AES
standard goes on to say in the very next section (4.2.5.3):
"The dither may be omitted in special cases for investigative
purposes. One example of when this is desirable is when viewing bit
weights on an oscilloscope with ramp signals. In these circumstances
the dither signal can obscure the bit variations being viewed."
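The distinction the two clauses draw can be sketched numerically: quantizing a low-level tone without dither produces an error that is a deterministic function of the signal, and so shows up as harmonic distortion, while TPDF dither converts that error into benign noise. A rough illustration (the signal level, frequency, and the third-harmonic measurement are illustrative choices, not anything taken from the AES documents):

```python
import math
import random

random.seed(1)
FS = 48000
N = 48000
F = 997 / FS          # 997 Hz: an exact whole number of cycles fits in N samples
AMP = 2.0             # low-level tone, 2 LSB peak (roughly -84 dBFS in 16-bit terms)

sig = [AMP * math.sin(2 * math.pi * F * n) for n in range(N)]

# Undithered: round straight to integer codes; the error is a deterministic
# function of the signal, so it appears as harmonic distortion.
q_plain = [round(s) for s in sig]

# TPDF dither: add triangular +/-1 LSB noise before rounding; the error
# decorrelates from the signal and becomes noise-like.
q_dith = [round(s + random.random() - random.random()) for s in sig]

def bin_mag(x, freq):
    """Amplitude of one DFT bin (freq in cycles/sample), by direct summation."""
    re = sum(v * math.cos(2 * math.pi * freq * n) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * n) for n, v in enumerate(x))
    return 2 * math.hypot(re, im) / len(x)

h3_plain = bin_mag(q_plain, 3 * F)   # third harmonic of the undithered tone
h3_dith = bin_mag(q_dith, 3 * F)    # same bin, dithered: only noise remains
print(h3_plain, h3_dith)
```

The undithered case shows a distinct third-harmonic component of a few hundredths of an LSB; with dither the same bin holds only residual noise, which is the trade the AES clauses are describing.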


At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews." Of course
these are two very different things...


Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being
viewed."


Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it for you.

It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, that was
prepared under the aegis of the AES.
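The figure of -90.31dBFS quoted here is not arbitrary: it is exactly the level of a 16-bit tone whose peak amplitude is one LSB, which is why such a tone exercises only the lowest bit. The arithmetic:

```python
import math

# Peak amplitude of 1 LSB relative to 16-bit full scale (32768 LSBs):
lsb_level_db = 20 * math.log10(1 / 32768)
print(round(lsb_level_db, 2))  # -90.31
```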


Irrelevant to the issue of subjective relevancy.

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?


For the purpose of scientific and advanced technical investigation, not for
routine testing for a consumer publication.

As the first specific test I use an undithered signal for is indeed
for investigative purposes -- looking at how the error in a DAC's
MSBs compare to the LSB, in other words, the "bit weights" -- it
looks as if Mr. Krueger's "The AES says don't do it" is just plain
wrong.


The problem here is that again Atkinson has confused detailed
investigations into how individual subcomponents of chips in the
player work (i.e., "[investigation]") with the business of
characterizing how it will satisfy consumers.


The AES standard concerns the measured assessment of "Compact-Disc
(CD) players, the reproduce sections of record/replay devices, and
digital-to-analog converters." As I pointed out, it makes an exception
for "investigative purposes" and makes no mention of such "purposes"
being limited to the "subcomponents of chips."


Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it to you.

The examination of "bit
weights" is fundamental to good sound from a digital component,
because if each one of the 65,535 integral step changes in the
digital word describing the signal produces a different-sized change
in the reconstructed analog signal, the result is measurable and
audible distortion.
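What a "bit weight" error does to a converter's transfer function can be sketched with a toy DAC model: if one bit's analog weight is slightly off, the step at a major code transition is no longer 1 LSB, and any low-level signal crossing that code sees the error on every cycle. A minimal sketch, in which the 0.1% MSB error is a hypothetical figure, not a measured one:

```python
# Model a 16-bit DAC as a sum of binary bit weights (offset-binary codes).
# An ideal DAC has weight 2**k for bit k; here the MSB is mis-weighted by 0.1%.
IDEAL = [2 ** k for k in range(16)]
SKEWED = IDEAL[:]
SKEWED[15] = int(2 ** 15 * 1.001)  # hypothetical 0.1% MSB weight error

def dac(code, weights):
    """Analog output for a 16-bit code, as the sum of the weights of its set bits."""
    return sum(w for k, w in enumerate(weights) if code >> k & 1)

# The major-carry transition (0111...1 -> 1000...0) should be exactly 1 LSB.
step_ideal = dac(0x8000, IDEAL) - dac(0x7FFF, IDEAL)
step_skewed = dac(0x8000, SKEWED) - dac(0x7FFF, SKEWED)
print(step_ideal, step_skewed)  # 1 vs 33
```

A tiny 0.1% weight error turns the mid-scale step into 33 LSBs instead of 1, and a small signal centred on zero crosses that glitch twice per cycle, which is the kind of level-dependent error the bit-weight test exposes.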


It is an absolute and total falsehood that one of the 65,535 integral
step changes in the digital words describing the signal producing a
different-sized change in the reconstructed analog signal will result
in measurable and audible distortion.

A careful reading of the following AES standards documents will show why:

AES-6id-2000 AES information document for digital audio -- Personal computer
audio quality measurements (35 pages) [2000-05-01 printing]

AES17-1998 (revision of AES17-1991) AES standard method for digital audio
engineering -- Measurement of digital audio equipment

If you are unaware of the reasons for my claim, I would be happy to explain
them to you, Atkinson.


Consumers don't care about whether one individual bit of the
approximately 65,000 levels supported by the CD format works; they
want to know how the device will sound.


Of course. And being able to pass a "bit weight" test is fundamental
to a digital component being able to sound good.


Absolutely not. It's easy to show that many bits can be dropped and/or
otherwise mangled with zero measurable and audible results.

This is why I
publish the results of this test for every digital product reviewed
in Stereophile. I am pleased to report that the bad old days, when
very few DACs could pass this test, are behind us.


True, but the loss of a few bit levels here and there can have zero audible
and measured effects.

It's a simple matter to show that nobody, not even John Atkinson can
hear a single one of those bits working or not working.


I am not sure what this means.


It means that you are wrong.

If a player fails the test I am
describing, both audible distortion and sometimes even more audible
changes in pitch can result.


I can cite a great many cases where neither audible nor measurable effects
occur, other than of course in some of your sonically-irrelevant tests.

I would have thought it important for
consumers to learn of such departures from ideal performance.


Atkinson, you simply don't know what you are talking about when it comes to
the audibility of small differences.

Yet he deems it appropriate to confuse consumers with this sort of
[minutiae], perhaps so that they won't notice his egregiously-flawed
subjective tests.


In your opinion, Mr. Krueger, and I have no need to argue with you
about opinions, only when you misstate facts. As you have done in this
instance.


I have done no such thing.

To recap:



I use just two undithered test signals as part of the battery of tests
I perform on digital components for Stereophile. Mr. Krueger has
characterized my use of these test signals as "really weird" and has
claimed that their use is forbidden by the Audio Engineering
Society. Yet, as I have shown by quoting the complete text of the
relevant paragraphs from the AES standard on the subject, one of the
tests I use is specifically mentioned as an example of the kind of
test where dither would interfere with the results and where an
undithered signal is recommended.


Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it to you.

As my position on this subject has been supported by two widely
respected experts on digital audio, I don't think that anything more
needs to be said about it.


Zelniker and Bamborough have both gone on long and loud about their personal
disputes with me and about how low their personal opinions of me are.
Therefore, they are not unbiased judges of this matter and should be ignored.
If they were honest men, they would recuse themselves, but of course they are
not honest men.


And as I said, Mr. Krueger is also incorrect about the second
undithered test signal I use, which is to examine a DAC's or CD
player's rejection of word-clock jitter. My use is neither "really
weird," nor is it specifically forbidden by the Audio Engineering
Society.


But it is. The problem is that Atkinson doesn't know what the word
"investigation" means. He has obviously forgotten that he is testing audio
gear for consumers, and that perceived sound quality should guide his
testing procedures.

He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I
hear that this is not because nobody has tried to find
correlation. It's just that the measurement methodology is flawed,
or at best has no practical advantages over simpler methodologies
that correlate better with actual use.

And once again, Arny Krueger's lack of comprehension of why the
latter test -- the "J-Test," invented by the late Julian Dunn and
implemented as a commercially available piece of test equipment by
Paul Miller -- needs to use an undithered signal reveals that he
still does not grasp the significance of the J-Test or perhaps even
the philosophy of measurement in general. To perform a measurement
to examine a specific aspect of component behavior, you need to use
a diagnostic signal. The J-Test signal is diagnostic for the
assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough
frequency that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these
characteristics, rendering it no longer diagnostic in nature. As an
example of a _non_-diagnostic test signal, see Arny Krueger's use of
a dithered 11.025kHz tone in his website tests of soundcards at a
96kHz sample rate. This meets none of the criteria I have just
outlined.
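Point 1 above can be checked directly: build the two components from exact integer fractions of the sample rate and confirm that every sample lands on an integer code, so no quantization error exists to be dithered. This sketch follows the common description of Dunn's J-Test (an Fs/4 tone plus a low-rate Fs/192 square wave in the LSB); the amplitudes are illustrative, not the exact levels of the published specification:

```python
import math

FS = 48000
N = 192                # one full period of the Fs/192 component
TONE_AMP = 2 ** 14     # integer amplitude: Fs/4 samples land on 0, +A, 0, -A

samples = []
for n in range(N):
    tone = TONE_AMP * math.sin(2 * math.pi * (FS // 4) * n / FS)
    lsb = 1 if n % 192 < 96 else -1   # Fs/192 square wave toggling the LSB
    samples.append(tone + lsb)

# Every sample is an integer code (up to floating-point rounding), so
# quantizing to a 16-bit word introduces zero error even with no dither.
quantized = [round(s) for s in samples]
errors = [abs(q - s) for q, s in zip(quantized, samples)]
print(max(errors))
```

The maximum error is zero to within floating-point precision, which is the sense in which the J-Test signal has no quantization error and any spuriae in the DUT's output spectrum must come from the DUT itself.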


Notice that *none* of the above minutiae and fine detail addresses my
opening critical comment: "He does other tests, relating to jitter
for which there is no independent confirmation of reliable relevance
to audibility".

Now did you see anything in Atkinson's two numbered paragraphs above
and the subsequent unnumbered paragraph that addresses my comment about
listening tests and independent confirmation of audibility? No, you
didn't!


You are absolutely correct, Mr. Krueger. There is nothing about the
audibility of jitter in these paragraphs. This is because I was
addressing your statements that this test, like the one examining bit
weights, was "really weird" and that "The AES says don't do it, but
John Atkinson appears to be above all authority but the voices that
only he hears."



Regarding audibility, I then specifically said, in my next paragraph,
that "One can argue about the audibility of jitter..." As you _don't_
think it is audible but my experience leads me to believe that it
_can_ be, depending on level and spectrum, again I don't see any
point in arguing this subject with you, Mr. Krueger. All I am doing is
specifically addressing the point you made in your original posting
and showing that it was incorrect. Which I have done.


Not at all. You've just talked around the issue one more time without
dealing with it.

Finally, you recently claimed in another posting that your attacking
me was "highly appropriate," given that my views about you "are
totally fallacious, libelous and despicable." I suggest to those
reading this thread that they note that Arny Krueger has indeed made
this discussion highly personal, using phrases such as "the voices
that only [John Atkinson] hears"; "Notice how [John Atkinson's] story
is changing right before our very eyes!"; "the same-old, same-old old
Atkinson song-and-dance which reminds many knowledgeable people of
that old carny's advice 'If you can't convince them, confuse them!'";
"This is just more of Atkinson's 'confuse 'em if you can't convince
'em' schtick"; "Atkinson is up to tricks as usual."


I can't change the facts about your many deceptions, Atkinson. Only you can
do that.

I suggest people think for themselves about how appropriate Mr.
Krueger's attacks are, and how relevant they are to a subject where
it is perfectly acceptable for people to hold different views.


The real problem is that there's an unhidden agenda in this discussion,
which is audibility. Atkinson refuses to deal with this issue directly and
appropriately because he knows that he will lose. His many deceptions,
attempts to cite false witnesses, and inability to clearly and directly deal
with the issues that I raised should be clear to all readers but numskulls
like George Middius.



  #83   Arny Krueger -- Note to the Idiot

"John Atkinson" wrote in message
om

What I _am_ objecting to is Arny Krueger's trying to
disseminate something that is not true, which is his statement that
tests that don't use dither are forbidden by an AES standard. For him
to keep repeating this falsehood is dirty pool.


Apparently the meaning of the word the AES uses, namely "investigation" is
very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it for you.

Since all of the media that consumers play on their digital equipment is
supposed to be dithered and generally is, there's no justification for the
use of undithered test signals when testing consumer equipment for the
purpose of reporting performance to consumers.

In your last post you made a fairly telling false claim, Atkinson:

"The examination of "bit weights" is fundamental to good sound from a
digital component, because if each one of the 65,535 integral step changes
in the digital word describing the signal produces a different-sized change
in the reconstructed analog signal, the result is measurable and audible
distortion."

This is easy to show to be a grotesque false claim. At this time I'm leaving
disproving it as an exercise, but in due time I will conclusively show why
it is a false claim on the measurement side, using the following AES
standards documents:

AES-6id-2000 AES information document for digital audio -- Personal computer
audio quality measurements (35 pages) [2000-05-01 printing]

AES17-1998 (revision of AES17-1991) AES standard method for digital audio
engineering -- Measurement of digital audio equipment

Anybody who wants to study the audibility of systematic bit reduction for
themselves need only visit www.pcabx.com .

The most relevant page at the PCABX web site would be:

http://www.pcabx.com/technical/bits44/index.htm

and

http://www.pcabx.com//technical/sample_rates/index.htm

Both pages show musical samples with fairly gross removal of bits that is
also completely undetectable to even the most sensitive listeners using SOTA
or near-SOTA monitoring equipment. In this particular case, PCABX is a
rigorous and exact methodology as opposed to some other cases where PCABX is
merely a highly convenient and effective approximation of rigorous and exact
methods.
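The kind of bit-reduction comparison hosted on those pages can be approximated numerically: truncating 16-bit samples to fewer bits raises the error floor in predictable steps of about 6dB per bit. A rough sketch, using random data as a stand-in for the actual PCABX program material:

```python
import math
import random

random.seed(0)

def truncate(sample, keep_bits):
    """Zero out the bottom (16 - keep_bits) bits of a 16-bit sample."""
    shift = 16 - keep_bits
    return (sample >> shift) << shift

# Random 16-bit data stands in for program material here.
signal = [random.randint(-32768, 32767) for _ in range(10000)]

levels = {}
for keep in (14, 12, 10):
    errs = [s - truncate(s, keep) for s in signal]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    levels[keep] = 20 * math.log10(rms / 32768)  # error floor re full scale
    print(f"{keep:2d} bits kept: error floor about {levels[keep]:6.1f} dBFS")
```

Even keeping only 10 of 16 bits leaves the error floor some 60dB below full scale, which is the order of magnitude behind the claim that substantial bit removal can go undetected on music.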




  #84   ScottW -- Note to the Idiot


"John Atkinson" wrote in message
om...
"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
Isn't there a question about the validity of applying this test to CD
players which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?


Hi ScottW, yes, the J-Test was originally intended to examine devices
where the clock was embedded in serial data. What I find interesting is that
CD players do differ quite considerably in how they handle this signal,
meaning that there are other mechanisms going on producing the same effect.
(Meitner's and Gendron LIM, for example, which they discussed in an AES paper
about 10 years ago.) And of course, those CD players that use an internal
S/PDIF link stand revealed for what they are on the J-Test.

BTW, you might care to look at the results on the J-Test for the Burmester
CD player in our December issue (available in our on-line archives). It
did extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding
this out.

John Atkinson
Editor, Stereophile


Of course the test is valid for external data. Still interesting
that the reviewer never picked this up.

I would be interested to see a DBT to support Brian's perception
that, "as the centerpiece of a digital-only system, running balanced
from stem to stern, the Burmester 001 is the best digital front-end
I've ever heard." I'd be interested in seeing if he really could
identify balanced from single ended.

ScottW


  #85   ScottW -- Note to the Idiot


"John Atkinson" wrote in message
om...
"ScottW" wrote in message
news:Pz1Hb.41708$m83.13206@fed1read01...
I thought it was generally applied to HT receivers with DACs
and external DACs?


Oh, and one more thing. I have no problem with people not thinking this
test is useful, or is not being applied appropriately, or offers no
proven correlation with audible problems. If those are your opinions, I
have no intention of arguing with you. They just don't happen to be _my_
opinions.


I have no opinion on correlation with audible problems as I haven't
seen anyone show it does or does not exist.

Upon what basis have you formed your opinion?

Looks like you might have a good candidate for testing
in the Burmester.

I see nothing wrong in us agreeing to disagree. What I _am_
objecting to is Arny Krueger's trying to disseminate something that is
not true, which is his statement that tests that don't use dither are
forbidden by an AES standard. For him to keep repeating this falsehood
is dirty pool.


Agreed. He is becoming a bit repetitious with this assertion.
I try not to let him be too distracting in these moments.

ScottW




  #86   S888Wheel -- Note to the Idiot

I said


You asked for my definition of well done DBTs


Scott said


But I don't agree that is necessary for Stereophile to
implement.


Fair enough, but we disagree. I believe that if Stereophile were to go through
the trouble of enforcing a standard DBT for all reviewers to use in order to
improve the reliability of reviews, it only makes sense to have the protocols
adhere to a scientifically acceptable standard. Anything less looks like a
large demand made on hobbyist reviewers with little assurance that it will
improve the reliability of such reviews.

I said


I don't think it would require independent witnesses but it would require
Stereophile to establish their own formal peer review group.


Scott said


Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.


Then how could the readers be sure valid tests are being conducted and
reported?

I said


But we are talking about Stereophile dealing with the current level of
uncertainty that now exists with the current protocols.


Scott said


? Stereophile has conducted elaborate DBTs with more rigorous
protocols than I call for.


Yes. So they know how difficult it would be to make this a requirement before
any subjective review may go forward.


Scott said

Let them establish a protocol
that is workable for their reviewers to conduct.
Publish it for comment, should be very interesting.


I'm not clear what you want now. You want Stereophile to establish a protocol,
but you want the protocol to be easily executed, and it need not meet the
rigors demanded by science? What would such protocols look like that they would
substantially improve the reliability of subjective reviews and yet not meet
the standards demanded by science and be easy for all reviewers to do?

I said

I think to do standard DBTs right would be a major
pain in the ass for them. Even the magazines which make a big issue out
of such tests don't often actually do such tests, and when they do they
often do a crap job of it.


Scott said


If they have a tool which controls switching and tabulates results,
I really don't see what the problem is.


Would you want someone to do this without any training? Would you like this to
happen with no calibration of test sensitivity or listener sensitivity? Would
you want such testing to take place absent verification with a separate set of
tests conducted by another tester?

Scott said


What needs to happen is a level of automation is provided
to match the skill level of the tester. That wouldn't be that difficult.


I'm sure it would help. I'm not sure it wouldn't still be challenging for
hobbyists to get reliable results.


Scott said

A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.


I said


Is it ever easy? Look what Howard did with such a tool.



Scott said


I don't believe Howard's ABX box provided the level of automation
I am talking about.


I think the box was fine. I think the problem was Howard. Were it not for some
stupid mistakes he made in math, we might never have known just how bad his
tests really were. Let's not forget they were published.
  #87   John Atkinson -- Note to the Idiot

"Powell" wrote in message
...
The Carver Challenge. Bob Carver made statements
that he could replicate any amplifier design using a
technique called "transfer function." Stereophile
took up his challenge wanting Bob to replicate the
sound of the Conrad-Johnson Premier 5 mono-blocks.
I think that over a two-day period he accomplished
that task to Holt's and Archibald's satisfaction. From
there the Carver M1.0t was born...

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


Hi Powell, I dug into my archives as promised. Here is a blow-by-blow
account of what happened:

Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
Gordon Holt, and its publisher, Larry Archibald, that he could match
the sound of any amplifier they chose with one of his inexpensive
"Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
Santa Fe to try to modify his amplifier so that it matched the sound
of an expensive tube design. (This was indeed a Conrad-Johnson
monoblock, though that was not reported at the time. I am not aware of
the reasons why not as I didn't join Stereophile's staff until May 1986.)

The degree to which the two amplifiers matched was confirmed using a
null test. At first, however, even though the measured null was 35dB
down from the amplifiers' output levels (meaning that any difference
was at the 1.75% level), JGH and LA were able to distinguish the Carver
from the target amplifier by ear. It was only when Bob Carver lowered
the level of the null between the two amplifiers to -70dB -- a 0.03%
difference -- that the listeners agreed that the Carver and the
reference amplifier were sonically indistinguishable. (This entire
series of tests was reported on in the October 1985 issue of
Stereophile, Vol.8 No.6.)
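The conversion between the null depths and the percentages quoted here is simply 10^(-dB/20); the article's 1.75% for the -35dB null is a round figure (the exact value is closer to 1.78%). A quick check:

```python
def null_to_percent(null_db):
    """Residual difference, as a percentage of the output level, for a
    null of null_db decibels below the amplifiers' output."""
    return 100 * 10 ** (-null_db / 20)

for db in (35, 40, 70):
    print(f"-{db} dB null -> {null_to_percent(db):.3f}% difference")
```

This is why deepening the null from -35dB to -70dB shrinks the residual from nearly 2% of the signal to about 0.03%.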

Neither Gordon Holt nor myself had further access to the original
prototype amplifier that had featured in the 1985 tests. Some 18 months
later, however, after I joined the magazine, Stereophile was sent a
production version of the Carver M-1.0t. This was an amplifier that we
had understood Bob Carver intended to sound and measure identically to
the prototype that had featured in the 1985 listening tests. Because of
this sonic "cloning," the production M-1.0t was advertised by Carver
as sounding identical to the tube amplifier that had featured in the
1985 Stereophile tests: "Compare the new M-1.0t against any and all
competition. Including the very expensive amplifiers that have been
deemed the M-1.0t's sonic equivalent," stated Carver Corporation
advertisements in Audio (October 1986), Stereo Review (February 1987),
and in many other issues of those magazines.

After careful and independent auditioning of the production M-1.0t,
Gordon and I felt that while the M-1.0t was indeed similar-sounding
to the tube design, it did not sound identical. Certainly it could not
be said that we deemed it the sonic equivalent of the "very expensive
amplifiers." Measuring the outputs of the two amplifiers at the
loudspeaker terminals while each was driving a pair of Celestion
SL600 speakers revealed minor frequency-response differences due to
the amplifiers having different output impedances. (The Carver had a
conventionally low output impedance; in subsequent auditioning, Bob
Carver connected a long length of Radio Shack hook-up cable in series
with its output to try to reduce the sonic differences between the
amplifiers.)

In addition, carrying out a null test between the production M-1.0t
amplifier sent to Stereophile by Carver and the reference tube
amplifier revealed that, at best, the two amps would only null to
-40dB at 2kHz, a 1% difference, diminishing to -20dB below 100Hz and
above 15kHz, a 10% difference.

These null figures were not significantly altered by changing the tube
amplifier's bias or by varying the line voltage with a Variac. We then
borrowed another sample of the M-1.0t from a local dealer and nulled
that against the target amplifier. The result was an even shallower
null. In the original 1985 tests, JGH and LA had proven to Mr. Carver
that they could identify two amplifiers that had produced a 35dB null
by ear. These 1987 measurements therefore reinforced the idea that the
production M-1.0t did not sound the same as the target tube amplifier
even though the original hand-tweaked prototype did appear to have done
so, driving the midrange/treble panels of Infinity RS1B loudspeakers
(but not the woofers).

Upon being informed of these results, Bob Carver flew out to Santa Fe
at the end of February 1987 and carried out a set of null tests that
essentially agreed with my measurements, a fact confirmed by Mr. Carver
in a letter published in Stereophile: "...the null product between the
amplifiers...is 28dB, not 70dB! Your tests showed this, and so did mine,"
he wrote, and went on to say that "Since my own research has shown that
the threshold for detecting differences is about 40dB, I knew there was
enough variance between the amps to be detectable by a careful listener."

Mr. Carver then asked us to participate in a series of single-blind
tests comparing the production M-1.0t with the tube amplifier, with
himself acting as the test operator. We agreed, and Gordon Holt proved
to be able to distinguish the two amplifiers by ear alone in these tests.
"J. Gordon was able to hear the difference between my M-1.0t and the
reference amp in a blind listening test," Mr. Carver wrote in the letter
referred to above, continuing, "An earlier test had shown Gordon's
hearing to be flawless, like that of a wee lad." (All of Stereophile's
tests and auditioning of the production M-1.0t, the events that occurred
during the February '87 weekend Mr. Carver spent in Santa Fe, and Mr.
Carver's subsequent letter were reported on and published in the
April/May 1987 issue of Stereophile, Vol.10 No.3.)

Bob Carver subsequently reinforced the idea that it is hard to
consistently manufacture amplifiers which differ from a target design
by less than 1% or so: ie, cannot produce a null deeper than -40dB. In
an interview with me that appeared in the February 1990 issue of
Stereophile (Vol.13 No.2), he said, "A 70dB null is a very steep null.
It's really down to the roots of the universe and things like that.
70dB nulls aren't possible to achieve in production." I asked him,
therefore, what his target null was between the Silver Seven-t and the
original Silver Seven amplifier, two more-recent Carver amplifiers that
are stated in Carver's literature to sound very similar. "About 36dB,"
was his reply. "When you play music, the null will typically hover
around the 36dB area." (Bob Carver subsequently confirmed that I had
correctly reported his words in this interview.)

Far from recanting their original findings, Stereophile's staff
reported what they had measured and what they had heard under
carefully controlled conditions regarding the performance of the
production Carver M-1.0t amplifier, just as they do with any component
being reviewed. The fact that those findings were at odds with their
earlier experience can be explained by the fact that the amplifiers
auditioned were _not_ identical: the 1985 tests involved a
hand-tweaked prototype based on a Carver M-1.5 chassis; the 1987 tests
involved a sample M-1.0t taken from the production line and it is my
understanding that there was no production hand-nulling.

In addition, the 1985 tests only involved the midrange and treble towers
of Infinity RS-1b loudspeakers, the woofers being driven by a different
amplifier. For the 1987 review and subsequent tests, the Carver and
C-J amps were used full-range.

While it would indeed appear possible for Mr. Carver on an individual,
hand-tweaked basis to achieve a null of -70dB between two entirely
different amplifiers (meaning that it would be unlikely for them to
sound different), routinely repeating this feat in production is not
possible (something implied by Mr. Carver's own statements). And if it
is not possible, then it is likely that such amplifiers could well sound
different from one another, just as Stereophile reported.

Regarding subsequent events, we published reviews of the Carver TFM-25
and Silver Seven-t amplifiers in 1989 and the Carver Amazing loudspeaker
in 1990. Also in 1990, an edited version of a Stereophile review
appeared in Carver literature and in an advertisement that appeared in
the May/June 1990 issue of The Absolute Sound. We took legal action to
prevent this from happening, as we do in any instance of infringement
of our copyright. Bob Carver responded by filing a countersuit for
defamation, trade disparagement, product disparagement, and
interference with a business expectancy, against Stereophile Inc.,
against Larry Archibald, against Robert Harley, and against myself,
claiming "in excess of $50,000" in personal damages for Mr. Carver
and "in excess of $3 million" in lost sales for the Carver Corporation.
(This sum was later raised to $7 million on the appearance of a Follow-Up
review of the Carver Silver Seven-t amplifier in our October 1990 issue.)

Carver's countersuit included some 42 individual counts of purported
defamation dating back to J. Gordon Holt's reporting of the original
"Carver Challenge" in the middle of 1985. J. Gordon Holt was _not_
named in the countersuit, however, and neither were Dick Olsher and Sam
Tellig who had also written reviews of Carver products. In effect, I
was being sued for things that had been published in Stereophile a year
before I joined.

What we had expected to be a conventional copyright case had turned
into something much greater in its scope and financial consequences.
Neither case ever went to court, however, the two sides agreeing to
an arbitrated settlement in late December 1990, with the help of a
court-appointed mediator. Agreement was reached for a settlement with
prejudice (meaning that none of the claims and counterclaims can be
revived by either side) that took effect on 1/1/91.

The settlement agreement was made a public document. The main points
were as follows:

a) Neither side admitted any liability.

b) Neither side paid any money to the other.

c) Carver recalled all remaining copies of the unauthorized reprint for
destruction.

d) With the exception of third-party advertisements, Stereophile agreed
not to mention in print Carver the man, Carver the company, or Carver
products for a cooling-off period of three years starting 1/1/91 or
until the principals involved were no longer with their respective
companies.

That's it. Stereophile returned to giving review coverage to Carver
products in the usual manner after 1/1/94.

John Atkinson
Editor, Stereophile
  #88   Report Post  
S888Wheel
 
Posts: n/a
Default Note to the Idiot

Scott said


Ok, then lets not impose such a level of rigor.
Nothing else reported in Stereophile has to meet this criteria,
why impose it on DBTs?



I said


For the sake of improving protocols to improve reliability of subjective
reports.


Scott said

I don't agree. Sufficient DBT protocols exist. Stereophile has used them.
No "improvement" in DBT protocols is required. Applying existing DBT
protocols
would be sufficient to confirm or deny audible differences exist.


What exactly are those protocols? I would say the protocols used by those
advocating the use of DBTs in other publications were lacking and they were
being pawned off as definitive proof of universal truths. That would be
counterproductive IMO. I don't know what protocols Stereophile used in their
DBTs but I am willing to bet that if they were scientifically valid they were
far too burdensome to be done before every subjective review published in
Stereophile. Maybe John Atkinson could comment on that. I am speculating.

I said


When it comes to such protocols I think quality is more important than
quantity.


Scott said


Unfortunately this opens a major loophole. Cherry picking the
units to be tested such that audible differences are assured.


I don't follow.

Scott said

I would like to see DBTs become part of the standard review
protocol for select categories of equipment.


Isn't that cherry picking?

Scott said


Most reviewers like to compare equipment under review to
their personal reference systems anyway.


That in and of itself presents a problem. One cannot always draw universal
truths about any component based on use in one system. I think multiple
reviewers for any one piece of equipment would be a very good idea but I
suspect it isn't practical or financially feasible.

I said


If DBTs aren't done well they will not improve the state of reviews
published by Stereophile.



Scott said


We differ on how well done they need to be to add
credibility to audible difference claims.


I agree. So in light of our disagreement imagine the difficulty of enforcing a
standard of rigor and protocol on a group of reviewers who mostly review as a
hobby. Imagine the expense involved in initiating such protocols. I think even
if we don't agree on the standard we probably agree that there should be a
standard.

I said


The source of your beef with Stereophile is that it
lacks reliability now is it not?


Scott said


Not exactly. I find the subjective perceptions
portion of some reviews to lack credibility.


I take them as anecdotes just as I take opinions of other audiophiles as
anecdotes. Their level of unreliability does not bother me because I assume
they are unreliable as anecdotes tend to be.
  #89   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"S888Wheel" wrote in message
...
I said


You asked for my definition of well done DBTs


Scott said


But I don't agree that is necessary for Stereophile to
implement.


Fair enough but we disagree. I believe that if Stereophile were to go

through
the trouble to enforce a standard DBT for all reviewers to use in order

to
improve reliability of reviews it only makes sense to me to have the

protocols
adhere to a scientifically acceptable standard. Anything less looks like

a
large demand made on hobbyist reviewers with little assurance that it

will
improve the reliability of such reviews.

I said


I don't think it would require independent witnesses but it would

require
Stereophile to establish their own formal peer review group.


Scott said


Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.


Then how could the readers be sure valid tests are being conducted and
reported?


Same way they accept the measurements.
Faith in integrity.


I said


But we are talking
about Stereophile dealing with the current level of uncertainty that

now
exists
with the current protocols.


Scott said


? Stereophile has conducted elaborate DBTs with more rigorous
protocols than I call for.


Yes. So they know how difficult it would be to make this a requirement

before
any subjective review may go forward.


From what I recall, they were somewhat manually conducted
and required more than one person.
This does not have to be the case if the right tools
are developed.


Scott said

Let them establish a protocol
that is workable for their reviewers to conduct.
Publish it for comment, should be very interesting.


I'm not clear what you want now. You want Stereophile to establish a

protocol
but you want the protocol to be easily executed and it need not meet the

rigors
demanded by science?


Please define the "rigors demanded by science" so I know what you mean.
I keep thinking of tests conducted by pharmaceutical companies against
FDA requirements.


What would such protocols look like that they would
substantially improve the reliability of subjective reviews and yet not

meet
the standards demanded by science and be easy for all reviewers to do?


A Laptop that controlled a ABX switch device which captured results
and downloaded them to a secure website where they were statistically
analyzed. The reviewer would have the capability to conduct the trials
without assistance and only be required to perform the connections.
Listen (which he is doing anyway), and choose.
Science would not be satisfied as no one independently confirmed the
connections and witnessed the procedure.
A fair amount of faith in the integrity of the reviewer would be granted.
The reviewer could conduct as many trials as they wish over the course
of the review.
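The laptop-driven procedure Scott describes reduces to a simple trial loop: hide a random assignment, record the choice, tally the score. A toy sketch under those assumptions (the `listener` callback stands in for the human subject and is purely illustrative, not any real ABX product):

```python
import random

def run_abx_session(trials: int, listener, seed: int = 1) -> int:
    """Run `trials` ABX trials. Each trial, X is randomly A or B;
    `listener` is called with the hidden label (a real rig would play
    audio instead) and returns its guess. Returns the correct count."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x = rng.choice("AB")        # hidden assignment for this trial
        if listener(x) == x:
            correct += 1
    return correct

# A listener who genuinely hears the difference scores 16/16;
# a coin-flipper hovers around 8/16.
print(run_abx_session(16, lambda x: x))                      # 16
print(run_abx_session(16, lambda x: random.choice("AB")))
```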



I said

I think to do standard DBTs right would be a major
pain in the ass for them. Even the magazines which make a big issue

out
of such
tests don't often actually do such tests and when they do they often

do a
crap
job of it.


Scott said


If they have a tool which controls switching and tabulates results,
I really don't see what the problem is.


Would you want someone to do this without any training?


The reviewer has plenty of time for training. Aren't they
doing this as a normal part of the review process? Familiarizing
themselves with the sonic characteristics of the equipment and
comparing them to reference pieces?

Would you like this to
happen with no calibration of test sensitivity or listener sensitivity?


No. We are specifically trying to validate the reviewers
perception that it sounds different. Take the Burmester review
where he said it sounded better in balanced mode than SE.
Prove it sounded different. The tests didn't support that assertion.

Would
you want such testing to take place absent verification with a separate

set of
tests conducted by another tester?


Yes, I trust their integrity not to cheat. I also accept that
requiring verification is a substantial cost barrier.

Scott said


What needs to happen is a level of automation is provided
to match the skill level of the tester. That wouldn't be that

difficult.

I'm sure it would help. I'm not sure it wouldn't still be challenging for
hobbyists to get reliable results.


Define reliable? Subjective listening tests on one subject
can only apply to that subject. Still, in the case of an equipment
reviewer that one subject is of interest to a large number
of people.


Scott said

A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.


I said


Is it ever easy? Look what Howard did with such a tool.



Scott said


I don't believe Howard's ABX box provided the level of automation
I am talking about.


I think the box was fine. I think the problem was Howard. Were it not for

some
stupid mistakes he made in math we may have never known just how bad his

tests
really were. Let's not forget they were published.


The math is trivial. We can set up a spreadsheet. In fact I've seen a
couple on the Web. Let John do the math and include the outcome in his
measurements section.
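The spreadsheet math Scott calls trivial is a one-tailed binomial tail probability; a minimal sketch (the function name is mine):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial, one-tailed)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(12, 16), 3))  # 0.038 -- under the usual 0.05 criterion
```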

ScottW


  #90   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"John Atkinson" wrote in message
om...
"Powell" wrote in message
...
The Carver Challenge. Bob Carver made statements
that he could replicate any amplifier design using a
technique called "transfer function." Stereophile
took up his challenge wanting Bob to replicate the
sound of the Conrad-Johnson Premier 5 mono-blocks.
I think that over a two-day period he accomplished
that task to Holt's and Archibald's satisfaction. From
there the Carver M1.0t was born...

On the dark side to this tail, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


Hi Powell, I dug into my archives as promised. Here is a blow-by-blow
account of what happened:

Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
Gordon Holt, and its publisher, Larry Archibald, that he could match
the sound of any amplifier they chose with one of his inexpensive
"Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
Santa Fe to try to modify his amplifier so that it matched the sound
of an expensive tube design. (This was indeed a Conrad-Johnson
monoblock, though that was not reported at the time. I am not aware of
the reasons why not as I didn't join Stereophile's staff until May 1986.)

Fascinating tale.
I wonder if two production versions of the Conrad Johnson
monoblocks would null better than 40 dB?

ScottW




  #91   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



Scottie CellPhone is roaming again.

Upon what basis have you formed your opinion?


Have you tried combining your dedication to beating on cellphones with
your futile obsession over tests that will never come to pass? You
could try "testing" stereos with the earphone, on speaker, and in a
three-way, and then report to the group about how much of an
audiophile you are.



  #92   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"S888Wheel" wrote in message
...
Scott said


Ok, then lets not impose such a level of rigor.
Nothing else reported in Stereophile has to meet this criteria,
why impose it on DBTs?


I said


For the sake of improving protocols to improve reliability of

subjective
reports.


Scott said

I don't agree. Sufficient DBT protocols exist. Stereophile has used

them.
No "improvement" in DBT protocols is required. Applying existing DBT
protocols
would be sufficient to confirm or deny audible differences exist.


What exactly are those protocols?


ITU-R BS.1116 would be a good place to start.

I would say the protocols used by those
advocating the use of DBTs in other publications were lacking and they

were
being pawned off as definitive proof of universal truths.


I think there have been a couple of blind tests documented for AES

Here's a paper that discusses two-tailed analysis (better or worse than
chance):

http://www.aes.org/journal/toc/oct96.html

That would be
counterproductive IMO. I don't know what protocols Stereophile used in

their
DBTs but I am willing to bet that if they were scientifically valid they

were

My only exception to "scientific validity" is allowing the reviewer to
conduct the test for himself alone, unmonitored.
If he cheats, he cheats. If not the results will withstand scrutiny.

far too burdensome to be done before every subjective review published in
Stereophile. Maybe John Atkinson could comment on that. I am speculating.

I said


When it comes to such protocols I think quality is more important than
quantity.


Scott said


Unfortunately this opens a major loophole. Cherry picking the
units to be tested such that audible differences are assured.


I don't follow.


I've seen a few tests with positive results where the amps selected
were substantially different.

For example

http://www.stereophile.com/reference/587/index.html

I've seen people point to this test as some kind of revelation
that amps aren't amps.


Scott said

I would like to see DBTs become part of the standard review
protocol for select categories of equipment.


Isn't that cherry picking?


Faced with reality that some categories, speakers for example
may be too difficult we are forced to allow some exceptions.


Scott said


Most reviewers like to compare equipment under review to
their personal reference systems anyway.


That in and of itself presents a problem. One cannot always draw

universal
truths about any component based on use in one system. I think multiple
reviewers for any one piece of equipment would be a very good idea but I
suspect it isn't practical or financially feasible.

I said


If DBTs aren't done well they will not improve the state of reviews
published by Stereophile.



Scott said


We differ on how well done they need to be to add
credibility to audible difference claims.


I agree. So in light of our disagreement imagine the difficulty of

enforcing a
standard of rigor and protocol on a group of reviewers who mostly review

as a
hobby. Imagine the expense involved in initiating such protocols. I think

even
if we don't agree on the standard we probably agree that there should be

a
standard.


I think it already exists.

ScottW



  #93   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"George M. Middius" wrote in message
...


Scottie CellPhone is roaming again.

Upon what basis have you formed your opinion?


Have you tried combining your dedication to beating on cellphones with
your futile obsession over tests that will never come to pass? You
could try "testing" stereos with the earphone, on speaker, and in a
three-way, and then report to the group about how much of an
audiophile you are.


Poor George, the man (or boy or ?) who can't tell a futile obsession
from a mild interest.

Anyone notice that as the audio content of the group rises,
George appears to become almost frantic in his attempts
to derail discussion? Calm down George, I'm sure
the group will return to the normalcy you seek in short order.

ScottW


  #94   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Powell" wrote in message ...
"John Atkinson" wrote
I will dig up the story from my archives and post it to r.a.o.


"from my archives "...


Yes. As many of the questions I am asked return on a regular basis, I
keep a file of what I have written on the subjects. Do you have a problem
with that term?

Please do and post TAS's version, too.


"TAS"'s version? As no-one at TAS at that time was involved in the Carver
Challenge -- Round 1 in 1985 involved Bob Carver, Larry Archibald, and J.
Gordon Holt; Round 2 in 1987 involved those 3 people plus myself -- I
don't see what anyone at TAS could know about it.

I take it you have no problem with the other facts stated in the post.


I refer you to my previous posting for a complete discussion of what
happened. I am assuming your knowledge of the case is based on some of the
reports published in magazines other than Stereophile. Subsequent to the
settling of the lawsuit, a number of stories appeared in some audio
magazines, based on interviews with Bob Carver. None of the journalists
involved had spoken with Larry Archibald, Gordon Holt, or myself. At least
one even appeared to have a direct financial arrangement with Mr. Carver.
Both of these facts make their reporting suspect, in my opinion.

John Atkinson
Editor, Stereophile
  #95   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Arny Krueger" wrote in message
...
http://www.google.com/groups?selm=2k...1.prod.aol.net

"The lawsuit was a separate issue. Carver Corporation charging that
Stereophile had engaged in a campaign to discredit Carver or some such.
In a settlement, they agreed not to mention Carver in their editorial
pages, although the company was free to continue to advertise in the
magazine."


This is not correct. Here are the relevant paragraphs from the settlement,
which was made a public document in order to clarify the affair:

a) Neither side admitted any liability.

b) Neither side paid any money to the other.

c) Carver recalled all remaining copies of the unauthorized reprint for
destruction.

d) With the exception of _third-party advertisements_ [my underlining],
Stereophile agreed not to mention in print Carver the man, Carver the
company, or Carver products for a cooling-off period of three years
starting 1/1/91 or until the principals involved were no longer with
their respective companies.

John Atkinson
Editor, Stereophile


  #96   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"John Atkinson" wrote in message
om

The degree to which the two amplifiers matched was confirmed using a
null test. At first, however, even though the measured null was 35 dB
down from the amplifiers' output levels (meaning that any difference
was at the 1.75% level), JGH and LA were able to distinguish the
Carver from the target amplifier by ear.


This statement is self-congratulatory ("even though"), likely to be false
(no evidence that "able to distinguish" involved a blind test), and
ludicrous (the 1.75% difference can include nearly 2% IM, when the threshold
of hearing for IM can be as little as 0.1%).

It was only when Bob Carver
lowered the level of the null between the two amplifiers to -70 dB --
a 0.03% difference -- that the listeners agreed that the Carver and
the reference amplifier were sonically indistinguishable. (This entire
series of tests was reported on in the October 1985 issue of
Stereophile, Vol.8 No.6.)


This statement is still likely to be false (no evidence that "able to
distinguish" involved a blind test).


Neither Gordon Holt nor myself had further access to the original
prototype amplifier that had featured in the 1985 tests. Some 18
months later, however, after I joined the magazine, Stereophile was
sent a production version of the Carver M-1.0t. This was an amplifier
that we had understood Bob Carver intended to sound and measure
identically to the prototype that had featured in the 1985 listening
tests. Because of this sonic "cloning," the production M-1.0t was
advertised by Carver as sounding identical to the tube amplifier that
had featured in the 1985 Stereophile tests: "Compare the new M-1.0t
against any and all competition. Including the very expensive
amplifiers that have been deemed the M-1.0t's sonic equivalent,"
stated Carver Corporation advertisements in Audio (October 1986),
Stereo Review (February 1987), and in many other issues of those
magazines.


AFAIK the most visible of Carver's mods to the M-1.0t was the addition of a
switchable resistor in series with the output terminals. More evidence that
the most audible difference between SS amps and tubed amps is their output
impedance, not all the far-more-subtle nonlinear distortions that tube
bigots tend to obsess over.

In addition, carrying out a null test between the production M-1.0t
amplifier sent to Stereophile by Carver and the reference tube
amplifier revealed that, at best, the two amps would only null to
-40 dB at 2 kHz, a 1% difference, diminishing to -20 dB below 100 Hz and
above 15 kHz, a 10% difference.


These null figures were not significantly altered by changing the tube
amplifier's bias or by varying the line voltage with a Variac.


It's pretty foolish to try to make substantial sonic improvements by varying
line voltage away from the optimum that the amplifier was designed for. The
same bad logic gives us some of the heaviest grade of audio snake oil -
power line conditioners and regenerators.


Upon being informed of these results, Bob Carver flew out to Santa Fe
at the end of February 1987 and carried out a set of null tests that
essentially agreed with my measurements, a fact confirmed by Mr.
Carver in a letter published in Stereophile: "...the null product
between the amplifiers...is 28 dB, not 70 dB! Your tests showed this,
and so did mine," he wrote, and went on to say that "Since my own
research has shown that the threshold for detecting differences is
about 40 dB, I knew there was enough variance between the amps to be
detectable by a careful listener."


Using PCABX technology, I've been able to improve the threshold for
detecting differences from 40 dB to nearly 60 dB.


Regarding subsequent events, we published reviews of the Carver TFM-25
and Silver Seven-t amplifiers in 1989 and the Carver Amazing
loudspeaker in 1990. Also in 1990, an edited version of a Stereophile
review appeared in Carver literature and in an advertisement that
appeared in the May/June 1990 issue of The Absolute Sound. We took
legal action to prevent this from happening, as we do in any instance
of infringement of our copyright. Bob Carver responded by filing a
countersuit for defamation, trade disparagement, product
disparagement, and interference with a business expectancy, against
Stereophile Inc., against Larry Archibald, against Robert Harley, and
against myself, claiming "in excess of $50,000" in personal damages
for Mr. Carver and "in excess of $3 million" in lost sales for the
Carver Corporation. (This sum was later raised to $7 million on the
appearance of a Follow-Up review of the Carver Silver Seven-t
amplifier in our October 1990 issue.)


Remember this is the same Bob Carver that was using legal threats to extract
(see, I didn't use the other ex-word that was in my mind) money from
subwoofer manufacturers too economically weak to defend themselves from his
IMO illegal and patently ridiculous claims.


Carver's countersuit included some 42 individual counts of purported
defamation dating back to J. Gordon Holt's reporting of the original
"Carver Challenge" in the middle of 1985. J. Gordon Holt was _not_
named in the countersuit, however, and neither were Dick Olsher and
Sam Tellig who had also written reviews of Carver products. In
effect, I was being sued for things had had been published in
Stereophile a year before I joined.


Atkinson took this way too personally. Perhaps in retrospect, he might learn
from this and recent events.



  #97   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:8R7Hb.41748$m83.22687@fed1read01

Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
Gordon Holt, and its publisher, Larry Archibald, that he could match
the sound of any amplifier they chose with one of his inexpensive
"Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
Santa Fe to try to modify his amplifier so that it matched the sound
of an expensive tube design. (This was indeed a Conrad-Johnson
monoblock, though that was not reported at the time. I am not aware
of the reasons why not as I didn't join Stereophile's staff until
May 1986.)


Fascinating tale.
I wonder if two production versions of the Conrad Johnson
monoblocks would null better than 40 dB?


I'd expect two well-matched, recently tubed samples to null to at least
60 dB, even with loudspeaker-like loads.



  #98   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
newsa2Hb.41709$m83.29230@fed1read01

Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.


I'd be just fine with Stereophile doing a few comparisons of SS amps that
have similar ratings and are reasonably well made, but that Stereophile
rates vastly differently on their RCL. The results are easy to predict.
Seeing it done would be fun just in terms of watching Atkinson try to worm
his way out of the obvious logical conclusions.

But we are talking
about Stereophile dealing with the current level of uncertainty that
now exists with the current protocols.


The uncertainty mainly comes from the fact that DBTs tend to be science, and
unlike prejudice science bows to no man.


? Stereophile has conducted elaborate DBTs with more rigorous
protocols than I call for.


The protocols were twisted to favor false positives within the context of
apparently-blind tests. There's other ways to bias a test than just letting
it be sighted.

Let them establish a protocol
that is workable for their reviewers to conduct.


As I said before, and Atkinson has recently sloughed off denying, it's probable
that many of the Stereophile reviewers don't even do proper level matching.
Level-matching takes some technical skills and some fairly good equipment.
I'm not talking multi-deca-kilobuck analyzers, I'm talking hand-held Fluke
meters that cost less than $500. I don't think that the Stereophile
reviewers are all that interested in this kind of stuff. They tend to be
poets, not test bench street fighters. Therefore, they lack the motivation,
expertise, and experience it takes to do this in a professional, timely sort
of way.
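For what it's worth, checking level matching with a handheld meter is just a dB comparison of two RMS voltage readings; a small sketch (the 0.1 dB target is the figure commonly cited in blind-test literature, not something stated in this thread):

```python
from math import log10

def level_mismatch_db(v_a: float, v_b: float) -> float:
    """Level mismatch in dB between two RMS voltage readings."""
    return 20.0 * log10(v_a / v_b)

# Matching within 0.1 dB means the two readings agree within about 1.2%:
print(round(level_mismatch_db(2.000, 1.977), 2))  # 0.1
```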

Back in the day of the ABX company, anybody who had a $kilobuck could buy
some really-pretty-good DBT switching equipment for under $1k. But those
days are gone. I think that an ABX test set would cost over $5K if someone
trotted out the blueprints, updated them to cover new technology, and
started turning the production line crank again.


I think to do standard DBTs right would be a major
pain in the ass for them.


Sound and Vision does some special DBT studies at least once a year.

Even the magazines which make a big issue
out of such tests don't often actually do such tests and when they
do they often do a crap job of it.


I only know about S&V's tests, and while I might do things a bit different,
they don't slough off on the basics.

If they have a tool which controls switching and tabulates
results, I really don't see what the problem is.


Most of the blind tests that S&V have done lately are the kind of test that
are exact when done with PCABX-type software. I believe that in one case
they used a PCABX-like tool they got from Microsoft, and in another they
used something that they got some other undisclosed way. There are currently
eight (8) PCABX software products downloadable on the web. I'm sure that any
of them can be used effectively.

What needs to happen is a level of automation is provided
to match the skill level of the tester. That wouldn't be that
difficult.


It's been done at least 10 times by 10 different people or groups.

A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.


Is it ever easy? Look what Howard did with such a tool.


It wasn't nearly as bad as some would like to make out. Note all the ranting
and raving on RAO about the evils of my PCABX tool. More than 20,000 copies
have been downloaded and used with nearly vanishing amounts of negative
comment in the real world.

I don't believe Howard's ABX box provided the level of automation
I am talking about.


All of the PCABX-type tools tabulate the results, some do the statistical
analysis automatically at the end of the test. Can't get much easier than
that!

Of course it's very fashionable to ignore the vast resources and tremendous
numbers of DBTs that are going on in the real world outside RAO and
Stereophile. In fact both RAO and Stereophile are intellectual deserts when
it comes to DBTs and DBT resources. Middius and Atkinson like it that way!



  #99   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"S888Wheel" wrote in message


What exactly are those protocols?


Shows how ignorant you are, Mr. Hi-IQ.

I would say the protocols used by
those advocating the use of DBTs in other publications were lacking
and they were being pawned off as definitive proof of universal
truths.


Shows how ignorant you are, Mr. Hi-IQ.

That would be counterproductive IMO.


You're so ignorant that you don't have a relevant opinion, Mr. Hi-IQ.

I don't know what
protocols Stereophile used in their DBTs but I am willing to bet that
if they were scientifically valid they were far too burdensome to be
done before every subjective review published in Stereophile.


By the time Stereophile started publishing articles about their own DBTs the
basics were exceedingly well-known. Atkinson's refinements to the tests
mainly existed to hide some built-in biases towards positive results.
Backing out these gratuitous complexifications took additional statistical
work, but this work was done and showed that the results were the same-old,
same-old random guessing.

By PCABX standards, the actual listening sessions that Atkinson did were
crude and biased towards false negatives. So you have an ironic situation
where Atkinson structured the test for false positives, but the listening
sessions themselves were biased towards false negatives. I suspect there's
some chance that the PCABX version of a comparison of the actual components
he used would have a mildly positive outcome.

Maybe
John Atkinson could comment on that. I am speculating.


Ignorantly talking out your butt would be more like it.

I said


When it comes to such protocols I think quality is more important
than quantity.


Scott said



Unfortunately this opens a major loophole. Cherry picking the
units to be tested such that audible differences are assured.


I don't follow.

Scott said

I would like to see DBTs become part of the standard review
protocol for select categories of equipment.


Isn't that cherry picking?

Scott said


Most reviewers like to compare equipment under review to
their personal reference systems anyway.


That in and of itself presents a problem. One cannot always draw
universal truths about any component based on use in one system. I
think multiple reviewers for any one piece of equipment would be a
very good idea but I suspect it isn't practical or financially
feasible.


It's very easy using the PCABX approach, but PCABX of equipment with analog
I/O requires more pragmatism than many ignorant and semi-ignorant people can
muster. Anybody who thinks that all audio components sound different is a
jillion miles away from the pragmatism that is required if one is to be
comfortable with PCABX tests of equipment with analog I/O. PCABX testing of
equipment with digital I/O and audio software is exact.

I said


If DBTs aren't done well they will not improve the state of reviews
published by Stereophile.


Well dooh!

Scott said


We differ on how well done they need to be to add
credibility to audible difference claims.


I agree. So in light of our disagreement imagine the difficulty of
enforcing a standard of rigor and protocol on a group of reviewers
who mostly review as a hobby. Imagine the expense involved in
initiating such protocols. I think even if we don't agree on the
standard we probably agree that there should be a standard.


There is a fairly detailed and rigorous recommendation from a standards
organization - ITU-R Recommendation BS.1116. It's been around for years.

I said


The source of your beef with Stereophile is that it
lacks reliability now is it not?


Scott said


Not exactly. I find the subjective perceptions
portion of some reviews to lack credibility.


Especially considering all the equipment that is perceived to sound
different, but actually doesn't.

I take them as anecdotes just as I take opinions of other audiophiles
as anecdotes. Their level of unreliability does not bother me because
I assume they are unreliable as anecdotes tend to be.


It's fairly easy to make an anecdote very convincing. Just toss a
level-matched, time-synched, bias-controlled listening test into it. Been
there, done that many times.

When are you bozos going to get off your duffs and stop eating my dust?


  #100   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:T67Hb.41745$m83.31020@fed1read01

I would be interested to see a DBT to support Brian's perception
that, "as the centerpiece of a digital-only system, running balanced
from stem to stern, the Burmester 001 is the best digital front-end
I've ever heard." I'd be interested in seeing if he really could
identify balanced from single ended.


As a rule, one can only audibly distinguish balanced from
unbalanced if the implementation of both or either is horribly flawed, or if
the test environment includes some outside sources of potentially-audible
noise or other interfering signal.




  #101   Report Post  
Arny Krueger
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:xb7Hb.41746$m83.37148@fed1read01
"John Atkinson" wrote in message


Agreed. He is becoming a bit repetitious with this assertion.
I try not to let him be too distracting in these moments.



Why would you do something disingenuous like let the truth get in your way,
Scotty?


  #102   Report Post  
Powell
 
Posts: n/a
Default The Carver Challenge


"John Atkinson" wrote

The Carver Challenge. Bob Carver made statements
that he could replicate any amplifier design using a
technique called "transfer function." Stereophile
took up his challenge wanting Bob to replicate the
sound of the Conrad-Johnson Premier 5 mono-blocks.
I think that over a two-day period he accomplished
that task to Holt's and Archibald's satisfaction. From
there the Carver M1.0t was born...

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.


Hi Powell, I dug into my archives as promised. Here is
a blow-by-blow account of what happened:

Sometime in 1985, Bob Carver challenged Stereophile's
then editor, J. Gordon Holt, and its publisher, Larry Archibald,
that he could match the sound of any amplifier they chose
with one of his inexpensive "Magnetic-Field" power amplifiers.
Mr. Carver subsequently visited Santa Fe to try to modify his
amplifier so that it matched the sound of an expensive tube
design. (This was indeed a Conrad-Johnson monoblock,
though that was not reported at the time. I am not aware of
the reasons why not as I didn't join Stereophile's staff until
May 1986.)

"I am not aware of the reasons why"... this is not
true. Vol.8, No.6, p.33: "What worried us was the
possibility that Carver might come close to
matching the sound of our reference amp that its
designer/manufacturer would be embarrassed,
chagrined, and outraged." "If Carver then managed
to even approximate the sound of that amplifier, its
manufacturer would quite naturally ask "Why us?
Why did you single us out for ridicule? And we
would be hard put to answer without appearing
unfair."


The degree to which the two amplifiers matched was
confirmed using a null test. At first, however, even
though the measured null was 35dB down from the
amplifiers' output levels (meaning that any difference
was at the 1.75% level), JGH and LA were able to
distinguish the Carver from the target amplifier by ear.
It was only when Bob Carver lowered the level of the
null between the two amplifiers to -70dB -- a 0.03%
difference -- that the listeners agreed that the Carver
and the reference amplifier were sonically
indistinguishable. (This entire series of tests was
reported on in the October 1985 issue of
Stereophile, Vol.8 No.6.)
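For readers who want to check the relationship between null depth and percentage difference quoted above: a null N dB below the output level corresponds to a residual of 10^(N/20) of the signal amplitude. A minimal sketch in Python (my illustration, not from the original posts):

```python
def null_depth_to_percent(null_db: float) -> float:
    """Convert a null depth in dB (e.g. -70) to the residual
    difference as a percentage of the amplifiers' output level."""
    return 100.0 * 10.0 ** (null_db / 20.0)

# The figures discussed above:
print(null_depth_to_percent(-35))  # ~1.78% ("the 1.75% level")
print(null_depth_to_percent(-70))  # ~0.03%
print(null_depth_to_percent(-40))  # 1.0%
```

The slight mismatch at -35dB (1.78% rather than the 1.75% quoted) appears to be rounding in the original article.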

Vol.8, No.6, p.42: "It is true that there were no
"controls" here -no double-blind precautions
against prejudices of various kinds. But the lack
of these controls should have, if anything,
influenced the outcome in the other direction. We
wanted Bob to fail. We wanted to hear a difference.
Among other things, it would have reassured us
that our ears really are among the best in the
business, despite "70-dB nulls."

"There were times when we were sure that we had
heard such a difference. But, I repeat, each time
we'd put the other amplifier in, listened to the same
musical passage again, and hear exactly the same
thing. According to the rules of the game, Bob had
won."

Neither Gordon Holt nor myself had further access to the original
prototype amplifier that had featured in the 1985 tests. Some 18 months
later, however, after I joined the magazine, Stereophile was sent a
production version of the Carver M-1.0t. This was an amplifier that we
had understood Bob Carver intended to sound and measure identically to
the prototype that had featured in the 1985 listening tests. Because of
this sonic "cloning," the production M-1.0t was advertised by Carver
as sounding identical to the tube amplifier that had featured in the
1985 Stereophile tests: "Compare the new M-1.0t against any and all
competition. Including the very expensive amplifiers that have been
deemed the M-1.0t's sonic equivalent," stated Carver Corporation
advertisements in Audio (October 1986), Stereo Review (February 1987),
and in many other issues of those magazines.

After careful and independent auditioning of the production M-1.0t,
Gordon and I felt that while the M-1.0t was indeed similar-sounding
to the tube design, it did not sound identical. Certainly it could not
be said that we deemed it the sonic equivalent of the "very expensive
amplifiers." Measuring the outputs of the two amplifiers at the
loudspeaker terminals while each was driving a pair of Celestion
SL600 speakers revealed minor frequency-response differences due to
the amplifiers having different output impedances. (The Carver had a
conventionally low output impedance; in subsequent auditioning, Bob
Carver connected a long length of Radio Shack hook-up cable in series
with its output to try to reduce the sonic differences between the
amplifiers.)

In addition, carrying out a null test between the production M-1.0t
amplifier sent to Stereophile by Carver and the reference tube
amplifier revealed that, at best, the two amps would only null to
-40dB at 2kHz, a 1% difference, diminishing to -20dB below 100Hz and
above 15kHz, a 10% difference.

These null figures were not significantly altered by changing the tube
amplifier's bias or by varying the line voltage with a Variac. We then
borrowed another sample of the M-1.0t from a local dealer and nulled
that against the target amplifier. The result was an even shallower
null. In the original, 1985, tests, JGH and LA had proven to Mr. Carver
that they could identify two amplifiers that had produced a 35dB null
by ear. These 1987 measurements therefore reinforced the idea that the
production M-1.0t did not sound the same as the target tube amplifier
even though the original hand-tweaked prototype did appear to have done
so, driving the midrange/treble panels of Infinity RS1B loudspeakers
(but not the woofers).

Upon being informed of these results, Bob Carver flew out to Santa Fe
at the end of February 1987 and carried out a set of null tests that
essentially agreed with my measurements, a fact confirmed by Mr. Carver
in a letter published in Stereophile: "...the null product between the
amplifiers...is 28dB, not 70dB! Your tests showed this, and so did mine,"
he wrote, and went on to say that "Since my own research has shown that
the threshold for detecting differences is about 40dB, I knew there was
enough variance between the amps to be detectable by a careful listener."

Mr. Carver then asked us to participate in a series of single-blind
tests comparing the production M-1.0t with the tube amplifier, with
himself acting as the test operator. We agreed, and Gordon Holt proved
to be able to distinguish the two amplifiers by ear alone in these tests.
"J. Gordon was able to hear the difference between my M-1.0t and the
reference amp in a blind listening test," Mr. Carver wrote in the letter
referred to above, continuing, "An earlier test had shown Gordon's
hearing to be flawless, like that of a wee lad." (All of Stereophile's
tests and auditioning of the production M-1.0t, the events that occurred
during the February '87 weekend Mr. Carver spent in Santa Fe, and Mr.
Carver's subsequent letter were reported on and published in the
April/May 1987 issue of Stereophile, Vol.10 No.3.)

Bob Carver subsequently reinforced the idea that it is hard to
consistently manufacture amplifiers which differ from a target design
by less than 1% or so: ie, cannot produce a null deeper than -40dB. In
an interview with me that appeared in the February 1990 issue of
Stereophile (Vol.13 No.2), he said, "A 70dB null is a very steep null.
It's really down to the roots of the universe and things like that.
70dB nulls aren't possible to achieve in production." I asked him,
therefore, what his target null was between the Silver Seven-t and the
original Silver Seven amplifier, two more-recent Carver amplifiers that
are stated in Carver's literature to sound very similar. "About 36dB,"
was his reply. "When you play music, the null will typically hover
around the 36dB area." (Bob Carver subsequently confirmed that I had
correctly reported his words in this interview.)

Far from recanting their original findings, Stereophile's staff
reported what they had measured and what they had heard under
carefully controlled conditions regarding the performance of the
production Carver M-1.0t amplifier, just as they do with any component
being reviewed. The fact that those findings were at odds with their
earlier experience can be explained by the fact that the amplifiers
auditioned were _not_ identical: the 1985 tests involved a
hand-tweaked prototype based on a Carver M-1.5 chassis; the 1987 tests
involved a sample M-1.0t taken from the production line and it is my
understanding that there was no production hand-nulling.

In addition, the 1985 tests only involved the midrange and treble towers
of Infinity RS-1b loudspeakers, the woofers being driven by a different
amplifier. For the 1987 review and subsequent tests, the Carver and
C-J amps were used full-range.

While it would indeed appear possible for Mr. Carver on an individual,
hand-tweaked basis to achieve a null of -70dB between two entirely
different amplifiers (meaning that it would be unlikely for them to
sound different), routinely repeating this feat in production is not
possible (something implied by Mr. Carver's own statements). And if it
is not possible, then it is likely that such amplifiers could well sound
different from one another, just as Stereophile reported.

Regarding subsequent events, we published reviews of the Carver TFM-25
and Silver Seven-t amplifiers in 1989 and the Carver Amazing loudspeaker
in 1990. Also in 1990, an edited version of a Stereophile review
appeared in Carver literature and in an advertisement that appeared in
the May/June 1990 issue of The Absolute Sound. We took legal action to
prevent this from happening, as we do in any instance of infringement
of our copyright. Bob Carver responded by filing a countersuit for
defamation, trade disparagement, product disparagement, and
interference with a business expectancy, against Stereophile Inc.,
against Larry Archibald, against Robert Harley, and against myself,
claiming "in excess of $50,000" in personal damages for Mr. Carver
and "in excess of $3 million" in lost sales for the Carver Corporation.
(This sum was later raised to $7 million on the appearance of a Follow-Up
review of the Carver Silver Seven-t amplifier in our October 1990 issue.)

Carver's countersuit included some 42 individual counts
of purported defamation dating back to J. Gordon Holt's
reporting of the original "Carver Challenge" in the middle
of 1985. J. Gordon Holt was _not_named in the countersuit,
however, and neither were Dick Olsher and Sam Tellig who
had also written reviews of Carver products. In effect, I was
being sued for things that had been published in Stereophile
a year before I joined.

By 1991 the situation between Stereophile and Carver
had degenerated beyond your accounting. In an October
1991 letter to Stereophile readers Bob Carver wrote:
"As you probably know, Stereophile has pretty much
heaped abuse upon me, my company and my designs.
In their editorial pages I have been labeled a "neurotic
designer," a "rip-off" and the man who makes
"el-cheapo" products."

"My amplifier design has been portrayed by Stereophile
as not performing as advertised, oscillating and having
bad sound. My speakers have been characterized by
Stereophile as having severe peaks in the frequency
response curves and unsuitable for considerations; to
my dismay, I've found that many buy into what Stereophile
says. I would like to present Carver in a more positive
and balanced light. My amps don't oscillate, and my
speakers do not have frequency response peaks."


What we had expected to be a conventional copyright case had turned
into something much greater in its scope and financial consequences.
Neither case ever went to court, however, the two sides agreeing to
an arbitrated settlement in late December 1990, with the help of a
court-appointed mediator. Agreement was reached for a settlement with
prejudice (meaning that none of the claims and counterclaims can be
revived by either side) that took effect on 1/1/91.

The settlement agreement was made a public document. The main points
were:

a) Neither side admitted any liability.

b) Neither side paid any money to the other.

c) Carver recalled all remaining copies of the unauthorized reprint for
destruction.

d) With the exception of third-party advertisements, Stereophile agreed
not to mention in print Carver the man, Carver the company, or Carver
products for a cooling-off period of three years starting 1/1/91 or
until the principals involved were no longer with their respective
companies.

That's it. Stereophile returned to giving review coverage to Carver
products in the usual manner after 1/1/94.

John Atkinson
Editor, Stereophile


"Powell" wrote

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.

This is simply not true, Mr. Powell. You can retrieve
previous discussions of this subject from Google,
but I will dig up the story from my archives and post
it to r.a.o.

John Atkinson
Editor, Stereophile

"This is simply not true"... You are certainly entitled to
your biased opinion. The (above) empirical results/facts
are as I have portrayed.






  #103   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:8R7Hb.41748$m83.22687@fed1read01...
I wonder if two production versions of the Conrad-Johnson
monoblocks would null better than 40 dB?


My memory is that they did do so. Perhaps more importantly, the null was
maintained across the audio band, instead of reducing at the frequency
extremes, which was the case with the heterogeneous amplifiers.

John Atkinson
Editor, Stereophile
  #104   Report Post  
Powell
 
Posts: n/a
Default The Carver Challenge


"John Atkinson" wrote

d) With the exception of _third-party advertisements_ [my
underlining], Stereophile agreed not to mention in print
Carver the man, Carver the company, or Carver products
for a cooling-off period of three years starting 1/1/91 or
until the principals involved were no longer with their
respective companies.

John Atkinson
Editor, Stereophile

Are you saying Bob Carver's 10/91 letter (below) was
in clear violation of the terms of the settlement
("starting 1/1/91")?

In an October 1991 letter to Stereophile readers Bob
Carver wrote:
"As you probably know, Stereophile has pretty much
heaped abuse upon me, my company and my designs.
In their editorial pages I have been labeled a "neurotic
designer," a "rip-off" and the man who makes
"el-cheapo" products."

"My amplifier design has been portrayed by Stereophile
as not performing as advertised, oscillating and having
bad sound. My speakers have been characterized by
Stereophile as having severe peaks in the frequency
response curves and unsuitable for considerations; to
my dismay, I've found that many buy into what Stereophile
says. I would like to present Carver in a more positive
and balanced light. My amps don't oscillate, and my
speakers do not have frequency response peaks."


  #105   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:T67Hb.41745$m83.31020@fed1read01...
"John Atkinson" wrote in message
you might care to look at the results on the J-Test for the Burmester CD player
in our December issue
(http://www.stereophile.com/digitalso.../1203burmester). It did
extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding this out.


Of course the test is valid for external data. Still interesting
that the reviewer never picked this up.


Because he did not have any external data sources running at sample rates
other than 44.1kHz, he wouldn't have realized there was a problem.

I would be interested to see a DBT to support Brian's perception
that, "as the centerpiece of a digital-only system, running balanced
from stem to stern, the Burmester 001 is the best digital front-end
I've ever heard." I'd be interested in seeing if he really could
identify balanced from single ended.


I don't see why you are skeptical. There are many technical reasons why
balanced and unbalanced connections can differ. Martin Colloms
published an article on this subject back in 1994: see
http://www.stereophile.com/showarchives.cgi?335 .

John Atkinson
Editor, Stereophile


  #106   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"John Atkinson" wrote in message
"ScottW" wrote in message
news:T67Hb.41745$m83.31020@fed1read01...
"John Atkinson" wrote in message
you might care to look at the results on the J-Test for the Burmester
CD player in our December issue
(http://www.stereophile.com/digitalso.../1203burmester). It did
extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding
this out.

Of course the test is valid for external data. Still interesting
that the reviewer never picked this up.


Because he did not have any external data sources running at sample rates
other than 44.1kHz, he wouldn't have realized there was a problem.

I would be interested to see a DBT to support Brian's perception
that, "as the centerpiece of a digital-only system, running balanced
from stem to stern, the Burmester 001 is the best digital front-end
I've ever heard." I'd be interested in seeing if he really could
identify balanced from single ended.


I don't see why you are skeptical.


Because you said "I didn't find any significant differences between
its performance in unbalanced and balanced modes that would
explain either why Brian preferred the latter or found the former
to sound too warm".

There are many technical reasons why
balanced and unbalanced connections can differ.


Are these technical reasons not measurable?


Martin Colloms
published an article on this subject back in 1994: see
http://www.stereophile.com/showarchives.cgi?335 .


And Martin seems to imply that the reason for Brian's
preference for balanced operation lies within his
amplifier rather than the Burmester.

Overall Martin doesn't seem supportive
of the need for balanced operation in home systems.

ScottW


  #107   Report Post  
ScottW
 
Posts: n/a
Default Note to the Idiot


"Arny Krueger" wrote in message
...
"ScottW" wrote in message
news:xb7Hb.41746$m83.37148@fed1read01
"John Atkinson" wrote in message


Agreed. He is becoming a bit repetitious with this assertion.
I try not to let him be too distracting in these moments.



Why would you do something disingenuous like let the truth get in your
way, Scotty?


With regard to your assertion that the J-Test is invalid
BECAUSE the signal is not dithered, you are, quite simply, wrong.

ScottW


  #108   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

In a message, Arny Krueger wrote:
"John Atkinson" wrote in message
. com
What I _am_ objecting to is Arny Krueger's trying to disseminate
something that is not true, which is his statement that tests that
don't use dither are forbidden by an AES standard. For him to keep
repeating this falsehood is dirty pool.


Apparently the meaning of the word the AES uses, namely "investigation"
is very unclear to you, Atkinson.


No Mr. Krueger. It is perfectly clear, particularly as one of the two
tests I perform using undithered signals _is defined by the AES itself_
as an appropriate use of an undithered signal. To recap:

When you wrote that "Stereophile does some really weird measurements,
such as their undithered tests of digital gear," your use of the words
"really weird" was misleading because one of the tests is instanced by
the AES in their published standard on the testing of digital audio
components and the other is used in widely available commercial piece
of test gear.

When you wrote "The AES says don't do it [ie, use undithered test signals],
but John Atkinson appears to be above all authority but the voices that
only he hears" you were wrong a) because the AES standard specifically
refers to one of the tests I use as a legitimate exception to their
recommendation that dithered test signals be used and b) because the
second test signal doesn't need to be dithered because of the peculiar
nature of its relationship to the sample frequency.

Since all of the media that consumers play on their digital equipment
is supposed to be dithered and generally is, there's no justification
for the use of undithered test signals when testing consumer equipment
for the purpose of reporting performance to consumers.


The justification is that used for every test an engineer conducts
using a test signal that is not found in the music played by consumers:
sinewaves, squarewaves, multi-sinewave tones, etc, etc. Which is that by
reducing the variables to the minimum, Mr. Krueger, the performance of
the DUT in one specific area can be characterized. As I have repeatedly
explained, out of the large number of tests I perform on digital
components, all but two use dithered signals. In the two cases where I
use an undithered signal, the first is specifically recommended by the
AES for examining a DAC's bit weights -- a phrase you don't appear to
comprehend, owing to your repeated references to "missing bits" -- and
the second is diagnostic for the effects of word-clock jitter, because
of its total lack of quantizing error.
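The -90.31dBFS figure cited here is not arbitrary: in a 16-bit system it is the level of a tone whose peak amplitude is exactly 1 LSB, so its samples take only the values -1, 0, and +1 and carry no quantizing error. A quick check (my own sketch, assuming a 16-bit full scale of 32768):

```python
import math

FULL_SCALE = 32768  # 16-bit 2's-complement: codes run from -32768 to +32767

# Level, in dBFS, of a sinewave whose peak amplitude is exactly 1 LSB
one_lsb_dbfs = 20.0 * math.log10(1.0 / FULL_SCALE)
print(round(one_lsb_dbfs, 2))  # -90.31
```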

As I have said, that you have kept repeating your criticism of these tests
on many occasions over the past 5 years, despite corrections from me, from
Glenn Zelniker, from Paul Bamborough, from Jim "JJ" Johnston, from Paul
Miller, and even from the late Julian Dunn, suggests you really don't
grasp the fundamentals of digital audio engineering. :-(

John Atkinson
Editor, Stereophile
  #109   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Arny Krueger" wrote in message
...
"John Atkinson" wrote in message
"Arny Krueger" wrote in message
...
No, Mr. Krueger. The story hasn't changed. I was merely pointing out
that Paul Bamborough and Glenn Zelniker, both digital engineers with
enviable reputations, posted agreement with the case I made, and as I
said, joined me in pointing out the flaws in your argument.


Zelniker and Bamborough have both gone on long and loud about their
personal disputes with me and their low personal opinions of me are.
Therefore, they are not unbiased judges of this matter and should be
ignored.


With respect, Mr. Krueger, this is circular logic. You appear to be
arguing that because these engineers have argued with you in the past,
that disagreement disqualifies them from arguing with you in the
future. In other words, you appear to wish for a situation where only
those who agree with you on a subject are allowed to discuss things with
you. It ain't gonna happen, Mr. Krueger, unless you stop misrepresenting
what other people have said or done.

If they were honest men, they would recuse themselves, but of course
they are not honest men.


It really is reprehensible, Mr. Krueger, that when you are unable to offer
technical counter-arguments, you descend into personal attacks. And I
would have thought that your current legal troubles would have taught
you not to slander people on a public newsgroup, Mr. Krueger. :-)

At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews" Of course
these are two very different things...


Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being
viewed."


It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, which was
prepared under the aegis of the AES.


Irrelevant to the issue of subjective relevancy.


That's not the argument you were making, Mr. Krueger, when you wrote
"the AES says 'don't do it.'" You claimed, without any qualification
whatsoever, that the Audio Engineering Society forbade the testing
of digital audio components using undithered test signals. As I have
carefully shown, they did no such thing.

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?


For the purpose of scientific and advanced technical investigation,
not for routine testing for a consumer publication.


Why not? The relevant AES standard does not qualify its recommendation for
the use of undithered test signals in any way whatsoever. The standard
does not say for "the purpose of scientific and advanced technical
investigation, not for routine testing for a consumer publication." In
fact, it doesn't say anything on that subject. And as I am an AES member
and am acquainted with most of the engineers who drafted this standard,
don't you think it strange, Mr. Krueger, that not one of them has mentioned
to me that I _shouldn't_ use the specific undithered test signals given as
an example in the standard to do "routine testing for a consumer
publication"?

And as you are not an AES member, Mr. Krueger, and you are not acquainted
with these engineers, how can you be so certain that you know what they
_meant_ to write about "investigations" using undithered test signals
in the standard? That they didn't want consumer publications doing tests
like this, just people doing "scientific and advanced technical
investigation"?

The examination of "bit weights" is fundamental to good sound from a
digital component, because if each one of the 65,535 integral step
changes in the digital word describing the signal produces a
different-sized change in the reconstructed analog signal, the result
is measurable and audible distortion.


It is an absolute and total falsehood that missing one of the 65,535
integral step changes in the digital words describing the signal
produces a different-sized change in the reconstructed analog signal,
will result in measurable and audible distortion.


Really? An "absolute and total falsehood"? Sounds a bit strong, Mr.
Krueger. Please note, BTW, that I have never mentioned "missing" bits.
That is your projection. I was talking about DAC "bit weights," ie, the
relationship between the sizes of the changes in analog output voltage
produced by each of the bits in an LPCM system. For example, in a 16-bit
DAC, the MSB should produce a change in the analog output voltage 32,768
times larger than that produced by the LSB. If those bit weights are not
correct, the result is, indeed, "measurable and audible distortion."

A few years back, JJ and I had a discussion on r.a.o. about a pathological
DAC we had both experienced. With this DAC, the change from "digital black"
to -1 LSB (which, if you think about the 2's-complement encoding used on a
CD, involves more than one bit actually changing state) produced a change
in the analog output voltage twice that when the data changed from digital
black to +1 LSB. What was worse, the former change was in the wrong
direction, ie, what should have been a negative voltage was actually
positive.

What happens when you feed data representing a low-level 1kHz sinewave to
this DAC? If you measure its output, you get a massive increase in
second-harmonic distortion. So your statement that my claiming there will
be measurable distortion is "an absolute and total falsehood" is
demonstrated to be incorrect.
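To make the failure mode concrete, here is a toy model (my own illustration, not JJ's or Mr. Atkinson's actual measurement) of the pathological DAC described above: the step from digital black to -1 LSB comes out twice as large as the step to +1 LSB, and with the wrong sign, so a +/-1 LSB input waveform produces an output that never goes negative and completes two excursions per input cycle:

```python
LSB = 1.0  # analog output per code step, arbitrary units

def ideal_dac(code: int) -> float:
    """An ideal 2's-complement DAC: output strictly proportional to the code."""
    return code * LSB

def pathological_dac(code: int) -> float:
    """Toy model of the faulty part: code -1 produces an output twice
    the size of code +1, and in the wrong (positive) direction."""
    if code == -1:
        return 2.0 * LSB  # should be -1.0 * LSB
    return code * LSB

# One coarsely sampled cycle of a sinewave quantized to +/-1 LSB:
codes = [0, 1, 1, 0, -1, -1]

ideal = [ideal_dac(c) for c in codes]          # swings -1..+1: one cycle
faulty = [pathological_dac(c) for c in codes]  # swings 0..+2: two peaks per cycle
assert min(faulty) >= 0.0  # the sign flip rectifies the waveform, doubling the pitch
```

Driving this transfer function with a low-level sine and taking an FFT would show the large second-harmonic rise the post describes; the sign-flipped, double-sized step is the mechanism behind it.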

What happens when you listen to the output of this DAC? With a pure tone
and listening through headphones you can actually hear the doubling of
pitch. So your statement that my claiming there will be audible distortion
is "an absolute and total falsehood" is also demonstrated to be incorrect.

As I said in an earlier message, parts as bad as this are largely absent
from modern digital audio components. But that doesn't invalidate the
usefulness or the relevance of the test I perform.

I have snipped the rest of your posting because of the ongoing personal
attacks, Mr. Krueger. And again, please note that you are addressing an
issue I did not mention.

John Atkinson
Editor, Stereophile
  #110   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"Arny Krueger" wrote in message
...
"John Atkinson" wrote in message
om
"Arny Krueger" wrote in message
...
No, Mr. Krueger. The story hasn't changed. I was merely pointing out
that Paul Bamborough and Glenn Zelniker, both digital engineers with
enviable reputations, posted agreement with the case I made, and as I
said, joined me in pointing out the flaws in your argument.


Zelniker and Bamborough have both gone on long and loud about their
personal disputes with me and their low personal opinions of me are.
Therefore, they are not unbiased judges of this matter and should be
ignored.


With respect, Mr. Krueger, this is circular logic. You appear to be
arguing that because these engineers have argued with you in the past,
that disagreement disqualifies them from arguing with you in the
future. In other words, you appear to wish for a situation where only
those who agree with you on a subject are allowed to discuss things with
you. It ain't gonna happen, Mr. Krueger, unless you stop misrepresenting
what other people have said or done.

If they were honest men, they would recuse themselves, but of course
they are not honest men.


It really is reprehensible, Mr. Krueger, that when you are unable to offer
technical counter-arguments, you descend into personal attacks. And I
would have thought that your current legal troubles would have taught
you not to slander people on a public newsgroup, Mr. Krueger. :-)

At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews" Of course
these are two very different things...


Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being
viewed."


It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, that was
prepared under the aegis of the AES.
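(As an aside, the oddly precise -90.31dBFS figure is simply the level of a
16-bit sinewave whose peak amplitude is 1 LSB; a quick check in Python,
purely illustrative:)

```python
import math

# A sine whose peak is 1 LSB in a 16-bit system is 1/2**15 of full
# scale, so it sits at 20*log10(1/32768) dBFS.
print(round(20 * math.log10(1 / 2**15), 2))  # -> -90.31
```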


Irrelevant to the issue of subjective relevancy.


That's not the argument you were making, Mr. Krueger, when you wrote
"the AES says 'don't do it.'" You claimed, without any qualification
whatsoever, that the Audio Engineering Society forbade the testing
of digital audio components using undithered test signals. As I have
carefully shown, they did no such thing.

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?


For the purpose of scientific and advanced technical investigation,
not for routine testing for a consumer publication.


Why not? The relevant AES standard does not qualify its recommendation for
the use of undithered test signals in any way whatsoever. The standard
does not say for "the purpose of scientific and advanced technical
investigation, not for routine testing for a consumer publication." In
fact, it doesn't say anything on that subject. And as I am an AES member
and am acquainted with most of the engineers who drafted this standard,
don't you think it strange, Mr. Krueger, that not one of them has mentioned
to me that I _shouldn't_ use the specific undithered test signals given as
an example in the standard to do "routine testing for a consumer
publication"?

And as you are not an AES member, Mr. Krueger, and you are not acquainted
with these engineers, how can you be so certain that you know what they
_meant_ to write about "investigations" using undithered test signals
in the standard? That they didn't want consumer publications doing tests
like this, just people doing "scientific and advanced technical
investigation"?

The examination of "bit weights" is fundamental to good sound from a
digital component, because if each one of the 65,535 integral step
changes in the digital word describing the signal produces a
different-sized change in the reconstructed analog signal, the result
is measurable and audible distortion.


It is an absolute and total falsehood that missing one of the 65,535
integral step changes in the digital words describing the signal
produces a different-sized change in the reconstructed analog signal,
will result in measurable and audible distortion.


Really? An "absolute and total falsehood"? Sounds a bit strong, Mr.
Krueger. Please note, BTW, that I have never mentioned "missing" bits.
That is your projection. I was talking about DAC "bit weights," ie, the
relationship between the sizes of the changes in analog output voltage
produced by each of the bits in an LPCM system. For example, in a 16-bit
DAC, the MSB should produce a change in the analog output voltage 32,768
times larger than that produced by the LSB. If those bit weights are not
correct, the result is, indeed, "measurable and audible distortion."
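The ideal weighting can be sketched in a few lines of Python (a toy
illustration, not any magazine's actual measurement code):

```python
# Ideal step sizes for a 16-bit linear PCM converter: bit k should
# contribute a change of 2**k LSB-sized units to the analog output.
weights = [2**k for k in range(16)]
msb, lsb = weights[-1], weights[0]
print(msb // lsb)  # -> 32768: the MSB's step is 32,768 times the LSB's
```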

A few years back, JJ and I had a discussion on r.a.o. about a pathological
DAC we had both experienced. With this DAC, the change from "digital black"
to -1 LSB (which, if you think about the 2's-complement encoding used on a
CD, involves more than one bit actually changing state) produced a change
in the analog output voltage twice that when the data changed from digital
black to +1 LSB. What was worse, the former change was in the wrong
direction, ie, what should have been a negative voltage was actually
positive.
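The parenthetical point about 2's complement is easy to verify: going from
digital black (0) to -1 flips all 16 bits at once, which is why this
particular transition is such a stress test of a DAC's bit weights. A
minimal check (the helper name is mine):

```python
def pattern(code, bits=16):
    # 16-bit two's-complement bit pattern of a signed code
    return code & ((1 << bits) - 1)

# XOR the patterns for 0 and -1 and count the differing bits.
print(bin(pattern(0) ^ pattern(-1)).count("1"))  # -> 16: every bit flips
```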

What happens when you feed data representing a low-level 1kHz sinewave to
this DAC? If you measure its output, you get a massive increase in
second-harmonic distortion. So your statement that my claiming there will
be measurable distortion is "an absolute and total falsehood" is
demonstrated to be incorrect.
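A toy simulation of such a converter (my own model of the fault described
above, not the actual part) shows the mechanism: map the code -1 to +2
output units instead of -1, feed in a 1-LSB sinewave, and a large
2nd-harmonic component appears that the input does not have.

```python
import math

def bad_dac(code):
    # Pathological mapping described above: code -1 comes out as +2 units.
    return 2 if code == -1 else code

N, f = 1000, 10  # samples per record, cycles per record
codes = [round(math.sin(2 * math.pi * f * n / N)) for n in range(N)]  # -1/0/+1
out = [bad_dac(c) for c in codes]

def amplitude(signal, k):
    # Normalized magnitude of the k-cycles-per-record DFT bin.
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    return math.hypot(re, im) / N

print(amplitude(codes, 2 * f))  # ~0: the input has no 2nd harmonic
print(amplitude(out, 2 * f))    # large: the faulty mapping creates one
```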

What happens when you listen to the output of this DAC? With a pure tone
and listening through headphones you can actually hear the doubling of
pitch. So your statement that my claiming there will be audible distortion
is "an absolute and total falsehood" is also demonstrated to be incorrect.

As I said in an earlier message, parts as bad as this are largely absent
from modern digital audio components. But that doesn't invalidate the
usefulness or the relevance of the test I perform.

I have snipped the rest of your posting because of the ongoing personal
attacks, Mr. Krueger. And again, please note that you are addressing an
issue I did not mention.

John Atkinson
Editor, Stereophile


  #112   Report Post  
S888Wheel
 
Posts: n/a
Default Note to the Idiot


Scott said


Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.


I said


Then how could the readers be sure valid tests are being conducted and
reported?


Scott said


Same way they accept the measurements.
Faith in integrity.


I think bench tests are a different ball game.

I said


I'm not clear what you want now. You want Stereophile to establish a
protocol but you want the protocol to be easily executed and it need not
meet the rigors demanded by science?


Scott said


Please define the "rigors demanded by science" so I know what you mean.
I keep thinking of tests conducted by pharmaceutical companies against
FDA requirements.


No, I don't mean that. I mean tests that are verified with protocols that
assure a large enough sample, address biases, calibrate sensitivity, eliminate
all but one variable, and are verified independently and reviewed by a group of
editors on staff for Stereophile to act as a sort of peer review group. Is that
a large task? I think so. But I think if the goal is to raise the state of
subjective reviews above that of anecdote nothing less will do.

I said


What would such protocols look like that they would
substantially improve the reliability of subjective reviews and yet not
meet the standards demanded by science and be easy for all reviewers to do?


Scott said


A laptop that controlled an ABX switch device which captured results
and downloaded them to a secure website where they were statistically
analyzed. The reviewer would have the capability to conduct the trials
without assistance and only be required to perform the connections.
Listen (which he is doing anyway), and choose.
Science would not be satisfied as no one independently confirmed the
connections and witnessed the procedure.
A fair amount of faith in the integrity of the reviewer would be granted.
The reviewer could conduct as many trials as they wish over the course
of the review.


That sounds like a reasonable starting place but I think it is only a starting
place.

I said


Would you want someone to do this without any training?



Scott said


The reviewer has plenty of time for training.


Maybe, maybe not. Don't forget the practical reality of reviewers for
Stereophile. Most of them are not doing this for a living.

Scott said

Aren't they
doing this as a normal part of the review process. Familiarizing
themselves with the sonic characteristics of the equipment and
comparing them to reference pieces?


I think so but I think that is different from proper training for ABX DBTs.

I said


Would you like this to
happen with no calibration of test sensitivity or listener sensitivity?


Scott said


No. We are specifically trying to validate the reviewer's
perception that it sounds different. Take the Burmester review
where he said it sounded better in balanced mode than SE.
Prove it sounded different. The tests didn't support that assertion.


We agree on this. I think it has been completely ignored in every DBT I have
ever seen written about in any audio journal. Laughably some who have published
such tests simply asked the subjects if they thought the test was sensitive
enough, as if that is any kind of calibration for sensitivity. It does make for
more work though.

I said


Would you want such testing to take place absent verification with a
separate set of tests conducted by another tester?


Scott said


Yes, I trust their integrity not to cheat. I also accept that
requiring verification is a substantial cost prohibiter.


I am sure it is. I would like to see separate verification though. It's just
more reliable. It also makes cheating (we have seen cheating in Howard's case)
much less likely.

I said


I'm sure it would help. I'm not sure it wouldn't still be challenging for
hobbyists to get reliable results.


Scott said


Define reliable? Subjective listening tests on one subject
can only apply to that subject. Still, in the case of an equipment
reviewer that one subject is of interest to a large number
of people.


I don't need to define reliable. I think we both know what it means. I should
have said substantially more reliable. It is always a matter of degree.

I said


I think the box was fine. I think the problem was Howard. Were it not for
some stupid mistakes he made in math we may have never known just how bad
his tests really were. Let's not forget they were published.



Scott said


The math is trivial.


LOL. Tell that to Howard.

Scott said

We can set up a spreadsheet. In fact I've seen a couple on the Web.
Let John do the math and include the outcome in his measurements section.


I know. Howard's math was fixable. It was his dishonesty that I think was at
issue.
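For what it's worth, the "trivial math" referred to above is a one-sided
binomial test: the probability of scoring at least that many correct
identifications by guessing alone. A few lines of Python (or a spreadsheet)
will do it; the function name here is mine:

```python
from math import comb

def abx_p_value(correct, trials):
    # Chance of getting >= `correct` right in `trials` ABX presentations
    # by pure guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 of 16 correct is commonly taken as significant at the 5% level:
print(round(abx_p_value(12, 16), 4))  # -> 0.0384
```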
  #113   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



S888Wheel said:

Please define the "rigors demanded by science" so I know what you mean.


I mean tests that are verified with protocols that
assure a large enough sample, address biases, calibrate sensitivity, eliminate
all but one variable, and are verified independently and reviewed by a group of
editors on staff for Stereophile to act as a sort of peer review group.


How many angels have you two counted so far? I lost track at around
10,000.

Is that
a large task? I think so. But I think if the goal is to raise the state of
subjective reviews above that of anecdote nothing less will do.


Why do you use the biased word "raise"? A more accurate term would be
"suck the life out of".

Are you sure you didn't apply for membership in the Hive when we
weren't looking?





  #114   Report Post  
S888Wheel
 
Posts: n/a
Default Note to the Idiot

I said


When it comes to such protocols I think quality is more important than
quantity.


Scott said


Unfortunately this opens a major loophole. Cherry picking the
units to be tested such that audible differences are assured.



I said


I don't follow.



Scott said


I've seen a few tests with positive results where the amps selected
were substantially different.


I think we had a little misunderstanding here. I meant the protocols
themselves when I said quality over quantity. I wasn't suggesting that fewer
components be tested.


  #115   Report Post  
S888Wheel
 
Posts: n/a
Default Note to the Idiot

Arny quotes me out of context:


What exactly are those protocols?



Arny said


Shows how ignorant you are, Mr. Hi-IQ.


Shows what a dishonest hypocrite you are Arny. Of course that is well known on
RAO.

I said


I would say the protocols used by
those advocating the use of DBTs in other publications were lacking
and they were being pawned off as definitive proof of universal
truths.



Arny said


Shows how ignorant you are, Mr. Hi-IQ.


Shows your willingness to pawn pseudoscience off as legitimate science to
promote your audio religion, Mr. too chicken **** to take an IQ test.

I said


That would be counterproductive IMO.


Arny said


You're so ignorant that you don't have a relevant opinion, Mr. Hi-IQ.


You are so stupid you don't even realize that if you had left this quote in
its context that it agrees with claims you have posted. You sling so much
bull**** you can't keep track of it. You quote out of context so freely to
suit your agenda you don't even know when you are contradicting your own claims.

I said


I don't know what
protocols Stereophile used in their DBTs but I am willing to bet that
if they were scientifically valid they were far to burdensome to be
done before every subjective review published in Stereophile.




By the time Stereophile started publishing articles about their own DBTs the
basics were exceedingly well-known. Atkinson's refinements to the tests
mainly existed to hide some built-in biases towards positive results.


Prove it.

Arny said


Backing out these gratuitous complexifications took additional statistical
work, but this work was done and showed that the results were the same-old,
same-old random guessing.


Prove it.

Arny said


By PCABX standards, the actual listening sessions that Atkinson did were
crude and biased towards false negatives. So you have an ironic situation
where Atkinson structured the test for false positives, but the listening
sessions themselves were biased towards false negatives. I suspect there's
some chance that the PCABX version of a comparison of the actual components
he used would have a mildly positive outcome.


By PCABX standards you can fool yourself into thinking you are doing DBTs of
amplifiers without the actual amplifiers.

I said


Maybe
John Atkinson could comment on that. I am speculating.



Arny said


Ignorantly talking out your butt would be more like it.


You are such a dick.

Arny said



It's very easy using the PCABX approach, but PCABX of equipment with analog
I/O requires more pragmatism than many ignorant and semi-ignorant people can
muster. Anybody who thinks that all audio components sound different is a
jillion miles away from the pragmatism that is required if one is to be
comfortable with PCABX tests of equipment with analog I/O. PCABX testing of
equipment with digital I/O and audio software is exact.


Can't stop talking out of both sides of your mouth, can you? I guess you
don't realize that you have reached a point where everything you say is now
hypocritical.

I said


If DBTs aren't done well they will not improve the state of reviews
published by Stereophile.


Arny said


Well dooh!


This is the brightest thing you have said so far. Congratulations for
reaching the intellectual level of Homer Simpson.

Arny said



When are you bozos going to get off your duffs and stop eating my dust?


Why would anyone want to eat the dust that gathers on your head while you
waste your life in front of your computer? That is just sick.


  #116   Report Post  
S888Wheel
 
Posts: n/a
Default Note to the Idiot

George said


Why do you use the biased word "raise"? A more accurate term would be
"suck the life out of".


I simply don't think that is the case when things are done well.


Are you sure you didn't apply for membership in the Hive when we
weren't looking?


Never joined any hives that I know of. I think it is unfortunate that some
objectivists have left a bad taste for science in the mouths of some
subjectivists with their misrepresentation of science.
  #117   Report Post  
George M. Middius
 
Posts: n/a
Default Note to the Idiot



S888Wheel said:

Why do you use the biased word "raise"? A more accurate term would be
"suck the life out of".


I simply don't think that is the case when things are done well.


By "things", you mean life-sucking tests. Q.E.D.


Are you sure you didn't apply for membership in the Hive when we
weren't looking?


Never joined any hives that I know of.


It's not "any hives", it's the Hive.


I think it is unfortunate that some
objectivists have left a bad taste for science in the mouths of some
subjectivists with their misrepresentation of science.


That's true of the Krooborg, but the not-insane 'borgs are intent on
igniting a class war over, of all things, high-end audio. You are
right that "tests" are the blunt instrument wielded by the
pseudoscientists. BTW, you might want to get Mr. **** to lay out in
some detail the so-called "scceiennticicf" tests he claims to have
done. From the fragmented descriptions Turdy has dropped here and
there, I have it on good authority that he is beyond clueless when it
comes to designing, or even participating in, a scientifically valid
audio DBT.




  #118   Report Post  
John Atkinson
 
Posts: n/a
Default The Carver Challenge

"Powell" wrote in message
...
"John Atkinson" wrote
This was indeed a Conrad-Johnson monoblock, though that was not
reported at the time. I am not aware of the reasons why not as I
didn't join Stereophile's staff until May 1986.)


"I am not aware of the reasons why"... this is not
true. Vol.8, No.6, p.33: "What worried us was the
possibility that Carver might come close to
matching the sound of our reference amp that its
designer/manufacture would be embarrassed,
chagrined, and outraged." "If Carver then managed
to even approximate the sound of that amplifier, its
manufacture would quite naturally ask "Why us?
Why did you single us out for ridicule? And we
would be hard put to answer without appearing
unfair." -- [J. Gordon Holt]


Thank you for the correction Mr. Powell. At the time I wrote the text
from which you were quoting, around 10 years ago, I didn't have access
to Vol.8 No.6, only to my notes. I now do have that issue but didn't
recheck my narrative when I retrieved it from my archives. Please note
that the text you were quoting was Gordon's, not mine, hence my adding
the correct attribution. I disagree with Gordon's editorial decision not
to make the identity of the target amplifier public, BTW. Had I been
editor at that time, I would have included it in the published text.

The degree to which the two amplifiers matched was
confirmed using a null test. At first, however, even
though the measured null was 35dB down from the
amplifiers' output levels (meaning that any difference
was at the 1.75% level), JGH and LA were able to
distinguish the Carver from the target amplifier by ear.


Please note that my reference for this part of my narrative
was Gordon's text on p.40 of Stereophile, Vol.8 No.6, and that
Bob Carver subsequently wrote that his own research had shown
that a null of 35dB was insufficient to ensure that two amplifiers
sounded identical.
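The arithmetic behind those percentages: a null X dB below the amplifiers'
output level corresponds to a difference signal of 10**(-X/20) times that
level. A quick check (my sketch; the article's 1.75% figure reflects a
slightly different rounding):

```python
def null_to_percent(null_db):
    # Amplitude of the difference signal, as a percentage of the
    # amplifiers' output level, implied by a null `null_db` down.
    return 100 * 10 ** (-null_db / 20)

print(round(null_to_percent(35), 2))  # -> 1.78 (reported as ~1.75%)
print(round(null_to_percent(70), 3))  # -> 0.032 (reported as 0.03%)
```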

It was only when Bob Carver lowered the level of the
null between the two amplifiers to -70dB -- a 0.03%
difference -- that the listeners agreed that the Carver
and the reference amplifier were sonically
indistinguishable. (This entire series of tests was
reported on in the October 1985 issue of
Stereophile, Vol.8 No.6.)


Vol.8 No.6, pa.42. "It is true that there were no
"controls" here - no double-blind precautions
against prejudices of various kinds. But the lack
of these controls should have, if anything,
influenced the outcome in the other direction. We
wanted Bob to fail. We wanted to hear a difference.
Among other things, it would have reassured us
that our ears really are among the best in the
business, despite "70-dB nulls."

"There were times when we were sure that we had
heard such a difference. But, I repeat, each time
we'd put the other amplifier in, listen to the same
musical passage again, and hear exactly the same
thing. According to the rules of the game, Bob had
won." -- [J. Gordon Holt]


Thank you for quoting from this issue of Stereophile, Mr. Powell.
I don't see it contradicts anything that I have written in the
posting to which you were responding.

On the dark side to this [tale], Stereophile was later
prohibited from publishing any reference to Carver after
trying to undo (publish/verbally) the results of the
empirical findings.


This is simply not true, Mr. Powell...


"This is simply not true"... You are certainly entitled to
your biased opinion. The (above) empirical results/facts
are as I have portrayed.


My "simply not true" referred to the two parts of your statement in
your original posting, Mr. Powell:

1) that Stereophile was "later prohibited from publishing any reference
to Carver." As I have described, this was for 3 years only, as part of
the negotiated settlement; and

2) "after trying to undo (publish/verbally) the results of the empirical
findings."

We did not try to undo anything. The results of the first "Carver
Challenge," performed using an Infinity speaker's midrange and tweeter
panels only and a hand-tweaked prototype amplifier, stand today just as
they did in 1985. Neither Gordon, nor Larry, nor myself have said or
written anything contrary to what was published in 1985 about the
first Stereophile Carver Challenge.

The second Stereophile "Carver Challenge" involved two production
samples of a different Carver amplifier, driven full-range. That Gordon
could distinguish this amplifier from the Conrad-Johnson by ear was
confirmed by Bob Carver. My measurements of the achievable null between
this amplifier and the Conrad-Johnson were also repeated and confirmed
by Bob Carver. And as my reporting of these events and others involving
Mr. Carver were subsequently points named in a defamation lawsuit, please
note that the fact that my reporting was accurate was something that was
testified to under oath by both sides in the dispute.

d) With the exception of _third-party advertisements_ [my
underlining], Stereophile agreed not to mention in print
Carver the man, Carver the company, or Carver products
for a cooling-off period of three years starting 1/1/91 or
until the principals involved were no longer with their
respective companies.


In an October 1991 letter to Stereophile readers Bob Carver
wrote: "As you probably know, Stereophile has pretty much
heaped abuse upon me, my company and my designs. In their
editorial pages I have been labeled a "neurotic
designer," a "rip-off" and the man who makes "el-cheapo"
products."

"My amplifier design has been portrayed by Stereophile
as not performing as advertised, oscillating and having
bad sound. My speakers have been characterized by
Stereophile as having severe peaks in the frequency
response curves and unsuitable for considerations; to
my dismay, I've found that many buy into what Stereophile
says. I would like to present Carver in a more positive
and balance light. My amps don't oscillate, and my
speakers do not have frequency response peaks."

Are you saying Bob Carver's 10/91 letter [above] was
in clear violation of the terms of the settlement
("starting 1/1/91")?


Yes, it was in violation, not that it probably matters almost 10 years to
the day after the end of the specified cooling-off period and long after
Bob Carver was forced out of the company that bears his name. (Though he
later was to reacquire it.) But I am not familiar with this letter, Mr.
Powell. Did it really appear in Stereophile, as you imply by saying
the letter was addressed to "Stereophile readers"? If not, where _was_
it published?

And if you have any more questions about these historical events, Mr.
Powell, please don't hesitate to ask.

John Atkinson
Editor, Stereophile
  #119   Report Post  
John Atkinson
 
Posts: n/a
Default The Carver Challenge

I have responded to this question in another posting. I am not sure
why the writer felt the need to ask it twice.

John Atkinson
Editor, Stereophile
  #120   Report Post  
John Atkinson
 
Posts: n/a
Default Note to the Idiot

"ScottW" wrote in message
news:QTkHb.42013$m83.430@fed1read01...
"John Atkinson" wrote in message
om...
"ScottW" wrote in message
news:T67Hb.41745$m83.31020@fed1read01...
I'd be interested in seeing if he really could
identify balanced from single ended.


I don't see why you are skeptical.


Because you said "I didn't find any significant differences between
its performance in unbalanced and balanced modes that would
explain either why Brian preferred the latter or found the former
to sound too warm".


That's correct: nothing technical in the measured performance differed
between the two modes. However, I did not address how successful Brian's
preamp was in rejecting noise, which will differ between the two
modes.

There are many technical reasons why
balanced and unbalanced connections can differ.


Are these technical reasons not measurable?


Of course. But although I have occasionally done it, in general time
pressure prevents me from travelling to my reviewers' homes to measure in
situ.

Martin Colloms
published an article on this subject back in 1994: see
http://www.stereophile.com/showarchives.cgi?335 .


And Martin seems to imply the reason for Brians
preference for balanced operation lies within his
amplifier rather than the Burmester.


I suspect that's right. The amplifiers' rejection of noise and their
common-mode rejection ratios will all have an effect.

Overall Martin doesn't seem supportive
of the need for balanced operation in home systems.


No. I disagree somewhat, as being able to separate the shield ground
from the signal's ground reference in balanced operation can sometimes
be an advantage in getting the lowest noise floor. This may have been a
factor in Brian's auditioning.

But to be honest, I don't see this as a major issue. The would-be owner
of the Burmester can simply audition it in whichever mode sounds best to
him, if they indeed sound different in his system.

John Atkinson
Editor, Stereophile