Blindtest question


Thomas A
July 27th 03, 07:11 PM
Is there any published DBT of amps, CD players or cables where the
number of trials are greater than 500?

If the difference is miniscule, it is likely that many "guesses"
are wrong, and it would require many trials to reveal any subtle
difference?

Thomas

Steven Sullivan
July 27th 03, 11:02 PM
Thomas A > wrote:
> Is there any published DBT of amps, CD players or cables where the
> number of trials are greater than 500?

> If there difference is miniscule there is likely that many "guesses"
> are wrong and would require many trials to reveal any subtle
> difference?

There are published tests where people claimed they could hear the
difference sighted, but when they were 'blinded' they could not.
In this case the argument that 500 trials are needed would seem
to be weak.

However, a real and miniscule difference would certainly be
discerned more reliably if there was specific training to hear it
beforehand.

--
-S.

Thomas A
July 28th 03, 03:46 PM
Steven Sullivan > wrote in message news:<d_XUa.142496$GL4.36308@rwcrnsc53>...
> Thomas A > wrote:
> > Is there any published DBT of amps, CD players or cables where the
> > number of trials are greater than 500?
>
> > If there difference is miniscule there is likely that many "guesses"
> > are wrong and would require many trials to reveal any subtle
> > difference?
>
> There are published tests where people claimed they could hear the
> difference sighted , but when they were 'blinded' they could not.
> In this case the argument that 500 trials are needed would seem
> to be weak.

Yes, that's for sure. But how are scientific tests of just-noticeable
difference set up? A difference, when very small, could introduce more
incorrect answers from the test subjects. Thus I think the question is
interesting.
>
> However, a real and miniscule difference would certainly be
> discerned more reliably if there was specific training to hear it
> beforehand.

Yes, but still, if the difference is real and miniscule it could
introduce incorrect answers even with specific training
beforehand. If it were an all-or-nothing thing, the result
would always be 100% correct (difference) or 50% (no difference).
What if the answers are 60% correct?

normanstrong
July 28th 03, 05:04 PM
"Thomas A" > wrote in message
news:DBUUa.141509$OZ2.27088@rwcrnsc54...
> Is there any published DBT of amps, CD players or cables where the
> number of trials are greater than 500?

I've never seen one. It would be difficult to get a single subject to
do that many trials. So, it would have to be many subjects and they
would have to be isolated to prevent subtle influence from one to the
other.
>
> If there difference is miniscule there is likely that many "guesses"
> are wrong and would require many trials to reveal any subtle
> difference?

(Note that the word is spelled "minuscule.")

Norm Strong

Stewart Pinkerton
July 28th 03, 06:27 PM
On 28 Jul 2003 14:46:01 GMT, (Thomas A)
wrote:

>Steven Sullivan > wrote in message news:<d_XUa.142496$GL4.36308@rwcrnsc53>...
>> Thomas A > wrote:
>> > Is there any published DBT of amps, CD players or cables where the
>> > number of trials are greater than 500?
>>
>> > If there difference is miniscule there is likely that many "guesses"
>> > are wrong and would require many trials to reveal any subtle
>> > difference?
>>
>> There are published tests where people claimed they could hear the
>> difference sighted , but when they were 'blinded' they could not.
>> In this case the argument that 500 trials are needed would seem
>> to be weak.
>
>Yes, that's for sure. But how are scientific tests of just noticable
>difference set up? A difference, when very small, could introduce more
>incorrect answers from the test subjects. Thus I think the question is
>interesting.
>>
>> However, a real and miniscule difference would certainly be
>> discerned more reliably if there was specific training to hear it
>> beforehand.
>
>Yes, but still, if the difference is real and miniscule it could
>introduce incorrect answers even if there is specific training
>beforehand. If there would be an all or nothing thing, then the result
>would always be 100% correct (difference) or 50% (no difference).
>What if the answers are 60% correct?

The real problem is that you can *never* say that the difference is
real, just that there is a very high statistical probability that a
difference was detected. After all, it *is* possible to toss a coin
and get 500 heads in a row, it's just *very* unlikely.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Bob Marcus
July 28th 03, 10:54 PM
(Thomas A) wrote in message >...

> Yes, but still, if the difference is real and miniscule it could
> introduce incorrect answers even if there is specific training
> beforehand. If there would be an all or nothing thing, then the result
> would always be 100% correct (difference) or 50% (no difference).
> What if the answers are 60% correct?

You would still need far fewer than 500 trials to get a statistically
significant result. For 165 trials, 99 correct, which is 60%, would be
statistically significant at the 99% confidence level. If you were
willing to settle for a 95% confidence level, you would need even
fewer trials.
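As a rough check on those figures, a minimal Python sketch (assuming scipy is
available; the helper below is purely illustrative, not from the post) finds the
smallest trial count at which a steady 60% hit rate clears a one-sided binomial
test:

    # Smallest n at which ceil(0.60 * n) correct answers beats pure guessing
    # (p = 0.5) at a given one-sided significance level.
    from math import ceil
    from scipy.stats import binom

    def min_trials(hit_rate, alpha, p_chance=0.5):
        for n in range(5, 1000):
            k = ceil(hit_rate * n)                     # correct answers at this hit rate
            if binom.sf(k - 1, n, p_chance) <= alpha:  # P(X >= k) under guessing
                return n, k
        return None

    print(binom.sf(98, 165, 0.5))   # P(>= 99 correct of 165 by guessing) -- under 0.01
    print(min_trials(0.60, 0.01))   # trials needed at the 99% confidence level
    print(min_trials(0.60, 0.05))   # trials needed at the 95% confidence level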

And remember, even if you don't get a statistically significant
result, you still can't conclude that the difference is inaudible. So
you wouldn't get an incorrect result; you'd get an inconclusive one.

bob

Nousaine
July 29th 03, 05:02 AM
(Thomas A) wrote:

>Is there any published DBT of amps, CD players or cables where the
>number of trials are greater than 500?
>
>If there difference is miniscule there is likely that many "guesses"
>are wrong and would require many trials to reveal any subtle
>difference?
>
>Thomas

With regard to amplifiers, as of May 1990 there had been such tests. In 1978
QUAD published an experiment with 576 trials. In 1980 Smith Peterson and
Jackson published an experiment with 1104 trials; in 1989 Stereophile published
a 3530-trial comparison. In 1986 Clark & Masters published an experiment with
772 trials. All were null.

There's a misconception that blind tests tend to have very small sample sizes.
As of 1990 the 23 published amplifier experiments had a mean of 426 and
a median of 90 trials. If we exclude the 3530-trial experiment the mean becomes
285 trials. The median remains unchanged.

Steven Sullivan
July 29th 03, 06:16 AM
Thomas A > wrote:
> Steven Sullivan > wrote in message news:<d_XUa.142496$GL4.36308@rwcrnsc53>...
>> Thomas A > wrote:
>> > Is there any published DBT of amps, CD players or cables where the
>> > number of trials are greater than 500?
>>
>> > If there difference is miniscule there is likely that many "guesses"
>> > are wrong and would require many trials to reveal any subtle
>> > difference?
>>
>> There are published tests where people claimed they could hear the
>> difference sighted , but when they were 'blinded' they could not.
>> In this case the argument that 500 trials are needed would seem
>> to be weak.

> Yes, that's for sure. But how are scientific tests of just noticable
> difference set up? A difference, when very small, could introduce more
> incorrect answers from the test subjects. Thus I think the question is
> interesting.
>>
>> However, a real and miniscule difference would certainly be
>> discerned more reliably if there was specific training to hear it
>> beforehand.

> Yes, but still, if the difference is real and miniscule it could
> introduce incorrect answers even if there is specific training
> beforehand. If there would be an all or nothing thing, then the result
> would always be 100% correct (difference) or 50% (no difference).
> What if the answers are 60% correct?

What level of certitude are you looking for? Scientists use
statistical tools to calculate probabilities of different
kinds of error in such cases.

--
-S.

Arny Krueger
July 29th 03, 06:23 AM
"Thomas A" > wrote in message
news:DBUUa.141509$OZ2.27088@rwcrnsc54

> Is there any published DBT of amps, CD players or cables where the
> number of trials are greater than 500?

I think N = 200+ has been reached.

> If there difference is miniscule there is likely that many "guesses"
> are wrong and would require many trials to reveal any subtle
> difference?

If you look at theory casually, you might reach that conclusion. However,
what invariably happens in tests that produce questionable results with a
small number of trials is that adding more trials makes it clearer than
ever that the small-sample results were due to random guessing.
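A tiny simulation illustrates the point (illustrative only; it assumes scipy, and
the seed and trial counts are arbitrary):

    # A pure guesser: every "identification" is a coin flip. A short run can
    # stray from 50%, but the longer record settles toward chance and the
    # binomial p-value stops looking interesting.
    import random
    from scipy.stats import binom

    random.seed(1)
    answers = [random.random() < 0.5 for _ in range(500)]
    for n in (10, 25, 100, 500):
        k = sum(answers[:n])
        p = binom.sf(k - 1, n, 0.5)   # chance of at least k correct by guessing
        print(f"after {n:3d} trials: {k}/{n} correct, p = {p:.3f}")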

Thomas A
July 29th 03, 04:20 PM
(Nousaine) wrote in message >...
> (Thomas A) wrote:
>
> >Is there any published DBT of amps, CD players or cables where the
> >number of trials are greater than 500?
> >
> >If there difference is miniscule there is likely that many "guesses"
> >are wrong and would require many trials to reveal any subtle
> >difference?
> >
> >Thomas
>
> With regard to amplifiers as of May 1990 there had been such tests. In 1978
> QUAD published an erxperiment with 576 trials. In 1980 Smith peterson and
> Jackson published an experiment with 1104 trials; in 1989 Stereophile published
> a 3530 trial comparison. In 1986 Clark & Masters published an experiment with
> 772 trials. All were null.
>
> There's a misconception that blind tests tend to have very small sample sizes.
> As of 1990 the 23 published amplifier experiments had a mean average of 426 and
> a median of 90 trials. If we exclude the 3530 trial experiment the mean becomes
> 285 trials. The median remains unchanged.

Ok thanks. Is it possible to get the numbers for each test? I would
like to see if it is possible to do a meta-analysis in the amplifier
case. The test by TAG McLaren is an additional one:

http://www.tagmclaren.com/members/news/news77.asp

Thomas

Thomas A
July 29th 03, 04:20 PM
"Arny Krueger" > wrote in message news:<0xnVa.4274$cF.1296@rwcrnsc53>...
> "Thomas A" > wrote in message
> news:DBUUa.141509$OZ2.27088@rwcrnsc54
>
> > Is there any published DBT of amps, CD players or cables where the
> > number of trials are greater than 500?
>
> I think N = 200+ has been reached.
>
> > If there difference is miniscule there is likely that many "guesses"
> > are wrong and would require many trials to reveal any subtle
> > difference?
>
> If you look at theory casually, you might reach that conclusion. However,
> what invariably happens in tests that produce questionable results with a
> small number of trials, is that adding more trials makes it clearer than
> ever that the small-sample results were due to random guessing.

So what happens when the difference is small but just audible? Have any
such test situations been set up? Does the result end up close to
100% correct or at, say, 55% correct? My question is what happens when
test subjects are "forced" to judge differences that approach the
"audible limit".

Thomas A
July 29th 03, 04:20 PM
Steven Sullivan > wrote in message news:<UqnVa.4003$Oz4.1480@rwcrnsc54>...
> Thomas A > wrote:
> > Steven Sullivan > wrote in message news:<d_XUa.142496$GL4.36308@rwcrnsc53>...
> >> Thomas A > wrote:
> >> > Is there any published DBT of amps, CD players or cables where the
> >> > number of trials are greater than 500?
>
> >> > If there difference is miniscule there is likely that many "guesses"
> >> > are wrong and would require many trials to reveal any subtle
> >> > difference?
> >>
> >> There are published tests where people claimed they could hear the
> >> difference sighted , but when they were 'blinded' they could not.
> >> In this case the argument that 500 trials are needed would seem
> >> to be weak.
>
> > Yes, that's for sure. But how are scientific tests of just noticable
> > difference set up? A difference, when very small, could introduce more
> > incorrect answers from the test subjects. Thus I think the question is
> > interesting.
> >>
> >> However, a real and miniscule difference would certainly be
> >> discerned more reliably if there was specific training to hear it
> >> beforehand.
>
> > Yes, but still, if the difference is real and miniscule it could
> > introduce incorrect answers even if there is specific training
> > beforehand. If there would be an all or nothing thing, then the result
> > would always be 100% correct (difference) or 50% (no difference).
> > What if the answers are 60% correct?
>
> What level of certitude are you looking for? Scientists use
> statistical tools to calculate probabilities of different
> kinds of error in such cases.

Well, confidence levels of 95% or 99% are usually applied. The power of
the test is, however, important when you approach the audible limit.
Also, for sample sizes >200 you need not use the correction for continuity
in the statistical calculation. I am not sure, but I think this
correction applies when sample sizes are roughly 25-200; below
25, even the corrected approximation is not sufficient and an exact
binomial calculation is needed.
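As a rough illustration of where the continuity correction matters, a small
sketch (assuming scipy; the sample sizes are arbitrary) compares the exact
binomial tail with the normal approximation, with and without the correction,
for a 60% score:

    # Exact binomial tail vs. normal approximation, with and without the
    # continuity correction, for a listener scoring 60% correct.
    from scipy.stats import binom, norm

    for n in (16, 50, 100, 200, 500):
        k = round(0.6 * n)
        exact = binom.sf(k - 1, n, 0.5)            # P(X >= k) under guessing
        mu, sd = 0.5 * n, (0.25 * n) ** 0.5
        plain = norm.sf((k - mu) / sd)             # no continuity correction
        corrected = norm.sf((k - 0.5 - mu) / sd)   # with continuity correction
        print(f"n={n:3d} k={k:3d} exact={exact:.4f} "
              f"normal={plain:.4f} corrected={corrected:.4f}")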

Harry Lavo
July 30th 03, 04:26 AM
Thomas -

Thanks for the post of the Tag Mclaren test link (and to Tom for the other
references). I've looked at the Tag link and suspect it's going to add to
the controversy here. My comments on the test follow.

From the tone of the web info on this test, one can presume that Tag set out
to show its relatively inexpensive gear was just as good as some
acknowledged industry standards. But....one wonders why Tag chose the 99%
confidence level, while being careful *not* to say that it was chosen in
advance. It is because, had they used the more common and almost
universally used 95% level, it would have shown that:

* When cable A was the "X" it was recognized at a significant level by the
panel (and guess whose cable probably would "lose" in a preference test
versus a universally recognized standard of excellence chosen as "tops" by
both Stereophile and TAS, as well as by other industry publishers)

* One individual differentiated both cable A and combined cables at the
significant level

Results summarized as follows:

Tag Mclaren Published ABX Results

                Sample    99%    95%     Actual   Confidence

Total Test

Cables
A                 96      60     53 e      52      94.8% e
B                 84      54     48 e      38      coin toss
Both             180     107     97 e      90      coin toss

Amps
A                 96      60     53 e      47      coin toss
B                 84      54     48 e      38      coin toss
Both             180     107     97 e      85      coin toss

Top Individuals

Cables
A                  8       8      7         6      94.5%
B                  7       7      7         5      83.6%
Both              15      13     11        11      95.8%

Amps
A                  8       8      7         5      83.6%
B                  7       7      7         5      83.6%
Both              15      13     11        10      90.8%

e = extrapolated based on scores for 100 and 50 sample size

In general, the test, while seemingly objective, has more negatives than
positives when measured against the consensus of the objectivists (and some
subjectivists) in this group as to what constitutes a good ABX test:

TEST POSITIVES
*double blind
*level matched

TEST NEGATIVES
*short snippets
*no user control over switching and (apparently) no repeats
*no user control over content
*group test, no safeguards against visual interaction
*no group selection criteria apparent and no pre-training or testing

The results and the summary of positives/negatives above raise some
interesting questions:

*why, for example, should one cable be significantly identified when it is "x"
while the other fails miserably to be identified? This has to be due to an
interaction between the characteristics of the music samples chosen and the
characteristics of the cables under test, perhaps aggravated by the use
of short snippets with an inadequate time frame to establish the proper
evaluation context. Did the test itself create the overall null, in which
people could not differentiate, solely through the test not favoring B as
much as A?

* do the differences in people scoring high on the two tests support the
idea that different people react to different attributes of the DUTs? Or
does it again suggest some interaction between the music chosen, the
characteristics of the individual pieces, and perhaps the evaluation time
frame?

* or is it possible that the ABX test itself, when used with short snippets,
makes some kinds of differences more apparent and others less apparent, and
thus, by working against exposing *all* kinds of differences, helps create
more *no difference* results than there should be?

* since the panel is not identified and there was no training, do the
results suggest a "dumbing down" of differentiation from the scores of the
more able listeners? I am sure it will be suggested that the two different
high scorers were simply random outliers...I'm not so sure especially since
the individual scoring high on the cable test hears the cable differences
exactly like the general sample but at a higher level (required because of
smaller sample size) and the high scorer on the amp test is in much the same
position.

If some of these arguments sound familiar, they certainly raise echoes of
the issues raised here by subjectivists over the years...and yet these
specifics are rooted in the results of this one test.

I'd like to hear other views on this test.

"Thomas A" > wrote in message
news:ahwVa.6957$cF.2308@rwcrnsc53...
> (Nousaine) wrote in message
>...
> > (Thomas A) wrote:
> >
> > >Is there any published DBT of amps, CD players or cables where the
> > >number of trials are greater than 500?
> > >
> > >If there difference is miniscule there is likely that many "guesses"
> > >are wrong and would require many trials to reveal any subtle
> > >difference?
> > >
> > >Thomas
> >
> > With regard to amplifiers as of May 1990 there had been such tests. In 1978
> > QUAD published an experiment with 576 trials. In 1980 Smith Peterson and
> > Jackson published an experiment with 1104 trials; in 1989 Stereophile published
> > a 3530 trial comparison. In 1986 Clark & Masters published an experiment with
> > 772 trials. All were null.
> >
> > There's a misconception that blind tests tend to have very small sample sizes.
> > As of 1990 the 23 published amplifier experiments had a mean average of 426 and
> > a median of 90 trials. If we exclude the 3530 trial experiment the mean becomes
> > 285 trials. The median remains unchanged.
>
> Ok thanks. Is it possible to get the numbers for each test? I would
> like to see if it possible to do a meta-analysis in the amplifier
> case. The test by tagmclaren is an additional one:
>
> http://www.tagmclaren.com/members/news/news77.asp
>
> Thomas
>

Nousaine
July 30th 03, 04:27 AM
(Thomas A) wrote:

(Nousaine) wrote in message
>...
>> (Thomas A) wrote:
>>
>> >Is there any published DBT of amps, CD players or cables where the
>> >number of trials are greater than 500?
>> >
>> >If there difference is miniscule there is likely that many "guesses"
>> >are wrong and would require many trials to reveal any subtle
>> >difference?
>> >
>> >Thomas
>>
>> With regard to amplifiers as of May 1990 there had been such tests. In 1978
>> QUAD published an experiment with 576 trials. In 1980 Smith Peterson and
>> Jackson published an experiment with 1104 trials; in 1989 Stereophile published
>> a 3530 trial comparison. In 1986 Clark & Masters published an experiment with
>> 772 trials. All were null.
>>
>> There's a misconception that blind tests tend to have very small sample sizes.
>> As of 1990 the 23 published amplifier experiments had a mean average of 426 and
>> a median of 90 trials. If we exclude the 3530 trial experiment the mean becomes
>> 285 trials. The median remains unchanged.
>
>Ok thanks. Is it possible to get the numbers for each test? I would
>like to see if it possible to do a meta-analysis in the amplifier
>case. The test by tagmclaren is an additional one:
>
>http://www.tagmclaren.com/members/news/news77.asp
>
>Thomas

I did just that in 1990 to answer the nagging question "has sample size and
barely audible difference hidden anything?" A summary of these data can be
found in The Proceedings of the 1990 AES Conference "The Sound of Audio" May
1990 in the paper "The Great Debate: Is Anyone Winning?" (www.aes.org)

In general larger sample sizes did not produce more significant results and
there wasn't a relationship of criterion score to sample size.

IME if there is a true just-audible difference scores tend to run high. For
example in tests I ran last summer scores were, as I recall, 21/23 and 17/21 in
two successive runs in a challenge where the session leader claimed a
transparent transfer. IOW results go from chance to strongly positive once
threshold has been reached.

You can test this for yourself at www.pcabx.com where Arny Krueger has
training sessions with increasing levels of difficulty. Also the codec testing
sites are a good place to investigate this issue.

Nousaine
July 30th 03, 07:59 AM
"Harry Lavo" wrote:

>Thomas -
>
>Thanks for the post of the Tag Mclaren test link (and to Tom for the other
>references). I've looked at the Tag link and suspect it's going to add to
>the controversy here.

Actually there's no 'controversy' here. No proponent of amp/wire-sound has
ever shown that nominally competent amps or wires have any sound of their own
when played back over loudspeakers.

The only 'controversy' is over whether Arny Krueger's pcabx tests with
headphones and special programs can be extrapolated to commercially available
programs and speakers in a normally reverberant environment.

The Tag-M results are fully within those expected given the more than 2 dozen
published experiments of amps and wires.

>My comments on the test follow.
>
>From the tone of the web info on this test, one can presume that Tag set out
>to show its relatively inexpensive gear was just as good as some
>acknowledged industry standards. But....wonder why Tag choose the 99%
>confidence level?

Why not? But you can analyze it any way you want. That's the wonderful thing
about published results.

>Being careful *not* to say that it was prechosen in
>advance? It is because had they used the more common and almost
>universally-used 95% level it would have shown that:
>
>* When cable A was the "X" it was recognized at a significant level by the
>panel (and guess whose cable probably would "lose" in a preference test
>versus a universally recognized standard of excellence chosen as "tops" by
>both Stereophile and TAS, as well as by other industry publishers)
>
>* One individual differentiated both cable A and combined cables at the
>significant level
>
>Results summarized as follows:
>
> Tag Mclaren Published ABX Results
>
>Sample 99% 95% Actual Confidence
>
>Total Test
>
>Cables
>A 96 60 53 e 52 94.8% e
>B 84 54 48 e 38 coin toss
>Both 180 107 97 e 90 coin toss
>
>Amps
>A 96 60 53 e 47 coin toss
>B 84 54 48 e 38 coin toss
>Both 180 107 97 e 85 coin toss
>
>Top Individuals
>
>Cables
>A 8 8 7 6 94.5%
>B 7 7 7 5 83.6%
>Both 15 13 11 11 95.8%
>
>Amps
>A 8 8 7 5 83.6%
>B 7 7 7 5 83.6%
>Both 15 13 11 10 90.8%
>
>e = extrapolated based on scores for 100 and 50 sample size
>
>In general, the test while seemingly objective has more negatives than
>positives when measured against the consensus of the objectivists (and some
>subjectivists) in this group as to what constitutes a good abx test:

This is what always happens with 'bad news.' Instead of giving us contradictory
evidence we get endless wishful 'data-dredging' to find any possible reason to
ignore the evidence.

In any other circle when one thinks the results of a given experiment are wrong
they just duplicate it showing the error OR produce a valid one with contrary
evidence.

>TEST POSITIVES
>*double blind
>*level matched
>
>TEST NEGATIVES
>*short snippets
>*no user control over switching and (apparently) no repeats
>*no user control over content
>*group test, no safeguards against visual interaction
>*no group selection criteria apparent and no pre-training or testing

OK how many of your sighted 'tests' have ignored one or all of these positives
or negatives?

>The results and the summary of positives/negatives above raise some
>interesting questions:

No, not really. All of the true questions about bias controlled listening tests
have been addressed prior.

>
>*why, for example, should one cable be significantly identified when "x" and
>the other fail miserably to be identified. This has to be due and
>interaction between the characteristics of the music samples chosen, the
>characteristics of the cables under test, and perhaps aggravated by the use
>of short snippets with an inadequate time frame to establish the proper
>evaluation context. Did the test itself create the overall null where
>people could not differentiate based soley on the test not favoring B as
>much as A?
>
>* do the differences in people scoring high on the two tests support the
>idea that different people react to different attributes of the DUT's. Or
>does it again suggest some interaction between the music chosen, the
>characteristics of the individual pieces, and perhaps the evaluation time
>frame.
>
>* or is it possible that the abx test itself, when used with short snippets,
>makes some kinds of differences more apparent and others less apparent and
>thus by working against exposing *all* kinds of differences help create more
>*no differences* than should be the result.
>
>* since the panel is not identified and there was no training, do the
>results suggest a "dumbing down" of differentiation from the scores of the
>more able listeners? I am sure it will be suggested that the two different
>high scorers were simply random outliers...I'm not so sure especially since
>the individual scoring high on the cable test hears the cable differences
>exactly like the general sample but at a higher level (required because of
>smaller sample size) and the high scorer on the amp test is in much the same
>position.
>
>if some of these arguments sound familiar, they certainly raises echoes of
>the issues raised here by subjectivists over the years...and yet these
>specifics are rooted in the results of this one test.
>
>I'd like to hear other views on this test.

These results are consistent with the 2 dozen and more other bias controlled
listening tests of power amplifiers and wires.

>
>"Thomas A" > wrote in message
>news:ahwVa.6957$cF.2308@rwcrnsc53...
>> (Nousaine) wrote in message
>...
>> > (Thomas A) wrote:
>> >
>> > >Is there any published DBT of amps, CD players or cables where the
>> > >number of trials are greater than 500?
>> > >
>> > >If there difference is miniscule there is likely that many "guesses"
>> > >are wrong and would require many trials to reveal any subtle
>> > >difference?
>> > >
>> > >Thomas
>> >
>> > With regard to amplifiers as of May 1990 there had been such tests. In
>1978
>> > QUAD published an erxperiment with 576 trials. In 1980 Smith peterson
>and
>> > Jackson published an experiment with 1104 trials; in 1989 Stereophile
>published
>> > a 3530 trial comparison. In 1986 Clark & Masters published an experiment
>with
>> > 772 trials. All were null.
>> >
>> > There's a misconception that blind tests tend to have very small sample
>sizes.
>> > As of 1990 the 23 published amplifier experiments had a mean average of
>426 and
>> > a median of 90 trials. If we exclude the 3530 trial experiment the mean
>becomes
>> > 285 trials. The median remains unchanged.
>>
>> Ok thanks. Is it possible to get the numbers for each test? I would
>> like to see if it possible to do a meta-analysis in the amplifier
>> case. The test by tagmclaren is an additional one:

Thanks for the reference.

Stewart Pinkerton
July 30th 03, 08:18 AM
On Wed, 30 Jul 2003 03:26:24 GMT, "Harry Lavo" >
wrote:

>From the tone of the web info on this test, one can presume that Tag set out
>to show its relatively inexpensive gear was just as good as some
>acknowledged industry standards. But....wonder why Tag choose the 99%
>confidence level? Being careful *not* to say that it was prechosen in
>advance? It is because had they used the more common and almost
>universally-used 95% level it would have shown that:

Can anyone smell fish? Specifically, red herring?

>* When cable A was the "X" it was recognized at a significant level by the
>panel (and guess whose cable probably would "lose" in a preference test
>versus a universally recognized standard of excellence chosen as "tops" by
>both Stereophile and TAS, as well as by other industry publishers)

No Harry, *all* tests fell below the 95% level, except for one single
participant in the cable test, which just scraped in. Given that there
were 12 volunteers, there's less than 2:1 odds against this happening
when tossing coins. Interesting that you also failed to note that the
'best performers' in the cable test did *not* perform well in the
amplifier test, and vice versa.

You do love to cherry-pick in search of your *required* result, don't
you?

>* One individual differentiated both cable A and combined cables at the
>significant level
>
>Results summarized as follows:
>
> Tag Mclaren Published ABX Results
>
>Sample 99% 95% Actual Confidence
>
>Total Test
>
>Cables
>A 96 60 53 e 52 94.8% e
>B 84 54 48 e 38 coin toss
>Both 180 107 97 e 90 coin toss
>
>Amps
>A 96 60 53 e 47 coin toss
>B 84 54 48 e 38 coin toss
>Both 180 107 97 e 85 coin toss
>
>Top Individuals
>
>Cables
>A 8 8 7 6 94.5%
>B 7 7 7 5 83.6%
>Both 15 13 11 11 95.8%
>
>Amps
>A 8 8 7 5 83.6%
>B 7 7 7 5 83.6%
>Both 15 13 11 10 90.8%
>
>e = extrapolated based on scores for 100 and 50 sample size
>
>In general, the test while seemingly objective has more negatives than
>positives when measured against the consensus of the objectivists (and some
>subjectivists) in this group as to what constitutes a good abx test:
>
>TEST POSITIVES
>*double blind
>*level matched
>
>TEST NEGATIVES
>*short snippets
>*no user control over switching and (apparently) no repeats
>*no user control over content
>*group test, no safeguards against visual interaction
>*no group selection criteria apparent and no pre-training or testing
>
>The results and the summary of positives/negatives above raise some
>interesting questions:
>
>*why, for example, should one cable be significantly identified when "x" and
>the other fail miserably to be identified. This has to be due and
>interaction between the characteristics of the music samples chosen, the
>characteristics of the cables under test, and perhaps aggravated by the use
>of short snippets with an inadequate time frame to establish the proper
>evaluation context.

No it doesn't, Harry, it doesn't *have* to be due to anything but random
chance.

> Did the test itself create the overall null where
>people could not differentiate based soley on the test not favoring B as
>much as A?
>
>* do the differences in people scoring high on the two tests support the
>idea that different people react to different attributes of the DUT's. Or
>does it again suggest some interaction between the music chosen, the
>characteristics of the individual pieces, and perhaps the evaluation time
>frame.

No, since the high scorers on one test were not the high scorers in
the other test. It's called a distribution, Harry, and it is simply
more evidence that there were in fact no audible differences - as any
reasonable person would expect.

>> http://www.tagmclaren.com/members/news/news77.asp

--

Stewart Pinkerton | Music is Art - Audio is Engineering

Steven Sullivan
July 30th 03, 08:18 AM
Nousaine > wrote:

> This is what always happens with 'bad news.' Instead of giving us contradictory
> evidence we get endless wishful 'data-dredging' to find any possible reason to
> ignore the evidence.

> In any other circle when one thinks the results of a given experiment are wrong
> they just duplicate it showing the error OR produce a valid one with contrary
> evidence.

Not necessarily. It's quite common for questions to be raised during peer
review of a scientific paper; it is then incumbent upon the *experimenter*, not
the critic, to justify his or her choice of protocol, or his/her explanation of
the results. Often this involves doing more experiments to address the reviewer's
concerns. Sometimes it merely involves explaining the results more clearly, or
in more qualified terms. If the experimenter feels the reviewer has ignored some
important point, that comes out too in the reply to the reviews.

I say all this having not yet visited the link, so I'm totally unbiased ;>

Harry Lavo
July 31st 03, 04:15 AM
"Stewart Pinkerton" > wrote in message
news:7jKVa.10234$Oz4.4174@rwcrnsc54...
> On Wed, 30 Jul 2003 03:26:24 GMT, "Harry Lavo" >
> wrote:
>
> >From the tone of the web info on this test, one can presume that Tag set out
> >to show its relatively inexpensive gear was just as good as some
> >acknowledged industry standards. But....wonder why Tag choose the 99%
> >confidence level? Being careful *not* to say that it was prechosen in
> >advance? It is because had they used the more common and almost
> >universally-used 95% level it would have shown that:
>
> Can anyone smell fish? Specifically, red herring?
>

Are you an outlier? Or are you simply sensitive to fish? Or did you not
conceive that thought double-blind and it is just your imagination? :=)

> >* When cable A was the "X" it was recognized at a significant level by the
> >panel (and guess whose cable probably would "lose" in a preference test
> >versus a universally recognized standard of excellence chosen as "tops" by
> >both Stereophile and TAS, as well as by other industry publishers)
>
> No Harry, *all* tests fell below the 95% level, except for one single
> participant in the cable test, which just scraped in. Given that there
> were 12 volunteers, there's less than 2:1 odds against this happening
> when tossing coins. Interesting that you also failed to note that the
> 'best performers' in the cable test did *not* perform well in the
> amplifier test, and vice versa.
>

I'm sorry, but when rounded to whole numbers 94.8% is a lot closer to 95% than
one more correct answer would be, which works out to about 96% in the larger
panels and 97% in the smaller ones. The standard is 95%. To say that 94.8%
doesn't qualify is splitting hairs. I included the actual numbers needed to
pass the barrier just to satisfy the purists, but you *ARE* splitting hairs
here, Stewart.

> You do love to cherry-pick in search of your *required* result, don't
> you?
>

You mean not accepting the "received truth" without doing my own analysis is
cherry picking, is that it, Stewart? We are not allowed to point out
anomalies and ask "why"? "how come"? "what could be causing this?"

And would you explain why a significant level was reached on the "A" cable
test with 96 trials? Was that "cherry picking"? C'mon, Stewart, you know
better. In fact the real issue here is: if one cable can be so readily
picked out, why can't the other be? What is it in the test: procedure,
quality of the cables, order bias, or what? Something is rotten in the
beloved state of ABX here!

> >* One individual differentiated both cable A and combined cables at the
> >significant level
> >
> >Results summarized as follows:
> >
> > Tag Mclaren Published ABX Results
> >
> >Sample 99% 95% Actual Confidence
> >
> >Total Test
> >
> >Cables
> >A 96 60 53 e 52 94.8% e
> >B 84 54 48 e 38 coin toss
> >Both 180 107 97 e 90 coin toss
> >
> >Amps
> >A 96 60 53 e 47 coin toss
> >B 84 54 48 e 38 coin toss
> >Both 180 107 97 e 85 coin toss
> >
> >Top Individuals
> >
> >Cables
> >A 8 8 7 6 94.5%
> >B 7 7 7 5 83.6%
> >Both 15 13 11 11 95.8%
> >
> >Amps
> >A 8 8 7 5 83.6%
> >B 7 7 7 5 83.6%
> >Both 15 13 11 10 90.8%
> >
> >e = extrapolated based on scores for 100 and 50 sample size
> >
> >In general, the test while seemingly objective has more negatives than
> >positives when measured against the consensus of the objectivists (and
some
> >subjectivists) in this group as to what constitutes a good abx test:
> >
> >TEST POSITIVES
> >*double blind
> >*level matched
> >
> >TEST NEGATIVES
> >*short snippets
> >*no user control over switching and (apparently) no repeats
> >*no user control over content
> >*group test, no safeguards against visual interaction
> >*no group selection criteria apparent and no pre-training or testing
> >
> >The results and the summary of positives/negatives above raise some
> >interesting questions:
> >
> >*why, for example, should one cable be significantly identified when "x"
and
> >the other fail miserably to be identified. This has to be due and
> >interaction between the characteristics of the music samples chosen, the
> >characteristics of the cables under test, and perhaps aggravated by the
use
> >of short snippets with an inadequate time frame to establish the proper
> >evaluation context.
>
> No it doen't Harry, I doesn't *have* to be due to anything but random
> chance.
>
> > Did the test itself create the overall null where
> >people could not differentiate based soley on the test not favoring B as
> >much as A?
> >
> >* do the differences in people scoring high on the two tests support the
> >idea that different people react to different attributes of the DUT's.
Or
> >does it again suggest some interaction between the music chosen, the
> >characteristics of the individual pieces, and perhaps the evaluation time
> >frame.
>
> No, since the high scorers on one test were not the high scorers in
> the other test. It's called a distrinution, harry, and it is simply
> more evidence that there were in fact no audible differences - as any
> reasonable person would expect.
>
> >> http://www.tagmclaren.com/members/news/news77.asp
>
> --
>
> Stewart Pinkerton | Music is Art - Audio is Engineering

I notice no comment on this latter part, Stewart. That is the *SUBSTANCE*
of the interesting results of the test/techniques used and the questions
raised.
>

ludovic mirabel
July 31st 03, 04:21 AM
Steven Sullivan > wrote in message >...
> Nousaine > wrote:
>
> > This is what always happens with 'bad news.' Instead of giving us contradictory
> > evidence we get endless wishful 'data-dredging' to find any possible reason to
> > ignore the evidence.
>
> > In any other circle when one thinks the results of a given experiment are wrong
> > they just duplicate it showing the error OR produce a valid one with contrary
> > evidence.
>
> Not necessarily. It's quite common for questions to be raised during peer
> review of a scientific paper; it is then incumbent upon the *experimenter*, not
> the critic, to justify his or her choice of protocol, or his/her explanation of
> the results. Often this involves doing more experiments to address the reviewer's
> concerns. Sometimes it merely involved explaining the results more clearly, or
> in more qualified terms. If the experimenter feels the reviewer has ignored some
> important point, that comes out too in the reply to the reviews.
>
> I say all this having not yet visited the link, so I'm totally unbiased ;>

Bravo Mr. Sullivan. I hope you'll be as pleased to accept my applause
as I am to see your excellent exposure of the frequently-voiced
challenge to the ABX sceptics to "prove" their sceptical questions.
Exposure coming from an unexpected corner.
Perhaps we're seeing a revival of intellectual integrity in debate on
RAHE.

I promise to quote your summary when occasion warrants it.
Ludovic Mirabel

Steven Sullivan
July 31st 03, 05:33 AM
ludovic mirabel > wrote:
> Steven Sullivan > wrote in message >...
> > Nousaine > wrote:
> >
> > > This is what always happens with 'bad news.' Instead of giving us contradictory
> > > evidence we get endless wishful 'data-dredging' to find any possible reason to
> > > ignore the evidence.
> >
> > > In any other circle when one thinks the results of a given experiment are wrong
> > > they just duplicate it showing the error OR produce a valid one with contrary
> > > evidence.
> >
> > Not necessarily. It's quite common for questions to be raised during peer
> > review of a scientific paper; it is then incumbent upon the *experimenter*, not
> > the critic, to justify his or her choice of protocol, or his/her explanation of
> > the results. Often this involves doing more experiments to address the reviewer's
> > concerns. Sometimes it merely involved explaining the results more clearly, or
> > in more qualified terms. If the experimenter feels the reviewer has ignored some
> > important point, that comes out too in the reply to the reviews.
> >
> > I say all this having not yet visited the link, so I'm totally unbiased ;>

> Bravo Mr. Sullivan. I hope you'll be as pleased to accept my applause
> as I am to see your excellent exposure of the frequently-voiced
> challenge to the ABX sceptics to "prove" their sceptical questions.

Actually, ludovic, what tends to happen far more often is that skeptics ask
subjectivists to prove *their* claims, which is quite proper.

Also, as I implied, the mere act of *questioning* does not make the question
well-founded or mean that it requires answering. In your case, I have
observed that such questions almost never are. In a peer review process, the
poor foundation and/or bad understanding behind such queries would be noted by
the experimenter, who would make his case to the editor, and the points would
not be required to be addressed.

There is no 'exposure' involved here, except of your own agenda, as usual.

--
-S.

Thomas A
July 31st 03, 04:07 PM
(Nousaine) wrote in message news:<nWGVa.15987$YN5.14030@sccrnsc01>...
> (Thomas A) wrote:
>
> (Nousaine) wrote in message
> >...
> >> (Thomas A) wrote:
> >>
> >> >Is there any published DBT of amps, CD players or cables where the
> >> >number of trials are greater than 500?
> >> >
> >> >If there difference is miniscule there is likely that many "guesses"
> >> >are wrong and would require many trials to reveal any subtle
> >> >difference?
> >> >
> >> >Thomas
> >>
> >> With regard to amplifiers as of May 1990 there had been such tests. In 1978
> >> QUAD published an erxperiment with 576 trials. In 1980 Smith peterson and
> >> Jackson published an experiment with 1104 trials; in 1989 Stereophile
> published
> >> a 3530 trial comparison. In 1986 Clark & Masters published an experiment
> with
> >> 772 trials. All were null.
> >>
> >> There's a misconception that blind tests tend to have very small sample
> sizes.
> >> As of 1990 the 23 published amplifier experiments had a mean average of 426
> and
> >> a median of 90 trials. If we exclude the 3530 trial experiment the mean
> becomes
> >> 285 trials. The median remains unchanged.
> >
> >Ok thanks. Is it possible to get the numbers for each test? I would
> >like to see if it possible to do a meta-analysis in the amplifier
> >case. The test by tagmclaren is an additional one:
> >
> >http://www.tagmclaren.com/members/news/news77.asp
> >
> >Thomas
>
> I did just that in 1990 to answer the nagging question "has sample size and
> barely audible difference hidden anything?" A summary of these data can be
> found in The Proceedings of the 1990 AES Conference "The Sound of Audio" May
> 1990 in the paper "The Great Debate: Is Anyone Winning?" (www.aes.org)

Ok thanks. I'll look it up.

>
> In general larger sample sizes did not produce more significant results and
> there wasn't a relationship of criterion score to sample size.

Were the data from all experiments pooled? That might not be the best
way, if some experiments *did* include real audible differences but had
sample sizes too small to reveal any statistically significant difference,
whereas others did not include a real audible difference. Were any responses
measured in the experiments? Did any of the tests include control tests where
the difference was audible but subtle, and then compare e.g. different
subjects? Were the "best scorers" allowed to repeat the experiments in the
main experiment? Many questions, but they may be relevant when making a
meta-analysis.

In addition, have any of the experiments used test signals in the LF
range (around 15-20 Hz) and high-capable subwoofers (>120 dB SPL @ 20
Hz)? I'm just curious, since the tests from the Swedish
Audio-Technical Society frequently identify amplifiers that roll off
in the low end using blind tests. It might not be said to be an
audible difference, since the difference is perceived as a difference
in vibrations in the body. I think I mentioned this before. Also, for
testing CD players, has anybody used a sin^2 pulse in evaluating
audible differences?

>
> IME if there is a true just-audible difference scores tend to run high. For
> example in tests I ran last summer scores were, as I recall, 21/23 and 17/21 in
> two successive runs in a challenge where the session leader claimed a
> transparent transfer. IOW results go from chance to strongly positive once
> threshold has been reached.

Yes, I have come to similar conclusions myself in my own system.

>
> You can test this for yourself at www.pcabx.com where Arny Krueger has
> training sessions with increasing levels of difficulty. Also the codec testing
> sites are a good place to investigate this issue.

I've tried the tests at Arny's site a couple of times, but I feel I
need better hardware to do these tests more accurately.

Jim West
July 31st 03, 04:14 PM
In article <nR%Va.24664$YN5.23125@sccrnsc01>, Harry Lavo wrote:
>
> You mean not accepting the "received truth" without doing my own analysis is
> cherry picking, is that it Stewart? We are not allowed to point out
> anonomlies and ask "why"? "how come"? "what could be causing this?"

You are indeed cherry picking. With 12 individuals the probability that
one would appear to meet the 95% level is fairly high. Remember
that you can expect 1 in 20 to meet that level entirely at random. It
is not acceptable scientific practice to select specific data sub-sets
out of the complete set. Otherwise you could "prove" anything by simply
running enough trials and ignoring those you don't like. Check any
peer-reviewed journal.

In any event, 11 out of 15 has a probability of 5.9% of occurring by chance.
That does not meet the 95% confidence level. It would be rejected in
a peer-reviewed statistical study. (If that were the only data, more
trials would be called for. But it wasn't the only data.)

> And would you explain why a significant level was reached on the "A" cable
> test with 96 trials? Was that "cherry picking". C'mon, Stewart, you know
> better. In fact the real issue here is: if one cable can be so readily
> picked out, why can't the other be? What is it in the test, procedure,
> quality of the cables, order bias, or what. Something is rotten in the
> beloved state of ABX here!

Where are you getting your numbers? The data they posted on the
web page showed that there were 52 correct answers in 96 trials.
At least 52 correct answers will occur entirely by chance 23.8 %
of the time. This is far from statistically significant.
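Both figures are easy to reproduce with a couple of lines of Python (assuming
scipy is available):

    # One-sided binomial tail probabilities under pure guessing (p = 0.5).
    from scipy.stats import binom

    print(binom.sf(10, 15, 0.5))   # P(at least 11 of 15 correct) ~= 0.059
    print(binom.sf(51, 96, 0.5))   # P(at least 52 of 96 correct) ~= 0.238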

Nousaine
August 1st 03, 05:09 AM
(Thomas A) wrote:

.....some snip.....

>Where the data from all experiments pooled? It might not be the best
>way, if some experiments *did* include real audible differences but in
>which the sample size was too small to reveal any statistically
>significant difference whereas other did not include real audible
>difference.

Of the 23 tests only one had a sample size as small as 16. Three had sample
sizes of 40 or fewer.

>Any measured responses in the experiments?

It was typical, but not universal, to verify frequency response. The most
common type of significant result involved amplifiers which were found to have
an operating malfunction.

>Did any of test
>include control tests where the difference was audible but subtle and
>then comparing e.g. different subjects?

These were power amplifiers, remember. One of the earlier ones went to
significant effort to track down subtle, hey ...ANY, differences and was
unable to find them.

>Where the "best scorers"
>allowed to repeat the experiments in the main experiment?

This did not appear to be part of the protocol for any of them, but subject
analysis was common.

>Many
>questions but they may be relevant when making a meta-analysis.

> In addition, have any of the experiments used test signals in the LF
>range (around 15-20 Hz) and high-capable subwoofers (>120 dB SPL @ 20
>Hz)?

No. But there are no commercially available subwoofers that will do 120+ dB at
2 meters in a real room. I've tested dozens and dozens and the only ones with
this capability are custom.

>I've just curious since the tests from the Swedish
>Audio-Technical Society frequently identifies amplfiers than roll of
>in the low end using blind tests.

The typical half power point for my stock of a dozen power amplifiers is 6 Hz.
I've not seen the SATS data though.

>It might not be said to be an
>audible difference since the difference is percieved as a difference
>in vibrations in the body. I think I mentioned this before. Also for
>testing CD players, have anybody used a sin2 pulse in evaluating
>audible differences?

Not that I know of.

John Corbett
August 1st 03, 05:16 AM
In article <zVGVa.15179$Ho3.2323@sccrnsc03>, "Harry Lavo"
> wrote:

> Thomas -
>
> Thanks for the post of the Tag Mclaren test link (and to Tom for the other
> references). I've looked at the Tag link and suspect it's going to add to
> the controversy here. My comments on the test follow.
>
> From the tone of the web info on this test, one can presume that Tag set out
> to show its relatively inexpensive gear was just as good as some
> acknowledged industry standards. But....wonder why Tag choose the 99%
> confidence level? Being careful *not* to say that it was prechosen in
> advance? It is because had they used the more common and almost
> universally-used 95% level it would have shown that:
>
> * When cable A was the "X" it was recognized at a significant level by the
> panel (and guess whose cable probably would "lose" in a preference test
> versus a universally recognized standard of excellence chosen as "tops" by
> both Stereophile and TAS, as well as by other industry publishers)
>
> * One individual differentiated both cable A and combined cables at the
> significant level
>
> Results summarized as follows:
>
> Tag Mclaren Published ABX Results
>
> Sample 99% 95% Actual Confidence
>
> Total Test
>
> Cables
> A 96 60 53 e 52 94.8% e
> B 84 54 48 e 38 coin toss
> Both 180 107 97 e 90 coin toss
>
> Amps
> A 96 60 53 e 47 coin toss
> B 84 54 48 e 38 coin toss
> Both 180 107 97 e 85 coin toss
>
> Top Individuals
>
> Cables
> A 8 8 7 6 94.5%
> B 7 7 7 5 83.6%
> Both 15 13 11 11 95.8%
>
> Amps
> A 8 8 7 5 83.6%
> B 7 7 7 5 83.6%
> Both 15 13 11 10 90.8%
>
> e = extrapolated based on scores for 100 and 50 sample size
>
> [snip]
>
> I'd like to hear other views on this test.
>
Mr. Lavo, here are some comments on your numbers.

The short story: your numbers are bogus.

The long story follows.

I don't know how you came up with critical values for what you
think is a reasonable level of significance.

For n = 96 trials, the critical values are:
60 for .01 level of significance
57 for .05 level of significance
53 for .20 level of significance

for n = 84 trials, the critical values are:
54 for .01 level of significance
51 for .05 level of significance
47 for .20 level of significance

for n = 180 trials, the critical values are:
107 for .01 level of significance
102 for .05 level of significance
97 for .20 level of significance

for n = 8 trials, the critical values are:
8 for .01 level of significance
7 for .05 level of significance
6 for .20 level of significance

for n = 7 trials, the critical values are:
7 for .01 level of significance
7 for .05 level of significance
6 for .20 level of significance

for n = 15 trials, the critical values are:
13 for .01 level of significance
12 for .05 level of significance
10 for .20 level of significance

The values you provide for what you call 95% confidence (i.e., .05 level
of significance) are almost the correct values for 20% significance.

You make much of an apparently borderline significant result,
where the best individual cable test scores were 11 of 15 correct.

If that had been the entire experiment, we would have a p-value
of .059, reflecting the probability that one would do at least that
well in a single run of 15 trials.
That is, the probability that someone would score 11, 12, 13, 14, or 15
correct just by guessing is .059; also, the probability that the score is
less than 11 would be 1 - .059 = .941 for a single batch of 15 trials.

But what was reported was the best such performance in a dozen sets of
trials. That's not the same as a single run of 15 trials.

The probability of at least one of 12 subjects doing at least as well as
11 of 15 is 1 - [ probability that all 12 do worse than 11 of 15 ].
Thus we get 1 - (.941)^(12), which is about .52.

So, even your star performer is not doing better than chance suggests
he should.
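The critical values and the best-of-12 adjustment above can be checked with a
short Python sketch (assuming scipy; the helper function is only for
illustration):

    # One-sided binomial critical values: the smallest k out of n with
    # P(X >= k) <= alpha under guessing, plus the chance that the best of
    # 12 independent guessers reaches at least 11 of 15.
    from scipy.stats import binom

    def critical_value(n, alpha, p=0.5):
        for k in range(n + 1):
            if binom.sf(k - 1, n, p) <= alpha:
                return k
        return None

    for n in (96, 84, 180, 8, 7, 15):
        print(n, [critical_value(n, a) for a in (0.01, 0.05, 0.20)])

    p_single = binom.sf(10, 15, 0.5)   # P(>= 11 of 15 in a single run), about .059
    print(1 - (1 - p_single) ** 12)    # P(the best of 12 guessers does so), about .52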

Mr. Lavo, your conjecture (that the test organizers have tried
to distort the results to fit an agenda) appears to be without support.

Now for some comments on the TAG McLaren report itself.

There are problems with some numbers provided by TAG McLaren, but they
are confined to background material.
There do not appear to be problems with the actual report of experimental
results.

TAG McLaren claims that you need more than 10 trials to obtain results
significant at the .01 level, but they are wrong. In fact, 7 trials suffice.
With 10 trials you can reach .001.
There is a table just before the results section of their report with some
discussion about small sample sizes. The first several rows of that table have
bogus numbers in the third column, and their sample size claims are based
on those wrong numbers.
However, the values for 11 or more trials are correct.

As I have already noted, the numbers used in the report itself appear to
be correct.

Some would argue that their conclusion should be that they found no
evidence to support a claim of audible difference, rather than concluding
that there was no audible difference, but that's another issue.

You have correctly noted concerns about the form of ABX presentation (not
the same as the usual ABX scheme discussed on rahe) but that does not
invalidate the experiment.

There are questions about how the sample of listeners was obtained.

For the most part, TAG McLaren seems to have designed a test, carried it
out, run the numbers properly, and then accurately reported what they did.
That's more than can be said for many tests.

JC

Stewart Pinkerton
August 1st 03, 05:17 AM
On Thu, 31 Jul 2003 03:15:31 GMT, "Harry Lavo" >
wrote:

>And would you explain why a significant level was reached on the "A" cable
>test with 96 trials? Was that "cherry picking". C'mon, Stewart, you know
>better. In fact the real issue here is: if one cable can be so readily
>picked out, why can't the other be? What is it in the test, procedure,
>quality of the cables, order bias, or what. Something is rotten in the
>beloved state of ABX here!

All that the above proves (especially since the results were marginal
at best) is that there's a random distribution in the results which
favoured A *on this occasion*. Run that trial again, and I'll put an
even money bet that you'll have a slight bias in favour of B.

Anyone familiar with Statistical Process Control is well aware that
one swallow doesn't make a summer.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Harry Lavo
August 1st 03, 05:31 AM
"Jim West" > wrote in message
news:fnaWa.19572$cF.7720@rwcrnsc53...
> In article <nR%Va.24664$YN5.23125@sccrnsc01>, Harry Lavo wrote:
> >
> > You mean not accepting the "received truth" without doing my own
analysis is
> > cherry picking, is that it Stewart? We are not allowed to point out
> > anonomlies and ask "why"? "how come"? "what could be causing this?"
>
> You are indeed cherry picking. With 12 individuals the probability that
> one would would appear to meet the 95% level is fairly high. Remember
> that you can expect 1 in 20 to meet that level entirely by random. It
> is not acceptable scientific practice to select specific data sub-sets
> out of the complete set. Otherwise you could "prove" anything by simply
> running enough trials and ignoring those you don't like. Check any peer
> reviewed journal.
>

Yep it is more probable than one in twenty. But not so high that we have to
accept your assumption that he/she *IS* the one-in-twenty.

> In any event, 11 out 15 has a probability of 5.9 % of occuring by chance.
> That does not meet the 95 % confidence level. It would be rejected in
> a peer reviewed statistical study. (If that was the only data more
> trials would called for. But it wasn't the only data.)
>

My mistake..you are correct. So the number is only 94.1%. But 12 out of 15
yields 98.2%. Which is closer to the 95% standard?

> > And would you explain why a significant level was reached on the "A"
cable
> > test with 96 trials? Was that "cherry picking". C'mon, Stewart, you
know
> > better. In fact the real issue here is: if one cable can be so readily
> > picked out, why can't the other be? What is it in the test, procedure,
> > quality of the cables, order bias, or what. Something is rotten in the
> > beloved state of ABX here!
>

> Where are you getting your numbers? The data they posted on the
> web page showed that there were 52 correct answers in 96 trials.
> At least 52 correct answers will occur entirely by chance 23.8 %
> of the time. This is far from statistically significant.

Oops! Late at night and the only book of binomial probabilities I had handy
contained only raw data, not the cumulative error tables that I was used to
from my previous work. So I goofed. I agree that the three-to-one odds are
not statistically significant, and so the full panel results are null for
both cables. My apologies.

The issue with the individuals still stands, however.
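
For anyone who wants to check the binomial arithmetic being traded back and
forth here, a minimal sketch in plain Python (standard library only; the
helper name is illustrative, not something from the thread) that reproduces
the figures quoted above:

from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting at least
    k of n trials correct by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(52, 96), 3))   # ~0.238: 52 of 96 correct
print(round(p_at_least(11, 15), 3))   # ~0.059: 11 of 15, just misses .05
print(round(p_at_least(12, 15), 3))   # ~0.018: 12 of 15 (i.e. ~98.2%)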

Jim West
August 1st 03, 03:44 PM
In article <l2mWa.36515$Ho3.6598@sccrnsc03>, Harry Lavo wrote:
> "Jim West" > wrote in message
> news:fnaWa.19572$cF.7720@rwcrnsc53...
>>
>> You are indeed cherry picking. With 12 individuals the probability that
>> one would would appear to meet the 95% level is fairly high. Remember
>> that you can expect 1 in 20 to meet that level entirely by random. It
>> is not acceptable scientific practice to select specific data sub-sets
>> out of the complete set. Otherwise you could "prove" anything by simply
>> running enough trials and ignoring those you don't like. Check any peer
>> reviewed journal.
>>
>
> Yep it is more probable than one in twenty. But not so high that we have to
> accept your assumption that he/she *IS* the one-in-twenty.

Where did I assume that?

>> In any event, 11 out 15 has a probability of 5.9 % of occuring by chance.
>> That does not meet the 95 % confidence level. It would be rejected in
>> a peer reviewed statistical study. (If that was the only data more
>> trials would called for. But it wasn't the only data.)
>>
>
> My mistake..you are correct. So the number is only 94.1%. But 12 out of 15
> yields 98.2%. Which is closer to the 95% standard?

Irrelevant. You can't arbitrarily play with the numbers.

>
>> Where are you getting your numbers? The data they posted on the
>> web page showed that there were 52 correct answers in 96 trials.
>> At least 52 correct answers will occur entirely by chance 23.8 %
>> of the time. This is far from statistically significant.
>
> Oops! Late at night and the only book of binomial probabilites I had handy
> contained only raw data, not the cumulative error tables that I was used to
> from my previous work. So I goofed. I agree that the three-to-one odds are
> not statistically significant, and so the full panel results are null for
> both cables. My apologies.
>
> The issue with the individuals still stands, however.

No it doesn't. Even with cherry picking (which is not valid anyway)
no individual performed at a statistically significant level. Period.

Arny Krueger
August 1st 03, 03:45 PM
"Harry Lavo" > wrote in message
news:93mWa.36669$uu5.4445@sccrnsc04
> "normanstrong" > wrote in message
> news:lfaWa.29024$uu5.3508@sccrnsc04...
>> Golly, after reading the entire article, I'm impressed. Faced with
>> these results, it's pretty hard to attack McLaren's article as being
>> poor science.
>>
>> http://www.tagmclaren.com/members/news/news77.asp
>>
>> I see nothing in the results that is inconsistent with the hypothesis
>> that there are no audible differences in both the cables and amps.

> Then you have not looked closely at and thought about some of this
> results/issues I have raised.

I know Norm pretty well and he's very heavy into statistics, courtesy of a
successful career at Fluke as a test equipment designer. He looks at
discussions of statistics with a very critical, practical eye.

As far as the critical issues that have been raised, I think that they speak
for themselves, pretty weakly.

Stewart Pinkerton
August 1st 03, 05:12 PM
On Fri, 01 Aug 2003 04:32:05 GMT, "Harry Lavo" >
wrote:

>"normanstrong" > wrote in message
>news:lfaWa.29024$uu5.3508@sccrnsc04...
>> Golly, after reading the entire article, I'm impressed. Faced with
>> these results, it's pretty hard to attack McLaren's article as being
>> poor science.
>>
>> http://www.tagmclaren.com/members/news/news77.asp
>>
>> I see nothing in the results that is inconsistent with the hypothesis
>> that there are no audible differences in both the cables and amps.

>Then you have not looked closely at and thought about some of this
>results/issues I have raised.

They have indeed been closely examined, and your distortions and wild
speculations have been debunked by several posters.

Now, since the TAG results are yet another nail in your 'everything
sounds different' coffin, exactly where is there one single *shred* of
evidence to support your position?

As a sad footnote, it must be observed that TAG-McLaren is one of the
very few 'high end' companies which brought genuine engineering talent
to bear on an attempt to improve the quality of music reproduction in
the home, backed by considerable financial muscle. Regrettably (but
perhaps predictably), they found that honesty and skill were *not* the
way to make money in so-called 'high end' audio. R.I.P.......
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Thomas A
August 1st 03, 05:12 PM
(Nousaine) wrote in message news:<1KlWa.36390$uu5.4253@sccrnsc04>...
> (Thomas A) wrote:
>
> ....some snip.....
>
some more snip...
>
> Where the "best scorers"
> >allowed to repeat the experiments in the main experiment?
>
> This did not appear to be part of the protocol for any but subject analysis was
> common.

My experience with blind tests is that results do vary among subjects.
I made a test where the CD players were not level-matched (something
just below 0.5 dB difference). Two persons, including myself, could
verify a difference in a blind test in my home setup (bass-rich music, no
test signals used), whereas two other persons were unable to do it.
Thus a retest of the best scorers in a test is something that might
be desirable.
>
> Many
> >questions but they may be relevant when making a meta-analysis.
>
> > In addition, have any of the experiments used test signals in the LF
> >range (around 15-20 Hz) and high-capable subwoofers (>120 dB SPL @ 20
> >Hz)?
>
> No. But there are no commercially available subwoofers that will do 120+ dB at
> 2 meters in a real room. I've tested dozens and dozens and the only ones with
> this capability are custom.

Agree that commercial subs with 120+ dB in room are hard to find.

>
> I've just curious since the tests from the Swedish
> >Audio-Technical Society frequently identifies amplfiers than roll of
> >in the low end using blind tests.
>
> The typical half power point for my stock of a dozen power amplifiers is 6 Hz.
> I've not seen the SATS data though.

Have you ever tested yourself? You have a quite bass-capable system if
I remember correctly. You would need music with very deep and
high-quality bass, or test tones, and perhaps a setup as described in
the link (a reference amp). The reference amp used by SATS for many
years has been the NAD 208. Other amps rated as good include e.g. the
Rotel RB1090. I am not sure at the moment which ones were rated
"not-so-good", but I can look it up. The method they use is a
"before-and-after" test.

http://www.sonicdesign.se/amptest.htm

>
> It might not be said to be an
> >audible difference since the difference is percieved as a difference
> >in vibrations in the body. I think I mentioned this before. Also for
> >testing CD players, have anybody used a sin2 pulse in evaluating
> >audible differences?
>
> Not that I know of.

Ok. Maybe somebody (Arny?) could present scope pictures of sin2 pulses
from various DACs and CD players? In addition, present them as audio
files on the pcabx site? It would be interesting to see whether those
players or DACs with distorted pulses (they do exist...) could be
revealed in a DBT, especially players with one-bit versus true multi-bit
conversion.
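
As a concrete illustration of the kind of sin2 test signal being suggested,
here is a minimal sketch that writes a single sin^2 pulse to a 16-bit WAV
file for playback through the players or DACs under test (the sample rate,
pulse width, level and file name are illustrative assumptions, not anything
specified in the thread):

import numpy as np
import wave

fs = 44100                       # sample rate in Hz (assumed)
width = 0.001                    # pulse width in seconds (assumed: 1 ms)
n = int(fs * width)
t = np.arange(n) / fs
pulse = np.sin(np.pi * t / width) ** 2        # one sin^2 (raised-cosine) pulse
signal = np.concatenate([np.zeros(fs // 10), pulse, np.zeros(fs // 10)])

pcm = np.int16(np.clip(signal, -1.0, 1.0) * 0.5 * 32767)   # ~-6 dBFS, 16 bit
with wave.open("sin2_pulse.wav", "wb") as w:   # hypothetical output file
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())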

Harry Lavo
August 2nd 03, 01:38 AM
"Stewart Pinkerton" > wrote in message
news:iRlWa.36496$uu5.4473@sccrnsc04...
> On Thu, 31 Jul 2003 03:15:31 GMT, "Harry Lavo" >
> wrote:
>
> >And would you explain why a significant level was reached on the "A"
cable
> >test with 96 trials? Was that "cherry picking". C'mon, Stewart, you
know
> >better. In fact the real issue here is: if one cable can be so readily
> >picked out, why can't the other be? What is it in the test, procedure,
> >quality of the cables, order bias, or what. Something is rotten in the
> >beloved state of ABX here!
>
> All that the above proves (especially since the results were marginal
> at best) is that there's a random distribution in the results which
> favoured A *on this occasion*. Run that trial again, and I'll put an
> even money bet that you'll have a slight bias in favour of B.
>
> Anyone familiar with Statistical Process Control is well aware that
> one swallow doesn't make a summer.
> --

This wasn't one swallow, Stewart...this was 96 swallows in one case and 84
in another. A whole pride of swallows. So the odds of such severe swings
in opposite directions (even if only marginal) are not very high over the
whole pride. We are not talking about a sample of two here.

Harry Lavo
August 2nd 03, 05:51 AM
"John Corbett" > wrote in message
news:hQlWa.37169$YN5.32913@sccrnsc01...
> In article <zVGVa.15179$Ho3.2323@sccrnsc03>, "Harry Lavo"
> > wrote:
>
> > Thomas -
> >
> > Thanks for the post of the Tag Mclaren test link (and to Tom for the
other
> > references). I've looked at the Tag link and suspect it's going to add
to
> > the controversy here. My comments on the test follow.
> >
> > From the tone of the web info on this test, one can presume that Tag set
out
> > to show its relatively inexpensive gear was just as good as some
> > acknowledged industry standards. But....wonder why Tag choose the 99%
> > confidence level? Being careful *not* to say that it was prechosen in
> > advance? It is because had they used the more common and almost
> > universally-used 95% level it would have shown that:
> >
> > * When cable A was the "X" it was recognized at a significant level by
the
> > panel (and guess whose cable probably would "lose" in a preference test
> > versus a universally recognized standard of excellence chosen as "tops"
by
> > both Stereophile and TAS, as well as by other industry publishers)
> >
> > * One individual differentiated both cable A and combined cables at the
> > significant level
> >
> > Results summarized as follows:
> >
> > Tag Mclaren Published ABX Results
> >
> > Sample 99% 95% Actual Confidence
> >
> > Total Test
> >
> > Cables
> > A 96 60 53 e 52 94.8% e
> > B 84 54 48 e 38 coin toss
> > Both 180 107 97 e 90 coin toss
> >
> > Amps
> > A 96 60 53 e 47 coin toss
> > B 84 54 48 e 38 coin toss
> > Both 180 107 97 e 85 coin toss
> >
> > Top Individuals
> >
> > Cables
> > A 8 8 7 6 94.5%
> > B 7 7 7 5 83.6%
> > Both 15 13 11 11 95.8%
> >
> > Amps
> > A 8 8 7 5 83.6%
> > B 7 7 7 5 83.6%
> > Both 15 13 11 10 90.8%
> >
> > e = extrapolated based on scores for 100 and 50 sample size
> >
> > [snip]
> >
> > I'd like to hear other views on this test.
> >
> Mr. Lavo, here are some comments on your numbers.
>
> The short story: your numbers are bogus.
>
> The long story follows.
>
> I don't know how you came up with critical values for what you
> think is a reasonable level of significance.
>
> For n = 96 trials, the critical values are:
> 60 for .01 level of significance
> 57 for .05 level of significance
> 53 for .20 level of significance
>
> for n = 84 trials, the critical values are:
> 54 for .01 level of significance
> 51 for .05 level of significance
> 47 for .20 level of significance
>
> for n = 180 trials, the critical values are:
> 107 for .01 level of significance
> 102 for .05 level of significance
> 97 for .20 level of significance
>
> for n = 8 trials, the critical values are:
> 8 for .01 level of significance
> 7 for .05 level of significance
> 6 for .20 level of significance
>
> for n = 7 trials, the critical values are:
> 7 for .01 level of significance
> 7 for .05 level of significance
> 6 for .20 level of significance
>
> for n = 15 trials, the critical values are:
> 13 for .01 level of significance
> 12 for .05 level of significance
> 10 for .20 level of significance
>
> The values you provide for what you call 95% confidence (i.e., .05 level
> of significance) are almost the correct values for 20% significance.
>
> You make much of an apparently borderline significant result,
> where the best individual cable test scores were 11 of 15 correct.
>
> If that had been the entire experiment, we would have a p-value
> of .059, reflecting the probability that one would do at least that
> well in a single run of 15 trials.
> That is, the probability that someone would score 11, 12, 13, 14, or 15
> correct just by guessing is .059; also, the probability that the score is
> less than 11 would be 1 - .059 = .941 for a single batch of 15 trials.
>
> But what was reported was the best such performance in a dozen sets of
> trials. That's not the same as a single run of 15 trials.
>
> The probability of at least one of 12 subjects doing at least as well as
> 11 of 15 is 1 - [ probability that all 12 do worse than 11 of 15 ].
> Thus we get 1 - (.941)^(12), which is about .52.
>
> So, even your star performer is not doing better than chance suggests
> he should.
>
> Mr. Lavo, your conjecture (that that the test organizers have tried
> to distort the results to fit an agenda) appears to be without support.
>
> Now for some comments on the TAG McLaren report itself.
>
> There are problems with some numbers provided by TAG McLaren, but they
> are confined to background material.
> There do not appear to be problems with the actual report of experimental
> results.
>
> TAG McLaren claims that you need more than 10 trials to obtain results
> significant at the .01 level, but they are wrong. In fact, 7 trials
suffice.
> With 10 trials you can reach .001.
> There is a table just before the results section of their report with some
> discussion about small sample sizes. The first several rows of that table
have
> bogus numbers in the third column, and their sample size claims are based
> on those wrong numbers.
> However, the values for 11 or more trials are correct.
>
> As I have already noted, the numbers used in the report itself appear to
> be correct.
>
> Some would argue that their conclusion should be that they found no
> evidence to support a claim of audible difference, rather than concluding
> that there was no audible difference, but that's another issue.
>
> You have correctly noted concerns about the form of ABX presentation (not
> the same as the usual ABX scheme discussed on rahe) but that does not
> invalidate the experiment.
>
> There are questions about how the sample of listeners was obtained.
>
> For the most part, TAG McLaren seems to have designed a test, carried it
> out, run the numbers properly, and then accurately reported what they did.
> That's more than can be said for many tests.
>

John -

I have explained that I made an error, and I thank you for pointing it out.
I also explained how and why, but that is an explanation, not an excuse.

Perhaps you could bring your statistical skills to bear on the Greenhill
test as reported in Stereo Review in 1983. The raw results were posted here
by me and by Ludovic about a year ago. As I recall, one of the participants
in that test did very well across several tests; I'd be interested in your
calculation of the probability of his achieving those results by chance.
This is not a troll; the mathematics of it are simply beyond me, and I tried
at the time to calculate the odds and apparently failed. The argument was:
outlier, or golden ear.
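
The critical values John Corbett quotes above, and his adjustment for
singling out the best of twelve listeners, can be checked the same way.
A minimal sketch, assuming scipy is available (binom.sf(k - 1, n, p) is
P(X >= k)):

from scipy.stats import binom

def critical_value(n, alpha, p=0.5):
    """Smallest k such that P(X >= k) <= alpha for X ~ Binomial(n, p)."""
    for k in range(n + 1):
        if binom.sf(k - 1, n, p) <= alpha:
            return k

for n in (96, 84, 180, 15):
    print(n, critical_value(n, 0.01), critical_value(n, 0.05))
# expected output: 96 60 57 / 84 54 51 / 180 107 102 / 15 13 12

# Chance that the *best* of 12 guessing listeners reaches 11 of 15 or better:
p_single = binom.sf(10, 15, 0.5)              # ~0.059 for a single listener
print(round(1 - (1 - p_single) ** 12, 2))     # ~0.52, as in the quoted analysis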

Harry Lavo
August 2nd 03, 06:02 AM
"Stewart Pinkerton" > wrote in message
...
> On Fri, 01 Aug 2003 04:31:13 GMT, "Harry Lavo" >
> wrote:
>
> >"Jim West" > wrote in message
> >news:fnaWa.19572$cF.7720@rwcrnsc53...
> >> In article <nR%Va.24664$YN5.23125@sccrnsc01>, Harry Lavo wrote:
> >> >
> >> > You mean not accepting the "received truth" without doing my own
analysis is
> >> > cherry picking, is that it Stewart?
>
> No, attempting to extract *only* those sub-tests which agree with your
> prejudices is 'cherry picking', and even then, you can' t make it
> stick on the numbers in that series of tests.
>
> >> > We are not allowed to point out
> >> > anonomlies and ask "why"? "how come"? "what could be causing this?"
> >>
> >> You are indeed cherry picking. With 12 individuals the probability that
> >> one would would appear to meet the 95% level is fairly high. Remember
> >> that you can expect 1 in 20 to meet that level entirely by random. It
> >> is not acceptable scientific practice to select specific data sub-sets
> >> out of the complete set. Otherwise you could "prove" anything by simply
> >> running enough trials and ignoring those you don't like. Check any peer
> >> reviewed journal.
> >>
> >Yep it is more probable than one in twenty. But not so high that we have
to
> >accept your assumption that he/she *IS* the one-in-twenty.
>
> Unfortunately for your speculation, that individual did poorly in the
> amplifier test. Similarly, the best scorers in the amp test did poorly
> on cables. IOW, the results were *random*, and did not show *any* sign
> of a genuine audible difference, despite your many and convoluted
> attempts to distort the data to fit your agenda.

Ah, the "received truth", better known as dogma. And why, pray tell,
Stewart, is your explanation any more valid than my supposition that the
cable test and the amp test may have revealed different attributes of
reproduction of that particular musical piece at the margin, and that the
two men were each more sensitive to one than the other.?

Harry Lavo
August 2nd 03, 06:02 AM
"Jim West" > wrote in message
et...
> In article <l2mWa.36515$Ho3.6598@sccrnsc03>, Harry Lavo wrote:
> > "Jim West" > wrote in message
> > news:fnaWa.19572$cF.7720@rwcrnsc53...
> >>
> >> You are indeed cherry picking. With 12 individuals the probability that
> >> one would would appear to meet the 95% level is fairly high. Remember
> >> that you can expect 1 in 20 to meet that level entirely by random. It
> >> is not acceptable scientific practice to select specific data sub-sets
> >> out of the complete set. Otherwise you could "prove" anything by simply
> >> running enough trials and ignoring those you don't like. Check any peer
> >> reviewed journal.
> >>
> >
> > Yep it is more probable than one in twenty. But not so high that we have
to
> > accept your assumption that he/she *IS* the one-in-twenty.
>
> Where did I assume that?
>

Any single participant scoring at a significant or near-significant level
is almost always dismissed here as an outlier. In some cases this seems
true...in others not so true. To say for sure that somebody is an outlier
means one is assuming a certainty of "1", in other words, 100% sure, as
opposed to saying there is a 25% chance that he may be an outlier, or a
50/50 chance. So implicit in the assertion, if the true probability for an
individual in a given trial is one-in-twenty and you have 12 subjects, is
that this "one" is the one-in-twenty, the one-in-twelve. Which may or may
not be the case. But it is certainly no sure thing.

> >> In any event, 11 out 15 has a probability of 5.9 % of occuring by
chance.
> >> That does not meet the 95 % confidence level. It would be rejected in
> >> a peer reviewed statistical study. (If that was the only data more
> >> trials would called for. But it wasn't the only data.)
> >>
> >
> > My mistake..you are correct. So the number is only 94.1%. But 12 out
of 15
> > yields 98.2%. Which is closer to the 95% standard?
>
> Irrelevant. You can't arbitrarily play with the numbers.
>

I'm not playing with numbers. When you establish a significance level you
are establishing a certainty level that the community considers acceptable
odds. A 95% confidence level reflects one-in-twenty odds. A 98% confidence
level reflects one-in-fifty odds. 94.1% reflects one-in-eighteen-and-a-half
odds. Which seems qualitatively closer to one-in-twenty to you?

With a large sample size, stating the number needed for significance is
okay; exceeding it by one doesn't distort results. But with small sample
sizes such as we are talking about here, there is a big statistical
difference between 11 of 15 and 12 of 15. And the best individual's eleven
of fifteen approaches the 95% standard.

> >
> >> Where are you getting your numbers? The data they posted on the
> >> web page showed that there were 52 correct answers in 96 trials.
> >> At least 52 correct answers will occur entirely by chance 23.8 %
> >> of the time. This is far from statistically significant.
> >
> > Oops! Late at night and the only book of binomial probabilites I had
handy
> > contained only raw data, not the cumulative error tables that I was used
to
> > from my previous work. So I goofed. I agree that the three-to-one odds
are
> > not statistically significant, and so the full panel results are null
for
> > both cables. My apologies.
> >
> > The issue with the individuals still stands, however.
>
> No it doesn't. Even with cherry picking (which is not valid anyway)
> no individual performed at a statistically signficant level. Period.
>

Again, repeat the dogma (and the mantra). Don't bother to think about what
the numbers really translate to. I don't know of any social researcher
requiring a 98% or 99% confidence level. Do you really believe 95% is
"significant" while 94.2% is not, in any meaningful, non-dogmatic way?

Stewart Pinkerton
August 2nd 03, 04:54 PM
On 2 Aug 2003 00:38:44 GMT, "Harry Lavo" > wrote:

>"Stewart Pinkerton" > wrote in message
>news:iRlWa.36496$uu5.4473@sccrnsc04...

>> All that the above proves (especially since the results were marginal
>> at best) is that there's a random distribution in the results which
>> favoured A *on this occasion*. Run that trial again, and I'll put an
>> even money bet that you'll have a slight bias in favour of B.
>>
>> Anyone familiar with Statistical Process Control is well aware that
>> one swallow doesn't make a summer.
>
>This wasn't one swallow, Stewart...this was 96 swallows in one case and 84
>in another. A whole pride of swallows.

That's a very poor flocking argument, since a proper evaluation of the
test, using *all* the results, shows that the amps and cables were
most definitely *not* sonically distinguishable. Further, those test
subjects who scored well on cables scored poorly on amps, and vice
versa, proving the point that they could not reliably distinguish any
differences.

> So the odds of such severe swings
>in opposite directions (even if only marginal) is not very high over the
>whole pride. We are not talking a sample of two here.

We are also not talking of any 'swings' that have statistical
significance. This has been pointed out to you several times by
several people, but you still insist on attempting to distort the
results to fit your own prejudices.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Stewart Pinkerton
August 2nd 03, 04:54 PM
On Sat, 02 Aug 2003 05:02:21 GMT, "Harry Lavo" >
wrote:

>"Stewart Pinkerton" > wrote in message
...

>> Unfortunately for your speculation, that individual did poorly in the
>> amplifier test. Similarly, the best scorers in the amp test did poorly
>> on cables. IOW, the results were *random*, and did not show *any* sign
>> of a genuine audible difference, despite your many and convoluted
>> attempts to distort the data to fit your agenda.
>
>Ah, the "received truth", better known as dogma.

No, the simplest and most likely explanation of the results.

> And why, pray tell,
>Stewart, is your explanation any more valid than my supposition that the
>cable test and the amp test may have revealed different attributes of
>reproduction of that particular musical piece at the margin, and that the
>two men were each more sensitive to one than the other.?

Occam's Razor. You are attempting to speculate that the dark side of
the Moon may indeed have large sections which are made of green
cheese. While possible, this is unlikely, and most reasonable people
would not put real money on it. The same applies to 'cable sound'.

--

Stewart Pinkerton | Music is Art - Audio is Engineering

Stewart Pinkerton
August 2nd 03, 04:55 PM
On Sat, 02 Aug 2003 05:02:40 GMT, "Harry Lavo" >
wrote:

>"Jim West" > wrote in message
et...

>> Irrelevant. You can't arbitrarily play with the numbers.
>>
>I'm not playing with numbers. When you establish a significance level you
>are establishing a certainly level that the community considers accepatble
>odds. 95% confidence level reflects one-in-twenty odds. A 98% confidence
>level reflects a one-in-fifty odds. 94.1% reflects a one-in-eighteen and a
>half odds. Which seems qualitatively closer to one-in-twenty to you.
>
>With a large sample size, stating the number needed for significance is
>okay..exceeding it by one doesn't distort results. But with small sample
>sizes such as we are talking about here, there is a big statistical
>difference beween 11 of 15 and 12 of 15. And the eleven of fifteen best
>approaches the 95% standard.

Fine. Now take the fact that there were a dozen volunteers, and you
find that you have a 12 out of 18.5 chance of achieving that result by
random chance. That's not much more than even odds. Now tell me that
this has any significance whatever.

>> > The issue with the individuals still stands, however.
>>
>> No it doesn't. Even with cherry picking (which is not valid anyway)
>> no individual performed at a statistically signficant level. Period.
>>
>Again, repeat the dogma (and the mantra). Don't bother to think about what
>the numbers really translate to. I don't know of any social researcher
>requiring odds of 98% or 99% confidence level. Do you really believe 95% is
>"significant" while 94.2% is not, in any meaningful, non-dogmatic way?

More importantly, since there were 12 test subjects, the *real*
statistical probability of one subject scoring 11 out of 15 is not
94.2%, but 65%. That is simply not significant by *any* standard,
especially when combined with the fact that those test subjects who
did well on cables did poorly on amps, and vice versa. It's all just
random chance, and all the cherry picking in the world won't change
that.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Nousaine
August 3rd 03, 06:07 AM
(Thomas A) wrote:

(Nousaine) wrote in message
>news:<1KlWa.36390$uu5.4253@sccrnsc04>...
>> (Thomas A) wrote:
>>
>> ....some snip.....
>>
>some more snip...
>>
>> Where the "best scorers"
>> >allowed to repeat the experiments in the main experiment?
>>
>> This did not appear to be part of the protocol for any but subject analysis
>was
>> common.
>
>My experience with blindtests is that results do vary among subjects.
>I made a test where the CD players where not level-matched (something
>just below 0.5 dB difference). Two persons, including myself, could
>verify a difference in blindtest in my home setup (bass-rich music, no
>test signals used), whereas two other persons were unable to do it.
>Thus a retest with the best-scorers in a test is something which might
>be desirable.

In a situation like this with 2 of 4 persons scoring significantly it's likely
that the overall score would be significant as well.

It was typical for analysis to be conducted on a subject-by-subject basis
to find significant individual scores. In tests where the overall result
was null there did not appear to be cases where individually positive
scores were covered by the totals.

I have offered retests in most of my personally conducted experiments. In
these I have experienced exactly one subject who asked to extend the number
of trials in an experiment, one who retook a test at my request, and one
who accepted an opportunity for a retest.

This covers perhaps a dozen formal tests and several dozen subjects.

>> Many
>> >questions but they may be relevant when making a meta-analysis.
>>
>> > In addition, have any of the experiments used test signals in the LF
>> >range (around 15-20 Hz) and high-capable subwoofers (>120 dB SPL @ 20
>> >Hz)?
>>
>> No. But there are no commercially available subwoofers that will do 120+ dB
>at
>> 2 meters in a real room. I've tested dozens and dozens and the only ones
>with
>> this capability are custom.
>
>Agree that commercial subs with +120 dB in room is hard to find.
>
>>
>> I've just curious since the tests from the Swedish
>> >Audio-Technical Society frequently identifies amplfiers than roll of
>> >in the low end using blind tests.
>>
>> The typical half power point for my stock of a dozen power amplifiers is 6
>Hz.
>> I've not seen the SATS data though.
>
>Have you ever tested yourself? You have a quite bass-capable system if
>I remember correctly. You would need music with very deep and
>high-quality bass, or test-tones, and perhaps a setup as described in
>the link (a reference amp).

I've measured the frequency response of the amplifiers and, yes, my
subwoofer will produce 120+ dB SPL from 12 to 62 Hz with less than 10%
distortion.

Perhaps not surprisingly, it takes a 5000-watt-capable amplifier to reach
these SPL levels, but I've never felt 'cheated of bass' when the system is
driven with 2 channels of a 250 wpc stereo amplifier with ordinary
programs, some of which have frequency content below 10 Hz.
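
For a rough sense of why the power requirement climbs so fast, a
back-of-envelope sketch (the 90 dB/W/m sensitivity figure and the
free-field inverse-square assumption are illustrative only; room gain,
which helps considerably at these frequencies, is ignored):

import math

def watts_needed(target_spl_db, sensitivity_db_1w_1m, distance_m):
    """Free-field estimate: SPL = sensitivity + 10*log10(P) - 20*log10(r)."""
    gain_db = target_spl_db - sensitivity_db_1w_1m + 20 * math.log10(distance_m)
    return 10 ** (gain_db / 10)

print(round(watts_needed(120, 90, 2)))   # ~4000 W for a hypothetical
                                         # 90 dB/W/m sub at 2 m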

>The reference amp used by SATS have during
>many years has been NAD 208. Other amps rated good is e.g. Rotel
>RB1090. I am not sure at the moment which ones that were rated
>"not-so-good", but I can look it up. The method they use is a
>"before-and-after" test.
>
>http://www.sonicdesign.se/amptest.htm
>
>>
>> It might not be said to be an
>> >audible difference since the difference is percieved as a difference
>> >in vibrations in the body. I think I mentioned this before.

Basically you have to have the woofer displacement and amp power to start.
I haven't conducted a formal test on this, but I'm guessing that speaker
displacement is a bigger issue than amplifier bandwidth. IOW, I'm guessing
that most modern SS amplifiers have enough low-frequency bandwidth to cover
modern programs and that the basic limiting factor is the subwoofer
transducer(s).

...snip remainder...

All Ears
August 7th 03, 04:14 PM
I can recommend blind testing beer, nobody can taste any difference within
the same type of beer anyway. Lots of money to save.... :)

Nousaine
August 13th 03, 05:17 AM
"All Ears" wrote"

> can recommend blind testing beer, nobody can taste any difference within
>the same type of beer anyway. Lots of money to save.... :)

So then why not for audio components?

All Ears
August 14th 03, 02:52 AM
"Nousaine" > wrote in message
et...
> "All Ears" wrote"
>
> > can recommend blind testing beer, nobody can taste any difference within
> >the same type of beer anyway. Lots of money to save.... :)
>
> So then why not for audio components?
>

I do not disagree with the basic idea of blind tests, done under the right
circumstances. A reality check is always good.

However, I must also admit that I like choosing things like cables from how
I think they sound in my system. This is despite the fact that I know I
will probably not be able to identify these cables in a blind test. But
then again, if changing something like a set of speaker cables can change
my perception of the sound from being harsh, bright, laid back, etc. into
something pleasing, then why not do it?

KE

chung
August 14th 03, 03:06 AM
All Ears wrote:

> I do not disagree about the basic idea of blind tests, done under the right
> circumstances. It is always good with a reality check.
>
> However, I must also admit that I like choosing things like cables, from how
> I think they sound in my system. This is despite the fact that I know that I
> probably not will be able to identify these cables in a blind test. But then
> again, if changing something like set of speaker cables can change my
> perception of the sound from being harsh, bright, laid back, etc. into
> something pleasing, then why not do it?

No other reason except you are spending money where it makes the least
audible difference. If money is no object (and perhaps even if it is),
do it if that makes you happy.

>
> KE
>

Richard D Pierce
August 14th 03, 03:29 PM
In article <cXB_a.139754$uu5.19479@sccrnsc04>,
All Ears > wrote:
>"Nousaine" > wrote in message
et...
>> "All Ears" wrote"
>>
>> > can recommend blind testing beer, nobody can taste any difference within
>> >the same type of beer anyway. Lots of money to save.... :)
>>
>> So then why not for audio components?
>>
>
>I do not disagree about the basic idea of blind tests, done under the right
>circumstances. It is always good with a reality check.
>
>However, I must also admit that I like choosing things like cables, from how
>I think they sound in my system. This is despite the fact that I know that I
>probably not will be able to identify these cables in a blind test. But then
>again, if changing something like set of speaker cables can change my
>perception of the sound from being harsh, bright, laid back, etc. into
>something pleasing, then why not do it?

Absolutely, why not? Certainly I, having been tarred with the
objectivist brush, have no objection whatsoever. You have a
method which works for you.

But it's with the elevation of personal preference to physical,
universal fact that the problem begins. It's one thing to say
"I like this cable because such-and-such"; it's a very different
thing to claim "this cable IS better because of the elimination
of interstrand charge jumping and the elimination of
intercrystalline micro-diode effects." And it's a different
kettle of fish again to start dragging out excuses like "your
system doesn't have enough resolution" and such when one's first
claims of "clear and obvious differences" are not borne out.

--
| Dick Pierce |
| Professional Audio Development |
| 1-781/826-4953 Voice and FAX |
| |

All Ears
August 14th 03, 07:51 PM
"Nousaine" > wrote in message
news:1lP_a.102001$cF.30984@rwcrnsc53...
> "All Ears" wrote:
>
> >
> >"Nousaine" > wrote in message
> et...
> >> "All Ears" wrote"
> >>
> >> > can recommend blind testing beer, nobody can taste any difference
within
> >> >the same type of beer anyway. Lots of money to save.... :)
> >>
> >> So then why not for audio components?
> >>
> >
> >I do not disagree about the basic idea of blind tests, done under the
right
> >circumstances. It is always good with a reality check.
> >
> >However, I must also admit that I like choosing things like cables, from
how
> >I think they sound in my system. This is despite the fact that I know
that I
> >probably not will be able to identify these cables in a blind test. But
then
> >again, if changing something like set of speaker cables can change my
> >perception of the sound from being harsh, bright, laid back, etc. into
> >something pleasing, then why not do it?
> >
> >KE
>
> Well; it could limit the sonic throughput of your system. Or deliver the
wrong
> market data. If hadn't spent the money on the wires you could have
purchased
> more recordings OR given more money to the orchestra of your choice.
>
> Buying the wire may have inadvertantley limited the availability of
> acoustically performed concerts in the long run.
>
> Of course my argument is just that. No one of us makes these decisions.
But we
> should be happy that the market as a whole is probably not directly
endorsing
> the cut-back of classical music performance.
>
> But it surely isn't helping.

I am actually using classical and acoustic music as a reference; I don't
think that my choice of cables has any negative impact on reproducing this
kind of music.

KE

>

All Ears
August 14th 03, 08:09 PM
"chung" > wrote in message
news:m8C_a.139809$uu5.20563@sccrnsc04...
> All Ears wrote:
>
> > I do not disagree about the basic idea of blind tests, done under the
right
> > circumstances. It is always good with a reality check.
> >
> > However, I must also admit that I like choosing things like cables, from
how
> > I think they sound in my system. This is despite the fact that I know
that I
> > probably not will be able to identify these cables in a blind test. But
then
> > again, if changing something like set of speaker cables can change my
> > perception of the sound from being harsh, bright, laid back, etc. into
> > something pleasing, then why not do it?
>
> No other reason except you are spending money where it makes the least
> audible difference. If money is no object (and perhaps even if it is),
> do it if that makes you happy.

Since the rest of the components are among the best money can buy, and I do
feel a difference from the choice of cables, I use the ones I like. Of
course, things must be put into the right perspective. A set of 3000 USD
speaker cables would be a poor investment in a 1000 USD system.

KE

>
> >
> > KE
> >
>

chung
August 14th 03, 09:36 PM
All Ears wrote:
> "chung" > wrote in message

>
> Since the rest of the components are among the best money can buy, and I do
> feel a difference from the choice of cables, I use the ones I like. Of
> course, things must be put into the right perspective. A set of 3000 USD
> speaker cables would be a poor investment in a 1000 USD system.
>
>

In that case, you obviously can, and will, do whatever you want to make
yourself happy.

However, I have this nagging thought that by buying expensive speaker
cables (assuming you are not buying the Home Depot or Radio Shack
brands), you are endorsing the high-end cable industry, and I can't
think of a less worthy segment of audio to endorse.

All Ears
August 15th 03, 02:22 AM
"chung" > wrote in message
...
> All Ears wrote:
> > "chung" > wrote in message
>
> >
> > Since the rest of the components are among the best money can buy, and I
do
> > feel a difference from the choice of cables, I use the ones I like. Of
> > course, things must be put into the right perspective. A set of 3000 USD
> > speaker cables would be a poor investment in a 1000 USD system.
> >
> >
>
> In that case, you obviously can, and will, do whatever you want to make
> you happy.
>
> However, I have this nagging thought that by buying expensive speaker
> cables (assuming you are not buying the Home Depot or Radio Shack
> brands), you are endorsing the high-end cable industry, and I can't
> think of a less worthy segment of audio to endorse.

I admit that I am not using cheap cables; on the other hand, I don't always
think that the most expensive are the best. I know that it is impossible to
prove the difference, and I could most likely not identify these cables in a
DBT. Does this make me a fool? Maybe, but then again, this is a hobby for
me, and I find great joy in combining equipment and listening to the result.
In a really good setup, it is often possible to take out one or two minor
components and replace them with "default components" without any serious
damage to the end result.

I do not endorse any part of the industry that does not, from my point of
view, deserve it.

KE

Nousaine
August 15th 03, 04:16 AM
"All Ears" wrote:

>"Nousaine" > wrote in message
>news:1lP_a.102001$cF.30984@rwcrnsc53...
>> "All Ears" wrote:
>>
>> >
>> >"Nousaine" > wrote in message
>> et...
>> >> "All Ears" wrote"
>> >>
>> >> > can recommend blind testing beer, nobody can taste any difference
>within
>> >> >the same type of beer anyway. Lots of money to save.... :)
>> >>
>> >> So then why not for audio components?
>> >>
>> >
>> >I do not disagree about the basic idea of blind tests, done under the
>right
>> >circumstances. It is always good with a reality check.
>> >
>> >However, I must also admit that I like choosing things like cables, from
>how
>> >I think they sound in my system. This is despite the fact that I know
>that I
>> >probably not will be able to identify these cables in a blind test. But
>then
>> >again, if changing something like set of speaker cables can change my
>> >perception of the sound from being harsh, bright, laid back, etc. into
>> >something pleasing, then why not do it?
>> >
>> >KE
>>
>> Well; it could limit the sonic throughput of your system. Or deliver the
>wrong
>> market data. If hadn't spent the money on the wires you could have
>purchased
>> more recordings OR given more money to the orchestra of your choice.
>>
>> Buying the wire may have inadvertantley limited the availability of
>> acoustically performed concerts in the long run.
>>
>> Of course my argument is just that. No one of us makes these decisions.
>But we
>> should be happy that the market as a whole is probably not directly
>endorsing
>> the cut-back of classical music performance.
>>
>> But it surely isn't helping.
>
>I am actually using classical and acoustic music as referance, I don't think
>that my choice of cables has any negative impact in reprodusing this kind of
>music.
>
>KE

Unless you have 'unlimited' resources it surely must have. I'm not ashamed
of spending $2000 to design and build the world's most accomplished
subwoofer.

Stewart Pinkerton
August 15th 03, 03:32 PM
On 14 Aug 2003 19:09:07 GMT, "All Ears" > wrote:

>"chung" > wrote in message
>news:m8C_a.139809$uu5.20563@sccrnsc04...
>> All Ears wrote:

>If money is no object (and perhaps even if it is),
>> do it if that makes you happy.
>
>Since the rest of the components are among the best money can buy, and I do
>feel a difference from the choice of cables, I use the ones I like. Of
>course, things must be put into the right perspective. A set of 3000 USD
>speaker cables would be a poor investment in a 1000 USD system.

They're also a darned poor 'investment' in a $30,000 system........

Of course, if phat cables make you feel all warm and phuzzy, why not?
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Nousaine
August 18th 03, 06:30 AM
"All Ears" wrote:

>> Unless you have 'unlimited' resources it surely must have. I'm not ashamed
>of
>> spending $2000 to design and build the world's most accomplished subwoofer
>
>When a guy building some of the best speakers in the world suggests me to
>use either brand A or brand B in cables, I think it would be silly at least
>not to try these?

So who does this?

>In order to build these speakers, his ears must be good and he must know
>what he is doing. So if I find that he is right, I'll buy the cables.

If this is all true, why doesn't his speaker come with the cables? Is he
purposely allowing some of his customers to have substandard sound with his
speakers?

>
>Why should you be ashamed in spending $2000 on building the world's most
>accomplished subwoofer? If you can prove this in a DBT, I'll buy one from
>you :)
>
>KE

You can't "buy" one; you'll have to make it. Want a scheme get a back issue of
the June '99 Sound & Vision. Want to know what "wires" are used? Ask me.

All Ears
August 18th 03, 05:38 PM
"Nousaine" > wrote in message
news:CvZ%a.140179$cF.51788@rwcrnsc53...
> "All Ears" wrote:
>
> >> Unless you have 'unlimited' resources it surely must have. I'm not
ashamed
> >of
> >> spending $2000 to design and build the world's most accomplished
subwoofer
> >
> >When a guy building some of the best speakers in the world suggests me to
> >use either brand A or brand B in cables, I think it would be silly at
least
> >not to try these?
>
> So who does this?

If you want names and brands, I can mail these to you.

>
> >In order to build these speakers, his ears must be good and he must know
> >what he is doing. So if I find that he is right, I'll buy the cables.
>
> If this is all true why doesn't his speaker come with the cables? Is he
> purposley allowing some of his customers to have substandard sound with
his
> speakers.

People are free to do what they like and want; however, advice is free and
optional.

Interestingly enough, different cables are recommended for SS and tubed
amplifiers.

What I find interesting here is that a guy capable of designing fantastic
speakers would recommend cables if there really is no difference. These
speakers have been refined and improved over many years, going
systematically through all components of the construction. I am sure that
most people would not be able to hear a difference from the individual
little improvements, but when it all adds up, it does give a noticeable
difference.

KE

>
> >
> >Why should you be ashamed in spending $2000 on building the world's most
> >accomplished subwoofer? If you can prove this in a DBT, I'll buy one from
> >you :)
> >
> >KE
>
> You can't "buy" one; you'll have to make it. Want a scheme get a back
issue of
> the June '99 Sound & Vision. Want to know what "wires" are used? Ask me.
>

chung
August 18th 03, 06:16 PM
All Ears wrote:

>
> People are free to do what they like and want, however advise is free and
> obtional.

Did the designer put his recommendation on cables in writing? Do you
also realize that there may be a conflict of interest if he recommends
Home Depot 12-gauge cables? You also should understand that a lot of
"audiophiles" will not respect him if he says that all cables of a
certain gauge are fine.

Maybe a reason he recommends a certain cable is to make sure that you
don't buy a cable with built-in tone controls? You should ask him
privately if Home Depot 12-gauge is good enough.

>
> Interestingly enough, different cables are recommended for SS and tubed
> amplifiers.
>
> What I find interesting here, is that a guy capable of designing fantastic
> speakers, would recommend cables if there really is no difference. These
> speakers has been refined and improved over many years, going systematically
> through all components of the construction. I am sure that most people would
> not be able to hear difference from the individual little improvements, but
> when it all adds up, it does give a notisable difference.
>

You should read John Dunlavy's comments on speaker cables. As you well
know, he designed some very highly respected speaker systems. Dick
Pierce is another professional who designs and develops speaker systems,
and you know his position on cables by now.

Richard D Pierce
August 18th 03, 09:08 PM
In article <5Z90b.182884$YN5.134772@sccrnsc01>,
All Ears > wrote:
>> You should read John Dunlavy's comments on speaker cables. As you well
>> know, he designed some very highly respected speaker systems. Dick
>> Pierce is another professional who designs and develops speaker systems,
>> and you know his position on cables by now.
>
>I have read a lot on this issue, and accepts the fact that in a DBT, nobody
>can hear a difference.
>
>It could have something to do with the fact that the brain has great
>difficulty in remembering a sound image, and that it tends to fill out the
>blanks or adapt the sound into something acceptable.

Come on, this same old lame, weak, stupid argument has been
raised time and time again, and it only gets lamer, weaker and
more stupid with each telling.

If you're asserting that the brain has great difficulty in
remembering a sound image, then, guess what, the brain is at
least equally hampered in remembering a sound image in either a
blind test or an informal test. One of the entire points of a
time-proximate test methodology is that it REDUCES the necessity
of the brain to remember fine acoustical details, which IS a
known problem.

And as to your hypothesis that the brain tends to fill out [sic]
the blanks or adapt the sound to something acceptable, it seems
that you are making a good case AGAINST "break-in" and any other
of a number of high-end claims.

You've just started your journey away from the Dark Side, my
son. :-)

--
| Dick Pierce |
| Professional Audio Development |
| 1-781/826-4953 Voice and FAX |
| |

chung
August 18th 03, 10:54 PM
All Ears wrote:
> "chung" > wrote in message
> news:ZR70b.182160$YN5.133562@sccrnsc01...
>> All Ears wrote:
>>
>> >
>> > People are free to do what they like and want, however advise is free
> and
>> > obtional.
>>
>> Did the designer put his recommendation on cables in writing? Do you
>> also realize that there may be a conflict of interest if he recommends
>> Home Depot 12-gauge cables? You also should understand that a lot of
>> "audiophiles" will not respect him if he says that all cables of a
>> certain gauge are fine.
>
> No the recommendation is verbal. I do however know that he would use dog
> ****, if it gave the desired result :)

But will he state that, or recommend that to you?

>
>> Maybe a reason he recommends a certain cable is to make sure that you
>> don't buy a cable with built-in tone controls? You should ask him
>> privately if Home Depot 12-gauge is good enough.
>
> The answer would be: "If it works for you, it's fine with me"

OK, so he really has no strong opinion on which cable should be used.
Makes more sense.

>> You should read John Dunlavy's comments on speaker cables. As you well
>> know, he designed some very highly respected speaker systems. Dick
>> Pierce is another professional who designs and develops speaker systems,
>> and you know his position on cables by now.
>
> I have read a lot on this issue, and accepts the fact that in a DBT, nobody
> can hear a difference.

Why would you accept that without trying?

If you were willing to accept that, then why not accept that there
perhaps is no *audible* difference?

>
> It could have something to do with the fact that the brain has great
> difficulty in remembering a sound image, and that it tends to fill out the
> blanks or adapt the sound into something acceptable.
>

Could it possibly have something to do with the fact that there is *no
audible difference*? Remember Occam's Razor?

Richard D Pierce
August 19th 03, 12:42 AM
In article <K%b0b.183243$Ho3.25518@sccrnsc03>,
All Ears > wrote:
>"Richard D Pierce" > wrote in message
>news:Nma0b.182506$Ho3.24946@sccrnsc03...
>> Come on, this same old lame, weak, stupid argument has been
>> raised time and time again, and it only gets lamer, weaker and
>> more stupid with each telling.
>>
>> If you're asserting that the brain has great difficulty in
>> remembering a sound image, then, guess what, the brain is at
>> least equally hampered in remembering a sound image in eithe a
>> blind test or an informal test. One of the wntire points of a
>> time-proximate test methodlogy is that it REDUCES the necessaity
>> of the brain to remember fine acoustical details, which IS a
>> known problem.
>>
>> And as to your hypothesis that the brain tends to fill out [sic]
>> the blanks or adapt the sound to something acceptable, it seems
>> that you are making a good case AGAINST "break-in" and any other
>> of a number of high-end claims.
>>
>> You've just started your journey away from the Dark Side, my
>> son. :-)
>
>Working hard on getting The Force with me....
>
>Frankly speaking, it is just hard not to be allowed to trust the things I
>experience.

Nobody is asking you to do otherwise, any more than you just
asked yourself to do. If the brain is, as you claim, incapable
of remembering a sound image, and is so capable of filling in
missing blanks and so adaptable, how can you trust it as a
constant?

>When you design speakers, do you DBT every little modification
>you do, or do you allow yourself to trust what you (think) you
>hear? Maybe you design from measurements only?

When I design speakers, I do so, most of the time, at the behest
of a client who is paying me money to make a speaker that has
appropriate commercial viability. That means that the
performance has to be first well defined in that market
context. That will put constraints on a variety of system
performance parameters, and these constraints further lead to a
coherent system specification, which includes axial and power
response requirements. From that, we get fairly strict
requirements for enclosure size and type, driver requirements
and so on.

With all that in hand, detailed system design can commence and
proceed to a point where a final design is pretty well
prescribed.

How much listening have I done to this point? Well, none is
possible because the system doesn't even exist yet. Yet, I can
pretty well predict HOW it will sound or, more importantly,
whether the sound will fit the client's requirements.

Generally, it's not necessary to build any more than one or two
versions of the prototype for listening purposes, because MOST
of the tweaking will have already been done in the design
process.

THIS is what separates the pros from the amateurs: most people
believe that speaker design is this long, almost endless
iterative process of build, tweak, rebuild, tweak again, build
yet again, tweak yet again, and so on. Well, it IS, if you have
neither the skill, experience, tools nor facilities to avoid the
process. The vast majority of amateurs and not a small number of
commercial speaker companies DO NOT HAVE ANY of the required
skill, experience, tools or facilities for efficient,
comprehensive and accurate design.

When I started many years ago, I made a lot of mistakes. Guess
what, I don't make those mistakes anymore. But if you have some
rank amateur who has NO test facilities to verify that the
changes he makes are what he THINKS they are, the guy is going
to end up wandering around blind. His ears ARE NOT GOING TO
HELP. You just now admitted, for all to see (including you, I
hope), that the brain is too willing to fill in gaps, too
adaptable, and too poor at remembering detailed sound images. In
that sense you are absolutely right. So how can one be trustful
of something YOU have declared so unreliable?

Let's look at an example. A person who has NO measurement or
design facility may tweak, by ear, the low-frequency tuning of a
system until it sounds like what he wants. How does this person
know that, in the process, he has not seriously compromised the
excursion-limited power handling of the system, or its
distortion?

Another example: since multi-way speakers are not minimum-phase
systems (and understand that "minimum phase" is a very precisely
defined term that is nonetheless POORLY understood in the audio
community), one can not take the response as shown by your
typical "real-time analyzer" or even the aural impression by ear
and use it to tweak driver equalization, simply because the
frequency response and phase response of non-minimum-phase
systems are NOT uniquely linked.

How many people in the high-end business, in YOUR store, know
what "minimum phase" means? I'll bet dollars to donuts the
answer is a very small number, yet it is a crucial concept in
loudspeaker design.
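
For readers unfamiliar with the term: a minimum-phase system is one whose
phase response is completely determined by its magnitude response (via the
Hilbert transform of the log magnitude); for anything else, such as a
typical multi-way loudspeaker, magnitude and phase are not uniquely linked.
A small numerical sketch of that point, assuming numpy and scipy are
available (the filter and FFT sizes are arbitrary): it builds a linear-phase
FIR low-pass, reconstructs the minimum-phase response with the same
magnitude, and shows that the two phase responses differ substantially.

import numpy as np
from scipy.signal import firwin

def min_phase_from_magnitude(mag, n_fft):
    """Minimum-phase spectrum with the given magnitude, via the real
    cepstrum (fold the cepstrum onto positive time, then exponentiate)."""
    log_mag = np.log(np.maximum(mag, 1e-12))
    cep = np.fft.irfft(log_mag, n_fft)
    fold = np.zeros(n_fft)
    fold[0] = 1.0
    fold[1:n_fft // 2] = 2.0
    fold[n_fft // 2] = 1.0
    return np.exp(np.fft.rfft(cep * fold, n_fft))

n_fft = 4096
h_lin = firwin(101, 0.3)            # linear-phase (non-minimum-phase) low-pass
H_lin = np.fft.rfft(h_lin, n_fft)
H_min = min_phase_from_magnitude(np.abs(H_lin), n_fft)

# Nearly identical magnitudes (small errors only near deep stopband nulls),
# very different phase:
print(np.max(np.abs(np.abs(H_min) - np.abs(H_lin))))
print(np.max(np.abs(np.unwrap(np.angle(H_lin)) - np.unwrap(np.angle(H_min)))))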

--
| Dick Pierce |
| Professional Audio Development |
| 1-781/826-4953 Voice and FAX |
| |

ludovic mirabel
August 19th 03, 05:48 AM
"All Ears" > wrote in message >...
> "chung" > wrote in message
> news:eWb0b.183204$Ho3.24948@sccrnsc03...
> > All Ears wrote:
> > > "chung" > wrote in message
> > > news:ZR70b.182160$YN5.133562@sccrnsc01...
> > >> All Ears wrote:
> > >>
> > >> >
> > >> > People are free to do what they like and want, however advise is free
> and
> > >> > obtional.
> > >>
> > >> Did the designer put his recommendation on cables in writing? Do you
> > >> also realize that there may be a conflict of interest if he recommends
> > >> Home Depot 12-gauge cables? You also should understand that a lot of
> > >> "audiophiles" will not respect him if he says that all cables of a
> > >> certain gauge are fine.
> > >
> > > No the recommendation is verbal. I do however know that he would use dog
> > > ****, if it gave the desired result :)
> >
> > But will he state that, or recommend that to you?
>
> Oh yes, he did already, although no actual recommendation is given :)
>
> >
> > >
> > >> Maybe a reason he recommends a certain cable is to make sure that you
> > >> don't buy a cable with built-in tone controls? You should ask him
> > >> privately if Home Depot 12-gauge is good enough.
> > >
> > > The answer would be: "If it works for you, it's fine with me"
> >
> > OK, so he really has no strong opinion on which cable should be used.
> > Makes more sense.
> >
> > >> You should read John Dunlavy's comments on speaker cables. As you well
> > >> know, he designed some very highly respected speaker systems. Dick
> > >> Pierce is another professional who designs and develops speaker
> systems,
> > >> and you know his position on cables by now.
> > >
> > > I have read a lot on this issue, and accepts the fact that in a DBT,
> nobody
> > > can hear a difference.
> >
> > Why would you accept that without trying?
> >
> > If you were willing to accept that, then why not accept that there
> > perhaps is no *audible* difference?
>
> That is what I hear, over and over again....
> >
> > >
> > > It could have something to do with the fact that the brain has great
> > > difficulty in remembering a sound image, and that it tends to fill out
> the
> > > blanks or adapt the sound into something acceptable.
> > >
> >
> > Could it possibly have something to do with the fact that there is *no
> > audible difference*? Remember Occam's Razor?
>
> Maybe.....
>
> >

I'm glad to see that someone doesn't let himself be intimidated by
people who believe that heaping up adjectives like "lame, weak,
stupid" will cow a heretic into confessing his sins prior to auto-da-fe.
The same people, or their kin, will deny, against common sense and
all evidence, that "proximate" (or whatever obfuscating substitute for
one-after-another sequence they choose) listening to A, then to B, and
then comparing X with A and B is not a problem for many. No training
needed, or is it? And how much of it? And who decides when enough is
enough?
On the other hand when someone like myself says that SIMULTANEOUS
comparison by the left-right method with random changes suits HIM
better he is told that his method is "fatally flawed" or something to
that effect. Says who?
Well- they do. And who are they? Those who say so of course.
Ludovic Mirabel

Audio Guy
August 19th 03, 06:01 PM
In article <u_h0b.195707$o%2.91075@sccrnsc02>,
(ludovic mirabel) writes:

> The same people, or their kin, will deny, against common sense and
> all evidence, that "proximate'(or whatever obfuscating substitute for
> one-after-another sequence they choose) listening to A , then to B and
> then comparing X with A and B is not a problem for many. No training
> needed or is it? And how much of it? And who decides when enough is
> enough?

Those who have actually tried it and have used it. You on the other
hand have not and so have no way to determine if they are correct or
not. You think that constant nay-saying against ABX and DBTs is all
that is needed to prove your point, when all it does is show you
really have no point at all.

> On the other hand when someone like myself says that SIMULTANEOUS
> comparison by the left-right method with random changes suits HIM
> better he is told that his method is "fatally flawed" or something to
> that effect. Says who?

Again, those who have actually done the tests and put in the research.

> Well- they do. And who are they? Those who say so of course.

And you are not doing the same "say so" without any proof that you
claim is being done by others? Where is your proof?

Arny Krueger
August 19th 03, 06:01 PM
"All Ears" > wrote in message
news:K%b0b.183243$Ho3.25518@sccrnsc03

> Frankly speaking, it is just hard not to be allowed to trust the
> things I experience.

The first thing that one needs to do is to understand that there are big
differences and little differences. Quantification is often the place where
many people let their ears lead them astray. Just because you can hear
differences between just about any loudspeakers doesn't mean that you can
hear differences between just about any cables or power amplifiers.

>When you design speakers, do you DBT every
> little modification you do, or do you allow yourself to trust what
> you (think) you hear?

Loudspeaker design involves relatively large differences.

>Maybe you design from measurements only?

Power amplifier design usually involves relatively small differences.

Leonard
August 19th 03, 06:01 PM
Ref: Blindtest issues...
For what its worth...

Use about any criteria you desire regarding cables,
amps, etc...also, if you feel better about it, put
a sign on each component with its name in blazing
qualities. It possibly will make you feel better
about the system and strangely, the whole thing
might well sound better. That is part of this whole
experience regarding audio...if your prejudices
are deep set from within..then give in to them
and enjoy the music. Be happy with the most
expensive equipment you can afford, it might well
be pretty good..mentally, you might come to
accept that fact..music will flourish, bloom
and all will be right with the Universe!!

All this "shadow-boxing" regarding "all is the
same" is interesting in this strange dimension
that surrounds Audio. Go with your own prejudices
and be happy. Very important to your Audio
happiness!

Leonard...

_______________________________________________________

On Sun, 27 Jul 2003 18:11:48 +0000, Thomas A wrote:

> Is there any published DBT of amps, CD players or cables where the
> number of trials are greater than 500?
>
> If there difference is miniscule there is likely that many "guesses"
> are wrong and would require many trials to reveal any subtle
> difference?
>
> Thomas
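
For what it is worth in numbers, the quoted question has a concrete
statistical side. A minimal sketch (scipy assumed available; the 60%
figure is just an illustrative hit rate for a real but minuscule
difference) of how many forced-choice trials a one-sided binomial test
against 50% guessing needs before such a small effect usually shows up:

    from scipy.stats import binom

    def trials_needed(p_true=0.60, alpha=0.05, power=0.80):
        # Smallest number of trials n such that the test comes out
        # significant (at level alpha) with probability >= power when
        # the listener really is correct with probability p_true.
        for n in range(10, 2001):
            k_crit = next(k for k in range(n + 1)
                          if binom.sf(k - 1, n, 0.5) <= alpha)
            if binom.sf(k_crit - 1, n, p_true) >= power:
                return n, k_crit
        return None

    n, k = trials_needed()
    print(f"{n} trials, at least {k} correct needed")
    # lands in the region of 150-170 trials

So a common 16-trial ABX session is nowhere near enough for a 60%
listener, while 500 trials is comfortably more than enough.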

Arny Krueger
August 19th 03, 06:01 PM
"ludovic mirabel" > wrote in message
news:u_h0b.195707$o%2.91075@sccrnsc02

> I'm glad to see that someone doesn't let himself be intimidated by
> people who believe that heaping up adjectives like "lame, weak,
> stupid" will cow a heretic into confessing his sins prior to auto-da-
> fe.

I did a search on the word "stupid" in the RAO archives and found that
since the beginning of the year, the leading user of this word is one
Ludovic Mirabel.

> The same people, or their kin, will deny, against common sense and
> all evidence, that "proximate' (or whatever obfuscating substitute for
> one-after-another sequence they choose) listening to A , then to B and
> then comparing X with A and B is not a problem for many.

Here we see an absolutely unbelievable claim - that "proximate" listening is
not the same as "one-after-another" listening.

> No training needed or is it?

Listening is at its core a form of physical and mental endeavor, like sports
or many professions. Training helps people do better at sports and
professions, and this is generally thought to be a good thing. Yet here we
see a tacit claim that somehow training is a bad thing.

> And how much of it? And who decides when enough is enough?

Who decides when a person has enough training - usually it's the person
themselves, right?

> On the other hand when someone like myself says that SIMULTANEOUS
> comparison by the left-right method with random changes suits HIM
> better he is told that his method is "fatally flawed" or something to
> that effect. Says who?

Just about anybody who has tried it. The problem with simultaneous listening
is the poor signal-to-noise ratio. Presuming that the levels are matched,
the highest possible SNR for simultaneous listening is 6 dB, and usually
the SNR associated with simultaneous listening is zero (0.0) dB.

> Well- they do. And who are they?

People who tried it and found that the 40-100 dB SNR of proximate listening
leads to more sensitive results than the 0-6 dB SNR of simultaneous
listening.

>Those who say so of course.

We observe that our leading promoter of simultaneous listening argues
frequently against blind listening. One unarguable benefit of blind
listening is that positive results can't be falsified. Somehow it all fits,
no?

All Ears
August 19th 03, 06:57 PM
"Richard D Pierce" > wrote in message
...
> In article >,
> All Ears > wrote:
> >"Richard D Pierce" > wrote in message
> ...
> >> How much listening have I done to this point? Well, none is
> >> possible because the system doesn't even exist yet. Yet, I can
> >> pretty well predict HOW it will sound or, more importantly, will
> >> the sound fit the client's requirements.
> >
> >Guess this is what they call experience, you know how it will sound
because
> >you did similar designs in the past.
>
> Actually, no, many of the designs are NOT "similar," though they
> ALL behave according to the same basic physical rules. Have yet
> to see even the hint of an exception, even in new and wonderful
> systems that claim to be operating on "entirely new principles."
>
> >>
> >> Generally, it's not necessary to build any more than one or two
> >> versions of the prototype for listening purposes, because MOST
> >> of the tweaking will have already been done in the design
> >> process.
> >
> >So the prototypes are mainly to satisfy the client?
>
> No, the prototypes are REQUIRED by the client.
>
> >Or will you establish a
> >DBT to determine the differences between the prototypes?
>
> The differences between the two are differences requested by the
> client. It is most often the case that they will hand me a set
> of requirements, I will design and build to that set of
> requirements, hit the target pretty dead on, and then the
> client, on listening to the prototypes, discovers that they did
> not understand their requirements very well. So in some
> respects, tweaking the prototypes is really the clients tweaking
> their own expectations.
>
> But you fail to understand where a double blind test is
> required. It is for the purpose of establishing whether a
> difference can be heard where the differences can be approaching
> the threshold capability of the listener.
>
> In that light, I would and have recommended that the CLIENT
> engage in a blind listening test where the client has requested
> a change that I feel is unwarranted. A classic example: in
> designing a network for a speaker that had to meet a specific
> cost target, the client insisted, against my recommendations,
> that the complex conjugate impedance compensator on the woofer
> use a very expensive film capacitor, where I had specifically
> designed in an NP electrolytic. The point where the capacitor is
> actually active is well outside of the passband of the woofer,
> the entire conjugate itself is effectively bypassed by a large
> film capacitor that is the shunt element in the network itself,
> and is further isolated by a fairly large value resistor, and
> that whole circuit is, itself, a shunt leg across the driver.
>
> Using an NP electrolytic instead of the film cap saved enough
> money to be spent on stuff that was REALLY important, yet the
> client insisted the speaker would sound dreadful without it. The
> client agreed to take one prototype with two networks, both in a
> box he couldn't see in, and listen to the two by whatever means
> he chose, save that he wasn't allowed to peek in the box. He and
> 12 other listeners listening over a period of a month were
> utterly unable to pick which was which.
>
> The blind test proved the assertion, which was supported by
> solid engineering data, that the cap in that position of the
> circuit had no audible effect (though there was a measurable
> difference). It further wasted a month of the client's valuable
> marketing time.
>
> >> THIS is what separates the pros from the amateurs: most people
> >> believe that speaker design is this long, almost endless
> >> iterative process of build, tweak, rebuild, tweak again, build
> >> yet again, tweak yet again, and so on. Well, it IS, if you have
> >> neither the skill, experience, tools or facilities to avoid the
> >> process. The vast majority of amateurs and not a small number of
> >> commercial speaker companies DO NOT HAVE ANY of the required
> >> skill, experience, tools or facilities for efficient,
> >> comprehensive and accurate design.
> >>
> >> When I started many years ago, I made a lot of mistakes. Guess
> >> what, I don't make those mistakes anymore. But if you have some
> >> rank amateur who has NO test facilities to verify the changes he
> >> makes are what he THINKS they are: the guy is going to end up
> >> wandering around blind. His ears ARE NOT GOING TO HELP. You just
> >> now admitted, for all to see (including you, I hope), that the
> >> brain is too willing to fill in gaps, too adaptable, and too
> >> poor at remembering detailed sound images. In that sense you are
> >> absolutely right. So how can one be trustful of something YOU
> >> have declared so unreliable?
> >
> >Getting carried away?......
>
> No, but you seem to be avoiding the implication of your original
> statement.

I really don't see that; I never claimed that ears can replace math or
measurements in engineering, which of course is a natural starting point. I
said that there are well-known limitations in what to expect from listening.
This is why I prefer to make judgements of equipment over a longer period of
time. A system that initially sounds fantastic may reveal flaws by extended
listening.

>
> >> Let's look at an example. A person who has NO measurement or
> >> design facility may tweak, by ear, the low frequency tuning of a
> >> system until it sounds like what he wants. How does this person
> >> know that, in the process, he has not seriously compromised the
> >> excursion-limited power handling of the system, or its
> >> distortion?
> >
> >He can surely not...
>
> Fine, then you can see, with this one example, precisely how
> untrustworthy the method you implicitly advocate is.

I think you are putting words into my mouth; which method am I supposed
to advocate?
>
> >> Another example: since multi-way speakers are not minimum-phase
> >> systems (and understand that "minimum phase" is a very precisely
> >> defined term that is nonetheless POORLY understood in the audio
> >> community), one can not take the response as shown by your
> >> typical "real-time analyzer" or even the aural impression by ear
> >> and use it to tweak driver equalization, simply because the
> >> frequency response and phase response of non-minimum-phase
> >> systems are NOT uniquely linked.
> >
> >I would assume that you are talking about phase coherent designs,
> >which I do know a little about. Dynamic linearity is also quite
> >important to get a good result.
>
> Well you fell into the trap, as almost every person in the
> high-end industry does, because they hear "phase" and then
> immediately conjure up advertising slogans and a pile of utter
> hooey written by high-end magazine wonks who haven't the
> faintest clue about what they are talking about.

Okay the term "Phase coherence" may be worn out or ill defined.
>
> The term "minimum-phase" has a very precise, well understood
> meaning, it seems, everywhere but in high-end audio. A "minimum-
> phase" system is ANY system whose amplitude response and phase
> response are unique transforms of one another. It DOES NOT mean
> "phase coherent,: because "phase coherent" is a vague,
> ill-defined term that is more advertising hooey than anything
> else.
>
> A 'minimum-phase' system is one where, if you take the frequency
> response of the system and mathematically calculate the phase
> response from that (which you can do via a mathematical
> operation called the 'Hilbert transform'), and compare it to the
> actual MEASURED phase of the system, they will be the same.
>
> If the two are NOT the same, then the system is non-minimum-
> phase and the difference between the two is called, cleverly
> enough, the system's 'excess phase.'
>
> Now, are 'minimum phase systems' inherently better? Well,
> 'minimum phase' is NOT a measure of 'quality' in the sense that
> you might want to know at all. For instance, every listener has
> a component in their system which is a non-minimum-phase
> component that they can never avoid. The non-minimum-phase
> behavior of this component is very large and quite variable. Are
> all high-end systems (indeed ALL audio systems) inherently bad
> because of it? Well, with such a scary term like 'non-minimum-
> phase,' you'd think so, right?
>
> Wrong, because the component I'm talking about is simply the air
> between you and the speakers. It's a simple perfectly linear
> delay, and simple linear delays have non-minimum phase behavior.
> How? Well, if you were to measure the response of this delay,
> it would have a flat frequency response. The phase response
> derived via the Hilbert transform is a flat phase response. But
> the MEASURED phase response is decidedly not flat. The reason is
> because of the delay. Yet no one is going to argue that the
> non-minimum-phase result of listening to the acoustical delay
> between the speakers and your ears is a bad thing. (the delay,
> by the way, has a linear phase response)
>
> But the point of all this is that non-minimum phase systems,
> which ALL speakers with crossovers are, do not necessarily behave
> in a manner that is intuitive. You might see a dip in the
> response and be inclined to diddle with the crossover to
> equalize the dip out, only to find that the dip is STILL there
> despite the clear change in equalization. Reverberant rooms
> behave the same way, having non-minimum-phase response. You see
> a big peak in the amplitude response due to some big
> resonance, your minimum-phase brain and your minimum-phase
> equalizer want to fill in that hole, and you push a knob down,
> only to find that the peak is STILL there, the excessive
> reverberation that causes the peak is STILL there, and now the
> whole system sounds WORSE because not only have you NOT
> corrected the non-minimum-phase problem with a minimum phase
> solution, you've also screwed up the direct-arrival frequency
> response a bunch.
>
> Oh, did you know that each band of a graphic equalizer has nice
> minimum phase response? That makes them better, right?

Setting up another trap, are we? :) Let's just go outside and listen to
one-way speakers. I think you gave the answer yourself, above.
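
The magnitude/phase link Pierce describes can be put into a few lines of
code. A minimal sketch (numpy assumed available; the sample rate, FFT
size and the two example systems are arbitrary illustrations) using the
standard cepstral reconstruction: for a genuinely minimum-phase system
the phase computed from the magnitude alone matches the measured phase,
while a pure delay, flat in magnitude, shows up entirely as excess phase:

    import numpy as np

    def min_phase_from_magnitude(mag, n_fft):
        # Homomorphic (cepstral) reconstruction: the phase a minimum-phase
        # system would have, given only |H(k)| on a full n_fft-point grid.
        log_mag = np.log(np.maximum(mag, 1e-12))
        cep = np.fft.ifft(log_mag).real          # real cepstrum
        fold = np.zeros(n_fft)
        fold[0] = cep[0]
        fold[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
        fold[n_fft // 2] = cep[n_fft // 2]
        return np.angle(np.exp(np.fft.fft(fold)))

    fs, n = 48000, 4096
    f = np.fft.fftfreq(n, 1.0 / fs)
    w = 2 * np.pi * f

    H_lp = 1.0 / (1.0 + 1j * f / 1000.0)   # one-pole lowpass: minimum phase
    H_delay = np.exp(-1j * w * 0.005)      # 5 ms delay (~1.7 m of air): flat magnitude

    for name, H in [("lowpass", H_lp), ("5 ms delay", H_delay)]:
        phi_meas = np.unwrap(np.angle(H))
        phi_min = np.unwrap(min_phase_from_magnitude(np.abs(H), n))
        k = np.argmin(np.abs(f - 1000.0))  # look at the bin nearest 1 kHz
        print(f"{name}: measured {np.degrees(phi_meas[k]):7.1f} deg, "
              f"from magnitude {np.degrees(phi_min[k]):7.1f} deg, "
              f"excess {np.degrees(phi_meas[k] - phi_min[k]):7.1f} deg")

The lowpass comes back with essentially zero excess phase; the delay
keeps its flat magnitude but carries roughly -1800 degrees at 1 kHz that
no amount of minimum-phase equalization can touch, which is Pierce's
point about the RTA-and-EQ approach.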

>
> >> How many people in the high-end business, in YOUR store, know
> >> what "minimum phase" means? I'll bet dollars to donuts the
> >> answer is a very small number, yet it is a crucial concept in
> >> loudspeaker design.
> >
> >I don't really have a store, but a listening room where interested people
> >can book an appointment.
>
> Whatever, if you told one of your visitors to define "minimum
> phase," what do you suppose they might say?

I could easily prove that most visitors are ignorant fools, but I choose not
to do this. I'd rather help them in finding a system that matches their
needs.

>
> --
> | Dick Pierce |
> | Professional Audio Development |
> | 1-781/826-4953 Voice and FAX |
> | |

Audio Guy
August 19th 03, 07:09 PM
In article <cJs0b.200002$o%2.92503@sccrnsc02>,
(Audio Guy) writes:
> In article <u_h0b.195707$o%2.91075@sccrnsc02>,
> (ludovic mirabel) writes:
>
>> I'm glad to see that someone doesn't let himself be intimidated by
>> people who believe that heaping up adjectives like "lame, weak,
>> stupid" will cow a heretic into confessing his sins prior to auto-da-
>> fe.
>
> I dare you to quote anyone using those terms about anyone on this
> group. I'm surprised the moderators let you claim such an outrageous
> thing.

Sorry, my mistake, you yourself have used those terms quite a few
times, so someone has used them in the past.

All Ears
August 19th 03, 08:45 PM
----- Original Message -----
From: "Nousaine" >
Newsgroups: rec.audio.high-end
Sent: Tuesday, August 19, 2003 7:00 PM
Subject: Re: Blindtest question


> "All Ears" wrote:
>
> >"Nousaine" > wrote in message
> >news:CvZ%a.140179$cF.51788@rwcrnsc53...
> >> "All Ears" wrote:
> >>
> >> >> Unless you have 'unlimited' resources it surely must have. I'm not
> >ashamed
> >> >of
> >> >> spending $2000 to design and build the world's most accomplished
> >subwoofer
> >> >
> >> >When a guy building some of the best speakers in the world suggests me
to
> >> >use either brand A or brand B in cables, I think it would be silly at
> >least
> >> >not to try these?
> >>
> >> So who does this?
> >
> >If you want names and brands, I can mail these to you.
>
> I'm interested.

Okay, I'll mail you some info.

>
> >> >In order to build these speakers, his ears must be good and he must
know
> >> >what he is doing. So if I find that he is right, I'll buy the cables.
> >>
> >> If this is all true why doesn't his speaker come with the cables? Is he
> >> purposley allowing some of his customers to have substandard sound with
> >his
> >> speakers.
> >
> >People are free to do what they like and want, however advise is free and
> >obtional.
>
> But he's purposely limiting the sound of his speakers when they're sold,
> knowing that they'd sound better with better wire and NOT supplying that
wire
> with the product?

No, actually not, because the recommended wires differ with the
various system configurations.

>
> >
> >Interestingly enough, different cables are recommended for SS and tubed
> >amplifiers.
> >
> >What I find interesting here, is that a guy capable of designing
fantastic
> >speakers, would recommend cables if there really is no difference.
>
> What I find surprising is that such a person wouldn't deliver his speakers
with
> the best-sounding wire as a package.

As answer above..
>
> These
> >speakers has been refined and improved over many years, going
systematically
> >through all components of the construction. I am sure that most people
would
> >not be able to hear difference from the individual little improvements,
but
> >when it all adds up, it does give a notisable difference.
> >
> >KE
>
> Ah the series, cumulative tweak argument. I put that to the test in "To
Tweak
> or Not." No cigar.
>
> >> >Why should you be ashamed in spending $2000 on building the world's
most
> >> >accomplished subwoofer? If you can prove this in a DBT, I'll buy one
from
> >> >you :)
> >> >
> >> >KE
> >>
> >> You can't "buy" one; you'll have to make it. Want a scheme get a back
> >issue of
> >> the June '99 Sound & Vision. Want to know what "wires" are used? Ask
me.
>
> And?

Okay, okay, tell me then.......... :)

KE
>

Bob Marcus
August 20th 03, 05:42 PM
"Arny Krueger" > wrote in message news:<5Ks0b.201187$YN5.140840@sccrnsc01>...
> "ludovic mirabel" > wrote in message
> news:u_h0b.195707$o%2.91075@sccrnsc02
> >
> > On the other hand when someone like myself says that SIMULTANEOUS
> > comparison by the left-right method with random changes suits HIM
> > better he is told that his method is "fatally flawed" or something to
> > that effect. Says who?
>
> Just about anybody who has tried it. The problem with simultaneous listening
> is the poor signal-to-noise ratio. Presuming that the levels are matched,
> the highest possible SNR for simultaneous listening is 6 dB, and usually
> the SNR associated with simultaneous listening is zero (0.0) dB.

Besides, Mr. Mirabel's comparisons were done only single-blind and
were not level-matched. The lack of level-matching makes the whole
thing a joke, really. Even a slight mismatch will result in an easily
perceivable image shift, a perception that, while real, is completely
meaningless if you ultimately intend to use the same cable in both
channels.
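
To put a size on the level-matching issue, a minimal sketch (hypothetical
cable resistances, a purely resistive 8-ohm load, and the amplifier
treated as a zero-impedance source) of how much level difference two
cables of different gauge can produce:

    import math

    def level_db(r_cable, r_load=8.0):
        # Simple voltage divider: amp (0-ohm source) -> cable -> load.
        return 20 * math.log10(r_load / (r_load + r_cable))

    r_a = 0.03   # ohms loop resistance, e.g. a short run of heavy zip cord
    r_b = 0.50   # ohms loop resistance, e.g. a long run of thin hookup wire

    print(f"Level difference: {level_db(r_a) - level_db(r_b):.2f} dB")
    # about 0.5 dB

Half a dB is easily enough to pull the image toward the louder channel,
and that shift will be heard as a real "difference between the cables"
even though it is nothing but series resistance.
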
>
> > Well- they do. And who are they?
>
> People who tried it and found that the 40-100 dB SNR of proximate listening
> leads to more sensitive results than the 0-6 dB SNR of simultaneous
> listening.
>
> >Those who say so of course.
>
> We observe that our leading promoter of simultaneous listening argues
> frequently against blind listening. One unarguable benefit of blind
> listening is that positive results can't be falsified. Somehow it all fits,
> no?

Well, even I'll defend Mirabel here. He hasn't argued against blind
testing (though he hasn't conceded its absolute necessity for
difference tests, either). What he's argued against is proximate
comparisons. He says he gets confused, and then he can't tell things
apart. (He's not confused, of course. He just refuses to believe what
his ears are telling him!)

bob

S888Wheel
August 20th 03, 06:45 PM
Tom said

>
>So he designs speakers with the knowledge that the speakers won't be optimal
>with the wire in the speaker and that there is no way of knowing in advance
>how
>the speaker will sound in any given system?

Every speaker manufacturer does this. Duh. Or should they include a
complimentary ideal room with their speakers?

All Ears
August 20th 03, 07:45 PM
snip

> >> >> So who does this?
> >> >
> >> >If you want names and brands, I can mail these to you.
> >>
> >> I'm interested.
> >
> >Okay, I'll mail you some info.
>
> Thanks in advance.

Info was mailed to you yesterday.

>
> >> >> >In order to build these speakers, his ears must be good and he must
> >know
> >> >> >what he is doing. So if I find that he is right, I'll buy the
cables.
> >> >>
> >> >> If this is all true why doesn't his speaker come with the cables? Is
he
> >> >> purposley allowing some of his customers to have substandard sound
with
> >> >his
> >> >> speakers.
> >> >
> >> >People are free to do what they like and want, however advise is free
and
> >> >obtional.
> >>
> >> But he's purposely limiting the sound of his speakers when they're
sold,
> >> knowing that they'd sound better with better wire and NOT supplying
that
> >wire
> >> with the product?
> >
> >No actually not, because the recommendation of wires are different with
the
> >various system configurations.
>
> So he designs speakers with the knowledge that the speakers won't be
optimal
> with the wire in the speaker and that there is no way of knowing in
advance how
> the speaker will sound in any given system? And that different wires will
> improve things on-site?
>
> If this is true, because he can't test all amplifiers, components and
wires
> ever used, he has no prior knowledge of how his speakers will sound in any
> given system other than the one used in design.
>
> If wires or amps made a big difference with any given speaker, HOW can any
> speaker designer make and sell any speaker in good conscience unless he
has a
> complete system specification in advance? And if the system contains any
device
> of the thousands available that were not part of the original
design/validation
> process, who's to say that it would be acceptable in any given system?
Surely not
> the maker or the point-of-sale representative.

There are actually two versions of these speakers, one optimized for SS, the
other for tubes. A few SS designs are, however, recommended to be used with
the "tube harness".

So this is one step further than I have seen anybody else go.

These are not over-the-counter type mass-produced speakers. Anybody
investing in this kind of equipment should do some homework before the
actual purchase. Since these speakers are rather revealing, connecting them
to the "wrong" equipment can be a rather unpleasant experience.

>
> >> >Interestingly enough, different cables are recommended for SS and
tubed
> >> >amplifiers.
> >> >
> >> >What I find interesting here, is that a guy capable of designing
> >fantastic
> >> >speakers, would recommend cables if there really is no difference.
>
> So how can he allow ANY cable that he has not personally validated be used
with
> his speakers? Shouldn't he refuse to sell product to someone who has not
> certified his cable kit?

I don't think so; some may prefer a zip cord.
It is not unusual for his customers to call and ask for advice if the
desired result cannot be obtained. It is usually possible for him to
pinpoint the possible problems, which could be cables in some situations.

>
> >> What I find surprising is that such a person wouldn't deliver his
speakers
> >with
> >> the best-sounding wire as a package.
> >
> >As answer above..
>
> No satisfactory answer there. IF these speaker require a given set(s) of
cables
> to perform optimally than why aren't they supplied with the speaker?

I see your point, but what about the amplifier, CD player, etc., etc.? It will
be a pretty big bulk package if taken to the full extent.

>
> >> These
> >> >speakers has been refined and improved over many years, going
> >systematically
> >> >through all components of the construction. I am sure that most people
> >would
> >> >not be able to hear difference from the individual little
improvements,
> >but
> >> >when it all adds up, it does give a notisable difference.
> >> >
> >> >KE
> >>
> >> Ah the series, cumulative tweak argument. I put that to the test in "To
> >Tweak
> >> or Not." No cigar.
> >>
> >> >> >Why should you be ashamed in spending $2000 on building the world's
> >most
> >> >> >accomplished subwoofer? If you can prove this in a DBT, I'll buy
one
> >from
> >> >> >you :)
> >> >> >
> >> >> >KE
> >> >>
> >> >> You can't "buy" one; you'll have to make it. Want a scheme get a
back
> >> >issue of
> >> >> the June '99 Sound & Vision. Want to know what "wires" are used? Ask
> >me.
> >>
> >> And?
> >
> >Okay, okay, tell me then.......... :)
> >
> >KE
>
> The Equalizer amp lead is a 36-foot rca cable that came in the box with an
> inexpensive subwoofer product. The power amp lead is a 'junk box' rca
rescued
> from my parts box. The 'internal' wiring of the 8 drivers was
accomplished
> with 16 gauge zip cord sold as car speaker wire as was the amplifier
> connection.
>
> Does this work? Well find a subwoofer that will deliver 120 dB+ from 12 to
62
> Hz at 2-meters in a real room. There are zero commercial products that
will do
> this.
>
> And it's not just raw acoustical power. The system is perfectly integrated
with
> the 7 channel surround system and provides better-then-high end full
bandwidth
> sound quality.

I have seen pictures of your design; it looks pretty impressive. Already
considering building one :)

KE
>

Stewart Pinkerton
August 20th 03, 08:30 PM
On Wed, 20 Aug 2003 16:49:07 GMT, (Nousaine) wrote:

>Does this work? Well find a subwoofer that will deliver 120 dB+ from 12 to 62
>Hz at 2-meters in a real room. There are zero commercial products that will do
>this.

Ever tried running a full level 7Hz tone through that system? :-)

Keep a bucket handy - if not a body bag!

>And it's not just raw acoustical power. The system is perfectly integrated with
>the 7 channel surround system and provides better-then-high end full bandwidth
>sound quality.

Well, that's the truth, for sure.............
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Penury
August 21st 03, 02:30 AM
On 21 Aug 2003 00:33:06 GMT, (Nousaine) wrote:
>Actually getting things dropped down the throat is an issue because I don't use
>a full cloth grille: too much flapping.. The answer to that dilemma is
>removable individual woofer mounting panels.
>
>It's also not clear from photos but the whole deal was designed so the system
>can be used with up to sixteen 8,10 12-inch woofers or eight 15 or 18-inch
>devices.

Where can we view these photos of your system ?

Bill Eckle

Vanity Web page at:
http://www.wmeckle.com

Nousaine
August 21st 03, 08:46 AM
Penury wrote:

>
>On 21 Aug 2003 00:33:06 GMT, (Nousaine) wrote:
>>Actually getting things dropped down the throat is an issue because I don't
>use
>>a full cloth grille: too much flapping.. The answer to that dilemma is
>>removable individual woofer mounting panels.
>>
>>It's also not clear from photos but the whole deal was designed so the
>system
>>can be used with up to sixteen 8,10 12-inch woofers or eight 15 or 18-inch
>>devices.
>
> Where can we view these photos of your system ?
>
>Bill Eckle

>Vanity Web page at:
>http://www.wmeckle.com

Photos were published in the June 1999 Sound and Vision in the article "The
Subwoofer That Shook The World." (not my title.)

I'm also developing a web-site, but it may be a while before it's operational.

Thomas A
August 21st 03, 05:57 PM
"All Ears" > wrote in message >...
> Snip
> > >> >Why should you be ashamed in spending $2000 on building the world's
> most
> > >> >accomplished subwoofer? If you can prove this in a DBT, I'll buy one
> from
> > >> >you :)
> > >> >
> > >> >KE
> > >>
> > >> You can't "buy" one; you'll have to make it. Want a scheme get a back
> issue of
> > >> the June '99 Sound & Vision. Want to know what "wires" are used? Ask
> me.
> >
> > And?
>
> Okay let me guess, 10 gauge wire?
>
> I admit cheating, saw a little info about your sub, looks like it will move
> some air! Although I would hate my baby boy to get stuck down there....not
> sure the woofers could handle this... :)
>
> KE

I'll just add that the cable recommended (by the constructor at Ino
Audio) for the most competent commercial speaker system I know of is
cheap EKK 2.5 mm2 cable. And this system can play at high SPL with very
low distortion.

http://www.studioblue.se/images/monitorsystem.jpg

Thomas

Thomas A
August 22nd 03, 05:28 AM
(Nousaine) wrote in message >...
> "All Ears" wrote:
>
> .....snips....
>
> ---- Original Message -----
> From: "Nousaine" >
> >Newsgroups: rec.audio.high-end
> >Sent: Tuesday, August 19, 2003 7:00 PM
> >Subject: Re: Blindtest question
> >
> >> >> >When a guy building some of the best speakers in the world suggests me
> to
> >> >> >use either brand A or brand B in cables, I think it would be silly at
> least
> >> >> >not to try these?
> >> >>
> >> >> So who does this?
> >> >
> >> >If you want names and brands, I can mail these to you.
> >>
> >> I'm interested.
> >
> >Okay, I'll mail you some info.
>
> Thanks in advance.
>
> >> >> >In order to build these speakers, his ears must be good and he must
> know
> >> >> >what he is doing. So if I find that he is right, I'll buy the cables.
> >> >>
> >> >> If this is all true why doesn't his speaker come with the cables? Is he
> >> >> purposley allowing some of his customers to have substandard sound with
> his
> >> >> speakers.
> >> >
> >> >People are free to do what they like and want, however advise is free and
> >> >obtional.
> >>
> >> But he's purposely limiting the sound of his speakers when they're sold,
> >> knowing that they'd sound better with better wire and NOT supplying that
> wire
> >> with the product?
> >
> >No actually not, because the recommendation of wires are different with the
> >various system configurations.
>
> So he designs speakers with the knowledge that the speakers won't be optimal
> with the wire in the speaker and that there is no way of knowing in advance how
> the speaker will sound in any given system? And that different wires will
> improve things on-site?
>
> If this is true, because he can't test all amplifiers, components and wires
> ever used, he has no prior knowledge of how his speakers will sound in any
> given system other than the one used in design.
>
> If wires or amps made a big difference with any given speaker, HOW can any
> speaker designer make and sell any speaker in good conscience unless he has a
> complete system specification in advance? And if the system contains any device
> of the thousands available that were not part of the original design/validation
> process, who's to say that it would be acceptable in any given system? Surely not
> the maker or the point-of-sale representative.
>
> >> >Interestingly enough, different cables are recommended for SS and tubed
> >> >amplifiers.
> >> >
> >> >What I find interesting here, is that a guy capable of designing
> fantastic
> >> >speakers, would recommend cables if there really is no difference.
>
> So how can he allow ANY cable that he has not personally validated be used with
> his speakers? Shouldn't he refuse to sell product to someone who has not
> certified his cable kit?
>
> >> What I find surprising is that such a person wouldn't deliver his speakers
> with
> >> the best-sounding wire as a package.
> >
> >As answer above..
>
> No satisfactory answer there. IF these speakers require a given set(s) of cables
> to perform optimally then why aren't they supplied with the speaker?
>
> >> These
> >> >speakers has been refined and improved over many years, going
> systematically
> >> >through all components of the construction. I am sure that most people
> would
> >> >not be able to hear difference from the individual little improvements,
> but
> >> >when it all adds up, it does give a notisable difference.
> >> >
> >> >KE
> >>
> >> Ah the series, cumulative tweak argument. I put that to the test in "To
> Tweak
> >> or Not." No cigar.
> >>
> >> >> >Why should you be ashamed in spending $2000 on building the world's
> most
> >> >> >accomplished subwoofer? If you can prove this in a DBT, I'll buy one
> from
> >> >> >you :)
> >> >> >
> >> >> >KE
> >> >>
> >> >> You can't "buy" one; you'll have to make it. Want a scheme get a back
> issue of
> >> >> the June '99 Sound & Vision. Want to know what "wires" are used? Ask
> me.
> >>
> >> And?
> >
> >Okay, okay, tell me then.......... :)
> >
> >KE
>
> The Equalizer amp lead is a 36-foot rca cable that came in the box with an
> inexpensive subwoofer product. The power amp lead is a 'junk box' rca rescued
> from my parts box. The 'internal' wiring of the 8 drivers was accomplished
> with 16 gauge zip cord sold as car speaker wire as was the amplifier
> connection.
>
> Does this work? Well find a subwoofer that will deliver 120 dB+ from 12 to 62
> Hz at 2-meters in a real room. There are zero commercial products that will do
> this.

Tom, Ino Audio produces subwoofer systems that deliver extremely high
SPL at low frequencies. The Ino Audio Profundus Z-4 vented system can
pump 80 liters of air peak to peak at 20 Hz.

Thomas
>
> And it's not just raw acoustical power. The system is perfectly integrated with
> the 7 channel surround system and provides better-then-high end full bandwidth
> sound quality.

Nousaine
August 22nd 03, 04:43 PM
(Thomas A) wrote:

>Tom, Ino Audio produce subwoofer systems that produce extremely high
>SPL at low frequencies. The Ino Audio Profundus Z-4 vented system can
>pump 80 liters of air peak to peak at 20 Hz.

Can you give me a reference? Displacement is the key element.

Thomas A
August 24th 03, 05:29 PM
(Nousaine) wrote in message news:<TSq1b.170244$cF.59291@rwcrnsc53>...
> (Thomas A) wrote:
>
> >Tom, Ino Audio produce subwoofer systems that produce extremely high
> >SPL at low frequencies. The Ino Audio Profundus Z-4 vented system can
> >pump 80 liters of air peak to peak at 20 Hz.
>
> Can you give me a reference? Displacement is the key element.

There is no reference on the web, other than the explanation of the
system given by Ingvar Öhman, the constructor of the speakers. I think
you've seen the thread before:

http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=a94e5p01ahs%40enews1.newsguy.com&rnum=2&prev=/groups%3Fq%3Dino%2Baudio%2Bgroup:rec.audio.high-end%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Drec.audio.high-end%26selm%3Da94e5p01ahs%2540enews1.newsguy.com%26 rnum%3D2

Thomas

Nousaine
August 24th 03, 09:09 PM
(Nousaine) wrote in message
>news:<TSq1b.170244$cF.59291@rwcrnsc53>...
>> (Thomas A) wrote:
>>
>> >Tom, Ino Audio produce subwoofer systems that produce extremely high
>> >SPL at low frequencies. The Ino Audio Profundus Z-4 vented system can
>> >pump 80 liters of air peak to peak at 20 Hz.
>>
>> Can you give me a reference? Displacement is the key element.
>
>There is no reference on the web, other than the explanation of the
>system given by Ingvar Öhman, the contructor of the speakers. I think
>you've seen the thread before:
>
>
>http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=a94e5p01ahs%40
enews1.newsguy.com&rnum=2&prev=/groups%3Fq%3Dino%2Baudio%2Bgroup:rec.audio
..high-end%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Drec.audio.high-end%26se
lm%3Da94e5p01ahs%2540enews1.
>newsguy.com%26rnum%3D2
>
>Thoma

Oh yeah I'd forgotten that. It seems that this system is just a fig-newton of
someone's imagination :)

80 liters is the equivalent of 14 small block Chevy V8s. My current system uses
8 23.5-mm Xmax 15-inch woofers and its maximum displacement is about 34 liters.
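
As a rough cross-check (an assumed Sd of about 0.085 m2 per 15-inch
woofer, a half-space monopole model, and no room gain, so a sanity check
rather than a measurement), the quoted excursion and the 120 dB figure
hang together:

    import math

    n_drivers = 8
    sd = 0.085        # m^2, assumed effective piston area of a 15-inch woofer
    xmax = 0.0235     # m, one-way linear excursion (23.5 mm)

    vd_pp = n_drivers * sd * 2 * xmax       # peak-to-peak volume displacement
    print(f"Displacement: {vd_pp * 1000:.0f} liters p-p")      # ~32 liters

    # Half-space monopole: p_peak = rho * w^2 * Vd_peak / (2 * pi * r)
    rho, f, r = 1.2, 20.0, 2.0
    w = 2 * math.pi * f
    p_peak = rho * w ** 2 * (vd_pp / 2) / (2 * math.pi * r)
    spl = 20 * math.log10((p_peak / math.sqrt(2)) / 20e-6)
    print(f"Half-space SPL at {f:.0f} Hz, {r:.0f} m: {spl:.0f} dB")  # ~119 dB

That lands in the same ballpark as the 34 liters above (the exact figure
depends on the assumed Sd), and room gain and boundary loading take the
119 dB the rest of the way past 120. The same arithmetic would put a
genuine 80-liter system roughly 7-8 dB ahead at 20 Hz, which is why the
displacement number is the one worth pinning down.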

Thomas A
August 25th 03, 05:09 PM
(Nousaine) wrote in message news:<9Y82b.247987$uu5.54965@sccrnsc04>...
> (Nousaine) wrote in message
> >news:<TSq1b.170244$cF.59291@rwcrnsc53>...
> >> (Thomas A) wrote:
> >>
> >> >Tom, Ino Audio produce subwoofer systems that produce extremely high
> >> >SPL at low frequencies. The Ino Audio Profundus Z-4 vented system can
> >> >pump 80 liters of air peak to peak at 20 Hz.
> >>
> >> Can you give me a reference? Displacement is the key element.
> >
> >There is no reference on the web, other than the explanation of the
> >system given by Ingvar Öhman, the constructor of the speakers. I think
> >you've seen the thread before:
> >
> >
> >http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=a94e5p01ahs%40
> enews1.newsguy.com&rnum=2&prev=/groups%3Fq%3Dino%2Baudio%2Bgroup:rec.audio
> .high-end%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Drec.audio.high-end%26se
> lm%3Da94e5p01ahs%2540enews1.
> >newsguy.com%26rnum%3D2
> >
> >Thoma
>
> Oh yeah I'd forgotten that. It seems that this system is just a fig-newton of
> someone's imagination :)

I'm not sure what you mean by fig-newton :), but I guess that it's hard
to believe the numbers given (which have been measured in the studio).
If you at any time travel to Sweden, it might be possible for you to
both hear and measure the system, located in Stockholm. But you would
need to contact Ino and/or Studio Blue for a demonstration.
>
> 80 liters is the equivalent of 14 small block Chevy V8s. My current system uses
> 8 23.5-mm Xmax 15-inch woofers and it's maximum displacement is about 34 liters.

Ino also sells systems called Profundus Infra-10, which use 10 x 15-inch
woofers in a closed-box configuration.

T