  #161   Keith Hughes  -  its Exactly the same

chris wrote:

"normanstrong" wrote in message


It's always dangerous to correct someone in public. Nevertheless, I
should point out that 16 out of 20 is not the same as 4 out of 5, for
statistical purposes. Not even close!

Norm Strong


I'm sorry Norm, it is EXACTLY the SAME.
The proportional chances of right or wrong answers are the same.
16/20 is the same ratio as 4/5.
Just ask any Turf Accountant. He wouldn't be seen dead writing 16/20
on his board.


Sorry Chris, but you're the one who's mistaken. Sure, 4/5
represents the same *ratio* as 16/20. That is, of course,
irrelevant in this context. Check out the "Central Limit Theorem".
As the sample size increases, the binomial distribution (which
we're using for calculation) more closely matches the Normal
distribution. IOW, the larger the sample size, the more
representative the sample is of the larger population, and thus
the narrower the confidence interval becomes. Ratios have no relevance.
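Keith's point can be checked numerically (a minimal sketch, not from the original posts): under the null hypothesis of pure guessing (p = 0.5 per trial), the chance of scoring at least 4 of 5 is far larger than the chance of scoring at least 16 of 20, even though the ratios are identical.

```python
from math import comb

def p_at_least(successes, trials, p=0.5):
    """Probability of getting at least `successes` correct out of
    `trials` by guessing alone (binomial tail probability)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

print(p_at_least(4, 5))    # 0.1875  -- could easily happen by chance
print(p_at_least(16, 20))  # ~0.0059 -- significant at the 1% level
```

Same 4:1 ratio of right to wrong answers, but only the 20-trial result rules out guessing at conventional significance levels.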

Keith Hughes
  #163   Harry Lavo  -  weakest Link in the Chain

"Bob Marcus" wrote in message
news:69lNb.69026$na.40064@attbi_s04...
Harry Lavo wrote:

If there is one thing that the
Oohashi et al test established (without a doubt, documented in the
article), it is that *music* as opposed to sound triggers an *emotional*
response from the brain that takes 20 secs or so to fully register and be
operational.

This is utter hogwash. IF Oohashi proved anything (and that is a huge if),
it is only that ultrahigh frequency noise in combination with music
produces some delayed reaction in the brain.


What the heck are you basing this on? So the ultrasonic overtones in his
experiment were "noise"?
The reaction was in the emotion-processing portion of the brain and
correlated statistically with higher qualitative music ratings. Not exactly
the usual result of "noise" now, is it?

I and others believe that many of the more subtle effects involving
high-end audio reproduction of music require this component to be present
for full perception.

But, according to Oohashi himself, it is NOT present in any consumer-grade
audio system. So, even if you were right about the weaknesses of standard
DBTs, Oohashi provides no basis for believing that they are insufficient
for comparing consumer audio components.


I am not talking here simply about high-frequency overtones. Where did you
get that idea?

snip

But you don't think a test showing that proto-monadic evaluation showed
statistically significant differences where simple dbt'ng did not, would
not get published.

Sure it would, if it happened. But I don't think any perceptual
psychologist would waste his time trying, because a proto-monadic test is
so obviously a terrible way to test for audibility. (It produces too many
false negatives to be a reliable test.) Even Oohashi didn't make that
claim for it.


Again, what basis are you drawing this conclusion from? They set up the
test *precisely* this way to better get at a definitive result. That means
he doesn't make a claim for it?

  #164   Harry Lavo  -  weakest Link in the Chain

"Nousaine" wrote in message
news:fBkNb.68743$na.39732@attbi_s04...
(Audio Guy) wrote:

In article gi4Nb.55157$8H.104911@attbi_s03,
"Harry Lavo" writes:


..snips to specific content.....


Thank you for remembering. However, you don't quite remember accurately.
The control test I proposed is similar to the Oohashi et al test in that
it is evaluative over a range of qualitative factors, done in a relaxed
state and environment, and with repeated hearing of full musical excerpts.
But it is not a duplicate of the test. The arguments for such tests have
been made here for years, long before the Oohashi article was published.


Let's remember that the Oohashi "article" was not "published" in a
peer-reviewed journal but exists as an AES Convention Preprint like my AES
Preprints. Therefore they carry exactly the same weight as anything I've
presented at a Convention, AES Chapter Presentation or in a consumer audio
magazine.


Tom, the Oohashi article I introduced here was peer reviewed and appeared
in the Journal of Neurophysiology. It was discussed extensively and I
repeated the link often. Here is the link once again:

http://jn.physiology.org/cgi/content/full/83/6/3548

I find it troubling that you, who make a living as a "testing expert",
would have ignored such an important discussion, regardless of what you may
have thought of it. Otherwise you would know it was a peer-reviewed and
well documented piece of research.

He and his researchers apparently reached the same conclusion...that it
was a more effective way of testing for the purposes under study...which
were semi-open-ended evaluations of the musical reproduction. Double
blind, by the way, as was my proposed test.


As long as it's double blind I have no problems with it, but I have
yet to see you propose any reason for rejecting the current DBTs
except that they don't agree with your sighted results.



That's because you continually ignore the arguments of myself and other
"subjectivists" here on RAHE and elsewhere. The reasons have been well
documented. You'll find my reiteration of them elsewhere in this thread.

Again you are joining Stewart in repeating an (at best) half-truth ad
nauseam here. Those firms use DBT's for specific purposes to listen for
specific things that they "train" their listeners to hear. That is called
"development".


Some of the blind tests I've conducted were testing the ability of
self-trained listeners to identify 'differences' that had been described
as "pretty amazing" by the subject using the very reference system and the
programs that he had claimed illustrated these differences clearly.

One in particular used as long as 5 weeks of in-situ training using the
actual test apparatus, and others have used the reference system of the
claimants.


You have cited that test before, but without details as to how the final
choices were made (e.g. what happens after the weeks of listening), it
continues to be an "anecdotal outlier" in your work.

In the food industry we used such tests for color, for texture and "mouth
feel", for saltiness and other flavor characteristics. That is a far cry
from a final, open-ended evaluation whereby when you start you have simply
a new component and are not sure what you are looking for / simply
listening and trying to determine if / how the new component sounds vs.
the old. It is called "open ended" testing for a reason...


Sure, but one can do such a test without sound and still get usable
results for marketing purposes.

I'm not interested in how the sound "feels in the mouth" unless that feel
is solely related to sound quality. If we wish to limit the assessment to
sound alone then bias-mechanisms are needed AND closely spaced
presentations are the most sensitive.


The above is simply gibberish. :-)

Yes, because it never ends, and so never gets to a result.


This is just rhetoric and is absolute nonsense! :-(


How so? If it is open-ended, it seems by definition not to have an end.
How do you tell when it has reached the end? When it gets the results
you want?


To move off-topic for a moment, I will say that most open sound quality
evaluations I've seen tend to do exactly that. Generally the 'evaluation'
starts with a presentation (salesman, conventioneer, audiophile, etc)
after which the presenter asks "whaddya think?" This is followed by some
discussion in which only a few present engage. Often they will report
different "things." Then the presenter will say "Let's try again with
BETTER material" and the process will continue, often with considerable
'negotiation' of differences, until there is a trial where the 'right'
answers are given and then the session is over. Sometimes there will be
continued program delivery and some hand-waving and back-slapping. But
I've seen the script played time and again.


We are talking about how audiophiles do comparisons in their own homes, not
with salesmen in a convention or showroom. Talk about straw men!

could reason to believe that conventional dbt or abx testing is not the
best way to do it and may mask certain important factors that may be more
apparent in more relaxed listening.


There's been no evidence that this is true, other than the non-published
Oohashi test.


There has never been a test run with a control test that would show this.
That is the crux of the matter.

Not that this isn't an interesting theorem. But there's no replicated
evidence to support it. On the other hand, ABX and other blind testing has
quite an interesting set of data on testing the audible quotient of
products that fit within the known limits of human acoustical thresholds.


Okay, what is the "known limit" of soundstage depth, Tom?


Mr Lavo seems to be arguing that some kind of 'test' that requires lengthy
evaluation under "open-ended" conditions would somehow be more suitable
for the average enthusiast. My guess is that no one except a few who may
own a store would have such an opportunity, and that a week-end or
overnight ABX test (perhaps with the QSC $600 ABX Comparator) or other
bias controlled test is not only more practical but quite implementable
for the truly curious.


Well, son, let's see if you have a fever. I'm sorry but I just broke my
last thermometer. We'll just use this barometric probe instead...it should
tell us the same thing. It is used to measure weather every day, and
weather includes temperature, doesn't it? :-)

And again, your only defense is that DBT results don't agree with
your opinions. DBTs can and have been done over long periods with
relaxed listening, and the results are the same.


On Tom's say so and without any detailed description or published data.


Actually that has been published in Audio and The $ensible Sound. You can
also trace data to the SMWTMS site through www.pcabx.com


No monadic or proto-monadic evaluative tests have ever been done as a
control. Without them you have *no* proof...simply assertion.


I'm talking about a rigorous, scientific test of dbt, control
proto-monadic, and sighted open ended testing. With careful sample
selection, proctoring, statistical analysis, and peer-reviewed
publication. Once that is done I will be happy to accept what conclusions
emerge. It hasn't been done, and so *assertions* that comparative dbt's
such as abx are appropriate are just that, assertions.


As is the hypothesis that this test will have some useful benefit. And I
think this poster is right; this test would be rejected if it failed to
support that theory.


Well, I'll give you the same reply I gave him. You are assuming it would
give the same results. Suppose it showed that proto-monadic testing
actually supported sighted evaluative listening tests; you don't think it
would be reported? Your assumption shows that you are operating from a
belief system, not as a proponent of truly scientific testing.


It's much more than an assertion, it has data to back it up, unlike
your assertions that audio DBTs are just "assertions".

The issue isn't so much the blind vs. sighted as it is the comparative vs.
the evaluative....and while a double blind evaluative test (such as the
proto-monadic "control test" I outlined) may be the ideal, it is so
difficult to do that it is simply not practical for home listeners
treating it as a hobby to undertake. So as audiophiles not convinced of
the validity of conventional dbt's for open-ended evaluation, we turn to
the process of more open ended evaluative testing as a better bet despite
possible sighted bias.


IOW he's willing to accept the increased (actually the nearly universal)
probability of false positives as being less important than limiting the
results to sound quality alone.


Yes, with the distinct possibility that the "sound quality only" test is
actually missing/masking some key evaluative areas.

Sounds like he's a marketeer, doesn't it?


Since you obviously share the opinion that marketers are dishonest, I'd
suggest you try a little test. I'll blindfold you and put you in a roomful
of people. Ask any and all of them questions that show honesty or
dishonesty, and then tell me who the marketers are. See if you can pick
them blind, eh?

  #165   Rusty Boudreaux  -  weakest Link in the Chain

"RBernst929" wrote in message
...
AND.... if science didn't progress.. I suppose the CD player as invented
by Sony and Philips back in the 80's would be the same as it is today?
Well, science "learned" that they could be made better by eliminating
jitter. Until discovered, no one knew about jitter.

Not true.

Sampling theory was first formulated mathematically by Nyquist in
1928 and formally proved by Shannon in 1949. Shannon's work was
directly applicable to PCM which was well established at the
time. T1 carrier used PCM to multiplex 24 analog telephone
signals into one digital channel (1.544Mbps) and transport the
signal for hundreds of miles.

The telecom industry solved the issues of PCM and jitter and deployed
vast networks based on them three decades before the audio CD. The
designers of the early CD players should be faulted for not utilizing the
prior public research or even basic undergraduate comm theory.

H. Nyquist, "Certain topics in telegraph transmission theory,"
Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928.
C. E. Shannon, "Communication in the presence of noise," Proc.
Institute of Radio Engineers, vol. 37, no.1, pp. 10-21, Jan.
1949.
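The T1 line rate quoted above follows directly from the standard parameters (a sketch; the 8 kHz sampling rate, 8 bits per sample, and one framing bit per 193-bit frame are standard T1/DS1 figures, not taken from the post itself):

```python
# T1/DS1 arithmetic: 24 PCM voice channels, each sampled at 8 kHz
# (Nyquist rate for ~4 kHz telephone bandwidth) with 8 bits per sample,
# plus one framing bit per frame (frames repeat at the 8 kHz sample rate).
channels = 24
sample_rate = 8_000       # Hz
bits_per_sample = 8

payload_bps = channels * sample_rate * bits_per_sample  # 1,536,000
framing_bps = sample_rate                               # 8,000
total_bps = payload_bps + framing_bps

print(total_bps)  # 1544000 -> the 1.544 Mbps T1 rate cited above
```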



  #166   Norman Schwartz  -  weakest Link in the Chain

"RBernst929" wrote in message
news:KRlNb.55268$nt4.85518@attbi_s51...
Is the sound of the song changed in any way or not? I say it is. I think
most audiophiles would agree with me. Otherwise, every cable on planet
Earth makes no difference at all. They all produce identical sound and we
are all fools falling for marketing hype. -Bob


And I say if your head is in a slightly different position, or you happen
to sit slightly lower or higher in your chair, the "song will be changed",
and this obviously isn't due to cables.

  #167   Audio Guy  -  weakest Link in the Chain

In article XygNb.66118$I06.302958@attbi_s01,
"L Mirabel" writes:
"Audio Guy" wrote in message
news:upXMb.43728$sv6.119711@attbi_s52...

(Audio Guy) wrote in message
news:upXMb.43728$sv6.119711@attbi_s52...

Why would he, you don't seem to take the mounds of evidence about the
results of DBTs seriously. And please no reams of text about 1.75 dB
differences, etc, etc, as I glaze over every time I read it, so don't
bother.


I copied below your long posting in its entirety.


Ok, but much of it isn't mine, but quotes of others posts.

I read it till "my eyes glazed over" and found a lot of assertions about
the "mounds" of DBTs that were done, somewhere out there, and proved how
valuable DBT is for hearing differences between components, and so on and
on.


We've had those mounds of DBTs reported here many times.

Just one thing is missing. Forget a "mound". Give *one single reference*
to an experiment showing that randomly selected audiophiles using it found
differences between comparable components. (Author, mag., year, volume,
page)


How about you give one single reference proving audio DBTs as practiced
don't work? Tom has listed several from articles he's written. I don't
have the info at hand, but I'm quite sure he'll provide them to you if
you just ask.

I promise not to glaze when faced with such proper, verifiable reference.

In case your training as an electronics eng. and psychologist did not
acquaint you with the difference between unsupported assertion and
experimental evidence, you could ask Mr. Chung and Mr. Pinkerton to help
out.


Again, Tom Nousaine has posted such info quite a few times; he's been
running audio DBTs for quite a long time now, and in fact he posted a
reference to one of his magazine articles just recently, but up till now
you've dismissed them due to a lack of peer review. I see now you no
longer require peer review before you'll accept the tests as valid, so it
seems this discussion has no reason to continue.
Comma.

  #168   Audio Guy  -  weakest Link in the Chain

In article ,
(Mkuller) writes:
(Audio Guy) wrote:
You still have yet to show why audio DBTs as typically practiced are
not valid tests. What is it that makes you so sure that they aren't?
It still boils down to "their results don't agree with (my) sighted
evaluations". If this is not true, please explain.


Ok, let's put aside the "DBTs don't agree with my sighted results" for a
moment. This is a question for all of those advocating DBTs for audio
equipment comparisons -

"How do you know DBTs work - i.e. do not get in the way of identifying
subtle audible differences - when they are used for audio equipment
comparisons using music?" After all, almost all of the published results
we've seen are null.

If your only answer is that you *believe* they work here because they are used
(differently) in research or psychometrics, that is not good enough. Your
belief in DBTs would seem to be based on a 'belief system' rather than actual
evidence. Right?


No, it is not just a 'belief system', since DBTs have proved useful in
all areas of scientific investigation, and audio DBTs agree with
psychoacoustic research into JNDs and into which actual differences in an
audio signal can be heard by humans.

If your answer is that you believe in science and DBTs are scientific, that
also is not sufficient. Where is your verifying test, or scientific proof that
they work in the way you are advocating their use?


Again, their results agree with psychoacoustic research data. Where is
your test that shows they don't work? All you have is one test that is
not an audio component test and has not been peer reviewed.

Not who else uses them or who else believes they work, but *proof* that
they work here in this area and don't obscure subtle details, or any
information for that matter.


You have it completely backwards. As mentioned above, the results agree
with the predicted results, so until someone proves they do "obscure
subtle details" that have only been identified in sighted listening, I
see no reason to doubt them. I myself have no interest in investigating
such things since I have no reason to doubt the results. You do, so you
do the tests to prove your 'belief system'. I mean, come on: on one side
we have DBTs that agree with psychoacoustic data, and on the other hand
we have uncontrolled sighted listening. It's a no-brainer IMO.

Don't have any verifying test or proof but you still believe? That's fine.
Believe what you want. Just don't try to convince anyone else on such
flimsy grounds. Give it up. Your arguments are not convincing anyone on
the other side - any more than ours are convincing you.


I explained the proof above. If you want to stick with your 'belief
system', be my guest.

  #169   Rusty Boudreaux  -  its Exactly the same

"chris" wrote in message
...
"normanstrong" wrote in message

It's always dangerous to correct someone in public. Nevertheless, I
should point out that 16 out of 20 is not the same as 4 out of 5, for
statistical purposes. Not even close!

Norm Strong

I'm sorry Norm it is EXACTLY the SAME.
the proportional chances of right or wrong answers are the same.
16/20 is the same ratio as 4/5
Just ask any Turf Accountant. He wouldn't be seen dead writing 16/20
on his board.


By that logic 1 out of 1 is the same as 1000 out of 1000. Random
effects make the first meaningless, since there's a 50:50 chance of
getting 1/1. The second has a very high level of confidence, since the
probability of randomly guessing 1000 out of 1000 is extremely small.
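The arithmetic behind this point (a minimal sketch, not from the original post): with a 50:50 guess on each trial, the probability of a perfect score shrinks exponentially with the number of trials.

```python
# Chance of a perfect score by pure guessing, p = 0.5 per trial:
p_perfect_1 = 0.5 ** 1        # 0.5       -- one coin flip; proves nothing
p_perfect_1000 = 0.5 ** 1000  # ~9.3e-302 -- astronomically unlikely

print(p_perfect_1, p_perfect_1000)
```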

  #170   chung  -  its Exactly the same

chris wrote:
"normanstrong" wrote in message

It's always dangerous to correct someone in public. Nevertheless, I
should point out that 16 out of 20 is not the same as 4 out of 5, for
statistical purposes. Not even close!

Norm Strong

I'm sorry Norm it is EXACTLY the SAME.
the proportional chances of right or wrong answers are the same.
16/20 is the same ratio as 4/5
Just ask any Turf Accountant. He wouldn't be seen dead writing 16/20
on his board.


All you have to do to see the fault in your logic is to answer this:
does getting the correct answer one out of one trial carry the same
weight as getting the correct answer 100 out of 100 tries?

Any turf accountant will say that you are 100% correct both times, right?



  #171   Stewart Pinkerton  -  weakest Link in the Chain

On 8 Jan 2004 22:28:02 GMT, Lawrence Leung
wrote:

(Stewart Pinkerton) wrote in
:

"Stewart Pinkerton" wrote in message
...
On Wed, 07 Jan 2004 00:15:29 GMT, Lawrence Leung
wrote:

Clearly, you fail to understand the *joke* of describing that cable in
'audiophile' terms, when it's just standard hookup wire.


Hey! Mr. Pinkerton, first of all I thought you were only stubborn about
that "wire is wire" nonsense.


It is of course not nonsense, but readily verifiable *fact*, despite
your fanciful imaginings.

But more than once you have accused people of not being able to
understand your English or your jokes; well, maybe you don't know how to
make people understand!


Since my Chinese is much worse than your English, I'll leave that one
alone.

That's pretty frickin' stoopid, considering that I already *have* the
cables I use, now isn't it? I can't sell homebrew cables. BTW, I *do*
use 'freeby' interconnects and speaker cable in the kitchen and PC
systems, which don't have the peculiar hardwired cabling associated
with the TV and main music systems.


You have no right to call people or even people's opinions "frickin'
stoopid". I never called your "wire is wire" theory "frickin' stoopid". I
think you owe him an apology.


You appear to think lots of things which have no rational basis.

In that case, you would then be "living" your
oft-professed standards, and you'd have a better base on which to
disparage high-rez.


What the heck has any of this to do with 'high-rez'?

Until then, it is indeed difficult to take some of your
protestations seriously.


What you *really* mean is that you're desperately reaching, because
you have no rational rebuttal to my 'high-rez' arguments.............


Have you published any science paper? Have you published any book?


I've published a technical overview of the problems of static
electricity in the electronics industry, but that's not really very
relevant to hi-fi. Have you published any science paper or book?

What has this to do with anything other than a desperate call to
authority? When will you provide *any* rebuttal to my arguments?

Regarding audiophile topic(s)? There are millions of real audiophiles in
the world who agree on cable theory; only you and your dozen experienced
audiophiles refuse to accept it.


Absolute nonsense, and it doesn't matter how many times you repeat
this nonsense, it will remain untrue. There *is* no 'cable theory' of
the sort you claim, and you have never shown *any* evidence that there
is. In fact, what cable theory does exist, suggests that wire is wire.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #172   Stewart Pinkerton  -  weakest Link in the Chain

On 14 Jan 2004 17:26:16 GMT, Lawrence Leung
wrote:

I can't help but say a few more things...


Will there be some point at which you say something which has a
rational basis?

Let's assume all you say about "wire is wire": all speaker cables are the
same, there is absolutely no difference among speaker cables as far as
listening is concerned. So forth and so on...


Yup, absolutely correct so far............

If it is such an absolute truth, an absolute fact, why are there more and
more cable companies in the market?


The same reason that there are more and more brands of shampoo -
marketing.

Look, I don't know about other countries, but in the USA every company
has to be responsible for what it claims its product(s) can do; you just
can't tell such an obvious lie (as you allege) that your speaker cable is
such and such.


This has been brought up before, and it seems to be one of those areas
where 'advertising puff' can be used as a defence, like all those claims
that some soap powders wash 'whiter than white'.

The only reason that I can think of is: whatever test you or someone else
proposes is not a valid test, or at least the test result is not a valid
result! I'm pretty sure that if a test like that were confirmed true, it
would simply bring the global multi-billion-dollar business to a halt, am
I right?


See P.T. Barnum for a clue to the existence of both the anti-ageing
and 'audiophile cable' industries.....................

When are you going to show any *evidence* to back all this posturing?

But so far, we haven't heard of any such kind of confirmation from any
"big" name research lab, university, government office, some place that
is more "trustable" than a group of people making a claim. I'm not saying
you are lying, it is just the doubt. Why $4000? The test can only verify
whether you can or cannot hear the difference, but it cannot directly
prove that these two cables are the same.


It can prove that not one single person can hear any difference.

How many people in the world listen to music, or have encountered special
speaker cables? How many people have already taken the test? What is the
ratio? Let's say there are altogether one million audiophiles in the
world who are or were using special speaker cables (hey, I believe Chung
and Pinky used special speaker cables before, but since they cannot tell
the difference, they gave up using them), and say one thousand of them
have taken the test and all of them failed to tell the difference. OK
then, you can say 1000/1000000 = 0.1% of the whole audiophile population
agreed that "wire is wire", but is it enough?


You can never prove a negative, but you can sure tell where to place
your bet.....................

You only have 0.1% of the facts to back up your claim, and you then call
it evidence? You then call it a fact? You then call it a truth? I would
say, promote the test more; wait until you have more than 95%. As Pinky
pointed out, you have to have 95% assurance to prove that a claim is
correct.


Unfortunately for your 'MacDonalds' argument, you have 0.1% of the
world population on one side, and not one single person on the other.
That is of course an *infinitely* high ratio in favour of 'wire is
wire'!!

Now can we stop playing silly semantic games?

Let me know when the 95% will come; perhaps by then I will also give up
my belief that "speaker cable makes the difference", but before that,
0.1% of doubt will not affect me or the other 99.9% of audiophiles in the
world.


Lawrence, that is an absolutely pathetic argument, since there are
certainly *not* a million users of 'audiophile' cable, and further,
not one single person has *ever* been able to tell the difference
under blind conditions. That means that ** 100% ** of people who have
actually tried to tell the difference, agree that 'wire is wire'.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #173   Stewart Pinkerton  -  weakest Link in the Chain

On Thu, 15 Jan 2004 01:16:31 GMT, (RBernst929)
wrote:

Here is the test. You sit in your own listening room and select a familiar
cut such as "Green Earrings" from Steely Dan. After the song finishes you
leave the room and have someone switch cables. You listen again to the
same song. Sound identical? Then you leave the room again and switch to a
different set of cables. Repeat the song. Sound different in any way? The
contention is that if you replace cable after cable from Kimber PBJs to
Nordost Valhallas there must be no detectable difference in the song. It
would be as if you had the CD player on replay. NO differences at all in
any of these cables, no matter what cables or how many or what
manufacturer. The sound that comes out of your speakers is identical
regardless. This is Mr. Pinkerton's assertion.


Correct.

My assertion is that I CAN detect a difference in the sound reaching my
ears from the different cables. The sound of the same song is changed
somehow. That is the only way to interpret cables, since no one "hears" a
cable, only the sound that comes out of the speakers. Is the sound of the
song changed in any way or not? I say it is. I think most audiophiles
would agree with me. Otherwise, every cable on planet Earth makes no
difference at all. They all produce identical sound and we are all fools
falling for marketing hype.


Correct, you are.

Now, care to prove me wrong? Tom is waiting..............
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #174   Stewart Pinkerton  -  weakest Link in the Chain

On Wed, 14 Jan 2004 21:28:30 GMT, (RBernst929)
wrote:

Mr. Pinkerton, I am not "strutting" in suggesting the test take place in
my system. This is a fair field for evaluation and I see NO reason why I
should put up any money to prove YOUR theory.


No one is asking you to, I am simply pointing out that your bluster is
unbacked. BTW, it's not *my* theory, it's plain fact.

The idea for the test is yours and the
apparent need for "proof" is yours.


Actually, it's Tom Nousaine's test, and the pool of money is put up by
around a dozen serious audiophiles who are fed up reading all this
nonsense about 'cable sound'. What is most noticeable is that, despite
the continuance of shrill claims about various aspects of 'cable
sound', not *one* person has stepped up to the plate to *prove* that
they can really hear a difference. That's in about four years. Does
that tell you something about the strength of their convictions?

Furthermore, my beef is with your dogmatic belief that I cannot detect
any differences in wires.


It's a belief based on lots of empirical evidence, and backed by the
body of physics and electrical engineering knowledge. Seems a fair
foundation to me................

Well, what proof do YOU provide that I can't?


See above.

I don't mean the other types of studies you tout, but a study showing
that I, ME, cannot hear differences in wire in my system?


See above. If you are so confident, why not take up Tom's offer, and
collect the easy money?

In addition, I can spell "intelligence" just fine. It is a pity you
cannot refrain from making gratuitous disparaging comments to fellow
audiophiles. Perhaps one day you will make a mistake?


I make them all the time, my comment was intended to point up that one
must be extra careful when disparaging others.............

Now, what's your problem with the 'cable challenge', given that it's
in your own reference system with your own choice of music?
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #176   Buster Mudd  -  weakest Link in the Chain

(Stewart Pinkerton) wrote in message ...

Let me make sure I understand the above: you're saying that in the
hypothetical instance where the 2 cables in the DBT didn't level match
at 100Hz and/or 10kHz when they did level match at 1kHz, you would be
compelled to actively match them via spectral equalization before
proceeding with the test?


No, I'd just add a few passive components to some zipcord, to achieve
the same FR imbalance as the 'audiophile' cable. It would remain a
requirement of the test that matching is achieved, since this test is
not about whether you can hear the effect of rolled-off treble!


I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?

Many here seem to agree that boutique cable designers (either
willfully or not) are often selling, in effect, "passive tone
controls". It does not strike me as far-fetched that one of the more
likely differences between a multi-kilobuck designer cable &
$0.25/foot off-the-shelf zip cord would be frequency response
anomalies. Irrespective of various cable manufacturer's claims that
their product is "transparent", and irrespective of all the other
mythical voodoo & flooby-dust that some audiophiles claim they hear,
wouldn't boutique cable advocates' claims that they hear a difference
between Brand A & Brand B be easily attributable to these response
anomalies?

I certainly understand how critically important level matching at a
single reference frequency is...but if having done so, frequency
response anomalies are evident, one would have to concede that the two
products under test probably sound different. (And then subsequent 16
out of 20 ABX testing would only serve to test the discriminatory
abilities of the test subject, which is a whole 'nother issue.) What
is your justification for negating these potential differences?
  #177   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default weakest Link in the Chain

On Thu, 15 Jan 2004 08:27:50 GMT, "Harry Lavo"
wrote:

"Bob Marcus" wrote in message
news:69lNb.69026$na.40064@attbi_s04...
Harry Lavo wrote:

If there is one thing that the
Oohashi et al test established (without a doubt, documented in the article)
is that *music* as opposed to sound triggers an *emotional* response from
the brain that takes 20 secs or so to fully register and be operational.

This is utter hogwash. IF Oohashi proved anything (and that is a huge if),
it is only that ultrahigh frequency noise in combination with music produces
some delayed reaction in the brain.


What the heck are you basing this on? So the ultrasonic overtones in his
experiment were "noise"?
The reaction was in the emotion-processing portion of the brain and
correlated statistically with higher qualitative music ratings. Not exactly
the usual result of "noise" now, is it?


Actually, when you consider the effect of low-level masking noise
present in LP playback, yes it is.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #178   Report Post  
Harry Lavo
 
Posts: n/a
Default weakest Link in the Chain

"Stewart Pinkerton" wrote in message
...
On 15 Jan 2004 06:20:20 GMT, "Harry Lavo" wrote:

"Nousaine" wrote in message
...
"Harry Lavo" wrote:

Maybe so, but the considerable increase in transparency between equipment
of the early 80's seems mostly attributable to the passive components. Not
much else has changed in amplifiers, for example. And yet the cumulative
effect of improved (from a sound standpoint) capacitors and low-noise
resistors has been a marked increase in transparency.

OSAF. I have a 1976 vintage Heathkit AA-1640 200-watt stereo amplifier. I
have compared this unit to my Bryston 4Bs and found no audible difference
between them with ABX testing. Indeed that particular Heathkit was
initially used to verify (or not) the difference between film and
electrolytic coupling capacitors circa 1980.


Well, then, perhaps time to compare it to a recent Krell, or BAT50, or ARC.

I have a mid '80s amplifier which sounds exactly the same as a good
modern amplifier. You are talking nonsense, there is *no* 'increase in
transparency' with modern amps, because amps were pretty much a done
deal by the mid '80s. It is also *impossible* for two cables with the
same resistance to have any effect on bass.


I think if you read my post, I said "early '80's" not "mid-eighties". The
early '80's is when most companies switched to using better passives. So
yes, by the mid-late '80's most amps sounded better. I would have been more
prudent to say late '70's. And of course it would be a moot point if you
didn't misquote me.

  #179   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default weakest Link in the Chain

On 15 Jan 2004 17:15:26 GMT, (Buster Mudd) wrote:

(Stewart Pinkerton) wrote in message ...

Let me make sure I understand the above: you're saying that in the
hypothetical instance where the 2 cables in the DBT didn't level match
at 100Hz and/or 10kHz when they did level match at 1kHz, you would be
compelled to actively match them via spectral equalization before
proceeding with the test?


No, I'd just add a few passive components to some zipcord, to achieve
the same FR imbalance as the 'audiophile' cable. It would remain a
requirement of the test that matching is achieved, since this test is
not about whether you can hear the effect of rolled-off treble!


I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?


Of course they don't count, because it is *not* such simplistic FR
differences which are the basis of all the claims made for such
cables. Also, did you fail to get the point that I can match the FR of
*any* 'audiophile' cable for a couple of dollars?
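The "few passive components" claim is easy to sanity-check numerically. Here is a minimal sketch (Python; every component value is an illustrative assumption, not a measurement of any real cable) of the level at a purely resistive 8-ohm load fed through a cable modelled as a series resistance and inductance:

```python
import math

def level_db(f_hz, r_cable, l_cable, z_load=8.0):
    """Level (dB) at a purely resistive speaker load fed through a cable
    modelled as a series resistance (ohms) and inductance (henries)."""
    z_cable = r_cable + 1j * 2 * math.pi * f_hz * l_cable
    return 20 * math.log10(abs(z_load / (z_load + z_cable)))

# Illustrative values, not measurements: roughly 3 m of 12 AWG zip cord
# (about 0.03 ohm loop resistance, 0.6 uH) versus a deliberately
# high-inductance cable (5 uH) standing in for a "passive tone control".
for f in (100, 1_000, 10_000, 20_000):
    print(f"{f:>6} Hz: zip {level_db(f, 0.03, 0.6e-6):+.3f} dB, "
          f"high-L {level_db(f, 0.03, 5e-6):+.3f} dB")
```

Into a resistive load the divergence stays in the hundredths of a dB even at 20 kHz, which is why a series coil of a few microhenries (a couple of dollars of parts) can mimic any such rolloff; a real loudspeaker's reactive impedance would change the exact numbers but not the scale.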

Many here seem to agree that boutique cable designers (either
willfully or not) are often selling, in effect, "passive tone
controls". It does not strike me as far-fetched that one of the more
likely differences between a multi-kilobuck designer cable &
$0.25/foot off-the-shelf zip cord would be frequency response
anomalies.


Only true of some of the really weird ones that have those little
boxes on the ends.

Irrespective of various cable manufacturer's claims that
their product is "transparent", and irrespective of all the other
mythical voodoo & flooby-dust that some audiophiles claim they hear,
wouldn't boutique cable advocates' claims that they hear a difference
between Brand A & Brand B be easily attributable to these response
anomalies?


That's certainly possible, and isn't it a hoot that you can achieve
the same effect for a couple of dollars! :-)

I certainly understand how critically important level matching at a
single reference frequency is...but if having done so, frequency
response anomalies are evident, one would have to concede that the two
products under test probably sound different. (And then subsequent 16
out of 20 ABX testing would only serve to test the discriminatory
abilities of the test subject, which is a whole 'nother issue.) What
is your justification for negating these potential differences?


See above.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #180   Report Post  
normanstrong
 
Posts: n/a
Default weakest Link in the Chain

"S888Wheel" wrote in message
...
Here is the test. You sit in your own listening room and select a familiar
cut such as "Green Earrings" from Steely Dan. After the song finishes you
leave the room and have someone switch cables. You listen again to the
same song. Sound identical? Then you leave the room again and switch to a
different set of cables. Repeat the song. Sound different in any way? The
contention is that if you replace cable after cable from Kimber PBJs to
Nordost Valhallas there must be no detectable difference in the song. It
would be as if you had the CD player on replay. NO differences at all in
any of these cables, no matter what cables or how many or what
manufacturer. The sound that comes out of your speakers is identical
regardless. This is Mr. Pinkerton's assertion. My assertion is that I CAN
detect a difference in the sound reaching my ears from the different
cables. The sound of the same song is changed somehow. That is the only
way to interpret cables since no one "hears" a cable, only the sound that
comes out of the speakers. Is the sound of the song changed in any way or
not? I say it is. I think most audiophiles would agree with me. Otherwise,
every cable on planet Earth makes no difference at all. They all produce
identical sound and we are all fools falling for marketing hype. -Bob
Bernstein.


Well that makes for an easy test if you have a friend who can help. You
can even do it DB. Use two cables that you think sound different. Listen
to the first one, then leave the room. Have your friend flip a coin: heads
he changes cables, tails he doesn't. Then have the friend leave the room.
You come back in and listen. Decide if the cables were changed or not.
Repeat this 20 times. Have your friend mark every trial as different or
the same. You mark every trial different or the same. Both keep track
separately. Of course the cables must be out of sight.


Having done this before, I'd make the following changes:

Always disconnect the cables before flipping the coin. In this
fashion it will be impossible to draw any conclusions from the length
of time required to make the switch.

I'd do 18 trials, instead of 20, since the probability of 14 correct
is 4.8%--very close to the statistical requirement of 5%.

A 3rd person should be present to verify the cable connection,
immediately AFTER the subject has made his guess. This 3rd person
should have no contact with the subject during the period of the test.
The subject's wife is a good choice for the 3rd person.

If you want to cross all the t's and dot all the i's, I recommend a
4th person to proctor the subject. In any case, there should never be
a person who knows the actual connection that is within range of the
subject.

Norm Strong
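This protocol's false-positive behaviour is easy to check by simulation. A minimal sketch (Python; the 15-of-20 criterion is the usual one-sided 5% cutoff for 20 trials, and the 0.8 hit rate is an illustrative assumption, not a figure from this thread):

```python
import random

def simulate(trials=20, criterion=15, p_correct=0.5, runs=100_000, seed=1):
    """Fraction of simulated sessions in which a listener who calls each
    same/different trial correctly with probability p_correct reaches
    the pass criterion. p_correct=0.5 models pure guessing."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(runs):
        score = sum(rng.random() < p_correct for _ in range(trials))
        if score >= criterion:
            passes += 1
    return passes / runs

print(f"guesser passes: {simulate():.3f}")                    # about 0.02
print(f"80% listener passes: {simulate(p_correct=0.8):.3f}")  # about 0.80
```

The first number is the test's false-positive rate under the stated criterion; tightening or loosening the criterion trades that off against the chance of missing a genuine ability.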



  #181   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default weakest Link in the Chain

On Wed, 14 Jan 2004 21:28:01 GMT, Lawrence Leung
wrote:

chung wrote in ews:NEgNb.67259$na.39785@attbi_s04:

I seem to receive more and more ads in email about patches or pill
that would elongate a certain part of my anatomy. According to our
logic, those must really work.


Strictly speaking, those claims work under certain circumstances, but not everyone will have
the same effect.


Oh, my, gawd...................

You really *believe* in penis patches? That pills can extend your
manly abilities by three inches? That explains a lot about your faith
in cables! :-)

They are not lying, strictly speaking, when they say their cable is
superior, since they know that it is in the mind of the listener like
yourself. If they were to provide evidence, either measurements or
DBTs, that the cable does sound more accurate, then perhaps someone
could go after them if that evidence turns out to be false. Have you
seen measurements or other test results given out by cable companies?


Yes, I have. A lot of cable companies give out the L, C, and Q values for their speaker
cables and interconnects. I compared those values with Home Depot zip-cord; the zip-cord's
figures are far higher than the "special" speaker cables'.


What? This is meaningless nonsense, you obviously have *no* idea what
you're talking about. 12 AWG 'zipcord' compares very well in technical
terms with all except the low-inductance cables, and their
*theoretical* superiority has never been shown to be audible. In fact,
the technically very best speaker cable that money can buy is made by
Dunlavy Audio, and guess what? John Dunlavy himself admits that it
sounds just like zipcord.

Then how can you say "wire is wire" and "they have
no difference"? All you need is a simple LCR meter.


No *audible* difference Lawrence, no *audible* difference.

If only people can only agree on what "confirmed true" means...


That mean NO!?

I put in money in the pot, because I would like to see someone like
you with a strong belief in a cable's audibility to take the test and
either fail, or provide new evidence that there is something about
cable sound. BTW, it seems like no matter whether the pot is $4K or
$8K, no one really wants to step up. What does that tell you? Here is
a chance for tbe cable believers to really shut the engineers up.
Think about the glory, if not the money.


Wait a second, you said thousands of people took the test before, and now you say "no one
really wants to step up"? Which statement is true? If you set up a test with a significant
money award and not even one person shows up for the test, does that mean something is
wrong with the test? BTW, by saying "no one really wants to step up," does that include
you and Pinky and your troops?


We have all tried it, and there's no audible difference. Lots of
people have *claimed* to hear differences, but have obviously tried it
at home, found no difference, and now make up excuses for why they
won't take the test. What is *your* problem with the test? According
to *you*, it should be easy money! What's *your* problem, Lawrence?

Think about it another way. Has any big research lab, university,
government office, etc. confirmed that Elvis is dead?


No, because a lot of people saw his burial, and officials already confirmed he is dead.


And all of physics and electrical theory confirms that 'wire is wire'.

Huh? If you cannot hear the difference between two cables, aren't they
sonically the same? Or are you referring to other properties of the
cable such as looks, price, etc.?


See above. If you cannot hear the difference between two cables, they are sonically the
same to you; that cannot be applied to anyone else.


This applies to *everyone* who has tried, and they are dedicated
audiophiles who often claim exceptional hearing acuity, so it seems
more than likely that it applies to everyone, period.

Those who buy expensive cables, maybe a few thousand, or maybe a
little above 10K? Those who listen to music, hundreds of millions?
What does that ratio tell you? Cable industry is a totally
insignificant industry, and it should be so.

How many people had already taken the test?


A few hundred to a few thousand have probably conducted some kind of
blind test at home. Isn't it interesting that you seldom see reports
of people passing the cable test, after care is taken to ensure that
the test is blind and that level differences have been taken out?


That means nobody took the test whatsoever.


It means nothing of the sort. Why are *you* so afraid to try it for
yourself?

How about the fact that no one has aced the DBT for cables after
frequency response has been taken out as a factor? I'm not talking
about the $4K test, but all the cable blind test audiophiles have
taken in their homes and among friends.


That's right, a lot of audiophiles reported that they can tell the difference at home, and
you didn't believe it. So why do you think anyone doing exactly what you require will
give a correct answer? BTW, why 20? What makes you think 20 trials is good enough? If I
said 19, or 21, how's that?


That's fine, it just has to be *at least* ten. It can be 348, if you
like.

All I am saying is that there is little or no data to support the claim.


All of physics and all of electrical theory supports the claim.

Well, now I know that the test hasn't even been done by anyone, including yourself.


That is not true. It has been done, and the results published on this
newsgroup, by Steve Zipser and Greg Singh. I have also done such tests
on many occasions, and so have Tom Nousaine and Arny Kreuger.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
  #182   Report Post  
Bob Marcus
 
Posts: n/a
Default weakest Link in the Chain

Buster Mudd wrote:

I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?

The "claim" is not that all cables are indistinguishable, though sometimes
people's wording is imprecise. (This is Usenet, not a peer-reviewed journal,
after all.) The "claim" is that, if a difference is really heard, it will be
the result of resistance-related (or perhaps inductance/capacitance related)
frequency response anomalies.

BTW, the number $4,000 has been thrown around here recently. So far as I
know, no one has a fixed list of contributors and amounts, and some people
who have offered to contribute in the past may not have been heard from
recently. Perhaps, in the interests of truth in advertising, we should say
there's a pot in the low-to-mid thousands. If we get an actual taker, we'll
probably have to confirm the pool.

bob

  #183   Report Post  
Mkuller
 
Posts: n/a
Default weakest Link in the Chain

(Mkuller) wrote:
"How do you know DBTs work - i.e. do not get in the way of identifying
subtle
audible differences - when they are used for audio equipment comparisons
using
music?" After all, most all of the published results we've seen are null.


(Nousaine)
All of the BigFoot investigations have also returned null results. Should we
question those results based just on the fact that they were null?

So what you are really saying here is you have NO actual proof, and then bring
in Bigfoot as a strawman. Debate tactics.

Perhaps they were "null" because there were no real differences to hear.

More likely, there were real differences and the test got in the way of
identifying them. You can only speculate since you have NO actual scientific
proof your test is valid.

This line of reasoning is based on the premise that any test that fails to
verify previously un-verified "differences" is somehow wrong.
Parapsychologists
use this line of reasoning all the time.


Another strawman instead of proof.

One is expected to ignore ALL contrary evidence simply because it doesn't
confirm existing "wisdom."

If your only answer is that you *believe* they work here because they are
used
(differently) in research or psychometrics, that is not good enough. Your
belief in DBTs would seem to be based on a 'belief system' rather than

actual
evidence. Right?


Actually I have conducted experiments that answered ALL the common
"complaints"
about bias-controlled listening tests and yet I've never found a single (or
married) subject who was able to confirm wire/amp/cap differences when
nothing more than an opaque cloth was placed over the I/O terminals, even in
their reference systems.

Because the test you use (DBT) is getting in the way of their identifying the
differences.

It seems to me that Mr Kuller's Belief System guides his opinions ..... he has
neither provided nor even postulated any evidence that shows otherwise. ALL the
contrary stuff is wrong.


Hand waving protestations. Where is your proof that the tests actually work
for what you are using them for and don't obscure subtle information? Your
belief system is no more valid than mine.

If your answer is that you believe in science and DBTs are scientific, that
also is not sufficient. Where is your verifying test, or scientific proof
that
they work in the way you are advocating their use?


Where is your evidence that they don't? As mentioned I have personally
conducted experiments that addressed EVERY objection I've ever heard .....
time
(5 to 16 weeks), switching (cable swaps to ABX), reference systems (PERSONAL
systems where sound was said to be clearly audible), trials (individual
ranges
from 5 to 25) and everything else I can think of.


Again, you have NO proof so you turn the question around and ask me for proof.
Not very convincing.

Not who else uses them or who elses believes they work, but *proof* that

they
work here in this area and don't obscure subtle details, or any information
for
that matter.


So an experiment I conducted where a cable advocate failed to hear "pretty
amazing differences" in the same system he claimed were originally observed
with nothing more than an opaque cloth placed over the I/O terminals is not
"proof"; then what would be?


How can you be so sure that the DBT didn't interfere with the audible
differences, amazing or not. You have no proof, only repeated assertions.
That's not good enough.

How about another experiment I proctored where an amplifier advocate claimed
to
have easily scored 19/20 in blind tests? In this case the subject in his
personal reference system with his personal selection of program material was
unable to identify his reference device against a 10-year old integrated
amplifier not ONCE but 2 times over 2 days?

Are you suggesting that this doesn't count? That "I" somehow cowed that
outspoken individual with my "presence?" Please.

You have no proof that the test (especially the lack of controls) did not
interfere with the results.

Don't have any verifying test or proof but you still believe? That's fine.
Believe what you want. Just don't try to convince anyone else on such
flimsy grounds. Give it up. Your arguments are not convincing anyone on the
other side - any more than ours are convincing you.


Mike; you sound like a guy trying hard to convince yourself. Why not try some
blind testing?


I've tried plenty of blind testing and have found that it interferes with
identifying subtle audible differences. I believe it has to do with the way
the brain works, stores audible memory and has to shift to make a decision
about whether X sounds more like A or B. But that's my belief system based on
my experience and scientific knowledge since I can't prove it. But then,
neither can you prove your own method works, or scientifically disprove
mine.....

With sighted listening, you have the *possibility* of bias interfering and
giving false positive results. With DBTs you have the *certainty* of the test
interfering with subtle audible details and providing false negatives, since
most all DBTs give null results. It would appear sighted listening (under some
circumstances) is actually superior.
Regards,
Mike

  #184   Report Post  
Harry Lavo
 
Posts: n/a
Default weakest Link in the Chain

"normanstrong" wrote in message
...
"S888Wheel" wrote in message
...
Here is the test. You sit in your own listening room and select a familiar
cut such as "Green Earrings" from Steely Dan. After the song finishes you
leave the room and have someone switch cables. You listen again to the
same song. Sound identical? Then you leave the room again and switch to a
different set of cables. Repeat the song. Sound different in any way? The
contention is that if you replace cable after cable from Kimber PBJs to
Nordost Valhallas there must be no detectable difference in the song. It
would be as if you had the CD player on replay. NO differences at all in
any of these cables, no matter what cables or how many or what
manufacturer. The sound that comes out of your speakers is identical
regardless. This is Mr. Pinkerton's assertion. My assertion is that I CAN
detect a difference in the sound reaching my ears from the different
cables. The sound of the same song is changed somehow. That is the only
way to interpret cables since no one "hears" a cable, only the sound that
comes out of the speakers. Is the sound of the song changed in any way or
not? I say it is. I think most audiophiles would agree with me. Otherwise,
every cable on planet Earth makes no difference at all. They all produce
identical sound and we are all fools falling for marketing hype. -Bob
Bernstein.


Well that makes for an easy test if you have a friend who can help. You
can even do it DB. Use two cables that you think sound different. Listen
to the first one, then leave the room. Have your friend flip a coin: heads
he changes cables, tails he doesn't. Then have the friend leave the room.
You come back in and listen. Decide if the cables were changed or not.
Repeat this 20 times. Have your friend mark every trial as different or
the same. You mark every trial different or the same. Both keep track
separately. Of course the cables must be out of sight.


Having done this before, I'd make the following changes:

Always disconnect the cables before flipping the coin. In this
fashion it will be impossible to draw any conclusions from the length
of time required to make the switch.

I'd do 18 trials, instead of 20, since the probability of 14 correct
is 4.8%--very close to the statistical requirement of 5%.

A 3rd person should be present to verify the cable connection,
immediately AFTER the subject has made his guess. This 3rd person
should have no contact with the subject during the period of the test.
The subject's wife is a good choice for the 3rd person.

If you want to cross all the t's and dot all the i's, I recommend a
4th person to proctor the subject. In any case, there should never be
a person who knows the actual connection that is within range of the
subject.


As has been said here so many times, doing dbt's to evaluate equipment at
home is a simple process; anyone should do it *whenever* they want to
eliminate sighted bias and use the ears only.

All you need are four people and a week's time to do it right. But that's
just me, obviously looking for an out! :-)
  #185   Report Post  
François Yves Le Gal
 
Posts: n/a
Default weakest Link in the Chain

On 15 Jan 2004 20:12:11 GMT, (Stewart Pinkerton) wrote:

In fact,
the technically very best speaker cable that money can buy is made by
Dunlavy Audio, and guess what? John Dunlavy himself admits that it
sounds just like zipcord.


Nope. Quoting John Dunlavy:

"I have clearly stated, within many of my posts here on the NET that,
within properly operating hi-end audiophile systems, expensive
loudspeaker and interconnect cables can seldom make an audible
difference or improvement."

Seldom is not never.

"However, I do believe it is possible that a properly designed cable
might potentially improve the audible accuracy of some high-end
audiophile systems."

A properly designed cable *might* improve the audible accuracy of a system.

http://groups.google.com/groups?selm...s1.newsguy.com

Furthermore, his US 5,510,578 patent on cables states:

"As a result a significant reduction in ringing and/or blurring of complex
musical transient signals caused by reflections attributable to the
mismatch between the characteristic impedance of the cable and the input
impedance of the loudspeaker is achieved".

Reflections in a speaker cable at AF frequencies, a few meters long? What
did you write? Oh, "technically very best speaker cable that money can buy".
Quite interesting, to say the least...



  #186   Report Post  
Mkuller
 
Posts: n/a
Default weakest Link in the Chain

Mkuller wrote:
"How do you know DBTs work - i.e. do not get in the way of identifying
subtle
audible differences - when they are used for audio equipment comparisons
using
music?" After all, most all of the published results we've seen are null.

If your only answer is that you *believe* they work here because they are
used
(differently) in research or psychometrics, that is not good enough.


"Bob Marcus" wrote:
You make the fallacious assumption that DBTs are used "differently" in
research. They are used in exactly the same way, with exactly the same
methodologies and protocols. They have even been used with music by
researchers. (There are whole academic treatises on hearing and music. Just
how do you think the authors conducted their research?) Those methods have
passed muster in the scientific community. If you want to claim that
comparing consumer audio gear is somehow different, you must not only
explain how it is different, but also provide some evidence that--or at
least a plausible hypothesis for why--this alleged difference would make the
test invalid.


I don't believe DBTs are used in science and research the same way they are
advocated to be used in audio equipment comparisons. Here are a few differences
that I believe are very consequential.

Research - Strict controls, especially the environment - everything is
controlled except the variable being tested for.
Audio - Tom Nousaine and entourage go into Sunshine Stereo saying
intimidatingly, "Boy, you guys are going to lose bigtime!" creating a circus
atmosphere.

Research - Usually there is a particular artifact or distortion that is being
tested for. The researchers know how much of it is added to the program for
the panel to identify.
Audio - No one really knows whether the two, say amplifiers being compared,
have an audible difference and if so, what it is or how much of it is there
(open-ended).

Research - the subjects are given specific training in being able to identify
the artifact under test prior to the testing.
Audio - audiophiles vary greatly in their listening experience and abilities
to identify audible differences. No training is provided.

Research - Usually a test will be used to focus on one or a small number of
variables - artifacts - at a time.
Audio - The possible variables in comparing the sound of two music
reproduction devices is unlimited. It is nearly impossible to remember even one
difference unless it is large (i.e. loudness or gross frequency response
differences).

Research - The program used is tested beforehand to determine the artifact's
audibility within it.
Audio - Usually music is the program which is a very *insensitive* source.

These are just a handful of the ways research DBTs and audio DBTs vary. I
suspect there are many more. If you are serious, it should be clear that DBTs
are a very poor method to control for biases since they seem to obscure much of
the differences because of their poor application.
Regards,
Mike

  #187   Report Post  
Bob Marcus
 
Posts: n/a
Default weakest Link in the Chain

Mkuller wrote:

I don't believe DBTs are used in science and research the same way they
are advocated to be used in audio equipment comparisons. Here are a few
differences that I believe are very consequential.

Do you know any of this, or are you just assuming it?

Research - Strict controls, especially the environment - everything is
controlled except the variable being tested for.
Audio - Tom Nousaine and entourage go into Sunshine Stereo saying
intimidatingly, "Boy, you guys are going to lose bigtime!" creating a
circus
atmosphere.

Were you present? What elements of that situation do you suppose would
affect the outcome, and what evidence do you have that those elements
actually do affect the outcome of DBTs? Come on, real evidence.

Research - Usually there is a particular artifact or distortion that is
being
tested for. The researchers know how much of it is added to the program
for
the panel to identify.
Audio - No one really knows whether the two, say amplifiers being
compared,
have an audible difference and if so, what it is or how much of it is
there
(open-ended).

Do you honestly think that researchers know in advance whether a difference
is audible? Why would they do the research? What evidence do you have that
the researchers' lack of knowledge about the nature of any such difference
would impair the reliability of the test? Come on, real evidence.

Research - the subjects are given specific training in being able to
identify
the artifact under test prior to the testing.
Audio - audiophiles vary greatly in their listening experience and
abilities
to identify audible differences. No training is provided.

What training could be provided to someone who already claimed that they
could hear a difference? The point of training is to get subjects to hear
more than they can, untrained. But they can already hear this difference,
untrained, or so they say. Indeed, they claim to have trained themselves.
What evidence do you have that further training improves the performance of
subjects in level-matched cable tests? Come on, real evidence.

Research - Usually a test will be used to focus on one or a small number
of
variables - artifacts - at a time.
Audio - The possible variables in comparing the sound of two music
reproduction devices are unlimited. It is nearly impossible to remember
even one difference unless it is large (i.e. loudness or gross frequency
response differences).

The possible variables in comparing two cables is extremely limited. (My
list ends at two.) What evidence do you have that DBTs become less reliable
when there is more than one artifact that varies between the test states?
Come on, real evidence.

Research - The program used is tested beforehand to determine the
artifact's
audibility within it.
Audio - Usually music is the program which is a very *insensitive*
source.

Why would a program be tested beforehand to determine an artifact's
audibility, when the point of the research itself is to determine that
audibility? What evidence do you have that music is any more sensitive
under any other test that doesn't involve the imagination? Come on, real
evidence.

If you're going to lecture us about the scientific method, you ought to try
practicing a little of it. It's not enough to claim that a test had some
element that you suspect would affect its reliability. You must also offer
some evidence that this element actually can affect its reliability. Until
you have such evidence, you are talking through your hat. In the meantime,
the paradigm for testing audible differences stands.

bob

  #188   Report Post  
John Corbett
 
Posts: n/a
Default weakest Link in the Chain

In article , "normanstrong"
wrote:


I'd do 18 trials, instead of 20, since the probability of 14 correct
is 4.8%--very close to the statistical requirement of 5%.


Actually the p-value for 14/18 is .0154 and the p-value for 13/18 is .048, so
you could ask for at least 13 correct in 18 trials if you wanted a .05
level test.

However, if you are interested in detecting only a fairly large effect,
you don't need very many trials. (It doesn't take many flips to
distinguish a fair coin from a two-headed one.)

If your subject claims he can _always_ detect a difference, then you can
get by with a 5/5 test; that will also work for someone who really gets
the right answer 99% of the time.

If someone claims he can hear a difference, say, 80% of the time, then he
should get the correct answer 90% of the time.
A test sensitive enough to detect that level of performance while
controlling both Type I error and Type II error rates at 5% needs just 10
correct of 13 trials.

Using 13 of 18 trials would allow someone whose true individual-trial
correct-answer rate is about 85% (so he really needs to hear only about
70% of the time) to have at least a 95% chance of passing the test.

The other side of this of course is that if you want to detect small
effects, you need large sample sizes.
(It takes a lot of flips to detect a coin that's only slightly biased.)

The typical ABX one-size-fits-all approach of always using, say, 16 (or
20) trials is wasteful for detecting large effects and leads to
experiments that are not worth doing for detecting small effects.

JC
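For readers who want to check Corbett's arithmetic, here is a minimal sketch (not from the thread itself; plain Python, exact binomial sums) that reproduces the p-values and the 10-of-13 design described above:

```python
# Exact binomial arithmetic for ABX-style listening tests,
# using only the standard library.
from math import comb

def p_value(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of scoring k or better."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# One-sided p-values under the pure-guessing hypothesis (p = 0.5):
print(round(p_value(14, 18), 4))  # 0.0154
print(round(p_value(13, 18), 4))  # 0.0481 -- just under the .05 level

# A listener who truly hears the difference 80% of the time and guesses
# the rest is right 0.8 + 0.2 * 0.5 = 90% of the time per trial.
p_true = 0.8 + 0.2 * 0.5

def smallest_design(p_true, alpha=0.05, beta=0.05, max_n=30):
    """Smallest (trials, required correct) holding both error rates down:
    Type I (a guesser passes) <= alpha, Type II (a genuine listener with
    per-trial rate p_true fails) <= beta."""
    for n in range(1, max_n + 1):
        for k in range(n + 1):
            if p_value(k, n, 0.5) <= alpha and 1 - p_value(k, n, p_true) <= beta:
                return n, k
    return None

print(smallest_design(p_true))  # (13, 10): 10 correct of 13 trials
```

The loop confirms the point made above: at 13 trials, requiring 10 correct holds both error rates under 5% against a 90%-correct listener, so a fixed 16- or 20-trial design spends extra trials for no gain against an effect that large.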

  #189   Report Post  
Buster Mudd
 
Posts: n/a
Default weakest Link in the Chain

"Bob Marcus" wrote in message ...
Buster Mudd wrote:

I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?

The "claim" is not that all cables are indistinguishable, though sometimes
people's wording is imprecise. (This is Usenet, not a peer-reviewed journal,
after all.) The "claim" is that, if a difference is really heard, it will be
the result of resistance-related (or perhaps inductance/capacitance related)
frequency response anomalies.


Oh, well that seems incredibly obvious (and mercifully concise, thank
you).

Though that doesn't quite explain how a Snake Oil Advocate would go
about separating the (quote-unquote) "objectivists" from their hard
earned +/- $4000. If Joe Schmoe claims there is an audible difference
between, say, Home Depot 12awg zip cord ($0.25/foot) and Tara Labs
"The One" ($500/foot), & the ABX/DBT proctors claim "if a difference
is really heard, it will be the result of resistance-related (or
perhaps inductance/capacitance related) frequency response anomalies",
shouldn't the proper response be

"Ok, whatever"

....and then everyone just goes home & listens to music, in complete
agreement. Where does this whole You Only *Think* You Can Hear The
Difference challenge come into play?
  #190   Report Post  
Rusty Boudreaux
 
Posts: n/a
Default Audio challenge (was weakest Link in the Chain)

"Bob Marcus" wrote in message
...
BTW, the number $4,000 has been thrown around here recently. So far as I
know, no one has a fixed list of contributors and amounts, and some people
who have offered to contribute in the past may not have been heard from
recently. Perhaps, in the interests of truth in advertising, we should say
there's a pot in the low-to-mid thousands. If we get an actual taker, we'll
probably have to confirm the pool.


OK, it's time to get some people to step up to the challenge.

I'll add $5,000 of my own hard cash to the pot for any winner
involving cables, amps, CD players, DACs, isolation devices,
power cords, etc.

The trials must be proctored and verified by someone such as Tom
Nousaine.

The offer stands indefinitely.

I can always be found at:
r us t y d ot bou dr ea u xatieeedotorg



  #191   Report Post  
Nousaine
 
Posts: n/a
Default weakest Link in the Chain

"Harry Lavo" wrote:

"Nousaine" wrote in message
news:fBkNb.68743$na.39732@attbi_s04...
(Audio Guy) wrote:

In article gi4Nb.55157$8H.104911@attbi_s03,
"Harry Lavo" writes:


..snips to specific content.....


Thank you for remembering. However, you don't quite remember accurately.
The control test I proposed is similar to the Oohashi et al test in that it
is evaluative over a range of qualitative factors, done in a relaxed state
and environment, and with repeated hearing of full musical excerpts. But it
is not a duplicate of the test. The arguments for such tests have been
made here for years...long before the Oohashi article was published.


Let's remember that the Oohashi "article" was not "published" in a
peer-reviewed journal but exists as an AES Convention Preprint like my AES
Preprints. Therefore they carry exactly the same weight as anything I've
presented at a Convention, AES Chapter Presentation or in a consumer audio
magazine.


Tom, the Oohashi article I introduced here was peer-reviewed and appeared in
the Journal of Neurophysiology.


OK; point taken. I wonder why this hasn't appeared in the JAES?

It was discussed extensively and I repeated
the link often. Here is the link once again.:

http://jn.physiology.org/cgi/content/full/83/6/3548

I find it troubling that you, who makes a living as a "testing expert",
would have ignored such an important discussion, regardless of what you may
have thought of it. Else you would know it was a peer-reviewed and well
documented piece of research.


Actually a similar piece was presented in 1991 at an AES Convention but was not
selected for publication.

But what have I "ignored?" That ultrasonic stimulus can influence body
functions? If you read the paper carefully you'll see that certain bias may
have been introduced (the "random" selection of presentation order was exactly
repeated in reverse instead of being re-randomized), but even so the data show
that in their subjective analysis subjects weren't able to show a "liking"
for one vs the other. It is true that they seemed to perceive one as softer,
more reverberant, but the "like vs dislike" factor was not significant.

So what? That most likely means that there was no significantly important sound
quality difference.

But again .... so what? You seem to be just hunting for evidence that supports
your pre-held ideas. There's nothing wrong with that per se, but you have not,
so far, done anything but propose a hypothesis that neither you nor anyone
else has ever tested.

Be MY Guest. I'll be more than happy to supply consultation without charge and
assist in any reasonable manner.


He and his researchers apparently reached the same conclusion...that it was a
more effective way of testing for the purposes under study...which were
semi-open-ended evaluations of the musical reproduction. Double blind, by
the way, as was my proposed test.

As long as it's double blind I have no problems with it, but I have
yet to see you propose any reason for rejecting the current DBTs
except that they don't agree with your sighted results.

.


That's because you continually ignore the arguments of myself and other
"subjectivists" here on RAHE and elsewhere. The reasons have been well
documented. You'll find my reiteration of them elsewhere in this thread.


This is just not true. Personally I've listened to and incorporated those
arguments into controlled listening environments time and again. The single
largest problem I've encountered is actually getting a 'proponent' into a
controlled testing environment. Indeed one of my experiments (documented in
Stereo Review in 1998 "To Tweak or Not to Tweak") was conducted because an
on-line proponent argued that 'series tweaks' were needed to uncover audible
differences (one wire by itself may not reveal differences but a series of
tweaks would.) That particular proponent agreed to a test, to be held in his
system, where I would place a single 'bad' wire in his system and he would only
have to tell me if that wire were "in" the system with the I/O terminals
covered with a blanket.

As you might suspect that subject "disappeared" two weeks before that
experiment (to be held at my own personal expense) was to be held. So, I did it
myself by assembling a system with vacuum tube preamp, RCL power amp, special
interconnects and speaker cables, an outboard DAC, special vibration control
devices and wire/cable dress, and comparing that to a system with a 20 year old
$99 kit preamp, a 10 year old integrated amplifier, junk-box RCAs and 16 ga
car-audio zip cord speaker cables with non-PC dress (6 feet for one channel and
25 feet for the other, with the longer section wrapped around the AC cables). I
was trying to find as large a 'tweak' system difference as I could muster, with
the idea of disassembling the system to see which 'tweaks' mattered.

I was surprised that I couldn't distinguish either system driving the PSB
Stratus Mini PSACS reference speakers (with frequency response curves taken in
the NRC anechoic chamber) using an ABX Comparator.

Interestingly, not one of 10 hard-core audiophiles was able to reliably
distinguish one system from the other either, in 10-16 trial, single-listener,
sweet-spot sessions with no time limits and NO switching devices. They all
brought their own preferred CDs for evaluation.

Again you are joining Stewart in repeating an (at best) half-truth ad
nauseam here. Those firms use DBT's for specific purposes to listen for
specific things that they "train" their listeners to hear. That is called
"development".


Sure and ..... what else is there to hear but "real acoustical" differences?

Some of the blind tests I've conducted were testing the ability of
self-trained listeners to identify 'differences' that had been described as
"pretty amazing" by the subject, using the very reference system and the
programs that he had claimed illustrated these differences clearly.

One in particular used as long as 5 weeks of in-situ training using the
actual test apparatus, and others have used the reference system of the
claimants.


You have cited that test before, but without details as to how the final
choices were made (e.g. what happens after the weeks of listening), so it
continues to be an "anecdotal outlier" in your work.


What's the question?

In the food industry we used such tests for color, for texture and "mouth
feel", for saltiness and other flavor characteristics. That is a far cry
from a final, open-ended evaluation whereby when you start you have simply a
new component and are not sure what you are looking for / simply listening
and trying to determine if / how the new component sounds vs. the old. It is
called "open ended" testing for a reason...


Sure, but one can do such a test without sound and still get usable results
for marketing purposes.

I'm not interested in how the sound "feels in the mouth" unless that feel is
solely related to sound quality. If we wish to limit the assessment to sound
alone, then bias-mechanisms are needed AND closely spaced presentations are
the most sensitive.


The above is simply gibberish. :-)


What matters to me is SOUND QUALITY and not "mouth feel."

Yes, because it never ends, and so never gets to a result.


This is just rhetoric and absolute nonsense! :-(

How so, if it is open-ended it seems by definition to not have an end.
How do you tell when it has reached the end, when it gets the results
you want?


To move off-topic for a moment I will say that most open sound quality
evaluations I've seen tend to do exactly that. Generally the 'evaluation'
starts with a presentation (salesman, conventioneer, audiophile, etc) after
which the presenter asks "whaddya think?" This is followed by some discussion
in which only a few present engage. Often they will report different "things."
Then the presenter will say "Let's try again with BETTER material" and the
process will continue, often with considerable 'negotiation' of differences,
until there is a trial where the 'right' answers are given and then the
session is over. Sometimes there will be continued program delivery and some
hand-waving and back-slapping. But, I've seen the script played time and
again.


We are talking about how audiophiles do comparisons in their own homes, not
with salesmen in a convention or showroom. Talk about straw men!


I'd say you were addressing strawmen. I was describing how I've seen sound
quality assessments being made. Of course, I'm also, perhaps unfairly,
extrapolating them to single listener-home decisions but, quite frankly, I'm
used to hearing enthusiasts defend purchase decisions based on "reviews."

could reason to believe that conventional dbt or abx testing is not the best
way to do it and may mask certain important factors that may be more
apparent in more relaxed listening.


There's been no evidence that this is true, other than the non-published
Oohashi test.


There has never been a test run with a control test that would show this.
That is the crux of the matter.


Ball in your court. Be glad to help in any reasonable way.


Not that this isn't an interesting theorem. But there's no replicated
evidence to support it. On the other hand, ABX and other blind testing has
quite an interesting set of data on testing the audible quotient of products
that fit within the known limits of human acoustical thresholds.


Okay, what is the "known limit" of soundstage depth, tom?


Perceptible in listening bias controlled conditions. Or any other conditions
with elements known to exceed human threshold acoustical limits. "Phantoms"
work for me as long as they are replicable.

Mr Lavo seems to be arguing that some kind of 'test' that requires lengthy
evaluation under "open-ended" conditions would somehow be more suitable for
the average enthusiast. My guess is that no one except a few who may own a
store would have such an opportunity, and that a week-end or overnight ABX
test (perhaps with the QSC $600 ABX Comparator) or other bias controlled
test is not only more practical but quite implementable for the truly
curious.


Well, son, let's see if you have a fever. I'm sorry but I just broke my
last thermometer. We'll just use this barometric probe instead...it should
tell us the same thing. It is used to measure weather every day, and
weather includes temperature, doesn't it? :-)


I'll accept the emoticon. I expect you'll do the same for me.

And again, your only defense is that DBT results don't agree with
your opinions. DBTs can and have been done over long periods with
relaxed listening, and the results are the same.


On Tom's say so and without any detailed description or published data.


Actually that has been published in Audio and The $ensible Sound. You can
also trace data to the SMWTMS site through www.pcabx.com


No monadic or proto-monadic evaluative tests have ever been done as a
control. Without them you have *no* proof...simply assertion.


And that's doubly the case Harry with your comments as well. You seem to
expect that your extrapolations from an unreplicated ultrasonic frequency
study somehow also have application to the wire/amp debate. Simply assertion
.... no real applicable data.

I'm talking about a rigorous, scientific test of dbt, control proto-monadic,
and sighted open-ended testing. With careful sample selection, proctoring,
statistical analysis, and peer-reviewed publication. Once that is done I
will be happy to accept what conclusions emerge. It hasn't been done, and so
*assertions* that comparative dbt's such as abx are appropriate are just
that, assertions.


As is your hypothesis. But bias-controlled listening tests of any kind, no
matter how un-intrusive, have never shown that wire/amp sound has any basis in
acoustical reality. When are you going to show us some data and stop "arguing?"

As is the hypothesis that this test will have some useful benefit. And I
think this poster is right; this test would be rejected if it failed to
support that theory.


Well, I'll give you the same reply I gave him. You are assuming it would
give the same results. Suppose it showed that proto-monadic testing
actually supported sighted evaluative listening tests. You don't think it
would be reported? Your assumption shows that you are operating from a
belief system, not as a proponent of truly scientific testing.


I'd say your comments are showing that you are working that way. Even IF we
allow full credence to your "argument" you still only have mild evidence that
DVD-A and SACD may be useful technology. There's nothing in the Oohashi report
that has any bearing on the amp/cable issue.



It's much more than an assertion, it has data to back it up, unlike
your assertions that audio DBTs are just "assertions".

The issue isn't so much the blind vs sighted as it is the comparative vs.
the evaluative....and while a double blind evaluative test (such as the
proto-monadic "control test" I outlined) may be the ideal, it is so
difficult to do that it is simply not practical for home listeners treating
it as a hobby to undertake. So as audiophiles not convinced of the validity
of conventional dbt's for open-ended evaluation, we turn to the process of
more open-ended evaluative testing as a better bet despite possible sighted
bias.


Better bet? As opposed to any other process that reduces the likelihood of
false positives?


IOW he's willing to accept the increased (actually the nearly universal)
probability of false positives as being less important than limiting the
results to sound quality alone.


Yes, with the distinct possibility that the "sound quality only" test is
actually missing/masking some key evaluative areas.


How could it "miss/mask" key sound quality areas by eliminating non-sonic
influences? If you're suggesting that non-sonic variables are important ....
who would argue with that?

If you are suggesting that factors other than sound quality are more important
to YOU, I have no trouble with that. Indeed because I've listened to so many
amplifiers under listening bias controlled conditions and found that nominally
competent ones are sonically equivalent I no longer consider "sound quality" as
the most important purchase consideration. So what? Why should anyone else
care?


Sounds like he's a marketeer, doesn't it?


Since you obviously share the opinion that marketers are dishonest, I'd
suggest you try a little test.


Who said or implied dishonesty? A marketer's job is to market products, as
far as I can tell. No dishonesty inferred or implied as a class. You "said"
that factors other than sonics may be important to consumers. I'd fully
agree with that ..... but I wouldn't argue that experiments that use
listener bias controls are somehow "missing" or "masking" important sound
quality issues.

I'll blindfold you and put you in a roomful
of people. Ask any and all of them questions that show honesty or
dishonesty, and then tell me whom the marketers are. See if you can pick
them blind, eh?


Tell you what; put me in a room with Floyd Toole, Dave Clark, You, Bill
Clephane, Woody Cade, Bob Harley, John Atkinson, Roger Cox, Jerry Novetsky, and
a few others, and then disguise their voices with Lexicon effects and with
audio topics I'll be able to tell the bull-throwers from the others.

Please do not infer that I think marketers are dishonest. But I do not think
for a minute that a good one would not use every tool in the basket to sell
product.

If that were my job (and it isn't) I'd use all those tools. Perhaps that's why
I'm not in marketing.

  #192   Report Post  
Bob Marcus
 
Posts: n/a
Default weakest Link in the Chain

Harry Lavo wrote:

I don't recall anything saying that SACD was the recording system used.
192/24 might have been used? Did I miss something?

He used a proprietary one-bit system, similar to DSD but with a higher
sampling rate. To make it sound even listenable, it would have to use
noise-shaping.

snip

Look, all of this is irrelevant. The point I raised was that how the brain
processes sound as music is much more complex and (at least in Oohashi's
case) required time to integrate. The problem of "integrating" under
typical dbt audio test situations, particularly abx, where the brain is
forced into a "left brain" comparative as opposed to "right brain"
evaluative mode, has been the main continued thread of dissension on this
forum for many years. *THAT* was the point I was bringing up, not
ultrasonics per se.

Again, we don't know what Oohashi's subjects required time to process. We do
know that, whatever it was, it has never been experienced by an audiophile
while comparing cables. As for your point about "right brain" evaluative,
the right hemisphere is anything but, and I suspect that anyone who's ever
said, "This cable sounds better than that one" was using his left hemisphere
at the time. As for those years of dissension, I wouldn't begin to
characterize them, except to say they have been evidence-free.

I find it amusing that you are criticizing standard DBTs, which have been
used repeatedly by scientists for decades in all sorts of ways (including
testing audibility of music), while holding up as a model a test which, as
far as you know, has only been used once in this field, and has no
verification whatsoever.


Well, I do know that researchers are finding additional pieces of evidence
that music, as opposed to sound, is very much hardwired into our brain in
some ways only beginning to be discovered, and certainly not understood.

Granted.

That certainly reinforces the need for a confirming control test before
declaring conventional dbts "perfect tests forever" for the open-ended
evaluation of audio components. Especially since the traditional forms of
"proven" support cited for the test all related to perceptual jnd's of
loudness and artifact detection, not musical evaluation.

Hardly. It's at least equally plausible that being "hard-wired" for music
would interfere with our ability to hear subtle sonic differences in music,
because our brains would be occupied elsewhere. We don't know why this
hardwiring evolved, but it's very doubtful that it evolved because the
survival of the species depended on distinguishing between subtly different
sounds--as "subtle" is understood in the present context.

bob

  #193   Report Post  
chung
 
Posts: n/a
Default weakest Link in the Chain

Harry Lavo wrote:

"chung" wrote in message
newsqhNb.52763$5V2.65388@attbi_s53...
Harry Lavo wrote:


Well, let's start with timbre, dimensionality, width and depth of
soundstage, transparency, microdynamics, macrodynamic freedom, etc. etc.

And you think these are not caused by frequency response differences or
distortion differences. And seriously, in cables?


I think these things can be caused by subtle differences in the passive
components used, and in how they and the design itself handle dynamics.


Well, you are not making sense here. If there are subtle differences in
passive components, they have to show up in frequency response,
distortion or noise measurements. All of those you said DBT's are
capable of differentiating. Especially if it's fast A/B switching, and
not extended, open-ended evaluation.


Yes, but not necessarily in the conventional distortion measurements.

BTW, active components play a much, much more important role than
passives in the resulting sound. Any difference is more likely due to
active parts (like poorly designed DAC's) than passive parts. Of course,
the modders would like you to think that passives rule...


Maybe so, but the considerable increase in transparency over equipment of
the early 80's seems mostly attributable to the passive components.


You are simply speculating. Care to provide any evidence to back that up?

I would say that the apparent increase of transparency in equipment is
due to the prevalent use of CD's as source material. It's much harder to
achieve transparency when your phono stages, your cartridge, etc. all
contribute significant sources of degradation. We probably have even
higher level of transparency now that there is more care in the
mastering of SACD's, DVD-A's and CD's (such as XRCD). The digital
technology exposes weak analog designs (like poor amps with high noise
levels) much more than vinyl technology.

In any event, metal film resistors and high quality capacitors have been
in use since at least the '70's. And in a properly designed amplifier,
those components (whether they are carbon resistors or metal film
resistors for instance) make very little difference in sound.

Not much else has changed in amplifiers, for example.


Depends on how you make comparisons. There were poorly designed
amplifiers. There are still bad ones now.

And yet the cumulative effect of improved (from a sound standpoint)
capacitors and low-noise resistors has been a marked increase in
transparency.


Big OSAF.

All you need to prove your point is to provide measurements using
so-called high-end passives vs stock passives, and show improvements
that are above hearing thresholds. Has anyone done that?


snip


Just provide evidence that a measurable difference exceeding JND is not
detectable in DBT's, but verifiably detectable in open-ended sighted
testing (or any test protocol you care to come up with). If your
speculations are true, it should not be too hard to do this, correct?


Since there is no technical reason why a cable should affect bass, your
concept of "dynamic clamp" is suspicious. That also raises the question
of how real are your claims of microdynamics, macrodynamics,
dimensionality, etc. Until you can state these in technical terms, I'm
afraid that they are not transferrable concepts.


This is RAHE, not RAT. I describe; you engineers figure out (by
investigating) what explains it.


This one is easy. You were not making a careful comparison. Bass
response depends a lot on the room acoustics and your listening
position, and there is no way a cable could cause a dynamic clamp in bass.

That is how progress has always come about
in this hobby of ours.


We make progress if we learn to listen carefully...

  #194   Report Post  
Buster Mudd
 
Posts: n/a
Default weakest Link in the Chain

(Stewart Pinkerton) wrote in message ...
On 15 Jan 2004 17:15:26 GMT,
(Buster Mudd) wrote:

(Stewart Pinkerton) wrote in message ...

Let me make sure I understand the above: you're saying that in the
hypothetical instance where the 2 cables in the DBT didn't level match
at 100Hz and/or 10kHz when they did level match at 1kHz, you would be
compelled to actively match them via spectral equalization before
proceeding with the test?

No, I'd just add a few passive components to some zipcord, to achieve
the same FR imbalance as the 'audiophile' cable. It would remain a
requirement of the test that matching is achieved, since this test is
not about whether you can hear the effect of rolled-off treble!


I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?


Of course they don't count, because it is *not* such simplistic FR
differences which are the basis of all the claims made for such
cables.


Oh. I guess I was under the impression that the claims of the
manufacturers were immaterial to your contention. Jeez, the claims of
nearly *any* manufacturer should be taken w/ a major grain of salt, it
doesn't require a DBT to set off ones' bull**** detectors when
Marketing Speak comes a 'calling! I thought it was the claims of the
listeners, those who said they could hear a difference between Tara
Labs or Monster Cable or Nordost et al, & Home Depot zip cord, that
was being challenged.

So you're saying you AGREE that there's (potentially) an audible
difference between cheap zip cord and expensive boutique cable, right?

Also, did you fail to get the point that I can match the FR of
*any* 'audiophile' cable for a couple of dollars?


I got that point. What I apparently didn't get is what the specifics
of this $4000 (+/-) challenge are. I thought, based on many posts by
yourself (Mr. Pinkerton), Mr. Nousaine, Mr. Krueger, & others, that
the contention was that those who claim they can hear a difference
between different speaker cables (or interconnects) were mistaken,
that they were succumbing to sighted bias & that in fact there were
*no* audible differences between any competent cable designs. But now
you're saying no audible differences "except for frequency response
anomalies" ...or no audible differences except for those that can be
matched with a couple dollars worth of passive components. I'm just
trying to get a handle on what the rules of this game are.


I certainly understand how critically important level matching at a
single reference frequency is...but if having done so, frequency
response anomalies are evident, one would have to concede that the two
products under test probably sound different. (And then subsequent 16
out of 20 ABX testing would only serve to test the discriminatory
abilities of the test subject, which is a whole 'nother issue.) What
is your justification for negating these potential differences? (And then


See above.


I guess the above didn't clarify it for me. I apologize if I seem
dense, but I am truly & honestly interested in knowing exactly what
the specifics of this $4000 challenge are...not because I disagree
with the basic premise, but because based on the posts I've read I
find that the basic premise is not at all clear-cut and obvious.

If there's an audible difference between cable A & cable B, that
difference must be audible ALL OTHER THINGS BEING EQUAL. If all other
things are NOT equal, you're not really comparing Cable A to Cable B,
you're comparing Cable A (+X) to Cable B (where X is some other variable
that, if I understand you correctly, causes Cable A to ...duh! sound
like Cable B!)

  #196   Report Post  
S888Wheel
 
Posts: n/a
Default weakest Link in the Chain

The same reason that there are more and more brands of shampoo -

Don't tell me you think all shampoos are the same as well.


You can never prove a negative,


Urban legend at best. Many negatives are easily proven.

  #197   Report Post  
S888Wheel
 
Posts: n/a
Default weakest Link in the Chain

Having done this before, I'd make the following changes:

Always disconnect the cables before flipping the coin. In this
fashion it will be impossible to draw any conclusions from the length
of time required to make the switch.

I'd do 18 trials, instead of 20, since the probability of 13 or more
correct by chance is 4.8%--very close to the statistical requirement of 5%.
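A quick sketch of the binomial arithmetic behind these trial counts
(assuming the standard one-tailed test with a 50% chance of guessing
right on each forced-choice trial):

```python
from math import comb

def tail_prob(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance of getting
    k or more trials correct out of n by pure guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(f"16/20: {tail_prob(20, 16):.4f}")  # ~0.0059, well under 5%
print(f"14/20: {tail_prob(20, 14):.4f}")  # ~0.0577, just over 5%
print(f"13/18: {tail_prob(18, 13):.4f}")  # ~0.0481, just under 5%
```

By this arithmetic the criterion closest to (and under) 5% with 18
trials is 13 correct; 14 of 18 comes out near 1.5%.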

A 3rd person should be present to verify the cable connection,
immediately AFTER the subject has made his guess. This 3rd person
should have no contact with the subject during the period of the test.
The subject's wife is a good choice for the 3rd person.

If you want to cross all the t's and dot all the i's, I recommend a
4th person to proctor the subject. In any case, no one who knows the
actual connection should ever be within range of the subject.

Norm Strong

I agree that your changes make for a better test. Of course adding two more
people to the formula makes it harder to organize.
  #198   Report Post  
Steve Maki
 
Posts: n/a
Default weakest Link in the Chain

(Mkuller) wrote:

I don't believe DBTs are used in science and research the same way they are
advocated to be used in audio equipment comparisons. Here are a few differences
that I believe are very consequential.

Research - Strict controls, especially the environment - everything is
controlled except the variable being tested for.
Audio - Tom Nousaine and entourage go into Sunshine Stereo saying
intimidatingly, "Boy, you guys are going to lose bigtime!" creating a circus
atmosphere.


Mike, you couldn't be more wrong. The two-day session was very cordial,
and we were there at Zipser's instigation. The circus followed.

Research - Usually there is a particular artifact or distortion that is being
tested for. The researchers know how much of it is added to the program for
the panel to identify.


Audio - No one really knows whether the two, say amplifiers being compared,
have an audible difference and if so, what it is or how much of it is there
(open-ended).


When someone claims that they hear a particular artifact, then we might
test whether they can really hear that artifact, no? Testing claims may
not be your typical "research", but that doesn't mean that the process
can't be done in a reasonably scientific way, or that the results are
meaningless.

Research - the subjects are given specific training in being able to identify
the artifact under test prior to the testing.
Audio - audiophiles vary greatly in their listening experience and abilities
to identify audible differences. No training is provided.


If the subject claims to be easily able to hear the artifact, he is
claiming to be fully trained, no? If the claimed artifact is not one
of the big 3 (FR, distortion, noise), just what do you suggest for
training? Should we construct a variable Pace Box, or a variable
Soundstage Box, so that the listener can dial in large amounts at
will? Maybe a Liquidity Box?

--
Steve Maki

  #199   Report Post  
chung
 
Posts: n/a
Default weakest Link in the Chain

Buster Mudd wrote:
(Stewart Pinkerton) wrote in message ...
On 15 Jan 2004 17:15:26 GMT,
(Buster Mudd) wrote:

(Stewart Pinkerton) wrote in message ...

Let me make sure I understand the above: you're saying that in the
hypothetical instance where the 2 cables in the DBT didn't level match
at 100Hz and/or 10kHz when they did level match at 1kHz, you would be
compelled to actively match them via spectral equalization before
proceeding with the test?

No, I'd just add a few passive components to some zipcord, to achieve
the same FR imbalance as the 'audiophile' cable. It would remain a
requirement of the test that matching is achieved, since this test is
not about whether you can hear the effect of rolled-off treble!

I guess I'm confused: I got the impression from the posts I'd read
here that your claim was that the differences between boutique
audiophile speaker cable & Home Depot 12awg zip cord would be
inaudible in DBTs. But you're saying that spectral distortions
potentially introduced by these cables *don't* count?


Of course they don't count, because it is *not* such simplistic FR
differences which are the basis of all the claims made for such
cables.


Oh. I guess I was under the impression that the claims of the
manufacturers were immaterial to your contention. Jeez, the claims of
nearly *any* manufacturer should be taken w/ a major grain of salt, it
doesn't require a DBT to set off one's bull**** detectors when
Marketing Speak comes a 'calling! I thought it was the claims of the
listeners, those who said they could hear a difference between Tara
Labs or Monster Cable or Nordost et al, & Home Depot zip cord, that
was being challenged.

So you're saying you AGREE that there's (potentially) an audible
difference between cheap zip cord and expensive boutique cable, right?



I think you have to understand that it is very rare for a cable,
zip-cord or boutique, to exhibit the kind of frequency response anomaly
that you allude to (like 1 dB up in the bass or at 10KHz). With the
exception of cables with a black box acting as a tone control, almost every
cable is flat (within 0.1 dB) from dc to about 10KHz, and droops only a
small amount, like a couple of tenths of a dB, at 20 KHz, with most
speaker loads and reasonable length runs.
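Flatness figures like these can be sanity-checked with a simple
lumped-element model. The per-foot resistance and inductance below are
my own rough assumptions for 12 AWG zip cord (loop values) driving a
purely resistive load, not measurements of any particular cable:

```python
import math

def cable_response_db(freq_hz, length_ft, load_ohms=8.0,
                      r_per_ft=0.0032, l_per_ft=0.2e-6):
    """Loss (dB) of a cable modeled as series R and L feeding a
    resistive speaker load -- a lumped-element approximation.
    Defaults are rough assumed figures for 12 AWG zip cord."""
    r = r_per_ft * length_ft
    x = 2 * math.pi * freq_hz * l_per_ft * length_ft  # inductive reactance
    mag = load_ohms / math.hypot(load_ohms + r, x)    # voltage divider
    return 20 * math.log10(mag)

for length in (10, 50):
    droop = cable_response_db(20_000, length) - cable_response_db(20, length)
    print(f"{length} ft run: {droop:.3f} dB at 20 kHz relative to 20 Hz")
```

In this model even a 50 ft run into an 8-ohm resistive load droops only
about a tenth of a dB at 20 kHz; real reactive speaker loads and unusual
cable geometries shift the numbers, but not by orders of magnitude.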

I would suggest that if a cable measures to be grossly unflat (like a dB
off), that we do not need to do a DBT, since that cable is most likely
audibly different. However, cable manufacturers never claim that the
cables are not flat, and it will be very useful for anyone to expose
those that are intentionally designed to have frequency response errors.
That would make the customer realize that it is the euphonic inaccuracy
of the cable that makes it different, and not some other superb quality.
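Pinkerton's earlier suggestion of matching a non-flat cable with "a few
passive components" added to zip cord can also be sketched numerically.
This assumes the only deviation to match is a smooth inductive rolloff
into a resistive load; the 0.25 dB target loss is a hypothetical
measurement for illustration, not real data:

```python
import math

def series_l_to_mimic(loss_db_20k, length_ft, load_ohms=8.0,
                      r_per_ft=0.0032):
    """Series inductance (henries) to add to plain zip cord so it
    shows a given loss at 20 kHz into a resistive load. Zip cord
    loop resistance per foot is an assumed rough figure."""
    r = r_per_ft * length_ft
    z_total = load_ohms * 10 ** (loss_db_20k / 20)      # required |Z|
    x = math.sqrt(z_total ** 2 - (load_ohms + r) ** 2)  # needed reactance
    return x / (2 * math.pi * 20_000)

# e.g. mimic a (hypothetical) cable measuring 0.25 dB down at 20 kHz
l_henry = series_l_to_mimic(0.25, length_ft=10)
print(f"{l_henry * 1e6:.1f} uH of series inductance")  # ~14.4 uH
```

A garden-variety air-core inductor of that order costs very little,
which is the point of the "couple dollars worth of passive components"
remark.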

  #200   Report Post  
Nousaine
 
Posts: n/a
Default weakest Link in the Chain

(Mkuller) wrote:

Mkuller wrote:
"How do you know DBTs work - i.e. do not get in the way of identifying
subtle
audible differences - when they are used for audio equipment comparisons
using
music?" After all, most all of the published results we've seen are null.

If your only answer is that you *believe* they work here because they are


used
(differently) in research or psychometrics, that is not good enough.


"Bob Marcus"
wrote:
You make the fallacious assumption that DBTs are used "differently" in
research. They are used in exactly the same way, with exactly the same
methodologies and protocols. They have even been used with music by
researchers. (There are whole academic treatises on hearing and music. Just
how do you think the authors conducted their research?) Those methods have
passed muster in the scientific community. If you want to claim that
comparing consumer audio gear is somehow different, you must not only
explain how it is different, but also provide some evidence that--or at
least a plausible hypothesis for why--this alleged difference would make the


test invalid.


I don't believe DBTs are used in science and research the same way they are
advocated to be used in audio equipment comparisons. Here are a few
differences that I believe are very consequential.

Research - Strict controls, especially the environment - everything is
controlled except the variable being tested for.
Audio - Tom Nousaine and entourage go into Sunshine Stereo saying
intimidatingly, "Boy, you guys are going to lose bigtime!" creating a circus
atmosphere.


Entourage? Please. No more than 2 outsiders (me and challenger Steve Maki) and
2 others (Zip and either his wife or one friend) were ever in the same house at
the same time. What circus atmosphere? Would you care to define that? But, even IF
that were the case, how would that preclude big-time challenger Steve Zipser
from simply "knowing" when his $12K Pass monoblocks weren't "in" his reference
system?

Help me here: the differences were supposedly so obvious that Zip should have
passed controlled tests easily, 19/20 or greater, in exactly the same
conditions. Not known in advance? Unusual conditions?


Research - Usually there is a particular artifact or distortion that is
being
tested for. The researchers know how much of it is added to the program for
the panel to identify.
Audio - No one really knows whether the two, say amplifiers being compared,
have an audible difference and if so, what it is or how much of it is there
(open-ended).


Really. Zip and others have used program material they initially used to
describe those differences. I've always used equipment that was as "different"
as possible (even magnifying apparent differences as much as I could). But many
tests were conducted with reference equipment that was certified by the subject
to have audible superiority.



Research - the subjects are given specific training in being able to identify
the artifact under test prior to the testing.
Audio - audiophiles vary greatly in their listening experience and abilities
to identify audible differences. No training is provided.



Let me question this aspect. I've conducted several experiments on topics where
I was not able to verify a sonic difference personally but had subjects
(posters) who claimed audibility was obvious and often suggested that my
hearing ability was suspect.

I regularly offered experimental conditions to these people to verify their
claims. I seldom found a claimant that was willing to put their "ears" on the
line under bias-controlled conditions. And when I did find a subject who was
willing to participate they were offered as much time/their own reference
systems/own programs and anything else they deemed important.

Occasionally I would find a 'challenge' (Sunshine/Singh) situation where I was
the proctor.


Research - Usually a test will be used to focus on one or a small number of
variables - artifacts - at a time.
Audio - The possible variables in comparing the sound of two music
reproduction devices is unlimited. It is nearly impossible to remember even
one
difference unless it is large (i.e. loudness or gross frequency response
differences).


Oh boy? The 'possible variables are unlimited and it's nearly impossible to
remember even one'?????? If this is true, how can anybody possibly be expected
to 'remember' how a piano sounds ..... even if one is a pianist?

What a crock.



Research - The program used is tested beforehand to determine the artifact's
audibility within it.
Audio - Usually music is the program which is a very *insensitive* source.


Agreed. Noise is a much more sensitive source. That's why research faciltites
use it. That's why I use it.



These are just a handful of the ways research DBTs and audio DBTs vary. I
suspect there are many more. If you are serious, it should be clear that
DBTs
are a very poor method to control for biases since they seem to obscure much
of
the differences because of their poor application.
Regards,
Mike


Isn't this funny? It is true that bias-controlled testing removes bias. So how
can it reduce or obscure sonic differences? If the cause is acoustical,
reducing the bias of non-sonic factors can only increase sensitivity to the
sonic factors.

Powered by: vBulletin
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 AudioBanter.com.
The comments are property of their posters.
 
