View Full Version : Blind Cable Test at CES
John Atkinson[_2_]
January 16th 08, 06:52 PM
http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_inside_to
Money quote: "I was struck by how the best-informed people at the
show -- like John Atkinson and Michael Fremer of Stereophile
Magazine -- easily picked the expensive cable."
So that's that, then. :-)
John Atkinson
Editor, Stereophile
Arny Krueger
January 16th 08, 07:16 PM
"John Atkinson" > wrote in
message
> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_inside_to
>
> Money quote: "I was struck by how the best-informed
> people at the show -- like John Atkinson and Michael
> Fremer of Stereophile Magazine -- easily picked the
> expensive cable."
This was a single blind test.
> So that's that, then. :-)
More proof that single blind tests are nothing more than defective double
blind tests.
George M. Middius
January 16th 08, 07:25 PM
John Atkinson said:
> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_inside_to
> Money quote: "I was struck by how the best-informed people at the
> show -- like John Atkinson and Michael Fremer of Stereophile
> Magazine -- easily picked the expensive cable."
Mr. Gomes apparently had an audiophile angel on one shoulder and a 'borg
angel on the other. He also said this:
"Remember, by definition, an audiophile is one who will bear any burden,
pay any price, to get even a tiny improvement in sound."
If he's going to prattle like that, he should rename his column
"Stereotypes R Us".
> So that's that, then. :-)
Thnak's John for, admitting Jhon that you have suborned the WSJ and/or R.
Murdoch with your elitist audiophile propaganda Jonn.
Arny Krueger
January 16th 08, 08:09 PM
"ScottW" > wrote in message
> On Jan 16, 10:52 am, John Atkinson
> > wrote:
>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>
>> Money quote: "I was struck by how the best-informed
>> people at the show -- like John Atkinson and Michael
>> Fremer of Stereophile Magazine -- easily picked the
>> expensive cable."
It was a single blind test - appeals to everybody who is ignorant of the
well-known failings of single blind tests.
>> So that's that, then. :-)
> Did you even mention that he needs to assure the levels
> are matched?
Level match is usually not an issue with cables.
> BTW...who makes this crap up?
> "Remember, by definition, an audiophile is one who will
> bear any burden, pay any price, to get even a tiny
> improvement in sound."
> I've never met an audiophile....not one.
Price always seems to be an issue at some level. So does WAF.
January 16th 08, 08:17 PM
On Jan 16, 10:52 am, John Atkinson > wrote:
> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>
> Money quote: "I was struck by how the best-informed people at the
> show -- like John Atkinson and Michael Fremer of Stereophile
> Magazine -- easily picked the expensive cable."
>
> So that's that, then. :-)
>
> John Atkinson
> Editor, Stereophile
So will you be receiving your $1 million from Randi anytime soon?
Boon
Shhhh! I'm Listening to Reason!
January 16th 08, 08:18 PM
On Jan 16, 1:34 pm, ScottW > wrote:
> On Jan 16, 10:52 am, John Atkinson > wrote:
>
> >http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in....
>
> > Money quote: "I was struck by how the best-informed people at the
> > show -- like John Atkinson and Michael Fremer of Stereophile
> > Magazine -- easily picked the expensive cable."
>
> > So that's that, then. :-)
>
> Did you even mention that he needs to assure the levels are matched?
Do you level-match when you perform your blind testing, 2pid?
I'm just curious. What procedures do you use?
lol Lol LoL lOl LOL!
Harry Lavo
January 16th 08, 09:26 PM
"Arny Krueger" > wrote in message
...
> "ScottW" > wrote in message
>
>> On Jan 16, 10:52 am, John Atkinson
>> > wrote:
>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>
>>> Money quote: "I was struck by how the best-informed
>>> people at the show -- like John Atkinson and Michael
>>> Fremer of Stereophile Magazine -- easily picked the
>>> expensive cable."
>
> It was a single blind test - appeals to everybody who is ignorant of the
> well-known failings of single blind tests.
>
Arny, double-blind vs. single-blind adds an extra level of *assurance* that
the test is fully blind. That hardly makes every single-blind test
invalid...in fact most are not. It's just that if it is single blind and
shows a difference, it gives you a chance to "diss and dance". Man, you
are single-minded.
George M. Middius
January 16th 08, 09:51 PM
Shhhh! said:
> > > So that's that, then. :-)
> > Did you even mention that he needs to assure the levels are matched?
> Do you level-match when you perform your blind testing, 2pid?
> I'm just curious. What procedures do you use?
Scottie piddles on the setup he "prefers".
George M. Middius
January 16th 08, 09:55 PM
Harry Lavo said:
> > It was a single blind test - appeals to everybody who is ignorant of the
> > well-known failings of single blind tests.
> Arny, double-blind vs. single-blind adds an extra level of *assurance* that
> the test is fully blind. That hardly makes every single blind test
> invalid...in fact most are not. It's just that if it is single blind and
> shows a difference, it gives you a chance to
> "diss and dance".
I can see where you're going with this, Harry. You want to engage the
Krooborg in a rational, human-style "debate" on the merits of blind
testing in general and SBT vs. DBT in particular. You're hoping that for
the first time in nearly 60 years, Mr. **** will find the ability to
respond rationally to ideas in conflict with his own dogma. You anticipate
a mutually enlightening exchange of thoughts that will benefit everybody
because of the informed viewpoints you and Turdborg bring to the subject.
Is that right? If so, please hold off until I can lay a bet on the
outcome. :-)
> Man, you are single-minded.
Is that the right term? Hmmm.... At any rate, whatever it is Krooger has
in his "mind", let's hope it offsets the encrustation of feces on his
body.
Arny Krueger
January 16th 08, 10:41 PM
"Harry Lavo" > wrote in message
> "Arny Krueger" > wrote in message
> ...
>> "ScottW" > wrote in message
>>
>>> On Jan 16, 10:52 am, John Atkinson
>>> > wrote:
>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>
>>>> Money quote: "I was struck by how the best-informed
>>>> people at the show -- like John Atkinson and Michael
>>>> Fremer of Stereophile Magazine -- easily picked the
>>>> expensive cable."
>>
>> It was a single blind test - appeals to everybody who is
>> ignorant of the well-known failings of single blind
>> tests.
>
> Arny, double-blind vs. single-blind adds an extra level
> of *assurance* that the test is fully blind.
No, a DBT removes a relevant significant variable that is well-known to
exist.
> That
> hardly makes every single blind test invalid...
It leaves them at best highly questionable.
> in fact most are not.
I guess you never read about Clever Hans, Harry.
However Harry, it's not your fault that your knowledge about experimental
design was based on OJT at what, a cereal company?
Clyde Slick
January 16th 08, 10:53 PM
On 16 Ian, 14:34, ScottW > wrote:
>
> BTW...who makes this crap up?
> "Remember, by definition, an audiophile is one who will bear any
> burden, pay any price, to get even a tiny improvement in sound."
Winston Churchill, but he was talking about cigars.
Shhhh! I'm Listening to Reason!
January 16th 08, 11:06 PM
On Jan 16, 4:55 pm, ScottW > wrote:
> On Jan 16, 1:51 pm, George M. Middius <cmndr _ george @ comcast .
>
> net> wrote:
> > Shhhh! said:
>
> > > > > So that's that, then. :-)
> > > > Did you even mention that he needs to assure the levels are matched?
> > > Do you level-match when you perform your blind testing, 2pid?
>
> Am I the one making a test claim? You are...once again...confused.
The confusion appears to be on your end, 2pid. This has nothing to do
with any claims. "Most people" could understand a very straightforward
question like this one was. lol Lol LoL lOl LOL!
This is why things tend to get very, very repetitious when 'discussing'
things with you. This is why you must be asked the same question, over
and over and over, until you either "get it" or the asker gives up in
frustration. This is exactly why rational discussion with you is not
possible. This is exactly why you are considered RAO's resident
imbecile.
You stated, "Did you even mention that he needs to assure the levels
are matched?" I merely asked if you level-match when you do your blind
testing. Face it, 2pid: you are incapable of understanding or
answering direct questions. Most here strongly suspect it's because
you do not understand them. Here, I'll try again.
2pid, when you do your audio blind testing, do you insure that the
levels are matched?
George M. Middius
January 16th 08, 11:30 PM
Shhhh! said:
> > > > Do you level-match when you perform your blind testing, 2pid?
Note to Witlessmongrel: I didn't say that; Shushie said it. Do you
understand? I know you're a Usenet newbie, so I thought I'd spell out what
is obvious to experienced users.
> You stated, "Did you even mention that he needs to assure the levels
> are matched?" I merely asked if you level-match when you do your blind
> testing. Face it, 2pid: you are incapable of understanding or
> answering direct questions. Most here strongly suspect it's because
> you do not understand them. Here, I'll try again.
Look out, Scottie!
> 2pid, when you do your audio blind testing, do you insure that the
> levels are matched?
It's a trick question, Scottie. Don't answer! Bad dog!
Harry Lavo
January 17th 08, 02:20 AM
"Arny Krueger" > wrote in message
. ..
> "Harry Lavo" > wrote in message
>
>> "Arny Krueger" > wrote in message
>> ...
>>> "ScottW" > wrote in message
>>>
>>>> On Jan 16, 10:52 am, John Atkinson
>>>> > wrote:
>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>
>>>>> Money quote: "I was struck by how the best-informed
>>>>> people at the show -- like John Atkinson and Michael
>>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>> expensive cable."
>>>
>>> It was a single blind test - appeals to everybody who is
>>> ignorant of the well-known failings of single blind
>>> tests.
>>
>> Arny, double-blind vs. single-blind adds an extra level
>> of *assurance* that the test is fully blind.
>
> No, a DBT removes a relevant significant variable that is well-known to
> exist.
No, Arny. That *could* or *may* exist. Somewhere in your college
education, you skipped the class in logic, I guess.
>snip<
> However Harry, it's not your fault that your knowledge about experimental
> design was based on OJT at what, a cereal company?
Just about one of the most sophisticated companies in the world when it came
to consumer testing....yeah, over ten years of test planning, design, and
interpretation. Beats ashtrays.
JBorg, Jr.[_2_]
January 17th 08, 07:51 AM
> Arny Krueger wrote:
>> John Atkinson wrote
>>
>> Money quote: "I was struck by how the best-informed
>> people at the show -- like John Atkinson and Michael
>> Fremer of Stereophile Magazine -- easily picked the
>> expensive cable."
>
> This was a single blind test.
>
>> So that's that, then. :-)
>
> More proof that single blind tests are nothing more than defective double
> blind tests.
From this article, the author wrote, "... the expensive cables
sounded roughly 5% better. Remember, by definition, an
audiophile is one who will bear any burden, pay any price,
to get even a tiny improvement in sound."
Only 5% ?
Could it be that, had the components been better matched, the
system would have improved by more than just 5%? The cables,
regardless of price, do not produce sound of their own.
I remember back in the mid-'90s I swapped and tried more
than seven different pairs of cables in order to gain more
than just a 5% sonic improvement. I recall some cables costing
more made my system sound less natural.
Been there, done that.
John Atkinson[_2_]
January 17th 08, 12:23 PM
On Jan 17, 4:11 am, "Soundhaspriority" > wrote:
> Did the cables you chose by preference have a common
> signature of sound? Or just one element thereof?
The cables were mid-priced Monster Cables and the
same length of zip cord from a hardware store, the
kind repeatedly recommended on r.a.o by Howard
Ferstler. During the test, the 2 conditions were
identified as A and B, with no clue as to their identity.
In fact, the listeners didn't even know they were listening
to different cables.
Tonally, there was virtually no difference, but what
was later revealed to be the more expensive cable
sounded less congested at signal peaks. The
hardware-store cable consistently sounded more hashy
at orchestral climaxes, but as I said, it was not a large
difference.
John Atkinson
Editor, Stereophile
Arny Krueger
January 17th 08, 12:29 PM
"JBorg, Jr." > wrote in message
>> Arny Krueger wrote:
>>> John Atkinson wrote
>>>
>>> Money quote: "I was struck by how the best-informed
>>> people at the show -- like John Atkinson and Michael
>>> Fremer of Stereophile Magazine -- easily picked the
>>> expensive cable."
>>
>> This was a single blind test.
>>
>>> So that's that, then. :-)
>>
>> More proof that single blind tests are nothing more than
>> defective double blind tests.
>
>
> From this article, the author wrote, "... the expensive
> cables sounded roughly 5% better. Remember, by
> definition, an audiophile is one who will bear any
> burden, pay any price, to get even a tiny improvement in sound."
>
> Only 5% ?
Even so, it was probably 100% imagination.
> Could it be that, had the components been better matched, the
> system would have improved by more than just 5%?
0% seems about right.
> The cables, regardless of price, do not produce
> sound of their own.
Agreed.
> I remember back in the mid-'90s I swapped and tried
> more than seven different pairs of cables in order to
> gain more than just a 5% sonic improvement.
I guess you haven't smartened up since then. :-(
> I recall
> some cables costing more made my system sound less
> natural.
Even so, it was probably 100% imagination.
Arny Krueger
January 17th 08, 12:32 PM
"Harry Lavo" > wrote in message
> "Arny Krueger" > wrote in message
> . ..
>> "Harry Lavo" > wrote in message
>>
>>> "Arny Krueger" > wrote in message
>>> ...
>>>> "ScottW" > wrote in message
>>>>
>>>>> On Jan 16, 10:52 am, John Atkinson
>>>>> > wrote:
>>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>>
>>>>>> Money quote: "I was struck by how the best-informed
>>>>>> people at the show -- like John Atkinson and Michael
>>>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>>> expensive cable."
>>>>
>>>> It was a single blind test - appeals to everybody who
>>>> is ignorant of the well-known failings of single blind
>>>> tests.
>>>
>>> Arny, double-blind vs. single-blind adds an extra level
>>> of *assurance* that the test is fully blind.
>> No, a DBT removes a relevant significant variable that
>> is well-known to exist.
> No, Arny. That *could* or *may* exist.
Saying that takes a ton of suspension of disbelief. But from reading your posts
over the years Harry, I'm sure you have it in you.
> Somewhere in
> your college education, you skipped the class in logic, I
> guess.
Harry, it doesn't take a degree in philosophy to understand proper
experimental design.
>> However Harry, it's not your fault that your knowledge
>> about experimental design was based on OJT at what, a
>> cereal company?
> Just about one of the most sophisticated companies in the
> world when it came to consumer testing....yeah, over ten
> years of test planning, design, and interpretation.
That's strange considering all of your rants against their objectivity.
> Beats ashtrays.
I have no idea how that relates to the current discussion. Since I've never
smoked, my interest in ashtrays could be less, but I don't know how.
Arny Krueger
January 17th 08, 12:34 PM
"John Atkinson" > wrote in
message
> Tonally, there was virtually no difference, but what
> was later revealed to be the more expensive cable
> sounded less congested at signal peaks. The
> hardware-store cable consistently sounded more hashy
> at orchestral climaxes, but as I said, it was not a large
> difference.
Hmm, audible nonlinear distortion in 99.99% pure copper. Proving that could
easily get someone a Nobel prize. Or at least a million dollars from some
stage magician somewhere.
So which is it going to be John, are you going for the million bucks or the
Nobel prize?
Shhhh! I'm Listening to Reason!
January 17th 08, 04:41 PM
On Jan 17, 6:34 am, "Arny Krueger" > wrote:
> So which is it going to be John, are you going for the million bucks or the
> Nobel prize?
Do you always have to resort to strawmen when someone rattles one of
your pet beliefs, GOIA?
Shhhh! I'm Listening to Reason!
January 17th 08, 04:52 PM
On Jan 17, 10:06 am, "ScottW" > wrote:
> "Arny Krueger" > wrote in message
> > Level match is usually not an issue with cables.
>
> There was that one 24 gauge phone wire vs 16 gauge
> std vs some unknown gauge monster...in which there were
> some measured level differences...
I was comparing plumbing pipe once. I was trying to save some money
when building my house. I compared waterline hose, that thin plastic
tubing you'd use to hook up a refrigerator's icemaker, to standard
1/2" and 3/4" copper tubing. As a control, I rolled up some towels, as
they will also obviously conduct water when saturated.
Once you matched the levels of the 1/2" and 3/4" tubing to the
waterline hose and the rolled-up towels, the *exact* amount of water
was carried by all four. So I chose to use waterline hose for the
water supply tubing in my entire house. The towels would have been
cheaper yet, as they are readily available at thrift shops for almost
nothing, but the expense of the additional fasteners (sag was a big
issue) made the waterline hoses a better value, even after considering
the bribe to the code official.
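ScottW's gauge point above can be put in rough numbers. A minimal sketch (the resistance-per-foot figures are standard AWG copper values; the 8-foot run and 8-ohm resistive load are assumptions matching the article's setup, and `insertion_loss_db` is a made-up helper name for illustration):

```python
from math import log10

# Approximate DC resistance of copper wire, ohms per foot (standard AWG table)
OHMS_PER_FOOT = {24: 0.02567, 16: 0.004016, 14: 0.002525}

def insertion_loss_db(awg: int, length_ft: float, load_ohms: float = 8.0) -> float:
    """Level drop caused by series cable resistance driving a resistive load.
    The round trip is twice the cable length (signal out and back)."""
    r_cable = 2 * length_ft * OHMS_PER_FOOT[awg]
    return 20 * log10(load_ohms / (load_ohms + r_cable))

for awg in sorted(OHMS_PER_FOOT):
    print(f"{awg} AWG, 8 ft into 8 ohms: {insertion_loss_db(awg, 8):.3f} dB")
```

This works out to roughly -0.43 dB for 8 feet of 24-gauge wire versus about -0.04 dB for 14-gauge: measurable for very thin phone wire, negligible for ordinary zip cord, consistent with both the "measured level differences" remark and Arny's claim that level match is usually a non-issue with cables.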
George M. Middius
January 17th 08, 05:14 PM
Shhhh! said:
> > So which is it going to be John, are you going for the million bucks or the
> > Nobel prize?
>
> Do you always have to resort to strawmen when someone rattles one of
> your pet beliefs, GOIA?
Yes, of course he does. It's part of the Krooborg's firmware.
John Atkinson[_2_]
January 17th 08, 05:19 PM
On Jan 17, 12:14 pm, George M. Middius <cmndr _ george @ comcast .
net> wrote:
> Shhhh! said:
> > > So which is it going to be John, are you going for the million
> > > bucks or the Nobel prize?
> >
> > Do you always have to resort to strawmen when someone rattles
> > one of your pet beliefs, GOIA?
>
> Yes, of course he does. It's part of the Krooborg's firmware.
Remind me again how many times Arny Krueger has been
quoted in the Wall Street Journal?
At least he has stopped claiming that his neglected,
rarely updated, almost-never-promoted websites
get as much traffic as Stereophile's or that his
recordings are as commercially available as my
own. :-)
John Atkinson
Editor, Stereophile
Walt
January 17th 08, 05:56 PM
wrote:
> On Jan 16, 10:52 am, John Atkinson > wrote:
>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>
>> Money quote: "I was struck by how the best-informed people at the
>> show -- like John Atkinson and Michael Fremer of Stereophile
>> Magazine -- easily picked the expensive cable."
>
> So will you be receiving your $1 million from Randi anytime soon?
Don't count on it. From TFA: "But of the 39 people who took this test,
61% said they preferred the expensive cable." Hmmm. 39 trials. 50-50
chance. How statistically significant is 61%? You do the math.
(HINT: it ain't.)
And of course this doesn't even address the single-blind nature of the
test. See http://en.wikipedia.org/wiki/Clever_Hans
//Walt
Walt
January 17th 08, 06:44 PM
John Atkinson wrote:
> Remind me again how many times Arny Krueger has been
> quoted in the Wall Street Journal?
Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken Lay.
What's your point?
//Walt
Clyde Slick
January 17th 08, 06:45 PM
On 17 Ian, 07:29, "Arny Krueger" > wrote:
> "JBorg, Jr." > wrote in message
>
> >> Arny Krueger wrote:
> >>> John Atkinson wrote
>
> >>> Money quote: "I was struck by how the best-informed
> >>> people at the show -- like John Atkinson and Michael
> >>> Fremer of Stereophile Magazine -- easily picked the
> >>> expensive cable."
>
> >> This was a single blind test.
>
> >>> So that's that, then. :-)
>
> >> More proof that single blind tests are nothing more than
> >> defective double blind tests.
>
> > From this article, the author wrote, "... the expensive
> > cables sounded roughly 5% better. Remember, by
> > definition, an audiophile is one who will bear any
> > burden, pay any price, to get even a tiny improvement in sound."
>
> > Only 5% ?
>
> Even so, it was probably 100% imagination.
>
> > Could it be that, had the components been better matched, the
> > system would have improved by more than just 5%?
>
> 0% seems about right.
>
> > The cables, regardless of price, do not produce
> > sound of their own.
>
> Agreed.
>
> > I remember back in the mid-'90s I swapped and tried
> > more than seven different pairs of cables in order to
> > gain more than just a 5% sonic improvement.
>
> I guess you haven't smartened up since then. :-(
>
> > I recall
> > some cables costing more made my system sound less
> > natural.
>
> Even so, it was probably 100% imagination.
To each his own.
Arny dreams of voltmeters. That is the extent of Arny's imagination.
My imagination centers upon a certain city bus.
Clyde Slick
January 17th 08, 06:46 PM
On 17 Ian, 07:32, "Arny Krueger" > wrote:
>
> Harry, it doesn't take a degree in philosophy to understand proper
> experimental design.
>
Thanks for admitting that Clerkie wasted six years of his life,
and is still clueless.
George M. Middius
January 17th 08, 07:15 PM
Walt sassed:
> > Remind me again how many times Arny Krueger has been
> > quoted in the Wall Street Journal?
> Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken Lay.
> What's your point?
Having trouble reading plain English, Walt?
George M. Middius
January 17th 08, 07:17 PM
Clyde Slick said:
> Arny dreams of voltmeters. That is the extent of Arny's imagination.
> My imagination centers upon a certain city bus
I think the repetitions have conditioned Turdy to flinch when he detects a
bus rolling in his vicinity. I switched my bet to "getting electrocuted by
lightning". The odds on this bet are considerably better.
John Atkinson[_2_]
January 17th 08, 08:25 PM
On Jan 17, 12:56 pm, Walt > wrote:
> of course this doesn't even address the single-blind nature of the
> test. See http://en.wikipedia.org/wiki/Clever_Hans
The test was immune to the Clever Hans Effect as the moderator
sat behind and to the side and was not in the listener's view. The
listener didn't know what he was listening to or comparing. All he
had was a remote with 2 buttons, labeled A and B. All he could
see were the loudspeakers and the amplifier volume display.
Levels were matched. The listener listened on his own and could
switch between A and B for as long as he wished. He didn't know
what was being compared until after he had handed in his results.
Of its type, it was quite a well-designed test.
John Atkinson
Editor, Stereophile
Shhhh! I'm Listening to Reason!
January 17th 08, 08:33 PM
On Jan 16, 5:30 pm, George M. Middius <cmndr _ george @ comcast .
net> wrote:
> Shhhh! said:
> > 2pid, when you do your audio blind testing, do you insure that the
> > levels are matched?
>
> It's a trick question, Scottie. Don't answer! Bad dog!
2pid answer a direct question someone asks of him?
LOL!
Walt
January 17th 08, 09:04 PM
John Atkinson wrote:
> On Jan 17, 12:56 pm, Walt > wrote:
>> of course this doesn't even address the single-blind nature of the
>> test. See http://en.wikipedia.org/wiki/Clever_Hans
>
> The test was immune to the Clever Hans Effect as the moderator
> sat behind and to the side and was not in the listener's view. The
> listener didn't know what he was listening to or comparing. All he
> had was a remote with 2 buttons, labeled A and B. All he could
> see were the loudspeakers and the amplifier volume display.
> Levels were matched. The listener listened on his own and could
> switch between A and B for as long as he wished. He didn't know
> what was being compared until after he had handed in his results.
> Of its type, it was quite a well-designed test.
So why were there two CD players if you were comparing speaker cables?
Were you switching out more than just the speaker cables?
I'm confused...
From TFA:
"Using two identical CD players, I tested a $2,000, eight-foot
pair of Sigma Retro Gold cables from Monster Cable, which are
as thick as your thumb, against 14-gauge, hardware-store speaker
cable."
//Walt
Arny Krueger
January 17th 08, 09:13 PM
"Walt" > wrote in message
> John Atkinson wrote:
>
>> Remind me again how many times Arny Krueger has been
>> quoted in the Wall Street Journal?
>
> Ok. So you've been quoted in the WSJ. So have Uri Geller
> and Ken Lay.
> What's your point?
That people more credible than Atkinson have been quoted in the WSJ?
Arny Krueger
January 17th 08, 09:13 PM
"John Atkinson" > wrote in
message
> On Jan 17, 12:56 pm, Walt >
> wrote:
>> of course this doesn't even address the single-blind
>> nature of the test.
>> See http://en.wikipedia.org/wiki/Clever_Hans
>
> The test was immune to the Clever Hans Effect as the
> moderator sat behind and to the side and was not in the
> listener's view. The listener didn't know what he was
> listening to or comparing. All he had was a remote with 2
> buttons, labeled A and B. All he could see were the
> loudspeakers and the amplifier volume display. Levels
> were matched. The listener listened on his own and could
> switch between A and B for as long as he wished. He
> didn't know what was being compared until after he had
> handed in his results. Of its type, it was quite a
> well-designed test.
Wrong, but I bet that Atkinson can't figure out why.
Shhhh! I'm Listening to Reason!
January 17th 08, 09:56 PM
On Jan 17, 3:13 pm, "Arny Krueger" > wrote:
> "Walt" > wrote in message
>
>
>
> > John Atkinson wrote:
>
> >> Remind me again how many times Arny Krueger has been
> >> quoted in the Wall Street Journal?
>
> > Ok. So you've been quoted in the WSJ. So have Uri Geller
> > and Ken Lay.
> > What's your point?
>
> That people more credible than Atkinson have been quoted in the WSJ?
Or that less-credible people haven't been? ;-)
Oliver Costich
January 17th 08, 11:15 PM
On Wed, 16 Jan 2008 10:52:40 -0800 (PST), John Atkinson
> wrote:
>http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_inside_to
>
>Money quote: "I was struck by how the best-informed people at the
>show -- like John Atkinson and Michael Fremer of Stereophile
>Magazine -- easily picked the expensive cable."
>
>So that's that, then. :-)
>
>John Atkinson
>Editor, Stereophile
From the article: Using two identical CD players, I tested a $2,000,
eight-foot pair of Sigma Retro Gold cables from Monster Cable, which
are as thick as your thumb, against 14-gauge, hardware-store speaker
cable. Many audiophiles say they are equally good. I couldn't hear a
difference and was a wee bit suspicious that anyone else could. But of
the 39 people who took this test, 61% said they preferred the
expensive cable.
Back to reality: 61% correct in one experiment fails to reject the
hypothesis that they can't tell the difference. If the claim is that
listeners can tell the better cable more than half the time, then to
support that you have to be able to reject that, in the population of
all audio-interested listeners, correct guesses occur half the time or
less. 61% of 39 doesn't do it. (Null hypothesis is p=.5, alternative
hypothesis is p>.5. The null hypothesis cannot be rejected with the
sample data given.)
In other words, that 61% of a sample of 39 got the correct result
isn't sufficient evidence that in the general population of listeners
more than half can pick the better cable.
So, I'd say "that's hardly that".
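Costich's arithmetic can be checked with an exact one-sided binomial test. A sketch, assuming "61% of 39" means 24 correct picks (0.61 × 39 ≈ 23.8; the article reports only the percentage, not the raw count):

```python
from math import comb

def binomial_p_value(successes: int, trials: int) -> float:
    """One-sided exact binomial test against the null p = 0.5:
    probability of at least `successes` correct answers if listeners guess."""
    total = 2 ** trials  # equally likely outcome sequences under the null
    at_least = sum(comb(trials, k) for k in range(successes, trials + 1))
    return at_least / total

# 61% of 39 listeners is taken to mean 24 correct picks (an assumption).
p = binomial_p_value(24, 39)
print(f"P(X >= 24 | n=39, p=0.5) = {p:.4f}")  # ~0.0998
```

The one-sided p-value comes out near 0.10, double the conventional 0.05 threshold, so the null hypothesis indeed cannot be rejected, which is exactly Costich's conclusion.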
Oliver Costich
January 17th 08, 11:20 PM
On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger" >
wrote:
>"Harry Lavo" > wrote in message
>> "Arny Krueger" > wrote in message
>> . ..
>>> "Harry Lavo" > wrote in message
>>>
>>>> "Arny Krueger" > wrote in message
>>>> ...
>>>>> "ScottW" > wrote in message
>>>>>
>>>>>> On Jan 16, 10:52 am, John Atkinson
>>>>>> > wrote:
>>>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>>>
>>>>>>> Money quote: "I was struck by how the best-informed
>>>>>>> people at the show -- like John Atkinson and Michael
>>>>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>>>> expensive cable."
>>>>>
>>>>> It was a single blind test - appeals to everybody who
>>>>> is ignorant of the well-known failings of single blind
>>>>> tests.
>>>>
>>>> Arny, double-blind vs. single-blind adds an extra level
>>>> of *assurance* that the test is fully blind.
>
>>> No, a DBT removes a relevant significant variable that
>>> is well-known to exist.
>
>> No, Arny. That *could* or *may* exist.
>
>Saying that takes a ton of suspended disbelief. But from reading your posts
>over the years Harry, I'm sure you have it in you.
>
>> Somewhere in
>> your college education, you skipped the class in logic, I
>> guess.
In my several years of graduate school in mathematics, I skipped
neither the logic nor the statistics classes. Logic is on the side of
not making decisions about human behavior without sufficient testing
using good design-of-experiment methods and statistical analysis.
Very few of the claims about people being able to discern
differences in cables are supported by such testing.
>
>Harry, it doesn't take a degree in philosophy to understand proper
>experimental design.
>
>>> However Harry, it's not your fault that your knowledge
>>> about experimental design was based on OJT at what, a
>>> cereal company?
>
>> Just about one of the most sophisticated companies in the
>> world when it came to consumer testing....yeah, over ten
>> years of test planning, design, and interpretation.
>
>That's strange considering all of your rants against their objectivity.
>
>> Beats ashtrays.
>
>I have no idea how that relates to the current discussion. Since I've never
>smoked, my interest in ashtrays could be less, but I don't know how.
>
>
Oliver Costich
January 17th 08, 11:25 PM
On Thu, 17 Jan 2008 12:56:23 -0500, Walt >
wrote:
wrote:
>> On Jan 16, 10:52 am, John Atkinson > wrote:
>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>
>>> Money quote: "I was struck by how the best-informed people at the
>>> show -- like John Atkinson and Michael Fremer of Stereophile
>>> Magazine -- easily picked the expensive cable."
>>
>> So will you be receiving your $1 million from Randi anytime soon?
>
>Don't count on it. From TFA: "But of the 39 people who took this test,
>61% said they preferred the expensive cable." Hmmm. 39 trials. 50-50
>chance. How statistically significant is 61%? You do the math.
>(HINT: it ain't.)
Here's the math: the claim is p (proportion of correct answers) > .5. The
null hypothesis is p = .5. The null hypothesis cannot be rejected (and the
claim cannot be supported) at the .05 significance level.
>
>And of course this doesn't even address the single-blind nature of the
>test. See http://en.wikipedia.org/wiki/Clever_Hans
>
>
>
>//Walt
Oliver Costich
January 17th 08, 11:27 PM
On Thu, 17 Jan 2008 12:25:54 -0800 (PST), John Atkinson
> wrote:
>On Jan 17, 12:56 pm, Walt > wrote:
>> of course this doesn't even address the single-blind nature of the
>> test. See http://en.wikipedia.org/wiki/Clever_Hans
>
>The test was immune to the Clever Hans Effect as the moderator
>sat behind and to the side and was not in the listener's view. The
>listener didn't know what he was listening to or comparing. All he
>had was a remote with 2 buttons, labeled A and B. All he could
>see were the loudspeakers and the amplifier volume display.
>Levels were matched. The listener listened on his own and could
>switch between A and B for as long as he wished. He didn't know
>what was being compared until after he had handed in his results.
>Of its type, it was quite a well-designed test.
>
>John Atkinson
>Editor, Stereophile
>
>
Argument about the design is moot when the results aren't sufficient
to statistically support the claim that people can identify the
more expensive cables more than half the time.
Oliver Costich
January 17th 08, 11:29 PM
On Thu, 17 Jan 2008 07:34:12 -0500, "Arny Krueger" >
wrote:
>"John Atkinson" > wrote in
>message
>
>> Tonally, there was virtually no difference, but what
>> was later revealed to be the more expensive cable
>> sounded less congested at signal peaks. The
>> hardware-store cable consistently sounded more hashy
>> at orchestral climaxes, but as I said, it was not a large
>> difference.
>
>Hmm, audible nonlinear distortion in 99.99% pure copper. Proving that could
>easily get someone a Nobel prize. Or at least a million dollars from some
>stage magician somewhere.
>
>So which is it going to be John, are you going for the million bucks or the
>Nobel prize?
>
I think that the Nobel Prize also pays a million bucks. I'd go for the
double play:-)
Oliver Costich
January 17th 08, 11:31 PM
On Thu, 17 Jan 2008 09:19:05 -0800 (PST), John Atkinson
> wrote:
>On Jan 17, 12:14*pm, George M. Middius <cmndr _ george @ comcast .
>net> wrote:
>> Shhhh! said:
>> > > So which is it going to be John, are you going for the million
>> > > bucks or the Nobel prize?
>> >
>> > Do you always have to resort to strawmen when someone rattles
>> > one of your pet beliefs, GOIA?
>>
>> Yes, of course he does. It's part of the Krooborg's firmware.
>
>Remind me again how many times Arny Krueger has been
>quoted in the Wall Street Journal?
>
>At least he has stopped claiming that his neglected,
>rarely updated, almost-never-promoted websites
>get as much traffic as Stereophile's or that his
>recordings are as commercially available as my
>own. :-)
>
>John Atkinson
>Editor, Stereophile
>
>
Now there's a measurement of the validity of an argument that's really
convincing: mine is bigger than yours.
Forget that there are statistical methods to answer these questions.
Rely on the word of the guy selling them.
Oliver Costich
January 17th 08, 11:32 PM
On Thu, 17 Jan 2008 13:44:05 -0500, Walt >
wrote:
>John Atkinson wrote:
>
>> Remind me again how many times Arny Krueger has been
>> quoted in the Wall Street Journal?
>
>Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken Lay.
>
>What's your point?
So has Osama Bin Laden. The point is that he's devoid of a sound
argument.
>
>//Walt
Shhhh! I'm Listening to Reason!
January 17th 08, 11:54 PM
On Jan 17, 5:15*pm, Oliver Costich > wrote:
> In other words, that 61% of a sample of 39 got the correct result
> isn't sufficient evidence that in the general population of listeners
> more than half can pick the better cable.
>
> So, I'd say "that's hardly that".
I'm curious what percentage the "best informed" got. I mean, you could
mix in hot dog vendors, the deaf, people who might try to fail just to
be contrary, you, and so on, and get different results. Apparently JA
and MF did better than random chance.
The real issue to me is "who cares". People who want expensive cables,
wires, cars, clothes, or whatever, will buy them. People who want to
tell other people what they should or shouldn't buy will come out of
the woodwork to bitch about it. ;-)
This seems to have really gotten your dander up. Why?
Shhhh! I'm Listening to Reason!
January 17th 08, 11:57 PM
On Jan 17, 5:25*pm, Oliver Costich > wrote:
> >Don't count on it. *From TFA: "But of the 39 people who took this test,
> >61% said they preferred the expensive cable." Hmmme. *39 trials. 50-50
> >chance. *How statistically significant is 61%? *You do the math.
Why is this important to you, so much so that you have blasted so many
posts in this thread?
Is this really, really important?
> >(HINT: it ain't.)
OK then. ;-)
George M. Middius
January 18th 08, 12:30 AM
Shhhh! said to McInturd:
> The real issue to me is "who cares". People who want expensive cables,
> wires, cars, clothes, or whatever, will buy them. People who want to
> tell other people what they should or shouldn't buy will come out of
> the woodwork to bitch about it. ;-)
>
> This seems to have really gotten your dander up. Why?
Because John Atkinson is the Devil. (The real one, not that poseur who
called himself Roy Briggs.)
Eeyore
January 18th 08, 12:36 AM
Oliver Costich wrote:
> Back to reality: 61% correct in one experiment fails to reject that
> they can't tell the difference.
61% is statistically close enough to a pure 50-50 random choice that it
doesn't matter. Try flipping coins and see if you get a perfect 50-50
distribution for any given sample size. You WON'T. In fact, pure 50-50
would be the exception by a mile.
No, 61% is as good as proof that there's NO difference. Which there ISN'T of
course. Copper is copper is copper. High pricing, alleged magic and phoney
marketing doesn't make it any different.
Graham
Eeyore
January 18th 08, 12:40 AM
John Atkinson wrote:
> but what was later revealed to be the more expensive cable
> sounded less congested at signal peaks.
CHARLATAN! What the ****** is 'congested'?
Another stupid means-nothing word like speed, pace, delivery, darkness,
depth et al.
Of course if you used a technical word it would be disprovable so you
LIE to promote this idiocy using made-up phoney concepts that you use to
bamboozle and confuse the public with.
Graham
George M. Middius
January 18th 08, 01:06 AM
Poopie brayed hysterically:
> > but what was later revealed to be the more expensive cable
> > sounded less congested at signal peaks.
> CHARLATAN ! What the ****** is 'congested*.
> Another stupid means-nothing word like speed, pace, delivery, darkness,
> depth et al.
> Of course if you used a technical word it would be disprovable so you
> LIE to promote this idiocy using made-up phoney concepts that you use to
> bamboozle and confuse the public with.
Poopie, in all seriousness, are you tanked? If not, maybe you should get a
rabies booster shot. You're raving like a loonyborg.
John Atkinson[_2_]
January 18th 08, 02:55 AM
On Jan 17, 4:04 pm, Walt > wrote:
> John Atkinson wrote:
> > The listener didn't know what he was listening to or comparing. All
> > he had was a remote with 2 buttons, labeled A and B. All he could
> > see were the loudspeakers and the amplifier volume display.
> > Levels were matched. The listener listened on his own and could
> > switch between A and B for as long as he wished. He didn't know
> > what was being compared until after he had handed in his results.
> > Of its type, it was quite a well-designed test.
>
> So why were there two CD players if you were comparing speaker
> cables?
I have no idea. I didn't design the test, nor did I look at the
playback system. I was a listening subject. If you read the
article in the WSJ, you will see that Lee Gomes did other
comparisons, not just cables. But the only test I took
part in involved cables.
John Atkinson
Editor, Stereophile
Eeyore
January 18th 08, 07:20 AM
"George M. Middius" wrote:
> Poopie brayed hysterically:
>
> > > but what was later revealed to be the more expensive cable
> > > sounded less congested at signal peaks.
>
> > CHARLATAN ! What the ****** is 'congested*.
> > Another stupid means-nothing word like speed, pace, delivery, darkness,
> > depth et al.
> > Of course if you used a technical word it would be disprovable so you
> > LIE to promote this idiocy using made-up phoney concepts that you use to
> > bamboozle and confuse the public with.
>
> Poopie, in all seriousness, are you tanked? If not, maybe you should get a
> rabies booster shot. You're raving like a loonyborg.
**** OFF
JBorg, Jr.[_2_]
January 18th 08, 08:30 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>>> Arny Krueger wrote:
>>>
>>>
>>>
>>>
>>> More proof that single blind tests are nothing more than
>>> defective double blind tests.
>>
>>
>> From this article, the author wrote, "... the expensive
>> cables sounded roughly 5% better. Remember, by
>> definition, an audiophile is one who will bear any
>> burden, pay any price, to get even a tiny improvement in sound."
>>
>> Only 5% ?
>
> Even so, it was probably 100% imagination.
How can that be so? From the article, it said, "... 39 people who
took this test, 61% said they preferred the expensive cable."
At what percentage do you consider it imagination, and when is it
not? Somehow, this showdown at the CES looked like a DBT
sans blackbox.
>> Could it be that due to poor component mismatches, the
>> system would have sounded better and higher than just 5% ?
>
> 0% seems about right.
That would be about right for someone like Howard Ferstler,
who has a known, and by his own admission, hearing deficiency.
When it comes to discerning differences, Ferstler gets 0.
You put two and two together and you'll see why he's fuming
all the time.
>> snip
>
>> I recall some cables costing more made my system sounding less natural.
>
> Even so, it was probably 100% imagination.
I don't follow your thought because you obviously keep on
guessing as you go.
If you're not guessing, can you form a realistic idea showing that
the perceived differences I heard while swapping cables did not
physically exist?
JBorg, Jr.[_2_]
January 18th 08, 08:42 AM
> Oliver Costich wrote:
>
>
>
>
>
> Very little of the claims about people being able to discern
> differences in cables is supported by such testing.
I take it you don't recommend testing for such purposes.
Ok then...
JBorg, Jr.[_2_]
January 18th 08, 08:48 AM
> Oliver Costich wrote:
>> Walt wrote:
>>> vinylanach wrote:
>>
>>
>>
>>
>>> So will you be receiving your $1 million from Randi anytime soon?
>>
>> Don't count on it. From TFA: "But of the 39 people who took this
>> test, 61% said they preferred the expensive cable." Hmmme. 39
>> trials. 50-50 chance. How statistically significant is 61%? You do
>> the math. (HINT: it ain't.)
>
> Here's the math: Claim is p (proportion of correct answers) >.5. Null
> hypothesis is p=.5. The null hypothsis cannot be rejected (and the
> claim cannot be supported) at the 95% significance level.
Well yes, Mr. Costich, the test results aren't scientifically valid, but it
didn't disprove that the sound differences heard by participants did
not physically exist.
JBorg, Jr.[_2_]
January 18th 08, 08:58 AM
> Oliver Costich wrote:
>
>
>
>
>
>
> Now there'sa measurement of validity of argument that's really
> convincing. Mine is bigger than yours.
Mr. Costich, there is no such valid *measurement* that will
*validly* measure a response against strawman arguments.
> Forget that there are statistical methods to answer these questions.
> Rely on the word of the guy selling them.
JBorg, Jr.[_2_]
January 18th 08, 09:02 AM
> Oliver Costich wrote:
>> Walt wrote:
>>> John Atkinson wrote:
>>
>>
>>
>>
>>> Remind me again how many times Arny Krueger has been
>>> quoted in the Wall Street Journal?
>>
>> Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken
>> Lay.
>>
>> What's your point?
>
> So has Osama Bin Laden. The point is that he's devoid of a sound
> argument.
Mr. Costich, there is no sound argument to improve upon a strawman
argument. It just doesn't exist.
>>
>> //Walt
Incidentally Mr. Costich, how well do you know Arny Krueger if you
don't mind my asking.
JBorg, Jr.[_2_]
January 18th 08, 09:19 AM
> Oliver Costich > wrote:
>
>
>
>
> I think that the Nobel Prize also pays a million bucks. I'd go for the
> double play:-)
What sort of test should one have in mind for this type of opportunity
to ensure success, Mr. Costich ?
JBorg, Jr.[_2_]
January 18th 08, 09:21 AM
> Shhhh! wrote:
>> Oliver Costich wrote:
>
>
>
>
>
>> In other words, that 61% of a sample of 39 got the correct result
>> isn't sufficient evidence that in the general population of listeners
>> more than half can pick the better cable.
>>
>> So, I'd say "that's hardly that".
>
> I'm curious what percent of the "best informed" got. I mean, you could
> mix in hot dog vendors, the deaf, people who might try to fail just to
> be contrary, you, and so on, and get different results.
Well asked.
Arny Krueger
January 18th 08, 01:20 PM
"Oliver Costich" > wrote in
message
> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
> > wrote:
>> "Harry Lavo" > wrote in message
>>
>>> Somewhere in
>>> your college education, you skipped the class in logic,
>>> I guess.
> In my several years of graduate school in mathematics, I
> skipped neither the logic nor the statistics classes.
Nor did I. I did extensive undergraduate and postgraduate work in math and
statistics. One of the inspirations for the development of double blind
testing was my wife who has a degree in experimental psychology. Another was
a friend with a degree in mathematics.
> Logic is on the side of not making decisions about human
> behavior without sufficient testing using good design of
> experiment method and statistical analysis.
4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
> Very little of the claims about people being able to
> discern differences in cables is supported by such
> testing.
When it comes to audible differences between cables that are not supported
by science and math, which is what this thread is about, none of the claims
are supported by well-designed experiments.
Arny Krueger
January 18th 08, 01:21 PM
"MiNe 109" > wrote in message
> In article >,
> Oliver Costich > wrote:
>
>> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>> > wrote:
>>
>>> wrote:
>>>> On Jan 16, 10:52?am, John Atkinson
>>>> > wrote:
>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>
>>>>> Money quote: "I was struck by how the best-informed
>>>>> people at the show -- like John Atkinson and Michael
>>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>> expensive cable."
>>>>
>>>> So will you be receiving your $1 million from Randi
>>>> anytime soon?
>>>
>>> Don't count on it. From TFA: "But of the 39 people who
>>> took this test, 61% said they preferred the expensive
>>> cable." Hmmme. 39 trials. 50-50 chance. How
>>> statistically significant is 61%? You do the math.
>>> (HINT: it ain't.)
>>
>> Here's the math: Claim is p (proportion of correct
>> answers) >.5. Null hypothesis is p=.5. The null
>> hypothsis cannot be rejected (and the claim cannot be
>> supported) at the 95% significance level.
>
> Welcome to the group! Out of curiosity, what significance
> level does 61% support?
You haven't formed the question properly. 61% is statistically significant or
not, depending on the total number of trials.
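The sample-size point is easy to illustrate. A rough sketch (the 390-trial figure is hypothetical, chosen only to scale the same 61% hit rate up tenfold):

```python
from math import comb

def one_sided_p(successes: int, trials: int) -> float:
    """P(X >= successes) under a fair-coin null, X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

print(one_sided_p(24, 39))    # ~0.10: 61% of 39 trials is not significant at the 5% level
print(one_sided_p(238, 390))  # well under 0.001: the same 61% over 390 trials would be
```

Same percentage, opposite conclusion: the hit rate alone tells you nothing until you know how many trials produced it.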
Arny Krueger
January 18th 08, 01:23 PM
"John Atkinson" > wrote in
message
> On Jan 17, 12:14 pm, George M. Middius <cmndr _ george @
> comcast . net> wrote:
>> Shhhh! said:
>>>> So which is it going to be John, are you going for the
>>>> million bucks or the Nobel prize?
>>>
>>> Do you always have to resort to strawmen when someone
>>> rattles one of your pet beliefs, GOIA?
>> Yes, of course he does. It's part of the Krooborg's
>> firmware.
> Remind me again how many times Arny Krueger has been
> quoted in the Wall Street Journal?
This is not logical discussion or even just rhetoric, this is abuse.
Arny Krueger
January 18th 08, 01:30 PM
"JBorg, Jr." > wrote in message
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>>> Arny Krueger wrote:
>
>>>>
>>>>
>>>>
>>>>
>>>> More proof that single blind tests are nothing more
>>>> than defective double blind tests.
>>>
>>>
>>> From this article, the author wrote, "... the expensive
>>> cables sounded roughly 5% better. Remember, by
>>> definition, an audiophile is one who will bear any
>>> burden, pay any price, to get even a tiny improvement
>>> in sound." Only 5% ?
>>
>> Even so, it was proabably 100% imagination.
> How can that be so? From the article, it said, "... 39
> people who took this test, 61% said they preferred the
> expensive cable."
> At what percentage do you consider it imagination, and
> when it is not.
Well Borg, this post is more evidence that ignorance of basic statistics is
a common problem among golden ears. It's not a well-formed question. It's
not the percentage of correct answers that defines statistical significance,
it's both the percentage of correct answers and the total number of trials.
And that's all based on the idea that the basic experiment was well-designed.
The most fundamental question is whether the experiment was well-designed.
> Somehow, this showdown at the CES looked like a
> DBT sans blackbox.
Nope. This comment is even more evidence that ignorance of basic
experimental design is a common problem among golden ears. The basic rule
of double blind testing is that no clue other than the independent variable
is available to the listener. In this alleged test, the person who
controlled the cables interacted with the listeners. In a proper DBT, nobody
or anything that could possibly reveal the identity of the object chosen
for comparison is accessible in any way to the listener.
Arny Krueger
January 18th 08, 01:32 PM
"JBorg, Jr." > wrote in message
> Well yes, Mr. Costich, the test results aren't
> scientifically valid but it didn't disproved that the
> sound differences heard by participants did not physically exist.
That was another potential flaw in the tests. I see no controls that ensured
that the listeners heard identical selections of music.
Therefore, the listeners may have heard differences that did physically
exist - unfortunately they were due to random choices by the experimenter,
not audible differences that were inherent in the cables.
Harry Lavo
January 18th 08, 02:10 PM
"Arny Krueger" > wrote in message
...
> "Oliver Costich" > wrote in
> message
>
>> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
>> > wrote:
>
>>> "Harry Lavo" > wrote in message
>>>
>
>
>>>> Somewhere in
>>>> your college education, you skipped the class in logic,
>>>> I guess.
>
>> In my several years of graduate school in mathemeatics, I
>> skipped neither the logic nor the statistics classes.
>
> Nor did I. I did extensive undergraduate and postgraduate work in math and
> statistics. One of the inspirations for the development of double blind
> testing was my wife who has a degree in experimental psychology. Another
> was a friend with a degree in mathematics.
>
>> Logic is on the side of not making decisions about human
>> behavior without sufficient testing using good design of
>> experiment method and statistical analysis.
>
> 4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
>
>> Very little of the claims about people being able to
>> discern differences in cables is supported by such
>> testing.
>
> When it comes to audible differences between cables that is not supported
> by science and math, which is what this thread is about, none of it is
> supported by well-designed experiments.
>
Well, then, rather than "braying and flaying," why don't you communicate the
statistics?
As reported, 61% of 39 people chose the correct cable. That according to my
calculator was 24 people.
According to my Binomial Distribution Table, that provides less than a 5%
chance of error...in other words the percentage is statistically
significant. In fact, it is significant at the 98% level....a 2% chance of
error.
Had one more chosen correctly, the error probability would have been less
than 1%, or "beyond a shadow of a doubt".
So presumably John and Michael did at least this well to be singled out by
the reporter.
Is this why you are desperately flaying at the test, Arny...inventing
"possibilities" without a single shred of evidence to support your
conjectures? Because you know (if you truly do know math and statistics)
that the test statistics hold up (but don't have the integrity to say so)?
Arny Krueger
January 18th 08, 02:13 PM
"MiNe 109" > wrote in message
> In article >,
> "Arny Krueger" > wrote:
>
>> "MiNe 109" > wrote in message
>>
>>> In article >,
>>> Oliver Costich > wrote:
>>>
>>>> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>>>> > wrote:
>>>>
>>>>> wrote:
>>>>>> On Jan 16, 10:52?am, John Atkinson
>>>>>> > wrote:
>>>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>>>
>>>>>>> Money quote: "I was struck by how the best-informed
>>>>>>> people at the show -- like John Atkinson and Michael
>>>>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>>>> expensive cable."
>>>>>>
>>>>>> So will you be receiving your $1 million from Randi
>>>>>> anytime soon?
>>>>>
>>>>> Don't count on it. From TFA: "But of the 39 people
>>>>> who took this test, 61% said they preferred the
>>>>> expensive cable." Hmmme. 39 trials. 50-50 chance.
>>>>> How statistically significant is 61%? You do the
>>>>> math. (HINT: it ain't.)
>>>>
>>>> Here's the math: Claim is p (proportion of correct
>>>> answers) >.5. Null hypothesis is p=.5. The null
>>>> hypothsis cannot be rejected (and the claim cannot be
>>>> supported) at the 95% significance level.
>>>
>>> Welcome to the group! Out of curiosity, what
>>> significance level does 61% support?
>>
>> You haven't formed the question properly. 61% is
>> statisically signifcant or not, depending on the total
>> number of trials.
>
> Okay, in 39 trials, what level of significance does 61%
> indicate?
>
In this case nothing, because the basic experiment seems to be so flawed.
Arny Krueger
January 18th 08, 02:14 PM
"JBorg, Jr." > wrote in message
> Incidentally Mr. Costich, how well do you know Arny
> Krueger if you don't mind me asking so.
I seriously doubt that we've ever seen each other or corresponded, other
than what you see here on Usenet.
Paranoid much?
Arny Krueger
January 18th 08, 02:16 PM
"Harry Lavo" > wrote in message
> As reported 61% of 39 people chose the correct cable. That according to my
> calculator was 24 people.
As reported, the experiment was invalid. No further analysis is necessary.
Arny Krueger
January 18th 08, 03:10 PM
"MiNe 109" > wrote in message
> In article >,
> "Arny Krueger" > wrote:
>
>> "MiNe 109" > wrote in message
>>
>>> In article
>>> >, "Arny
>>> Krueger" > wrote:
>>>
>>>> "MiNe 109" > wrote in
>>>> message
>>>>
>>>>> In article
>>>>> >, Oliver
>>>>> Costich > wrote:
>>>>>
>>>>>> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>>>>>> > wrote:
>>>>>>
>>>>>>> wrote:
>>>>>>>> On Jan 16, 10:52?am, John Atkinson
>>>>>>>> > wrote:
>>>>>>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in.
>>>>>>>>> ..
>>>>>>>>>
>>>>>>>>> Money quote: "I was struck by how the
>>>>>>>>> best-informed people at the show -- like John
>>>>>>>>> Atkinson and Michael Fremer of Stereophile
>>>>>>>>> Magazine -- easily picked the expensive cable."
>>>>>>>>
>>>>>>>> So will you be receiving your $1 million from Randi
>>>>>>>> anytime soon?
>>>>>>>
>>>>>>> Don't count on it. From TFA: "But of the 39 people
>>>>>>> who took this test, 61% said they preferred the
>>>>>>> expensive cable." Hmmme. 39 trials. 50-50 chance.
>>>>>>> How statistically significant is 61%? You do the
>>>>>>> math. (HINT: it ain't.)
>>>>>>
>>>>>> Here's the math: Claim is p (proportion of correct
>>>>>> answers) >.5. Null hypothesis is p=.5. The null
>>>>>> hypothsis cannot be rejected (and the claim cannot be
>>>>>> supported) at the 95% significance level.
>>>>>
>>>>> Welcome to the group! Out of curiosity, what
>>>>> significance level does 61% support?
>>>>
>>>> You haven't formed the question properly. 61% is
>>>> statisically signifcant or not, depending on the total
>>>> number of trials.
>>>
>>> Okay, in 39 trials, what level of significance does 61%
>>> indicate?
>>>
>>
>> In this case nothing, because the basic experiment seems
>> to be so flawed.
>
> In a perfectly designed test with 39 trials, what level
> of significance does 61% indicate?
It is on the web - do your own research.
Walt
January 18th 08, 03:26 PM
Shhhh! I'm Listening to Reason! wrote:
> On Jan 17, 5:25 pm, Oliver Costich > wrote:
>
>>> Don't count on it. From TFA: "But of the 39 people who took this test,
>>> 61% said they preferred the expensive cable." Hmmme. 39 trials. 50-50
>>> chance. How statistically significant is 61%? You do the math.
>
> Why is this important to you, so much so that you have blasted so many
> posts in this thread?
I've blasted "so many posts"? WTF?
I count three. This will make four. You must have me confused with
somebody else.
//Walt
Arny Krueger
January 18th 08, 03:31 PM
"Walt" > wrote in message
> Shhhh! I'm Listening to Reason! wrote:
>> On Jan 17, 5:25 pm, Oliver Costich
>> > wrote:
>>>> Don't count on it. From TFA: "But of the 39 people
>>>> who took this test, 61% said they preferred the
>>>> expensive cable." Hmmme. 39 trials. 50-50 chance. How statistically
>>>> significant is 61%? You do the
>>>> math.
>>
>> Why is this important to you, so much so that you have
>> blasted so many posts in this thread?
>
> I've blasted "so many posts"? WTF?
>
> I count three. This will make four. You must have me
> confused with somebody else.
I think I counted 7 posts to this thread from ****R. The interesting
question about the Middiot Clique is which of them is less self-aware.
Right now Stephen, Jenn, ****R and the Middiot himself are duking it out for
the dishonor. ;-)
Clyde Slick
January 18th 08, 03:50 PM
On 17 Jan, 20:17, George M. Middius <cmndr _ george @ comcast . net>
wrote:
> Clyde Slick said:
>
> > Arny dreams of voltmeters. That is the extent of Arny's imagination.
> > My imagination centers upon a certain city bus
>
> I think the repetitions have conditioned Turdy to flinch when he detects a
> bus rolling in his vicinity. I switched my bet to "getting electrocuted by
> lightning". The odds on this bet are considerably better.
Personally, I like choking on a ham sandwich.
Clyde Slick
January 18th 08, 03:54 PM
On 18 Jan, 00:15, Oliver Costich > wrote:
> On Wed, 16 Jan 2008 10:52:40 -0800 (PST), John Atkinson
>
> > wrote:
> >http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>
> >Money quote: "I was struck by how the best-informed people at the
> >show -- like John Atkinson and Michael Fremer of Stereophile
> >Magazine -- easily picked the expensive cable."
>
> >So that's that, then. :-)
>
> >John Atkinson
> >Editor, Stereophile
>
> From the article: Using two identical CD players, I tested a $2,000,
> eight-foot pair of Sigma Retro Gold cables from Monster Cable, which
> are as thick as your thumb, against 14-gauge, hardware-store speaker
> cable. Many audiophiles say they are equally good. I couldn't hear a
> difference and was a wee bit suspicious that anyone else could. But of
> the 39 people who took this test, 61% said they preferred the
> expensive cable.
>
> Back to reality: 61% correct in one experiment fails to reject that
> they can't tell the difference. If the claim is that listeners can
> tell the better cable more the half the time, then to support that you
> have to be able to reject that the in the population of all audio
> interested listeners, the correct guesses occur half the time or less.
> 61% of 39 doesn't do it. (Null hypothesis is p=.5, alternative
> hypothesis is p>.5. The null hypthesis cannot be rejected with the
> sample data given.)
>
> In other words, that 61% of a sample of 39 got the correct result
> isn't sufficient evidence that in the general population of listeners
> more than half can pick the better cable.
>
> So, I'd say "that's hardly that".
You seem to be mixing difference with preference; you reference both
for the same test. And just what is the general population of
listeners? Are you testing the 99% who don't give a rat's
ass anyway? If so, so what. Or are you testing people who actually
care?
Clyde Slick
January 18th 08, 03:55 PM
On 18 Jan, 00:20, Oliver Costich > wrote:
> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger" >
> wrote:
>
> >"Harry Lavo" > wrote in message
>
> >> "Arny Krueger" > wrote in message
> . ..
> >>> "Harry Lavo" > wrote in message
>
> >>>> "Arny Krueger" > wrote in message
> ...
> >>>>> "ScottW" > wrote in message
>
> >>>>>> On Jan 16, 10:52 am, John Atkinson
> >>>>>> > wrote:
> >>>>>>>http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>
> >>>>>>> Money quote: "I was struck by how the best-informed
> >>>>>>> people at the show -- like John Atkinson and Michael
> >>>>>>> Fremer of Stereophile Magazine -- easily picked the
> >>>>>>> expensive cable."
>
> >>>>> It was a single blind test - appeals to everybody who
> >>>>> is ignorant of the well-known failings of single blind
> >>>>> tests.
>
> >>>> Arny, double-blind vs. single-blind adds an extra level
> >>>> of *assurance* that the test is fully blind.
>
> >>> No, DBT it removes a relevant significant variable that
> >>> is well-known to exist.
>
> >> No, Arny. *That *could* or *may* exist.
>
> >Saying that takes a ton of suspended disbelief. But from reading your posts
> >over the years Harry, I'm sure you have it in you.
>
> >> Somewhere in
> >> your college education, you skipped the class in logic, I
> >> guess.
>
> In my several years of graduate school in mathemeatics, I skipped
> neither the logic nor the statistics classes. Logic is on the side of
> not making decisions about human behavior without sufficient testing
> using good design of experiment method and statistical analysis.
>
> Very little of the claims about people being able to discern
> differences in cables is supported by such testing.
>
> >Harry, it doesn't take a degree in philosophy to understand proper
> >experiemental design.
>
> >>> However Harry, its not your fault that your knowlege
> >>> about experimental design was based on OJT at what, a
> >>> cereal company?
>
> >> Just about one of the most sophisticated companies in the
> >> world when it came to consumer testing....yeah, over ten
> >> years of test planning, design, and interpretation.
>
> >That's strange considering all of your rants against their objectivity.
>
> >> Beats ashtrays.
>
> >I have no idea how that relates to the current discussion. Since I've never
>smoked, my interest in ashtrays could be less, but I don't know how.
you missed the session on common sense, too bad.
Clyde Slick
January 18th 08, 04:03 PM
On 18 Jan, 14:20, "Arny Krueger" > wrote:
>
> One of the inspirations for the development of double blind
> testing was my wife
I am touched that you find your wife to be such an inspirational
experience.
BTW. just what other woman did you double blind test her against?
Clyde Slick
January 18th 08, 04:04 PM
On 18 Jan, 14:23, "Arny Krueger" > wrote:
> "John Atkinson" > wrote in
> > Remind me again how many times Arny Krueger has been
> > quoted in the Wall Street Journal?
>
> This is not logical discussion or even just rhetoric, this is abuse.
HUH?????
George M. Middius
January 18th 08, 04:05 PM
Clyde Slick said:
> > > Arny dreams of voltmeters. That is the extent of Arny's imagination.
> > > My imagination centers upon a certain city bus
> >
> > I think the repetitions have conditioned Turdy to flinch when he detects a
> > bus rolling in his vicinity. I switched my bet to "getting electrocuted by
> > lightning". The odds on this bet are considerably better.
>
> Personally, I like choking on a ham sandwich.
It's your money to risk as you choose. The bookie I use doesn't offer odds
on that possibility, but he does offer a line on the Krooborg drowning in
a municipal sewage tank.
George M. Middius
January 18th 08, 04:06 PM
Clyde Slick said to McInturd:
> Are you testing the 99% who don't give a rat's
> ass anyway? If so, so what. Or are you testing people who actually
> care.
Good point to bring out on, LOt"S. The 'borg viewpoint is that nobody
should be allowed to care about things that 'borgs can't afford to own.
Oliver Costich
January 18th 08, 04:25 PM
On Fri, 18 Jan 2008 00:42:13 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>
>> Very little of the claims about people being able to discern
>> differences in cables is supported by such testing.
>
>
>
>I take it you don't recommend testing for such purposes.
>Ok then...
I don't recommend badly designed tests and I don't recommend making
statistically invalid claims based on any kind of test.
But the only way to statistically support (or reject) claims about
human behavior is through well designed experiments and real
statistical analysis.
Oliver Costich
January 18th 08, 05:00 PM
On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
wrote:
>
>"Arny Krueger" > wrote in message
...
>> "Oliver Costich" > wrote in
>> message
>>
>>> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
>>> > wrote:
>>
>>>> "Harry Lavo" > wrote in message
>>>>
>>
>>
>>>>> Somewhere in
>>>>> your college education, you skipped the class in logic,
>>>>> I guess.
>>
> >>> In my several years of graduate school in mathematics, I
>>> skipped neither the logic nor the statistics classes.
>>
>> Nor did I. I did extensive undergraduate and postgraduate work in math and
>> statistics. One of the inspirations for the development of double blind
>> testing was my wife who has a degree in experimental psychology. Another
>> was a friend with a degree in mathematics.
>>
>>> Logic is on the side of not making decisions about human
>>> behavior without sufficient testing using good design of
>>> experiment method and statistical analysis.
>>
>> 4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
>>
>>> Very little of the claims about people being able to
>>> discern differences in cables is supported by such
>>> testing.
>>
>> When it comes to audible differences between cables that is not supported
>> by science and math, which is what this thread is about, none of it is
>> supported by well-designed experiments.
>>
>
>Well, then rather than "braying and flaying" why don't you communicate the
>statistics.
>
>As reported 61% of 39 people chose the correct cable. That according to my
>calculator was 24 people.
>
>According to my Binomial Distribution Table, that provides less than a 5%
>chance of error...in other words the percentage is statistically
>significant. In fact, it is significant at the 98% level....a 2% chance of
>error.
I did in other posts, but here's a summary. It's a hypothesis test of
the claim that p>.5 (p is the probability of picking the correct
cable; the claim is that listeners do better than guessing). The null
hypothesis is p=.5. The P-value is .0748 but would need to be below
.05 to support the claim at the 95% confidence level.
You rounded off .054 to .05. You would need to get a probability of
less than .05 to assert the claim, and NO, .054 isn't "close enough"
for statistical validity. I don't know where you got the 98% from.
>
>Had one more chosen correctly, the error probability would have been less
>than 1%, or "beyond a shadow of a doubt".
If it had been 25 instead of 24, it would have supported the claim at
the 95% level but not at 97% or higher. But that's the point: you
don't get to wiggle the numbers around until you get what you want. If
it had been one less, would you still make the claim? What if 39 more
people did the experiment and only 20 got it right? You can only draw
so much support for a claim from a single sample.
And nothing that can only be tested statistically is "beyond a shadow
of a doubt" unless you mean "supported at a very high level of
confidence" which isn't the case here, even with another correct
"guess". Statistics can only be used to support a claim up to the
probability (1-confidence level) of falsely supporting an invalid
conclusion.
The underlying model for determining whether binary selection is
random is tossing a coin. Tossing a coin 39 times and getting 24 heads
doesn't mean the coin is biased towards heads.
>
>So presumably John and Michael did at least this well to be singled out by
>the reporter.
Who obviously was deeply knowledgeable about statistics.
>
>Is this why you are desperately flaying at the test, Arny...inventing
>"possibilities" without a single shred of evidence to support your
>conjectures? Because you know (if you truly do know math and statistics)
>that the test statistics hold up (but don't have the integrity to say so)?
>
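The binomial arithmetic being traded back and forth in this thread is easy to check directly. A minimal Python sketch of the exact one-sided tail probability (the function name is mine):

```python
from math import comb

def binomial_tail(successes, n, p0=0.5):
    """Exact one-sided P-value: the chance of at least `successes`
    correct answers out of n trials if every answer is a fair guess."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

print(round(binomial_tail(24, 39), 4))  # 0.0998 -- not below .05
print(round(binomial_tail(25, 39), 4))  # 0.0541 -- still not below .05
```

The exact tail for 25 of 39 is the .054 cited above; the .0748 figure appears to come from a normal approximation rather than the exact binomial. Either way, both results stay above the .05 cutoff.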
Oliver Costich
January 18th 08, 05:02 PM
On Fri, 18 Jan 2008 09:16:01 -0500, "Arny Krueger" >
wrote:
>"Harry Lavo" > wrote in message
>
>> As reported 61% of 39 people chose the correct cable. That according to my
>> calculator was 24 people.
>
>As reported, the experiment was invalid. No further analysis is necessary.
>
Bertrand Russell once pointed out that one of Augustine's seven
arguments for God's existence was valid. That doesn't mean God exists.
Similarly, even if this experiment were valid, the data doesn't
support the claim.
Clyde Slick
January 18th 08, 05:08 PM
On 18 Ian, 18:00, Oliver Costich > wrote:
> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
> wrote:
>
> [quoted statistics exchange snipped]
>
> The underlying model for determining whether binary selection is
> random is tossing a coin. Tossing a coin 39 times and getting 24 heads
> doesn't mean the coin is biased towards heads.
>
As a practical matter as a "CONSUMER", I don't really care
whether or not a statistically relevant number of people,
from a sample of people I care nothing about, heard differences,
or had a preference. What matters to me, as a "CONSUMER",
is what my particular preference is.
Oliver Costich
January 18th 08, 05:19 PM
On Thu, 17 Jan 2008 17:44:27 -0600, MiNe 109
> wrote:
>In article >,
> Oliver Costich > wrote:
>
>> On Thu, 17 Jan 2008 12:56:23 -0500, Walt >
>> wrote:
>>
>> wrote:
>> >> On Jan 16, 10:52 am, John Atkinson > wrote:
>> >>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>> >>>
>> >>> Money quote: "I was struck by how the best-informed people at the
>> >>> show -- like John Atkinson and Michael Fremer of Stereophile
>> >>> Magazine -- easily picked the expensive cable."
>> >>
>> >> So will you be receiving your $1 million from Randi anytime soon?
>> >
>> >Don't count on it. From TFA: "But of the 39 people who took this test,
>> >61% said they preferred the expensive cable." Hmmme. 39 trials. 50-50
>> >chance. How statistically significant is 61%? You do the math.
>> >(HINT: it ain't.)
>>
>> Here's the math: Claim is p (proportion of correct answers) >.5. Null
>> hypothesis is p=.5. The null hypothsis cannot be rejected (and the
>> claim cannot be supported) at the 95% significance level.
>
>Welcome to the group! Out of curiosity, what significance level does 61%
>support?
>
>Stephen
First you have to find out where the 61% came from. In this case, I
presume it is 24 out of 39. From the sample data and the claim about
the population proportion, you can compute a number called the
P-Value, not to be confused with the population probability in the
claim, usually denoted "p". To be able to support a claim that more
than half of the population can do better than guessing, you need the
P-Value for p=.5, which in this case is .07477. To support the claim
that p>.5 at the 95% confidence level, you need the P-Value to be less
than (1 - confidence level). So for 95%, you need a P-Value of less
than .05; for 93%, you need a P-Value less than .07. Looks like 24 out
of 39 supports the claim at the 92% level.
However, that's not how you do statistics. You don't compute the
P-Value and then fish around for a significance level that supports
your claim (or rejects it, depending on what side of the argument you
are on).
>
>> >And of course this doesn't even address the single-blind nature of the
>> >test. See http://en.wikipedia.org/wiki/Clever_Hans
>> >
The data from badly designed experiments is useless for analysis. I
would have thought that was obvious.
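For what it's worth, the .07477 figure quoted above matches the normal approximation to the binomial, computed without a continuity correction. A short Python sketch (the function name is mine):

```python
from math import erf, sqrt

def normal_approx_p_value(successes, n, p0=0.5):
    """One-sided P-value for the claim p > p0, using the normal
    approximation to the binomial with no continuity correction."""
    z = (successes - n * p0) / sqrt(n * p0 * (1 - p0))
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail area P(Z >= z)

pv = normal_approx_p_value(24, 39)
print(round(pv, 4))      # 0.0748 -- above .05, not significant at 95%
print(round(1 - pv, 3))  # 0.925 -- presumably the "92% level" in question
```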
John Atkinson[_2_]
January 18th 08, 05:19 PM
On Jan 18, 8:23 am, "Arny Krueger" > wrote:
> "John Atkinson" > wrote in
>
> > Remind me again how many times Arny Krueger has been
> > quoted in the Wall Street Journal?
>
> This is not logical discussion or even just rhetoric, this is abuse.
Er, no. It is a straightforward question, Mr. Krueger. How many times
have you been quoted in the WSJ?
>>At least he has stopped claiming that his neglected, rarely updated,
>>almost-never-promoted websites get as much traffic as Stereophile's...
No argument from Mr. Krueger about this, at least. :-)
>> or that his recordings are as commercially available as my own. :-)
Nor this, though I do note that he continues to argue with professional
recording engineer Iain Churches that his own work is somehow
comparable. BTW, Mr. Krueger, my most-recent choral recording --
see http://www.stereophile.com/news/121007cantus/ -- was No.9
in NPR's Top Next-Generation Classical CDs of 2007. Even if I
am unaware of truncated reverb tails, as you mistakenly claim in
another thread. How are your own choral recordings doing?
John Atkinson
Editor, Stereophile
"Well Informed" - The Wall Street Journal
Oliver Costich
January 18th 08, 05:20 PM
On Fri, 18 Jan 2008 08:21:43 -0500, "Arny Krueger" >
wrote:
>"MiNe 109" > wrote in message
>> In article >,
>> Oliver Costich > wrote:
>> [earlier quotes snipped]
>>
>> Welcome to the group! Out of curiosity, what significance
>> level does 61% support?
>
>You haven't formed the question properly. 61% is statistically significant or
>not, depending on the total number of trials.
>
Yes. 61% of 39 is not, but 61% of 50 is.
Oliver Costich
January 18th 08, 05:21 PM
On Fri, 18 Jan 2008 07:43:13 -0600, MiNe 109
> wrote:
>In article >,
> "Arny Krueger" > wrote:
>
>> [earlier quotes snipped]
>> >
>> > Welcome to the group! Out of curiosity, what significance
>> > level does 61% support?
>>
>> You haven't formed the question properly. 61% is statistically significant or
>> not, depending on the total number of trials.
>
>Okay, in 39 trials, what level of significance does 61% indicate?
>
>Stephen
About 92%
Oliver Costich
January 18th 08, 05:21 PM
On Fri, 18 Jan 2008 08:45:24 -0600, MiNe 109
> wrote:
>In article >,
> "Arny Krueger" > wrote:
>
>> [earlier quotes snipped]
>> >
>> > Okay, in 39 trials, what level of significance does 61%
>> > indicate?
>> >
>>
>> In this case nothing, because the basic experiment seems to be so flawed.
>
>In a perfectly designed test with 39 trials, what level of significance
>does 61% indicate?
>
>Stephen
Still about 92%, and Generalissimo Franco is still dead.
Oliver Costich
January 18th 08, 05:23 PM
On Thu, 17 Jan 2008 15:57:58 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 17, 5:25 pm, Oliver Costich > wrote:
>
>> >Don't count on it. From TFA: "But of the 39 people who took this test,
>> >61% said they preferred the expensive cable." Hmmme. 39 trials. 50-50
>> >chance. How statistically significant is 61%? You do the math.
>
>Why is this important to you, so much so that you have blasted so many
>posts in this thread?
>
>Is this really, really important?
Only if you want to understand what the test tells you, and lots of
people got it wrong.
>
>> >(HINT: it ain't.)
To you, and I don't care much either. I do care about such bad logic.
>
>OK then. ;-)
Clyde Slick
January 18th 08, 05:24 PM
On 18 Ian, 18:19, John Atkinson > wrote:
> On Jan 18, 8:23 am, "Arny Krueger" > wrote:
>
> How are your own choral recordings doing?
>
Singing the blues.
Bill Riel
January 18th 08, 05:25 PM
In article <191df265-d5b6-4ce5-a4ef-
>,
says...
> On 18 Ian, 14:20, "Arny Krueger" > wrote:
> >
>
> > One of the inspirations for the development of double blind
> > testing was my wife
>
> I am touched that you find your wife to be such an inspirational
> experience.
> BTW. just what other woman did you double blind test her against?
LOL! Very good.
--
Bill
Oliver Costich
January 18th 08, 05:32 PM
On Fri, 18 Jan 2008 00:48:21 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> Walt wrote:
>>>> vinylanach wrote:
>>>
>>>
>>>
>>>
>>>> So will you be receiving your $1 million from Randi anytime soon?
>>>
>>> Don't count on it. From TFA: "But of the 39 people who took this
>>> test, 61% said they preferred the expensive cable." Hmmme. 39
>>> trials. 50-50 chance. How statistically significant is 61%? You do
>>> the math. (HINT: it ain't.)
>>
>> Here's the math: Claim is p (proportion of correct answers) >.5. Null
>> hypothesis is p=.5. The null hypothesis cannot be rejected (and the
>> claim cannot be supported) at the 95% significance level.
>
>
>Well yes, Mr. Costich, the test results aren't scientifically valid, but it
>didn't disprove that the sound differences heard by participants did
>not physically exist.
>
Of course not. Certainty is not in the realm of statistical analysis.
Let's say you want to claim that a certain coin is biased to produce
heads when flipped. That you flip it 39 times and get 24 heads is not
sufficient to support the claim at a 95% confidence level. If you
lower your standard or do a lot more flips and still get 61%, the
conclusion will change.
I'm sure there are audible differences. The issue is whether they are
enough to make consistent determinations. A bigger issue for those of
us who just listen to music is whether the differences are detectable
when you are emotionally involved in the music and not just playing
"golden ears".
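The coin-flip analogy can be checked by simulation as well as by formula. A small Monte Carlo sketch in Python (the run count and seed are arbitrary choices of mine):

```python
import random

random.seed(42)  # arbitrary seed, for repeatability

# How often does pure guessing -- a fair coin -- produce a result at
# least as lopsided as 24 correct answers out of 39?
runs = 100_000
hits = sum(
    1
    for _ in range(runs)
    if sum(random.random() < 0.5 for _ in range(39)) >= 24
)
print(hits / runs)  # roughly 0.10: about one run of guessing in ten
```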
George M. Middius
January 18th 08, 05:33 PM
Clyde Slick said:
> > > Remind me again how many times Arny Krueger has been
> > > quoted in the Wall Street Journal?
> > This is not logical discussion or even just rhetoric, this is abuse.
> HUH?????
Krooger is practicing his martyr shtick for church. Only two days till the
next roast.
George M. Middius
January 18th 08, 05:33 PM
MiNe 109 said:
> > > In a perfectly designed test with 39 trials, what level
> > > of significance does 61% indicate?
> >
> > It is on the web - do your own research.
>
> Thanks! You've been a big help in formulating the correct question.
Stephen, please stop abusing the Krooborg.
Oliver Costich
January 18th 08, 05:34 PM
On Fri, 18 Jan 2008 01:02:47 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> Walt wrote:
>>>> John Atkinson wrote:
>>>
>>>
>>>
>>>
>>>> Remind me again how many times Arny Krueger has been
>>>> quoted in the Wall Street Journal?
>>>
>>> Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken
>>> Lay.
>>>
>>> What's your point?
>>
>> So has Osama Bin Laden. The point is that he's devoid of a sound
>> argument.
>
>
>Mr. Costich, there is no sound argument to improve upon a strawman
>argument. It just doesn't exist.
Agreed.
>
>
>
>>>
>>> //Walt
>
>
>Incidentally Mr. Costich, how well do you know Arny Krueger if you
>don't mind me asking so.
>
I only know of his existence from the newsgroup, if that's his real
name:-) BTW, I don't necessarily agree with much of his opinion.
Oliver Costich
January 18th 08, 05:35 PM
On Fri, 18 Jan 2008 01:19:57 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich > wrote:
>>
>>
>>
>>
>> I think that the Nobel Prize also pays a million bucks. I'd go for the
>> double play:-)
>
>
>What sort of test should one have in mind for this type of opportunity
>to ensure success, Mr. Costich ?
>
The proof is in the pudding:-)
George M. Middius
January 18th 08, 05:35 PM
John Atkinson said:
> How are your own choral recordings doing?
No calls to the plumber in the last three months, thank you very much.
Oliver Costich
January 18th 08, 05:44 PM
On Thu, 17 Jan 2008 15:54:42 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 17, 5:15 pm, Oliver Costich > wrote:
>
>> In other words, that 61% of a sample of 39 got the correct result
>> isn't sufficient evidence that in the general population of listeners
>> more than half can pick the better cable.
>>
>> So, I'd say "that's hardly that".
>
>I'm curious what percent of the "best informed" got. I mean, you could
>mix in hot dog vendors, the deaf, people who might try to fail just to
>be contrary, you, and so on, and get different results. Apparently JA
>and MF did better than random chance.
However "random chance" is defined. To make a valid statement about
the abilities of the "best informed", you'd have to define that
population and do the experiment on them. If 24 of them got it right
out of 39, you'd still not be able to support the claim at the 95%
confidence level.
>
>The real issue to me is "who cares". People who want expensive cables,
>wires, cars, clothes, or whatever, will buy them. People who want to
>tell other people what they should or shouldn't buy will come out of
>the woodwork to bitch about it. ;-)
>
>This seems to have really gotten your dander up. Why?
I don't care much about it either. If people want to buy overpriced
stuff based on bogus claims that's fine with me. What bugs me is that
they try to support the claims based on bogus experiments and bad
analysis. I spend way too much time in classrooms trying to
communicate the importance of critical thinking to today's college
students (and it ain't easy) to just let this sloppy logic pass.
By the way, I don't use lamp cord or Home Depot interconnects in my
system.
Oliver Costich
January 18th 08, 05:46 PM
On Fri, 18 Jan 2008 01:21:16 -0800, "JBorg, Jr."
> wrote:
>> Shhhh! wrote:
>>> Oliver Costich wrote:
>>
>>
>>
>>
>>
>>> In other words, that 61% of a sample of 39 got the correct result
>>> isn't sufficient evidence that in the general population of listeners
>>> more than half can pick the better cable.
>>>
>>> So, I'd say "that's hardly that".
>>
>> I'm curious what percent of the "best informed" got. I mean, you could
>> mix in hot dog vendors, the deaf, people who might try to fail just to
>> be contrary, you, and so on, and get different results.
>
>
>
>Well asked.
>
What population of listeners was the claim made for and how was it
defined? My guess is that however it's constructed, it's a lot bigger
than 39.
Oliver Costich
January 18th 08, 05:58 PM
On Fri, 18 Jan 2008 00:36:18 +0000, Eeyore
> wrote:
>
>
>Oliver Costich wrote:
>
>> Back to reality: 61% correct in one experiment fails to reject that
>> they can't tell the difference.
>
>61% is statistically close enough as doesn't matter to pure 50-50% random choice.
>
>Try flicking coins and see if you get a perfect 50-50 distribution for any given
>sample size. You WON'T. In fact pure 50-50 would be the exception by a mile.
>
>No, 61% is as good as proof that there's NO difference. Which there ISN'T of
>course. Copper is copper is copper. High pricing, alleged magic and phoney
>marketing doesn't make if any different.
>
>Graham
You have to be more specific. 61% may or may not be significant
depending on the sample size and significance level desired. It's also
possible that the hypothesis based on the claim can be rejected when
it's true, and you can fail to reject it when it's false. That's the
nature of statistical analysis.
Based on the data given, even accepting the design of the experiment
(a different issue), at the 95% significance level, you can't support
the claim. That does not mean it is false. It just means there's not
enough evidence.
Sometimes guilty people get acquitted.
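That the same percentage can pass or fail depending on sample size is easy to demonstrate with the exact binomial tail. A Python sketch at roughly 61% correct (the function name and the larger sample sizes are my own choices):

```python
from math import comb

def binomial_tail(successes, n, p0=0.5):
    """Probability of at least `successes` correct answers in n trials
    under the null hypothesis that every answer is a coin flip."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Roughly 61% correct at increasing sample sizes:
for correct, n in [(24, 39), (61, 100), (244, 400)]:
    print(correct, n, round(binomial_tail(correct, n), 4))
# 24/39 stays above .05; the same proportion in larger samples
# drops well below it.
```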
Oliver Costich
January 18th 08, 06:01 PM
On Fri, 18 Jan 2008 07:54:54 -0800 (PST), Clyde Slick
> wrote:
>On 18 Ian, 00:15, Oliver Costich > wrote:
>> On Wed, 16 Jan 2008 10:52:40 -0800 (PST), John Atkinson
>>
>> > wrote:
>> >http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>
>> >Money quote: "I was struck by how the best-informed people at the
>> >show -- like John Atkinson and Michael Fremer of Stereophile
>> >Magazine -- easily picked the expensive cable."
>>
>> >So that's that, then. :-)
>>
>> >John Atkinson
>> >Editor, Stereophile
>>
>> From the article: Using two identical CD players, I tested a $2,000,
>> eight-foot pair of Sigma Retro Gold cables from Monster Cable, which
>> are as thick as your thumb, against 14-gauge, hardware-store speaker
>> cable. Many audiophiles say they are equally good. I couldn't hear a
>> difference and was a wee bit suspicious that anyone else could. But of
>> the 39 people who took this test, 61% said they preferred the
>> expensive cable.
>>
>> Back to reality: 61% correct in one experiment fails to reject that
>> they can't tell the difference. If the claim is that listeners can
>> tell the better cable more the half the time, then to support that you
>> have to be able to reject that in the population of all audio-
>> interested listeners, the correct guesses occur half the time or less.
>> 61% of 39 doesn't do it. (Null hypothesis is p=.5, alternative
>> hypothesis is p>.5. The null hypothesis cannot be rejected with the
>> sample data given.)
>>
>> In other words, that 61% of a sample of 39 got the correct result
>> isn't sufficient evidence that in the general population of listeners
>> more than half can pick the better cable.
>>
>> So, I'd say "that's hardly that".
>
>you seem to be mixing difference with preference; you reference both
>for the same test.
For the purpose of statistical analysis it makes no difference.
> And just what is the general population of
>listeners?
You tell me. I presume that those who attend CES would be a good
one to use. What would you use and how would you construct a simple
random sample from it?
>Are you testing the 99% who don't give a rat's
>ass anyway? If so, so what. Or are you testing people who actually
>care.
Oliver Costich
January 18th 08, 06:06 PM
On Fri, 18 Jan 2008 08:30:20 -0500, "Arny Krueger" >
wrote:
>"JBorg, Jr." > wrote in message
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>>> Arny Krueger wrote:
>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> More proof that single blind tests are nothing more
>>>>> than defective double blind tests.
>>>>
>>>>
>>>> From this article, the author wrote, "... the expensive
>>>> cables sounded roughly 5% better. Remember, by
>>>> definition, an audiophile is one who will bear any
>>>> burden, pay any price, to get even a tiny improvement
>>>> in sound." Only 5% ?
>>>
>>> Even so, it was probably 100% imagination.
>
>> How can that be so? From the article, it said, "... 39
>> people who took this test, 61% said they preferred the
>> expensive cable."
>
>> At what percentage do you consider it imagination, and
>> when it is not.
>
>Well Borg, this post is more evidence that ignorance of basic statistics is
>a common problem among golden ears. It's not a well-formed question. It's
>not the percentage of correct answers that defines statistical significance;
>it's both the percentage of correct answers and the total number of trials.
And the predetermined level of significance.
>And, that's all based on the idea that basic experiment was well-designed.
>
>The most fundamental question is whether the experiment was well-designed.
>
>> Somehow, this showdown at the CES looked like a
>> DBT sans blackbox.
>
>Nope. This comment is even more evidence that ignorance of basic
>experimental design is a common problem among golden ears. The basic rule
>of double blind testing is that no clue other than the independent variable
>is available to the listener. In this alleged test, the person who
>controlled the cables interacted with the listeners. In a proper DBT, nobody
>or anything that could possibly reveal the identity of the object chosen
>for comparison is accessible in any way to the listener.
>
>
Harry Lavo
January 18th 08, 08:53 PM
"Oliver Costich" > wrote in message
...
> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
> wrote:
>
>> [earlier quotes snipped]
>>
>>Well, then rather than "braying and flaying" why don't you communicate the
>>statistics.
>>
>>As reported 61% of 39 people chose the correct cable. That according to
>>my
>>calculator was 24 people.
>>
>>According to my Binomial Distribution Table, that provides less than a 5%
>>chance of error...in other words the percentage is statistically
>>significant. In fact, it is significant at the 98% level....a 2% chance
>>of
>>error.
>
> I did in other posts but here's a summary. Hypothesis test of claim
> that p>.5 (p is the probability that more than half of listeners can do
> better than guessing). Null hypothesis is p=.5. The P-value is .0748
> but would need to be below .05 to support the claim at the 95%
> Confidence Level.
>
> You rounded off .054 to .05. You would need to get a probability of
> less than .05 to assert the claim, and NO, .054 isn't "close enough"
> for statistical validity. I don't know where you got the 98% from.
I saw your previous post, found it hard to believe with a sample of 39, and
so checked it myself. I used a professionally published 100x100 Binomial
Distribution Table with the correct P value for every combination of
right/total sample. Without checking your math, I'm not about to yield to
your numbers.
And I didn't round off anything...the probabilities are right out of the
table: .020 for 24/39 and .008 for 25/39.
>>Had one more chosen correctly, the error probability would have been less
>>than 1%, or "beyond a shadow of a doubt".
>
> If it had been 25 instead of 24 it would have supported the claim at
> the 95% level but not at 97% or higher. But that's the point. You
> don't get to wiggle around the numbers so you get what you want. If it
> had been one less, would you still make the claim? What about if 39 more
> people did the experiment and only 20 got it right. You can only draw
> so much support for a claim from a single sample.
>
> And nothing that can only be tested statistically is "beyond a shadow
> of a doubt" unless you mean "supported at a very high level of
> confidence" which isn't the case here, even with another correct
> "guess". Statistics can only be used to support a claim up to the
> probability (1-confidence level) of falsely supporting an invalid
> conclusion.
>
> The underlying model for determining whether binary selection is
> random is tossing a coin. Tossing a coin 39 times and getting 24 heads
> doesn't mean the coin is biased towards heads.
I understand all that...I will hope you intended this for others.
>
>>
>>So presumably John and Michael did at least this well to be singled out by
>>the reporter.
>
> Who obviously was deeply knowledgable about statistics.
Perhaps not, and so he could be wrong. But presumably the test designer
would have corrected him if he were wildly so.
>>
>>Is this why you are desperately flaying at the test, Arny...inventing
>>"possibilities" without a single shred of evidence to support your
>>conjectures? Because you know (if you truly do know math and statistics)
>>that the test statistics hold up (but don't have the integrity to say so)?
>>
>
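The disputed arithmetic above is easy to reproduce. A minimal Python sketch (not part of the thread), assuming "61% of 39" means 24 correct answers; it computes both the exact one-tailed binomial probability and the normal-approximation z-test for a proportion that introductory texts use, which give noticeably different answers at this sample size and may account for some of the disagreement:

```python
from math import comb, erfc, sqrt

def exact_tail_p(successes: int, n: int, p: float = 0.5) -> float:
    """Exact one-tailed binomial probability P(X >= successes) under guessing."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

def z_test_p(successes: int, n: int, p0: float = 0.5) -> float:
    """One-tailed normal-approximation test for a proportion (no continuity
    correction), as in most introductory statistics courses."""
    phat = successes / n
    z = (phat - p0) / sqrt(p0 * (1 - p0) / n)
    return 0.5 * erfc(z / sqrt(2))  # upper-tail normal probability

if __name__ == "__main__":
    print(f"exact binomial tail P(X>=24 of 39): {exact_tail_p(24, 39):.4f}")
    print(f"normal-approximation z-test p-value: {z_test_p(24, 39):.4f}")
```

Run as shown, the exact tail probability comes out near 0.10 while the z-test gives about 0.075; neither clears the conventional .05 threshold.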
Shhhh! I'm Listening to Reason!
January 18th 08, 08:55 PM
On Jan 18, 9:26*am, Walt > wrote:
> Shhhh! I'm Listening to Reason! wrote:
>
> > On Jan 17, 5:25 pm, Oliver Costich > wrote:
>
> >>> Don't count on it. *From TFA: "But of the 39 people who took this test,
> >>> 61% said they preferred the expensive cable." Hmmme. *39 trials. 50-50
> >>> chance. *How statistically significant is 61%? *You do the math.
>
> > Why is this important to you, so much so that you have blasted so many
> > posts in this thread?
>
> I've blasted "so many posts"? *WTF?
>
> I count three. *This will make four. *You must have me confused with
> somebody else.
Please look at who this post was in response to. I think you are
confused about who you are.;-)
Shhhh! I'm Listening to Reason!
January 18th 08, 09:01 PM
On Jan 18, 9:31*am, "Arny Krueger" > wrote:
> "Walt" > wrote in message
>
>
>
>
>
>
>
> > Shhhh! I'm Listening to Reason! wrote:
> >> On Jan 17, 5:25 pm, Oliver Costich
> >> > wrote:
> >>>> Don't count on it. *From TFA: "But of the 39 people
> >>>> who took this test, 61% said they preferred the
> >>>> expensive cable." Hmmme. *39 trials. 50-50 chance. How statistically
> >>>> significant is 61%? *You do the
> >>>> math.
>
> >> Why is this important to you, so much so that you have
> >> blasted so many posts in this thread?
>
> > I've blasted "so many posts"? *WTF?
>
> > I count three. *This will make four. *You must have me
> > confused with somebody else.
>
> I think I counted 7 posts to this thread from ****R. The interesting
> question about the Middiot Clique is which of them is less self-aware.
;-)
> Right now Stephen, Jenn, ****R and the Middiot himself are duking it out for
> the dishonor. ;-)
The difference being, GOIA, is that I did not post the same response
to multiple posts.
We get the fact that you don't consider the methodology valid. Oliver
made double-damned *sure* we knew that he didn't (which was my point).
What you and the others have not responded to is whether that "proves"
no difference existed, or (even more importantly) why it matters to
you in the least.
As I've said before, GOIA, I'm actually in your camp when it comes to
wires and cables. I just don't see why "you people" go so bonkers when
somebody doesn't agree with you. I know, I know, you're just trying to
save them money. But it's theirs to spend as they see fit, isn't it?
LOL!
Oliver Costich
January 18th 08, 09:04 PM
On Fri, 18 Jan 2008 09:08:37 -0800 (PST), Clyde Slick
> wrote:
>On 18 Ian, 18:00, Oliver Costich > wrote:
>> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
>> wrote:
>> [earlier statistics exchange snipped]
>>
>
>
>As a practical matter as a "CONSUMER", I don't really care
>whether or not a statistically relevant number of people,
>from a sample of people I care nothing about, heard differences,
>or had a preference. What matters too me, as a "CONSUMER",
>is what my particular preference is.
That's fine and as it ought to be. But this thread was about an
experiment that some would like to make claims about.
Harry Lavo
January 18th 08, 09:04 PM
"Oliver Costich" > wrote in message
...
> On Thu, 17 Jan 2008 17:44:27 -0600, MiNe 109
> > wrote:
>
>>In article >,
>> Oliver Costich > wrote:
>>
>>> On Thu, 17 Jan 2008 12:56:23 -0500, Walt >
>>> wrote:
>>>
>>> wrote:
>>> >> On Jan 16, 10:52?am, John Atkinson >
>>> >> wrote:
>>> >>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>> >>>
>>> >>> Money quote: "I was struck by how the best-informed people at the
>>> >>> show -- like John Atkinson and Michael Fremer of Stereophile
>>> >>> Magazine -- easily picked the expensive cable."
>>> >>
>>> >> So will you be receiving your $1 million from Randi anytime soon?
>>> >
>>> >Don't count on it. From TFA: "But of the 39 people who took this test,
>>> >61% said they preferred the expensive cable." Hmmme. 39 trials. 50-50
>>> >chance. How statistically significant is 61%? You do the math.
>>> >(HINT: it ain't.)
>>>
>>> Here's the math: Claim is p (proportion of correct answers) >.5. Null
>>> hypothesis is p=.5. The null hypothesis cannot be rejected (and the
>>> claim cannot be supported) at the 95% significance level.
>>
>>Welcome to the group! Out of curiosity, what significance level does 61%
>>support?
>>
>>Stephen
>
> First you have to find out where the 61% came from. In this case, I
> presume it is 24 out of 39. From the sample data and the claim about
> the population proportion, you can compute a number called the
> P-Value, not to be confused with the population probability in the
> claim, usually denoted "p". To be able to support a claim that more
> than half of the population can do better than guessing, you need the
> P-Value for p=.5, which in this case is .07477. To support the claim
> that p>.5 at the 95% confidence level, you need the P-Value to be less
> than (1-significance level). So for 95%, you need a P-Value of less
> than .05, for 93% you need a P-Value less than .07. Looks like 24 out
> of 39 supports the claim at the 92% level.
>
> However, that's not how you do statistics. You don't compute the
> P-Value and then fish around for a significance level that supports
> your claim (or rejects it depending what side of the argument you are
> on).
The fact is, there is nothing magical about 95%, except that it has been
widely accepted in the scientific community to meet their standards of
"probably so". It gives odds of 19:1 that the null hypothesis is invalid.
A 93% value gives odds of 13:1.
A 99% value gives odds of 99:1.
See, it's all a level of the amount of risk you are willing to take in being
wrong. For me personally, I'd be happy with 90% when it came to making an
audio choice...it's not a life-or-death decision, and I'd happily accept odds
in my favor of 9:1.
In the food business, we typically used the 95% confidence level, but
sometimes set the standard to 99% if the consequences of being wrong were
severe. Coca-Cola may never have launched "New Coke" if they had been that
careful.
>>
>>> >And of course this doesn't even address the single-blind nature of the
>>> >test. See http://en.wikipedia.org/wiki/Clever_Hans
>>> >
>
> The data from badly designed experiments is useless for analysis. I
> would have thought that was obvious.
Except that nobody has presented any evidence that this was a badly designed
test. On the face of it, it was apparently a decently-designed single-blind
test. And single blind tests are not automatically invalid. They just have
a potential weakness that must diligently be guarded against.
Oliver Costich
January 18th 08, 09:06 PM
On Fri, 18 Jan 2008 12:08:44 -0600, MiNe 109
> wrote:
>In article >,
> Oliver Costich > wrote:
>
>> On Fri, 18 Jan 2008 08:45:24 -0600, MiNe 109
>> > wrote:
>>
>> >In article >,
>> > "Arny Krueger" > wrote:
>> >
>> >> "MiNe 109" > wrote in message
>> >>
>> >> > In article >,
>> >> > "Arny Krueger" > wrote:
>> >> >
>> >> >> "MiNe 109" > wrote in message
>> >> >>
>> >> >>> In article >,
>> >> >>> Oliver Costich > wrote:
>> >> >>>
>> >> >>>> [earlier quotes snipped]
>> >> >>>>
>> >> >>>> Here's the math: Claim is p (proportion of correct
>> >> >>>> answers) >.5. Null hypothesis is p=.5. The null
>> >> >>>> hypothesis cannot be rejected (and the claim cannot be
>> >> >>>> supported) at the 95% significance level.
>> >> >>>
>> >> >>> Welcome to the group! Out of curiosity, what
>> >> >>> significance level does 61% support?
>> >> >>
>> >> >> You haven't formed the question properly. 61% is
>> >> >> statisically signifcant or not, depending on the total
>> >> >> number of trials.
>> >> >
>> >> > Okay, in 39 trials, what level of significance does 61%
>> >> > indicate?
>> >> >
>> >>
>> >> In this case nothing, because the basic experiment seems to be so flawed.
>> >
>> >In a perfectly designed test with 39 trials, what level of significance
>> >does 61% indicate?
>> >
>> >Stephen
>>
>> Still about 92% and Generalisimo Franco is still dead.
>
>How about Suharto?
>
>Stephen
Not yet. The time to his death is not normally distributed:-)
Harry Lavo
January 18th 08, 09:06 PM
"Oliver Costich" > wrote in message
...
> On Fri, 18 Jan 2008 07:43:13 -0600, MiNe 109
> > wrote:
>
>>In article >,
>> "Arny Krueger" > wrote:
>>
>>> "MiNe 109" > wrote in message
>>>
>>> > In article >,
>>> > Oliver Costich > wrote:
>>> >
>>> >> [earlier quotes snipped]
>>> >>
>>> >> Here's the math: Claim is p (proportion of correct
>>> >> answers) >.5. Null hypothesis is p=.5. The null
>>> >> hypothesis cannot be rejected (and the
>>> >> supported) at the 95% significance level.
>>> >
>>> > Welcome to the group! Out of curiosity, what significance
>>> > level does 61% support?
>>>
>>> You haven't formed the question properly. 61% is statisically signifcant
>>> or
>>> not, depending on the total number of trials.
>>
>>Okay, in 39 trials, what level of significance does 61% indicate?
>>
>>Stephen
>
>
> About 92%
This is wrong, according to my binomial table...should be 98% instead.
Oliver Costich
January 18th 08, 09:08 PM
On Fri, 18 Jan 2008 12:06:54 -0600, MiNe 109
> wrote:
>In article >,
> Oliver Costich > wrote:
>
>> On Thu, 17 Jan 2008 17:44:27 -0600, MiNe 109
>> > wrote:
>>
>> >In article >,
>> > Oliver Costich > wrote:
>> >
>> >> [earlier quotes snipped]
>> >>
>> >> Here's the math: Claim is p (proportion of correct answers) >.5. Null
>> >> hypothesis is p=.5. The null hypothesis cannot be rejected (and the
>> >> claim cannot be supported) at the 95% significance level.
>> >
>> >Welcome to the group! Out of curiosity, what significance level does 61%
>> >support?
>> >
>> >Stephen
>>
>> First you have to find out where the 61% came from. In this case, I
>> presume it is 24 out of 39. From the sample data and the claim about
>> the population proportion, you can compute a number called the
>> P-Value, not to be confused with the population probability in the
>> claim, usually denoted "p". To be able to support a claim that more
>> than half of the population can do better than guessing, you need the
>> P-Value for p=.5, which in this case is .07477. To support the claim
>> that p>.5 at the 95% confidence level, you need the P-Value to be less
>> than (1-significance level). So for 95%, you need a P-Value of less
>> than .05, for 93% you need a P-Value less than .07. Looks like 24 out
>> of 39 supports the claim at the 92% level.
>
>Thanks!
>
>> However, that's not how you do statistics. You don't compute the
>> P-Value and then fish around for a significance level that supports
>> your claim (or rejects it depending what side of the argument you are
>> on).
>
>Can you fish around for confidence levels?
Confidence levels, significance levels - two sides of the same coin.
E.g., 95% confidence is same as 5% significance.
>
>> >> >And of course this doesn't even address the single-blind nature of the
>> >> >test. See http://en.wikipedia.org/wiki/Clever_Hans
>> >> >
>>
>> The data from badly designed experiments is useless for analysis. I
>> would have thought that was obvious.
>
>Which wire did Clever Hans prefer?
>
>Stephen
Shhhh! I'm Listening to Reason!
January 18th 08, 09:08 PM
On Jan 17, 6:36*pm, Eeyore >
wrote:
> No, 61% is as good as proof that there's NO difference.
That's not true, of course. I'd have to believe that even good old
insane Arns would disagree with this statement.
For one thing, if a test design is not valid to prove a difference
exists, it is certainly not valid to prove one doesn't.
George M. Middius
January 18th 08, 09:09 PM
Shhhh! said:
> > > On Jan 17, 5:25 pm, Oliver Costich > wrote:
> > > Why is this important to you, so much so that you have blasted so many
> > > posts in this thread?
> > I've blasted "so many posts"? *WTF?
> Please look at who this post was in response to. I think you are
> confused about who you are.;-)
'Borgs are interchangeable, note.
Oliver Costich
January 18th 08, 09:11 PM
On Fri, 18 Jan 2008 13:01:00 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>[earlier quoted exchange snipped]
>The difference being, GOIA, is that I did not post the same response
>to multiple posts.
>
>We get the fact that you don't consider the methodology valid. Oliver
>made double-damned *sure* we knew that he didn't (which was my point).
>What you and the others have not responded to is whether that "proves"
>no difference existed, or (even more importantly) why it matters to
>you in the least.
>
>As I've said before, GOIA, I'm actually in your camp when it comes to
>wires and cables. I just don't see why "you people" go so bonkers when
>somebody doesn't agree with you. I know, I know, you're just trying to
>save them money. But it's theirs to spend as they see fit, isn't it?
>
>LOL!
My point was that even if the test was well designed (another thread
for another day, and I won't be there:-) that the data don't support
the claim that people can distinguish one of these cables from the
other. That doesn't mean that they can't. It means that this test
doesn't support it.
If the data don't support the claim, it doesn't matter what the design
was.
Oliver Costich
January 18th 08, 09:13 PM
On Fri, 18 Jan 2008 09:19:48 -0800 (PST), John Atkinson
> wrote:
>On Jan 18, 8:23*am, "Arny Krueger" > wrote:
>> "John Atkinson" > wrote in
>>
>> > Remind me again how many times Arny Krueger has been
>> > quoted in the Wall Street Journal?
>>
>> This is not logical discussion or even just rhetoric, this is abuse.
>
>Er, no. It is a straightforward question, Mr. Krueger. How many times
>have you been quoted in the WSJ?
And what would you hope to discern from that number?
>
>>>At least he has stopped claiming that his neglected, rarely updated,
>>>almost-never-promoted websites get as much traffic as Stereophile's...
>
>No argument from Mr. Krueger about this, at least. :-)
>
>>> or that his recordings are as commercially available as my own. :-)
>
>Nor this, though I do note that he continues to argue with professional
>recording engineer Iain Churches that his own work is somehow
>comparable. BTW, Mr. Krueger, my most-recent choral recording --
>see http://www.stereophile.com/news/121007cantus/ -- was No.9
>in NPR's Top Next-Generation Classical CDs of 2007. Even if I
>am unaware of truncated reverb tails, as you mistakenly claim in
>another thread. How are your own choral recordings doing?
>
>John Atkinson
>Editor, Stereophile
>"Well Informed" - The Wall Street Journal
Shhhh! I'm Listening to Reason!
January 18th 08, 09:19 PM
On Jan 18, 11:44*am, Oliver Costich >
wrote:
> On Thu, 17 Jan 2008 15:54:42 -0800 (PST), "Shhhh! I'm Listening to
>
> Reason!" > wrote:
> >On Jan 17, 5:15*pm, Oliver Costich > wrote:
>
> >> In other words, that 61% of a sample of 39 got the correct result
> >> isn't sufficient evidence that in the general population of listeners
> >> more than half can pick the better cable.
>
> >> So, I'd say "that's hardly that".
>
> >I'm curious what percent of the "best informed" got. I mean, you could
> >mix in hot dog vendors, the deaf, people who might try to fail just to
> >be contrary, you, and so on, and get different results. Apparently JA
> >and MF did better than random chance.
>
> However "random chance" is defined. To make a valid statement about
> the abilities of the "best informed", you'd have to define that
> population and do the experiment on them. If 24 of them got it right
> out of 39, then you'd still not be able to support the claim at the
> 95% confidence level.
The claim I was basing that question on was the statement about how
the author was impressed with "how easily" JA and MF and the other
"best informed" picked the more expensive cable. Your question would
have to be answered by the author, as I do not know.
> >The real issue to me is "who cares". People who want expensive cables,
> >wires, cars, clothes, or whatever, will buy them. People who want to
> >tell other people what they should or shouldn't buy will come out of
> >the woodwork to bitch about it. ;-)
>
> >This seems to have really gotten your dander up. Why?
>
> I don't care much about it either. If people want to buy overpriced
> stuff based on bogus claims that's fine with me. What bugs me is that
> they try to support the claims based on bogus experiments and bad
> analysis. I spend way too much time in classrooms trying to
> communicate the importance of critical thinking to today's college
> students (and it ain't easy) to just let this sloppy logic pass.
Fair enough. If you really want to have some fun, read virtually any
post by "ScottW". His sloppy thinking and poor communication will
certainly catch your attention.:-)
What do you teach?
> By the way, I don't use lamp cord or Home Depot interconnects in my
> system.
I do not use expensive wires or cables in my system. I just don't
really care if others do.
Walt
January 18th 08, 09:51 PM
Shhhh! I'm Listening to Reason! wrote:
>>> Why is this important to you, so much so that you have blasted so many
>>> posts in this thread?
>> I've blasted "so many posts"? WTF?
>
> Please look at who this post was in response to. I think you are
> confused about who you are.;-)
You replied to Oliver's post, but since you snipped everything he wrote
and responded only to what I had written I assumed you were talking to me.
Anyway, as for why it's important to me, well, in 20+ years of following
the cable debate this is the first instance I've seen of a blind test
indicating that differences in speaker cables are audible. So, some
questions about the methodology and statistical analysis are in order.
Maybe JA can really hear the difference between $2k Monster cable and 14
gauge zipcord. If that's actually the case, I'm interested.
//Walt
John Atkinson[_2_]
January 18th 08, 10:24 PM
On Jan 18, 4:51*pm, Walt > wrote:
> Maybe JA can really hear the difference between $2k Monster cable
> and 14 gauge zipcord. *If that's actually the case, I'm interested.
If it was printed in a newspaper, it must be true, right? :-)
John Atkinson
Editor, Stereophile
"Well-informed" - The Wall Street Journal
George M. Middius
January 18th 08, 10:32 PM
Shhhh! said:
> > No, 61% is as good as proof that there's NO difference.
>
> That's not true, of course. I'd have to believe that even good old
> insane Arns would disagree with this statement.
>
> For one thing, if a test design is not valid to prove a difference
> exists, it is certainly not valid to prove one doesn't.
Unless one happens to "know" that all alleged differences are nonexistent,
in which case contrary "test" results are prima facie "wrong" and
conforming "test" results are "proof" that the received "knowledge" is
correct and true.
You seem oddly lacking in the faith necessary to despise high-end audio.
Have you even learned to hate music yet?
JBorg, Jr.[_2_]
January 19th 08, 02:24 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>>> Arny Krueger wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> More proof that single blind tests are nothing more
>>>>> than defective double blind tests.
>>>>
>>>> From this article, the author wrote, "... the expensive
>>>> cables sounded roughly 5% better. Remember, by
>>>> definition, an audiophile is one who will bear any
>>>> burden, pay any price, to get even a tiny improvement
>>>> in sound." Only 5% ?
>>>
>>> Even so, it was proabably 100% imagination.
>
>> How can that be so? From the article, it said, "... 39
>> people who took this test, 61% said they preferred the
>> expensive cable."
>
>> At what percentage do you consider it imagination, and
>> when it is not.
>
> Well Borg, this post is more evidence that ignorance of basic
> statistics is a common problem among golden ears. It's not a
> well-formed question. It's not the percentage of correct answers that
> defines statistical signicance, its both the percentage of correct answers
> and the total number of trials. And, that's all based on the idea that
> basic experiment was well-designed.
> The most fundamental question is whether the experiment was
> well-designed.
I made no claim that the test results were based upon a
well-designed scientific experiment. What I asked about was your
contention that the 61% who preferred the sound produced by the
expensive cables did so, perhaps, based on their imagination.
>> Somehow, this showdown at the CES looked like a
>> DBT sans blackbox.
>
> Nope. This comment is even more evidence that ignorance of basic
> experimental design is a common problem among golden ears.
From what I understand, the participants were not informed which
cable was playing, or when it was playing.
> The basic rule of double blind testing is that no clue other than the
> independent variable is available to the listener. In this alleged
> test, the person who controlled the cables interacted with the
> listeners. In a proper DBT, nobody or anything that could possibly
> reveal the identity of the object chosen for comparison is accessible
> in any way to the listener.
I reread the article and it seems to be SBT.
JBorg, Jr.[_2_]
January 19th 08, 02:40 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>
>>
>>
>>
>>
>>
>>> Here's the math: Claim is p (proportion of correct answers) >.5.
>>> Null hypothesis is p=.5. The null hypothsis cannot be rejected (and
>>> the claim cannot be supported) at the 95% significance level.
>>
>>
>> Well yes, Mr. Costich, the test results aren't scientifically valid,
>> but they didn't prove that the sound differences heard by
>> participants do not physically exist.
>>
>
> Of course not. Certainty is not in the realm of statistical analysis.
Right. Why then do Arny and his ilk consistently invoke statistical
analysis in audio testing, claiming to prove that the sound
differences heard by audiophiles are the product of their fevered
imaginations?
> Let's say you want to claim the a certain coin is biased to produce
> heads when flipped. That you flip it 39 times and get 24 heads is not
> sufficient to support the claim at a 95% confidence level. If you
> lower your standard or do a lot more flips and still get 61%, the
> conclusion will change.
Ok.
> I'm sure there are audible differences. The issue is whether they are
> enough to make consistent determinations. A bigger issue for those of
> us who just listen to music is whether the differences are detectable
> when you are emotionally involved in the music and not just playing
> "golden ears".
Well then, you agree that subtle differences do exist.
JBorg, Jr.[_2_]
January 19th 08, 02:47 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>
>>>
>>>
>>>
>>>
>>> Very little of the claims about people being able to discern
>>> differences in cables is supported by such testing.
>>
>> I take it you don't recommend testing for such purposes.
>> Ok then...
>
> I don't recommend badly designed tests and I don't recommend
> making statistically invalid claims based on any kind of test.
>
> But the only way to statistically support (or reject) claims about
> human behavior is through well designed experiments and real
> statistical analysis.
May I interject, then, based on what you said above, that audio
tests such as SBTs and ABX/DBTs are poorly designed experiments,
and will fail to prove that the sound differences heard by audiophiles
do not physically exist.
JBorg, Jr.[_2_]
January 19th 08, 03:37 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> Walt wrote:
>>>>> John Atkinson wrote:
>>>>
>>>>
>>>>
>>>>
>>>>> Remind me again how many times Arny Krueger has been
>>>>> quoted in the Wall Street Journal?
>>>>
>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken
>>>> Lay.
>>>>
>>>> What's your point?
>>>
>>> So has Osama Bin Laden. The point is that he's devoid of a sound
>>> argument.
>>
>>
>> Mr. Costich, there is no sound argument to improve upon a strawman
>> arguments. It just doesn't exist.
>
>
>
> Agreed.
Ok.
>> Incidentally Mr. Costich, how well do you know Arny Krueger if you
>> don't mind me asking so.
>>
> I only know of his existence from the news group, if that's his real
> name:-)
He claims to have submitted peer-reviewed papers to the AES.
He also claims to be an audio engineer, well educated in
statistical analysis and well-designed audio experiments. To be honest,
Mr. Costich, he is the worst offender against common sense and has been
pestering this group for a long, long time.
> BTW, I don't necessarily agree with much of his opinion.
I am very happy to hear that.
JBorg, Jr.[_2_]
January 19th 08, 03:59 AM
> Oliver Costich wrote:
>> JBorg, Jr.wrote:
>>> Shhhh! wrote:
>>>> Oliver Costich wrote:
>>>
>>>
>>>
>>>
>>>
>>>> In other words, that 61% of a sample of 39 got the correct result
>>>> isn't sufficient evidence that in the general population of
>>>> listeners more than half can pick the better cable.
>>>>
>>>> So, I'd say "that's hardly that".
>>>
>>> I'm curious what percent of the "best informed" got. I mean, you
>>> could mix in hot dog vendors, the deaf, people who might try to
>>> fail just to be contrary, you, and so on, and get different results.
>>
>>
>>
>> Well asked.
>>
>
> What population of listeners was the claim made for and how was it
> defined? My guess is that however it's constructed, it a lot bigger
> than 39.
No information was provided for that. Still, valid parameters for such a
test should exclude participants with personal biases and preferences,
and those lacking extended listening experience, for example.
JBorg, Jr.[_2_]
January 19th 08, 04:33 AM
> Oliver Costich wrote:
>> Mr.clydeslick wrote:
>>> Oliver Costich wrote:
>>>
>>>
>>>
>>>
>>>snip
>>>
>>> Back to reality: 61% correct in one experiment fails to reject that
>>> they can't tell the difference. If the claim is that listeners can
>>> tell the better cable more the half the time, then to support that
>>> you have to be able to reject that the in the population of all
>>> audio interested listeners, the correct guesses occur half the time
>>> or less. 61% of 39 doesn't do it. (Null hypothesis is p=.5,
>>> alternative hypothesis is p>.5. The null hypthesis cannot be
>>> rejected with the sample data given.)
>>>
>>> In other words, that 61% of a sample of 39 got the correct result
>>> isn't sufficient evidence that in the general population of
>>> listeners more than half can pick the better cable.
>>>
>>> So, I'd say "that's hardly that".
>>
>> you seem to be mixing difference with preference, you reference
>> both, for the same test.
>
> For the purpose of statistical analysis it makes no difference.
But for the purpose of sensible analysis, shouldn't it make a
difference?
As you have said, logic is on the side of not making decisions
about human behavior. Isn't this required to ensure sufficient testing,
using well-designed experiments and statistical analysis?
>> And just what is the general population of listeners.
>
> You tell me. I presume that those who attend CES would be
> a good one to use.
That could very well include someone like Howard Ferstler, a raving
lunatic with a well-known hearing loss out to destroy high-end audio
and derogate all audiophiles young and young at heart. Provided,
of course, he can *afford* the fares.
> What would you use and how would you construct a simple
> random sample from it?
>
>> Are you testing the 99% who don't give a rat's
>> ass anyway? If so, so what. Or are you testing people who actually
>> care.
We need a bias controlled experiment.
JBorg, Jr.[_2_]
January 19th 08, 05:11 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>
>>
>>
>>
>>
>> Incidentally Mr. Costich, how well do you know Arny Krueger if you
>> don't mind me asking so.
>>
> I only know of his existence from the news group, if that's his real
> name:-)
We also got him busted by John Corbett for his iridescent display
of falsehood and abuse of statistics in his effort to glorify his pcabx
website.
http://tinyurl.com/37xrpl
http://tinyurl.com/3bwvw6
> BTW, I don't necessarily agree with much of his opinion.
I am very happy to hear that.
Shhhh! I'm Listening to Reason!
January 19th 08, 07:21 AM
On Jan 18, 4:32*pm, George M. Middius <cmndr _ george @ comcast .
net> wrote:
> Shhhh! said:
>
> > > No, 61% is as good as proof that there's NO difference.
>
> > That's not true, of course. I'd have to believe that even good old
> > insane Arns would disagree with this statement.
>
> > For one thing, if a test design is not valid to prove a difference
> > exists, it is certainly not valid to prove one doesn't.
>
> Unless one happens to "know" that all alleged differences are nonexistent,
> in which case contrary "test" results are prima facie "wrong" and
> conforming "test" results are "proof" that the received "knowledge" is
> correct and true.
>
> You seem oddly lacking in the faith necessary to despise high-end audio.
I suppose I'm an atheist in that regard as well.
I don't have the faith to despise high-end audio, but I also lack the
faith to shell out $1,000/meter for wire.:-)
> Have you even learned to hate music yet?
I've heard that's an acquired taste. After listening to what's on
GOIA's website I can see how he acquired it. ;-)
Oliver Costich
January 19th 08, 08:16 AM
On Fri, 18 Jan 2008 18:47:00 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Very little of the claims about people being able to discern
>>>> differences in cables is supported by such testing.
>>>
>>> I take it you don't recommend testing for such purposes.
>>> Ok then...
>>
>> I don't recommend badly designed tests and I don't recommend
>> making statistically invalid claims based on any kind of test.
>>
>> But the only way to statistically support (or reject) claims about
>> human behavior is through well designed experiments and real
>> statistical analysis.
>
>
>May I interject, then, based on what you said above, that audio
>tests such as SBTs and ABX/DBTs are poorly designed experiments,
>and will fail to prove that the sound differences heard by audiophiles
>do not physically exist.
>
>
>
Check the poster. I never said that.
Oliver Costich
January 19th 08, 08:47 AM
On Fri, 18 Jan 2008 15:53:05 -0500, "Harry Lavo" >
wrote:
>
>"Oliver Costich" > wrote in message
...
>> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
>> wrote:
>>
>>>
>>>"Arny Krueger" > wrote in message
...
>>>> "Oliver Costich" > wrote in
>>>> message
>>>>
>>>>> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
>>>>> > wrote:
>>>>
>>>>>> "Harry Lavo" > wrote in message
>>>>>>
>>>>
>>>>
>>>>>>> Somewhere in
>>>>>>> your college education, you skipped the class in logic,
>>>>>>> I guess.
>>>>
>>>>> In my several years of graduate school in mathemeatics, I
>>>>> skipped neither the logic nor the statistics classes.
>>>>
>>>> Nor did I. I did extensive undergraduate and postgraduate work in math
>>>> and
>>>> statistics. One of the inspirations for the development of double blind
>>>> testing was my wife who has a degree in experimental psychology. Another
>>>> was a friend with a degree in mathematics.
>>>>
>>>>> Logic is on the side of not making decisions about human
>>>>> behavior without sufficient testing using good design of
>>>>> experiment method and statistical analysis.
>>>>
>>>> 4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
>>>>
>>>>> Very little of the claims about people being able to
>>>>> discern differences in cables is supported by such
>>>>> testing.
>>>>
>>>> When it comes to audible differences between cables that is not
>>>> supported
>>>> by science and math, which is what this thread is about, none of it is
>>>> supported by well-designed experiments.
>>>>
>>>
>>>Well, then rather than "braying and flaying" why don't you communicate the
>>>statistics.
>>>
>>>As reported 61% of 39 people chose the correct cable. That according to
>>>my
>>>calculator was 24 people.
>>>
>>>According to my Binomial Distribution Table, that provides less than a 5%
>>>chance of error...in other words the percentage is statistically
>>>significant. In fact, it is significant at the 98% level....a 2% chance
>>>of
>>>error.
>>
>> I did in other posts but here's a summary. Hypothesis test of claim
>> that p>.5 (p is the probability that more the half of listeners can do
>> better than guessing). Null hypothesis is p=.5. The P-value is .0748
>> but would need to be below .05 to support the claim at the 95%
>> Confidence Level.
>>
>> You rounded off .054 to .05. You would need to get a probability of
>> less than .05 to assert the claim, and NO, .054 isn't "close enough"
>> for statistical validity. I don't know where you got the 98% from.
>
>I saw your previous post, found it hard to believe with a sample of 39, and
>so checked it myself. I used a professionally published 100x100 Binomial
>Distribution Table with the correct P value for every combination of
>right/total sample. Without checking your math, I'm not about to yield to
>your numbers.
First, binomial tables aren't what one uses to test claims or
hypotheses. Don't bring a knife to a gunfight.
Binomial distributions are related to confidence levels in a much more
sophisticated way than just looking up values in a table. But tell me how
you used the tables. Did you use a density function table or a
cumulative distribution function table? And why is that the right way to
test a hypothesis?
The math can be found in any elementary statistics book. Look for the
chapter on hypothesis testing. You can also use the one on confidence
intervals (not levels).
Here's the gist of it. The claim is that people in the chosen
population can identify the more expensive cable more than half the
time, so the claim is that p>.5. The hypothesis you have to reject is
that they can't do better than just guessing, so you assume the
population proportion is .5. You compute a test statistic by
subtracting the assumed value of .5 from the sample proportion of .61 and
dividing by the standard error of the sampling distribution of
proportions, which is the square root of (.5*.5)/39 in this situation.
You get what's called the test statistic for the experiment, in this
case 1.37. To support the claim that p>.5, you need the test
statistic to be bigger than the z-alpha for the confidence
level, and for 95% that's 1.645.
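[Editorial note: the arithmetic described in the preceding paragraph is mechanical enough to script. This minimal Python sketch is an added illustration, not part of the original post; the .61 and 39 are the figures from the thread. It reproduces the 1.37 test statistic:]

```python
from math import sqrt

def one_prop_z(p_hat, p0, n):
    """One-tailed one-proportion z statistic for H0: p = p0 vs H1: p > p0."""
    se = sqrt(p0 * (1 - p0) / n)   # standard error under the null hypothesis
    return (p_hat - p0) / se

z = one_prop_z(0.61, 0.5, 39)
print(round(z, 2))   # 1.37 -- below the one-tailed 95% critical value of 1.645
```

Since 1.37 < 1.645, the hypothesis of pure guessing cannot be rejected, which is exactly the conclusion drawn above.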
>
>And I didn't round off anything...the probabilities are right out of the
>table....020 for 24/39 and .008 for 25/39.
But it wasn't 24/39, was it? And 24/39 doesn't support the claim at the
95% confidence level.
>
>>>Had one more chosen correctly, the error probability would have been less
>>>than 1%, or "beyond a shadow of a doubt".
>>
>> If it had been 25 instead of 24 it would have supported the claim at
>> the 95% level but not at 97% or higher. But that's the point. You
>> don't get to wiggle around the numbers so you get what you want. If it
>> had been one less, you you still make the claim? What about if 39 more
>> people did the experiment and only 20 got it right. You can only draw
>> so much support for a claim from a single sample.
>>
>> And nothing that can only be tested statistically is "beyond a shadow
>> of a doubt" unless you mean "supported at a very high level of
>> confidence" which isn't the case here, even with another correct
>> "guess". Statistics can only be used to support a claim up to the
>> probability (1-confidence level) of falsely supporting an invalid
>> conclusion.
>>
>> The underlying model for determining whether binary selection is
>> random is tossing a coin. Tossing a coin 39 times and getting 24 heads
>> doesn't mean the coin is baised towards heads.
>
>I understand all that...I will hope you intended this for others.
>
>
>>
>>>
>>>So presumably John and Michael did at least this well to be singled out by
>>>the reporter.
>>
>> Who obviously was deeply knowledgable about statistics.
>
>Perhaps not, and so he could be wrong. But presumable the test designer
>would have corrected him if he were wildly so.
Presuming the test designer had a clue about proper statistical
experiment design and analysis, which appears questionable.
>
>
>>>
>>>Is this why you are desparately flaying at the test, Arny...inventing
>>>"possibibilites" without a single shred of evidence to support your
>>>conjectures? Because you know (if you truly do know math and statistics)
>>>that the test statistics hold up (but don't have the integrity to say so)?
>>>
>>
>
Oliver Costich
January 19th 08, 08:52 AM
On Fri, 18 Jan 2008 16:06:21 -0500, "Harry Lavo" >
wrote:
>
>"Oliver Costich" > wrote in message
...
>> On Fri, 18 Jan 2008 07:43:13 -0600, MiNe 109
>> > wrote:
>>
>>>In article >,
>>> "Arny Krueger" > wrote:
>>>
>>>> "MiNe 109" > wrote in message
>>>>
>>>> > In article >,
>>>> > Oliver Costich > wrote:
>>>> >
>>>> >> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>>>> >> > wrote:
>>>> >>
>>>> >>> wrote:
>>>> >>>> On Jan 16, 10:52?am, John Atkinson
>>>> >>>> > wrote:
>>>> >>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>> >>>>>
>>>> >>>>> Money quote: "I was struck by how the best-informed
>>>> >>>>> people at the show -- like John Atkinson and Michael
>>>> >>>>> Fremer of Stereophile Magazine -- easily picked the
>>>> >>>>> expensive cable."
>>>> >>>>
>>>> >>>> So will you be receiving your $1 million from Randi
>>>> >>>> anytime soon?
>>>> >>>
>>>> >>> Don't count on it. From TFA: "But of the 39 people who
>>>> >>> took this test, 61% said they preferred the expensive
>>>> >>> cable." Hmmme. 39 trials. 50-50 chance. How
>>>> >>> statistically significant is 61%? You do the math.
>>>> >>> (HINT: it ain't.)
>>>> >>
>>>> >> Here's the math: Claim is p (proportion of correct
>>>> >> answers) >.5. Null hypothesis is p=.5. The null
>>>> >> hypothsis cannot be rejected (and the claim cannot be
>>>> >> supported) at the 95% significance level.
>>>> >
>>>> > Welcome to the group! Out of curiosity, what significance
>>>> > level does 61% support?
>>>>
>>>> You haven't formed the question properly. 61% is statisically signifcant
>>>> or
>>>> not, depending on the total number of trials.
>>>
>>>Okay, in 39 trials, what level of significance does 61% indicate?
>>>
>>>Stephen
>>
>>
>> About 92%
>
>This is wrong, according to my binomial table...should be 98% instead.
>
You simply don't know anything about statistical testing of claims.
You've pulled the numbers off a binomial table of some sort and
interpreted the result by some method you thought would tell you
something.
Get a statistics book to understand how hypotheses are tested, and buy a
Texas Instruments TI-83 calculator to do all the computation without any
serious manipulation. You put the numbers in (in this case .5, 24, 39)
and push the button to get a number to use in the test comparison.
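[Editorial note: the disagreement between a binomial-table lookup and a hypothesis test can be made concrete. This sketch is an added illustration, assuming 24 correct out of 39; it computes both the exact binomial tail probability and the normal-approximation p-value a calculator's one-proportion z-test would report, and both land above .05:]

```python
from math import comb, erf, sqrt

n, k, p0 = 39, 24, 0.5

# Exact binomial tail: P(X >= 24) under the null p = .5. This cumulative tail,
# not the single-point table entry for exactly 24, is what a test requires.
exact_p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Normal-approximation p-value from the z statistic (no continuity correction),
# the computation a TI-83's 1-PropZTest performs.
z = (k / n - p0) / sqrt(p0 * (1 - p0) / n)
approx_p = 0.5 * (1 - erf(z / sqrt(2)))

print(exact_p > 0.05, approx_p > 0.05)  # neither rejects the null at 95%
```

The exact tail comes out near .10 and the approximation near .075; either way, the result is not significant at the 95% level, whatever a point lookup in a density table may suggest.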
Oliver Costich
January 19th 08, 08:53 AM
On Fri, 18 Jan 2008 14:24:45 -0800 (PST), John Atkinson
> wrote:
>On Jan 18, 4:51*pm, Walt > wrote:
>> Maybe JA can really hear the difference between $2k Monster cable
>> and 14 gauge zipcord. *If that's actually the case, I'm interested.
>
>If it was printed in a newspaper, it must be true, right? :-)
>
>John Atkinson
>Editor, Stereophile
>"Well-informed" - The Wall Street Journal"
Like the WSJ, where you can get contradictory opinions on the stock
market on any given day?
Oliver Costich
January 19th 08, 09:01 AM
On Fri, 18 Jan 2008 18:40:22 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>
>>>
>>>
>>>
>>>
>>>
>>>> Here's the math: Claim is p (proportion of correct answers) >.5.
>>>> Null hypothesis is p=.5. The null hypothsis cannot be rejected (and
>>>> the claim cannot be supported) at the 95% significance level.
>>>
>>>
>>> Well yes, Mr. Costich, the test results aren't scientifically valid
>>> but it didn't disproved that the sound differences heard by
>>> participants did not physically exist.
>>>
>>
>> Of course not. Certainty is not in the realm of statistical analysis.
>
>
>Right. Why then Arny and his ilk consistently assert using statistical
>analysis during audio testing claiming to proved that the sound
>differences heard by audiophiles did so based on their fevered
>imagination.
Because statistical analysis is the only tool we have to test claims
about the behavior of populations from samples. Any such test has
error, and the likelihood of error needs to be set in advance. The
significance level is precisely the probability of rejecting a true null
hypothesis.
>
>
>
>> Let's say you want to claim the a certain coin is biased to produce
>> heads when flipped. That you flip it 39 times and get 24 heads is not
>> sufficient to support the claim at a 95% confidence level. If you
>> lower your standard or do a lot more flips and still get 61%, the
>> conclusion will change
>
>Ok.
>
>> I'm sure there are audible differences. The issue is whether they are
>> enough to make consistent determinations. A bigger issue for those of
>> use who just listen to music is whether the diffeneces are detectable
>> when you are emotionally involved in the music and not just playing
>> "golden ears".
>
>Well then, you agreed that subtle differences do exist.
Sometimes even large ones, like MP3 vs. CD. That does not mean
that 61% success in a sample of 39 supports the claim that people can
tell the difference between the two cables used in the test.
>
>
>
>
>
Oliver Costich
January 19th 08, 09:03 AM
On Fri, 18 Jan 2008 19:37:01 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> Walt wrote:
>>>>>> John Atkinson wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Remind me again how many times Arny Krueger has been
>>>>>> quoted in the Wall Street Journal?
>>>>>
>>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller and Ken
>>>>> Lay.
>>>>>
>>>>> What's your point?
>>>>
>>>> So has Osama Bin Laden. The point is that he's devoid of a sound
>>>> argument.
>>>
>>>
>>> Mr. Costich, there is no sound argument to improve upon a strawman
>>> arguments. It just doesn't exist.
>>
>>
>>
>> Agreed.
>
>
>Ok.
>
>
>>> Incidentally Mr. Costich, how well do you know Arny Krueger if you
>>> don't mind me asking so.
>>>
>> I only know of his existence from the news group, if that's his real
>> name:-)
>
>He made claims that he had submitted peer-reviewed papers in AES.
>He also calim to be audio engineer and well educated concerning
>statistical analysis in well designed audio experiment. To be honest,
>Mr. Costich, he is the worst offender of common sense and has been
>pestering this group for a long, long time.
That's an opinion, which is in the name of the newsgroup. There are
many offenders here, not only against common sense but against scientific
method.
>
>
>> BTW, I don't necessarily agree with much of his opinion.
>
>
>I am very happy to hear that.
>
>
>
>
>
Oliver Costich
January 19th 08, 09:08 AM
On Fri, 18 Jan 2008 19:59:03 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr.wrote:
>>>> Shhhh! wrote:
>>>>> Oliver Costich wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>> isn't sufficient evidence that in the general population of
>>>>> listeners more than half can pick the better cable.
>>>>>
>>>>> So, I'd say "that's hardly that".
>>>>
>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>> fail just to be contrary, you, and so on, and get different results.
>>>
>>>
>>>
>>> Well asked.
>>>
>>
>> What population of listeners was the claim made for and how was it
>> defined? My guess is that however it's constructed, it a lot bigger
>> than 39.
>
>
>No information were provided for that. Still, valid parameter for such
>test should exclude participants with personal biases and preferences
>and those lacking extended listening experience, as examples.
>
Personal bias can be filtered out with well-designed double-blind
experiments. That's the whole point of the method: neither the
tester nor those tested know what they are listening to. The population
of people with listening experience is still large, but shrinking.
The golden ear cult would like to define the population to be those
among them who have a good enough run of guesses to get a statistically
significant outcome:-)
Oliver Costich
January 19th 08, 09:10 AM
On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 18, 11:44*am, Oliver Costich >
>wrote:
>> On Thu, 17 Jan 2008 15:54:42 -0800 (PST), "Shhhh! I'm Listening to
>>
>> Reason!" > wrote:
>> >On Jan 17, 5:15*pm, Oliver Costich > wrote:
>>
>> >> In other words, that 61% of a sample of 39 got the correct result
>> >> isn't sufficient evidence that in the general population of listeners
>> >> more than half can pick the better cable.
>>
>> >> So, I'd say "that's hardly that".
>>
>> >I'm curious what percent of the "best informed" got. I mean, you could
>> >mix in hot dog vendors, the deaf, people who might try to fail just to
>> >be contrary, you, and so on, and get different results. Apparently JA
>> >and MF did better than random chance.
>>
>> However "random chance" is defined. To make a valid statement about
>> the abilities of the "best informed", you'd have to define that
>> population and do the experiment on them. If 24 of them got it right
>> out of 39, then you'd still not be able to support the calim and the
>> 95% confidence level.
>
>The claim I was basing that question on was the statement about how
>the author was impressed with "how easily" JA and MF and the other
>"best informed" picked the more expensive cable. Your question would
>have to be answered by the author, as I do not know.
>
>> >The real issue to me is "who cares". People who want expensive cables,
>> >wires, cars, clothes, or whatever, will buy them. People who want to
>> >tell other people what they should or shouldn't buy will come out of
>> >the woodwork to bitch about it. ;-)
>>
>> >This seems to have really gotten your dander up. Why?
>>
>> I don't care much about it either. If people want to buy overpriced
>> stuff based on bogus claims that's fine with me. What bugs me is that
>> they try to support the claims based on bogus experiments and bad
>> analysis. I spend way too much time in classrooms trying to
>> communicate the importance of critical thinking to today's college
>> students (and it ain't easy) to just let this sloppy logic pass.
>
>Fair enough. If you really want to have some fun, read virtually any
>post by "ScottW". His sloppy thinking and poor communication will
>certainly catch your attention.:-)
>
>What do you teach?
>
>> By the way, I don't use lamp cord or Home Depot interconnects in my
>> system.
>
>I do not use expensive wires or cables in my system. I just don't
>really care if others do.
I don't care what they use. I do care that they want to justify it
with sloppy logic and BS.
Oliver Costich
January 19th 08, 09:13 AM
On Fri, 18 Jan 2008 13:08:05 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 17, 6:36*pm, Eeyore >
>wrote:
>
>> No, 61% is as good as proof that there's NO difference.
>
>That's not true, of course. I'd have to believe that even good old
>insane Arns would disagree with this statement.
>
>For one thing, if a test design is not valid to prove a difference
>exists, it is certainly not valid to prove one doesn't.
In this sample of 39, 61% is not sufficient to reject the hypothesis that
they are just guessing, or flipping a coin to decide. It does not mean
that people can't really tell; it just means this result doesn't show
that they can. That the design is bad is another issue, but it has no
effect on the analysis of the data.
Oliver Costich
January 19th 08, 09:16 AM
On Fri, 18 Jan 2008 17:32:49 -0500, George M. Middius <cmndr _ george
@ comcast . net> wrote:
>
>
>Shhhh! said:
>
>> > No, 61% is as good as proof that there's NO difference.
>>
>> That's not true, of course. I'd have to believe that even good old
>> insane Arns would disagree with this statement.
>>
>> For one thing, if a test design is not valid to prove a difference
>> exists, it is certainly not valid to prove one doesn't.
>
>Unless one happens to "know" that all alleged differences are nonexistent,
>in which case contrary "test" results are prima facie "wrong" and
>conforming "test" results are "proof" that the received "knowledge" is
>correct and true.
>
>You seem oddly lacking in the faith necessary to despise high-end audio.
>Have you even learned to hate music yet?
I, for one, do care about the music and would rather just listen to it
than sit around doing badly designed tests. When you are involved in
the music, subtle differences, even if they exist, aren't really
discernable.
>
>
>
Oliver Costich
January 19th 08, 09:19 AM
On Fri, 18 Jan 2008 16:51:54 -0500, Walt >
wrote:
>Shhhh! I'm Listening to Reason! wrote:
>
>
>>>> Why is this important to you, so much so that you have blasted so many
>>>> posts in this thread?
>
>>> I've blasted "so many posts"? WTF?
>>
>> Please look at who this post was in response to. I think you are
>> confused about who you are.;-)
>
>You replied to Oliver's post, but since you snipped everything he wrote
>and responded only to what I had written I assumed you were talking to me.
>
>Anyway, as for why it's important to me, well, in 20+ years of following
>the cable debate this is the first instance I've seen of a blind test
>indicating that differences in speaker cables are audible. So, some
>questions about the methodology and statistical analysis are in order.
On the contrary, the test result is that at the 95% confidence level,
you can't reject that they were just guessing.
>
>Maybe JA can really hear the difference between $2k Monster cable and 14
>gauge zipcord. If that's actually the case, I'm interested.
>
>
>//Walt
Oliver Costich
January 19th 08, 09:28 AM
On Fri, 18 Jan 2008 20:33:43 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> Mr.clydeslick wrote:
>>>> Oliver Costich wrote:
>>>>
>>>>
>>>>
>>>>
>>>>snip
>>>>
>>>> Back to reality: 61% correct in one experiment fails to reject that
>>>> they can't tell the difference. If the claim is that listeners can
>>>> tell the better cable more the half the time, then to support that
>>>> you have to be able to reject that the in the population of all
>>>> audio interested listeners, the correct guesses occur half the time
>>>> or less. 61% of 39 doesn't do it. (Null hypothesis is p=.5,
>>>> alternative hypothesis is p>.5. The null hypthesis cannot be
>>>> rejected with the sample data given.)
>>>>
>>>> In other words, that 61% of a sample of 39 got the correct result
>>>> isn't sufficient evidence that in the general population of
>>>> listeners more than half can pick the better cable.
>>>>
>>>> So, I'd say "that's hardly that".
>>>
>>> you seem to be mixing difference with preference, you reference
>>> both, for the same test.
>>
>> For the purpose of statistical analysis it makes no difference.
>
>
>But for the purpose of sensible analysis, shouldn't it makes a
>difference.
I don't think so. I can't see any way the statistical analysis would
be different.
>
>As you have said that logic is on the side of not making decisions
>about human behavior. Isn't this reqiured to ensure sufficient testing
>using well designed experiment and statistical analysis.
I didn't say that.
>
>
>>> And just what is the general population of listeners.
>>
>> You tell me. I presume that those who attend CES would be
>> a good one to use.
>
>That could very well include someone like Howard Ferstler, a raving
>lunatic with a well-known hearing loss out to destroy high-end audio
>and derogate all audiophiles young and young at heart. Provided,
>of course, he can *afford* the fares.
Obviously you want to weed out people who are absolutely sure you
can't tell. But leaving out people who are skeptics biases the result
as well. I doubt that the 39 people who took the test comprised a
simple random sample, another design flaw. On the other hand, I'd like
to see a well-designed test using a simple random sample from the
population of true believers, just to see if they really can. Even if
some people can tell, I suspect that it's a very small number. I do
know a couple of people who can really lock onto particular
characteristics and use them to identify what's playing.
>
>
>> What would you use and how would you construct a simple
>> random sample from it?
>>
>>> Are you testing the 99% who don't give a rat's
>>> ass anyway? If so, so what. Or are you testing people who actually
>>> care.
>
>
>We need a bias controlled experiment.
Yes, but neither the golden ear cult nor the nonbelievers would
accept the results if they didn't agree with them. It's become a
religious, not a scientific, argument.
>
>
Harry Lavo
January 19th 08, 01:43 PM
"Oliver Costich" > wrote in message
...
> On Fri, 18 Jan 2008 19:59:03 -0800, "JBorg, Jr."
> > wrote:
>
>>> Oliver Costich wrote:
>>>> JBorg, Jr.wrote:
>>>>> Shhhh! wrote:
>>>>>> Oliver Costich wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>>> isn't sufficient evidence that in the general population of
>>>>>> listeners more than half can pick the better cable.
>>>>>>
>>>>>> So, I'd say "that's hardly that".
>>>>>
>>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>> fail just to be contrary, you, and so on, and get different results.
>>>>
>>>>
>>>>
>>>> Well asked.
>>>>
>>>
>>> What population of listeners was the claim made for and how was it
>>> defined? My guess is that however it's constructed, it a lot bigger
>>> than 39.
>>
>>
>>No information were provided for that. Still, valid parameter for such
>>test should exclude participants with personal biases and preferences
>>and those lacking extended listening experience, as examples.
>>
>
> Personal bias can be filtered out with well designed double blind
> experiments. That's the whole point of that method. If neither the
> tester or those tested know what they are listening to. People with
> listening experience is still a large, but shrinking, population.
>
> The golden ear cult would like to define the population to be those
> among that have a good enough run of guesses to get a statistically
> significant outcome:-)
While I agree with this in general, one criticism that can't be refuted is
that, if a participant's bias is "there are no differences", he can simply
go into the test and choose randomly, thus weighting the result toward "no
difference". There is no built-in safeguard against that, even in a DBT.
The only safeguard is to know who truly holds that opinion and exclude
them. Such a test should be run only among people who are open to the idea
that there may *be* differences...so that a null result goes against their
biases.
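The dilution effect described above is easy to make concrete. A minimal sketch (hypothetical fractions, not figures from the article): if a fraction f of participants can genuinely identify the cable and the rest effectively flip a coin, the expected pooled success rate is f + (1 - f)/2, so guessers drag any observed proportion toward 50% regardless of how good the real discriminators are.

```python
# Sketch: how random guessers dilute an observed success rate.
# f is a hypothetical fraction of participants who always pick
# correctly; the remaining 1 - f guess with probability 0.5.

def expected_success_rate(f: float) -> float:
    """Expected proportion correct when a fraction f always succeeds
    and the rest guess at random."""
    return f + (1.0 - f) * 0.5

# If only a quarter of the room can truly hear a difference, the
# pooled result is already down to 62.5%, close to the reported 61%.
for f in (0.0, 0.25, 0.5, 1.0):
    print(f"f = {f:.2f} -> expected correct = {expected_success_rate(f):.3f}")
```

Inverting the formula, an observed 61% is consistent with roughly 22% of the sample being reliable discriminators and the rest guessing.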
John Atkinson[_2_]
January 19th 08, 02:56 PM
On Jan 18, 4:13 pm, Oliver Costich > wrote:
> On Fri, 18 Jan 2008 09:19:48 -0800 (PST), John Atkinson
> > asked "Arny Krueger" >:
> > How many times have you been quoted in the WSJ?
>
> And what would you hope to discern from that number?
As you may have noted, whenever it is mentioned on
Usenet that someone has done something of note, Arny
Krueger inevitably jumps in with the claim that he had
done the same or similar at an earlier date. "Been there,
done that" is his constant refrain. In this instance, I am
merely forestalling the expected claim by emphasizing
that he has not yet been quoted in that newspaper. :-)
John Atkinson
Editor, Stereophile
Harry Lavo
January 19th 08, 04:06 PM
"Oliver Costich" > wrote in message
...
> On Fri, 18 Jan 2008 15:53:05 -0500, "Harry Lavo" >
> wrote:
>
>>
>>"Oliver Costich" > wrote in message
...
>>> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
>>> wrote:
>>>
>>>>
>>>>"Arny Krueger" > wrote in message
...
>>>>> "Oliver Costich" > wrote in
>>>>> message
>>>>>
>>>>>> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
>>>>>> > wrote:
>>>>>
>>>>>>> "Harry Lavo" > wrote in message
>>>>>>>
>>>>>
>>>>>
>>>>>>>> Somewhere in
>>>>>>>> your college education, you skipped the class in logic,
>>>>>>>> I guess.
>>>>>
>>>>>> In my several years of graduate school in mathematics, I
>>>>>> skipped neither the logic nor the statistics classes.
>>>>>
>>>>> Nor did I. I did extensive undergraduate and postgraduate work in math
>>>>> and
>>>>> statistics. One of the inspirations for the development of double
>>>>> blind
>>>>> testing was my wife who has a degree in experimental psychology.
>>>>> Another
>>>>> was a friend with a degree in mathematics.
>>>>>
>>>>>> Logic is on the side of not making decisions about human
>>>>>> behavior without sufficient testing using good design of
>>>>>> experiment method and statistical analysis.
>>>>>
>>>>> 4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
>>>>>
>>>>>> Very little of the claims about people being able to
>>>>>> discern differences in cables is supported by such
>>>>>> testing.
>>>>>
>>>>> When it comes to audible differences between cables that is not
>>>>> supported
>>>>> by science and math, which is what this thread is about, none of it is
>>>>> supported by well-designed experiments.
>>>>>
>>>>
>>>>Well, then rather than "braying and flaying" why don't you communicate
>>>>the
>>>>statistics.
>>>>
>>>>As reported 61% of 39 people chose the correct cable. That according to
>>>>my
>>>>calculator was 24 people.
>>>>
>>>>According to my Binomial Distribution Table, that provides less than a
>>>>5%
>>>>chance of error...in other words the percentage is statistically
>>>>significant. In fact, it is significant at the 98% level....a 2% chance
>>>>of
>>>>error.
>>>
>>> I did in other posts but here's a summary. Hypothesis test of claim
>>> that p>.5 (p is the probability that more the half of listeners can do
>>> better than guessing). Null hypothesis is p=.5. The P-value is .0748
>>> but would need to be below .05 to support the claim at the 95%
>>> Confidence Level.
>>>
>>> You rounded off .054 to .05. You would need to get a probability of
>>> less than .05 to assert the claim, and NO, .054 isn't "close enough"
>>> for statistical validity. I don't know where you got the 98% from.
>>
>>I saw your previous post, found it hard to believe with a sample of 39,
>>and
>>so checked it myself. I used a professionally published 100x100 Binomial
>>Distribution Table with the correct P value for every combination of
>>right/total sample. Without checking your math, I'm not about to yield to
>>your numbers.
>
> First, binomial tables aren't what one uses to test claims or
> hypotheses. Don't bring a knife to a gunfight.
It certainly is for samples above 100, which is where I spent my
professional life using market research. However, I must admit that I
forgot that it gets a little more complicated with small sample sizes. The
table I used, though, had P-values specifically calculated to adjust for
small sample sizes...a fact that I confirmed with my own manual
calculations (see my reply to your other post).
> Binomial distributions are related to confidence levels in a much more
> sophisticated way than just looking at values in a table. But tell me how
> you used the tables. Did you use a density function table or a
> cumulative density function table? And why is that the right way to
> test a hypothesis?
>
> The math can be found in any elementary statistics book. Look for the
> chapter on hypothesis testing. You can also use the one on confidence
> intervals (or levels).
The table I used was calculated for small sample sizes, as my "refresher
course" with the books reminded me was necessary. But I have also manually
calculated using the small-sample-size formula for estimating standard
error and obtained the same result...see my reply to your other post.
> Here's the gist of it. The claim is that people in the chosen
> population can identify the more expensive cable more than half the
> time, so the claim is that p>.5. The hypothesis you have to reject is
> that they can't do better than just guessing, so you assume the
> population proportion is p=.5. You compute a test statistic by
> subtracting the assumed value of .5 from the sample proportion of .61 and
> dividing by the standard error of the sampling distribution of
> proportions, which is the square root of [(.5*.5)/39] in this situation.
> You get what's called the test statistic for the experiment, in this
> case 1.37. To support the claim that p>.5, you need the test
> statistic to be bigger than the z-alpha for the confidence
> level, and for 95% that's 1.645.
>>
>>And I didn't round off anything...the probabilities are right out of the
>>table....020 for 24/39 and .008 for 25/39.
>
> But it wasn't 24/39 was it? And 24/39 doesn't support the claim at the
> 95% confidence level.
Actually it was 24....what number did you use?
>>
>>>>Had one more chosen correctly, the error probability would have been
>>>>less
>>>>than 1%, or "beyond a shadow of a doubt".
>>>
>>> If it had been 25 instead of 24 it would have supported the claim at
>>> the 95% level but not at 97% or higher. But that's the point. You
>>> don't get to wiggle around the numbers so you get what you want. If it
> had been one less, would you still make the claim? What about if 39 more
>>> people did the experiment and only 20 got it right. You can only draw
>>> so much support for a claim from a single sample.
>>>
>>> And nothing that can only be tested statistically is "beyond a shadow
>>> of a doubt" unless you mean "supported at a very high level of
>>> confidence" which isn't the case here, even with another correct
>>> "guess". Statistics can only be used to support a claim up to the
>>> probability (1-confidence level) of falsely supporting an invalid
>>> conclusion.
>>>
>>> The underlying model for determining whether binary selection is
>>> random is tossing a coin. Tossing a coin 39 times and getting 24 heads
> doesn't mean the coin is biased towards heads.
>>
>>I understand all that...I will hope you intended this for others.
>>
>>
>>>
>>>>
>>>>So presumably John and Michael did at least this well to be singled out
>>>>by
>>>>the reporter.
>>>
>>> Who obviously was deeply knowledgable about statistics.
>>
>>Perhaps not, and so he could be wrong. But presumably the test designer
>>would have corrected him if he were wildly so.
>
> Presuming the test designer had a clue about proper statistical
> experiment and analysis which appears questionable.
Why is it questionable, other than that he didn't do it double-blind?
There are lots of practical reasons why a double-blind audio test can be
extremely difficult to execute, and the comparator boxes used for them
have been the subject of much criticism.
If it was single blind, and the test was run to avoid the pitfalls of same,
it can still be a valid test.
I haven't read the article, but nothing revealed here suggests that the test
was flawed other than being single-blind.
>snip<
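For reference, the one-tailed z-test walked through in the quoted exchange above can be reproduced in a few lines. (Using the exact sample proportion 24/39 rather than the rounded 0.61 gives a test statistic of about 1.44 rather than 1.37; either way it falls short of the 1.645 needed at the 95% level.) A sketch:

```python
import math

# One-tailed z-test for H0: p = 0.5 vs H1: p > 0.5,
# applied to 24 correct picks out of 39 trials.
n, correct, p0 = 39, 24, 0.5

p_hat = correct / n                   # sample proportion, ~0.6154
se = math.sqrt(p0 * (1 - p0) / n)     # standard error under H0
z = (p_hat - p0) / se                 # test statistic, ~1.44

z_crit = 1.645                        # one-tailed cutoff for alpha = .05
print(f"z = {z:.3f}, critical value = {z_crit}")
print("reject H0" if z > z_crit else "fail to reject H0 at the 95% level")
```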
Harry Lavo
January 19th 08, 04:06 PM
"Oliver Costich" > wrote in message
...
> On Fri, 18 Jan 2008 16:06:21 -0500, "Harry Lavo" >
> wrote:
>
>>
>>"Oliver Costich" > wrote in message
...
>>> On Fri, 18 Jan 2008 07:43:13 -0600, MiNe 109
>>> > wrote:
>>>
>>>>In article >,
>>>> "Arny Krueger" > wrote:
>>>>
>>>>> "MiNe 109" > wrote in message
>>>>>
>>>>> > In article >,
>>>>> > Oliver Costich > wrote:
>>>>> >
>>>>> >> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>>>>> >> > wrote:
>>>>> >>
>>>>> >>> wrote:
>>>>> >>>> On Jan 16, 10:52 am, John Atkinson
>>>>> >>>> > wrote:
>>>>> >>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>> >>>>>
>>>>> >>>>> Money quote: "I was struck by how the best-informed
>>>>> >>>>> people at the show -- like John Atkinson and Michael
>>>>> >>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>> >>>>> expensive cable."
>>>>> >>>>
>>>>> >>>> So will you be receiving your $1 million from Randi
>>>>> >>>> anytime soon?
>>>>> >>>
>>>>> >>> Don't count on it. From TFA: "But of the 39 people who
>>>>> >>> took this test, 61% said they preferred the expensive
>>>>> >>> cable." Hmmm. 39 trials. 50-50 chance. How
>>>>> >>> statistically significant is 61%? You do the math.
>>>>> >>> (HINT: it ain't.)
>>>>> >>
>>>>> >> Here's the math: Claim is p (proportion of correct
>>>>> >> answers) >.5. Null hypothesis is p=.5. The null
>>>>> >> hypothsis cannot be rejected (and the claim cannot be
>>>>> >> supported) at the 95% significance level.
>>>>> >
>>>>> > Welcome to the group! Out of curiosity, what significance
>>>>> > level does 61% support?
>>>>>
>>>>> You haven't formed the question properly. 61% is statistically
>>>>> significant or not, depending on the total number of trials.
>>>>
>>>>Okay, in 39 trials, what level of significance does 61% indicate?
>>>>
>>>>Stephen
>>>
>>>
>>> About 92%
>>
>>This is wrong, according to my binomial table...should be 98% instead.
>>
>
> You simply don't know anything about statistical testing of claims.
> You've pulled the numbers off a binomial table of some sort and
> interpreted the result by some method you thought would tell you
> something.
Don't be so frigging sarcastic. I know a great deal about statistics,
particularly their practical use, although I am not a statistician. But I
spent many years of my life working with market researchers, most of whom
had a strong statistics background, and I've had statistics courses at both
the graduate and undergraduate level. I did forget, however, that you have
to make adjustments for samples under 100, since in my work we never used
samples smaller than that. And above that level standard practice is to
assume normal distribution and use the simplified standard error
calculation. However, the Binary Table of P-Values I used was specifically
calculated to take these small sample sizes into account, so I still get the
same answer calculating them manually (see below).
>
> Get a statistics book to understand how hypotheses are tested and buy a
> Texas Instruments TI-83 calculator to do the computation without any
> serious manipulation. You put the numbers in (in this case .5, 24, 39)
> and push the button to get a number to do the test comparison.
I didn't have to get one, two were sitting on my shelf. So I gave myself a
refresher course. Used the normal table as recommended by the book to
estimate distribution for samples 30-100 (below that the "t" table), used
the formulas to calculate the standard error of
"r" (in this case, .04876), double-checked my work, and still come out with
the 95% confidence level at 23.22 (1.96 standard deviations). In other
words, 24 of 39 *is* significant at the 95% level, just as I had previously
stated, according to my calculations. Even if you want to use two standard
deviations, it works out to 23.3.
Are you absolutely certain that you didn't put 23 into your calculations by
mistake? That would give an error measure approximately in-line with these
calculations.
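Rather than arguing over tables, the exact binomial tail can be computed directly with the Python standard library. A sketch that finds the smallest number of correct answers out of 39 that would be significant at the .05 level under the null p = 0.5; it also shows why even 25 of 39 (exact tail probability about .054, the figure mentioned earlier in the thread) still falls just short:

```python
import math

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 39
# Smallest count of correct answers significant at the .05 level:
crit = min(k for k in range(n + 1) if binom_tail(k, n) < 0.05)

print(f"P(X >= 24) = {binom_tail(24, n):.4f}")   # 0.0998
print(f"P(X >= 25) = {binom_tail(25, n):.4f}")   # 0.0541
print(f"critical count = {crit}")                # 26
```

So under the exact binomial model, 26 of 39 is the first result that clears the .05 bar.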
JBorg, Jr.[_2_]
January 19th 08, 05:34 PM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr.wrote:
>>>>> Shhhh! wrote:
>>>>>> Oliver Costich wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>>> isn't sufficient evidence that in the general population of
>>>>>> listeners more than half can pick the better cable.
>>>>>>
>>>>>> So, I'd say "that's hardly that".
>>>>>
>>>>> I'm curious what percentage of the "best informed" got right. I mean, you
>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>> fail just to be contrary, you, and so on, and get different
>>>>> results.
>>>>
>>>> Well asked.
>>>>
>>> What population of listeners was the claim made for and how was it
>>> defined? My guess is that however it's constructed, it's a lot bigger
>>> than 39.
>>
>> No information was provided for that. Still, valid parameters for
>> such a test should exclude participants with personal biases and
>> preferences and those lacking extended listening experience, as
>> examples.
>>
>
> Personal bias can be filtered out with well designed double blind
> experiments. That's the whole point of that method: neither the
> tester nor those tested know what they are listening to. People with
> listening experience are still a large, but shrinking, population.
Mr. Costich, how do you filter out of DBT experiments the listener's
personal biases and preferences in sound, acquired over time through
extended listening experience? As an example, a person with a strong
affinity for, and who craves, the sound produced by a jazz ensemble tends
to be receptive to the subtle nuances produced and articulated by those
sets of instruments. Does hiding the components during a DBT remove this
adulation?
What if the subject for the test is someone like Howard Ferstler, who
admitted to having a deeply held personal vendetta towards the high-end
establishment going back to the late '70s? How would you go about
explaining that a no-difference Ferstler test result is valid?
> The golden ear cult would like to define the population to be those
> among that have a good enough run of guesses to get a statistically
> significant outcome:-)
Mr. Costich, please don't be so frigging sarcastic towards audiophiles:
audiophiles who have honed and increased their listening sensitivity by
listening to live, unamplified, and reproduced music over an extended
period of time.
<bbl...>
Shhhh! I'm Listening to Reason!
January 19th 08, 06:44 PM
On Jan 18, 4:24 pm, John Atkinson > wrote:
> On Jan 18, 4:51 pm, Walt > wrote:
>
> > Maybe JA can really hear the difference between $2k Monster cable
> > and 14 gauge zipcord. If that's actually the case, I'm interested.
>
> If it was printed in a newspaper, it must be true, right? :-)
You troll. :-)
> John Atkinson
> Editor, Stereophile
> "Well-informed" - The Wall Street Journal"
Shhhh! I'm Listening to Reason!
Critic, RAO
"The funniest guy, like, *ever*!" - VG Daily News
Shhhh! I'm Listening to Reason!
January 19th 08, 06:54 PM
On Jan 19, 3:10 am, Oliver Costich > wrote:
> On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
> Reason!" > wrote:
> >Oliver Costich wrote
> >> By the way, I don't use lamp cord or Home Depot interconnects in my
> >> system.
>
> >I do not use expensive wires or cables in my system. I just don't
> >really care if others do.
>
> I don't care what they use. I do care that they want to justify it
> with sloppy logic and BS.
I haven't seen any justifications, though I haven't read all the posts
in this thread.
As a matter of curiosity, what would happen to the results if, out of
a sample of 100 participants, 50 selected a certain product correctly
100% of the time and the other 50 selected incorrectly 100% of the
time?
That's what I was trying to get at when I wondered how often the "well-
informed" selected correctly.
Shhhh! I'm Listening to Reason!
January 19th 08, 07:15 PM
On Jan 19, 3:13 am, Oliver Costich > wrote:
> On Fri, 18 Jan 2008 13:08:05 -0800 (PST), "Shhhh! I'm Listening to
>
> Reason!" > wrote:
> >On Jan 17, 6:36 pm, Eeyore >
> >wrote:
>
> >> No, 61% is as good as proof that there's NO difference.
>
> >That's not true, of course. I'd have to believe that even good old
> >insane Arns would disagree with this statement.
>
> >For one thing, if a test design is not valid to prove a difference
> >exists, it is certainly not valid to prove one doesn't.
>
> In this sample of 39, 61% is not sufficient to reject the claim that
> they are just guessing or flipping a coin to decide. It does not mean
> that people can't really tell, it just means that it's very unlikely.
I have no problem with putting it that way. I *do* have a problem
saying that something "is as good as proof that there's NO
difference".
I may not be looking at this logically, or I may not be as good at
communicating as Graham is, but to me "likelihoods" are not absolute
"proof".
> That the design is bad is another issue, but it has no effect on the
> analysis of the data.
A poorly-designed test will "likely" give incorrect (or certainly not
valid) results. If the test is not valid, then arguing over or
analyzing the results seems silly to me. Since the test design was
questioned, with the implication that the results were skewed, the
actual question at hand is whether or not the test design is valid. If
the test design is not valid, the results must be discarded. If the
test results must be discarded, then they are not valid as "proof" of
either hypothesis.
Therefore, as I said, Graham's claim "No, 61% is as good as proof that
there's NO difference" is incorrect. He is taking the results of a
test which he may have even questioned the design of and claiming
"proof".
So are we, IYO, discussing invalid conclusions people have drawn from
the valid results of a well-designed test?
BTW, I am sure that I'm exactly like the vast majority of people in
the audio world: if Stereophile says it, it must be true. (It's funny
to me that some people get all balled up if SP says something, yet if
Limbaugh or Hannity or Glenn Beck say something that's IMO outrageous,
it's just "entertainment" to them. Talk about poor thinking and
reasoning skills!)
John Corbett
January 20th 08, 02:29 AM
In article >, "Harry Lavo"
> wrote:
> I know a great deal about statistics, particularly their practical use,
> although I am not a statistician.
Well, I am a statistician.
You seem to be so confused about statistics that you can neither perform
the calculations nor understand what they mean.
Before looking at your calculations, we need to consider something that's
been overlooked so far in this thread.
These calculations apply in the situation where the number of correct
answers has a binomial distribution. You---and other posters---seem to
have assumed that is appropriate in this example. However, a binomial
distribution describes the number of successes in a *fixed* number of
independent and identically-distributed binary trials. That would appear
to be the case if we believed that the experimenter planned to do 39
trials. I suspect that he did not pick that number before starting his
testing. Without knowing the stopping rule, we really cannot calculate
values needed to do proper tests. For instance, if he planned to do tests
until he had 15 wrong answers, and it happened to take 39 trials, then an
ordinary binomial distribution would *not* apply---we would use a negative
binomial distribution in that case. If he planned to go until he got 24
correct answers, and that happened to take 39 trials, we'd use a different
negative binomial distribution. If he had some other rule (e.g., "stop at
8 pm"), yet another distribution would be called for. If we don't know the
stopping rule, then we ought to be very cautious applying the usual simple
procedures.
So, assuming the binomial model is applicable, let's check some
calculations ...
> So I gave myself a refresher course. Used the normal table as
> recommended by the book to estimate distribution for samples 30-100
> (below that the "t" table),
I really doubt that your book says to use "t" for a *binomial* problem.
The t distribution is needed when you have independent estimates of the
mean and variance, but the variance is a function of the mean for a
binomial distribution, so the t is not appropriate. If you are using a
normal approximation for a binomial distribution, you should use a "z"
table, no matter what the sample size is. (Of course, that assumes the
sample is large enough to use a normal approximation in the first place.)
> used the formulas to calculate the standard error of
> "r" (in this case, .04876),
What is "r" here, and what formulas did you use to get .04876?
> double-checked my work, and still come out with the 95% confidence level at
> 23.22 (1.96 standard deviations).
95% _is_ a confidence level; 23.22 is _not_ a confidence level.
(Do you know what a confidence level is?)
1.96 standard deviations would apply if you were forming a two-sided
confidence interval, or if you were performing a two-tailed test; it is
the z value corresponding to an upper tail area of .025. Since you are
looking at a one-tailed test here, you need the cutoff z value for an
upper tail area of .05, which is 1.645.
> In other words, 24 of 39 *is* significant at the 95% level, just as I had
> previously stated, according to my calculations. Even if you want to use
> two standard deviations, it works out to 23.3.
No.
If you use the normal approximation to the binomial, and use the
continuity correction, you should get that 24 of 39 has a p-value of
0.1001. If you don't use the continuity correction, you get .07477;
that's what Oliver Costich did. If you do the exact calculation, based
directly on the binomial instead of an approximating normal, you get
.0998; you can see that the properly-used normal approximation is very
good. So 24 of 39 is *not* significant at the .05 level, although it is
significant at the .10 level. (Elsewhere you have indicated that you're
okay with significance at the 10% level.)
Earler in this thread, you wrote
> The fact is, there is nothing magical about 95%, except that it has been
> widely accepted in the scientific community to meet their standards of
> "probably so".
Yes, 95% is widely accepted primarily because it is widely accepted. ;-)
> It gives odds of 19:1 that the null hypothesis is invalid.
No, that is NOT what it means.
In hypothesis testing, calculations are done
_under_the_assumption_that_the_null_hypothesis_is_true_.
Stop and read that again. Got it yet?
P-values and significance levels involve probabilities of results that may
occur if the null hypothesis is true; they are not probabilities that the
null hypothesis was true in the first place. (BTW, 95% is a typical
*confidence* level. Hypothesis tests involve *significance* levels, and
.05 is a commonly used value. Although there is a connection between
these concepts, it is _not_ as simple as saying that 95% confidence is the
same as 5% significance.)
Doing real statistics is not about just doing arithmetic as an excuse to
avoid having to think about the data.
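The three figures quoted in the post above (exact .0998, uncorrected normal approximation .0748, continuity-corrected .1001) are straightforward to verify with the Python standard library. A sketch for 24 correct out of 39 under the null p = 0.5:

```python
import math

n, k, p0 = 39, 24, 0.5
se = math.sqrt(p0 * (1 - p0) / n)

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Exact binomial: P(X >= 24) for X ~ Binomial(39, 0.5)
p_exact = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

# Normal approximation without continuity correction
p_plain = 1.0 - phi((k / n - p0) / se)

# Normal approximation with continuity correction (use k - 0.5)
p_cc = 1.0 - phi(((k - 0.5) / n - p0) / se)

print(f"exact:                 {p_exact:.4f}")   # 0.0998
print(f"normal, no correction: {p_plain:.4f}")   # 0.0748
print(f"continuity-corrected:  {p_cc:.4f}")      # 0.1001
```

All three one-tailed p-values sit above .05, which is the substance of the disagreement in this subthread.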
JBorg, Jr.[_2_]
January 20th 08, 04:35 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr. wrote:
>>>>> Oliver Costich wrote:
>>>>> Very little of the claims about people being able to discern
>>>>> differences in cables is supported by such testing.
>>>>
>>>> I take it you don't recommend testing for such purposes.
>>>> Ok then...
>>>
>>> I don't recommend badly designed tests and I don't recommend
>>> making statistically invalid claims based on any kind of test.
>>>
>>> But the only way to statistically support (or reject) claims about
>>> human behavior is through well designed experiments and real
>>> statistical analysis.
>>
>> May I interject then based on what you said above that audio
>> testing such as SBT and ABX/DBT are poorly designed experiments
>> and will fail to disprove that sound differences heard by
>> audiophiles do not physically exist.
> Check the poster. I never said that.
Well, I did not say that you did, but based on what you said about
making proper statistical claims, do you think that audio testing such
as SBT and ABX/DBT are insufficient experiments to determine
whether the subtle sound differences heard by audiophiles physically
exist?
JBorg, Jr.[_2_]
January 20th 08, 04:40 AM
Oliver Costich > wrote:
> On Fri, 18 Jan 2008 18:40:22 -0800, "JBorg, Jr."
> > wrote:
>
>>> Oliver Costich wrote:
>>>> JBorg, Jr. wrote:
>>>>> Oliver Costich wrote:
>>>>> Here's the math: Claim is p (proportion of correct answers) >.5.
>>>>> Null hypothesis is p=.5. The null hypothesis cannot be rejected
>>>>> (and the claim cannot be supported) at the 95% significance level.
>>>>
>>>>
>>>> Well yes, Mr. Costich, the test results aren't scientifically valid
>>>> but it didn't disproved that the sound differences heard by
>>>> participants did not physically exist.
>>>>
>>>
>>> Of course not. Certainty is not in the realm of statistical
>>> analysis.
>>
>>
>> Right. Why then do Arny and his ilk consistently claim, using
>> statistical analysis during audio testing, to have proved that
>> the sound differences heard by audiophiles are based on their
>> fevered imagination?
>
> Because statistical analysis is the only tool we have to test claims
> about the behavior of populations from samples. Any such test has
> error and the likelihood of error needs to be set in advance. The
> significance level is precisely the probability of rejecting a true null
> hypothesis.
Well now! Disproving that the sound differences heard by
audiophiles do not physically exist is -- certainly not in the
realm of statistical analysis.
>>> Let's say you want to claim that a certain coin is biased to produce
>>> heads when flipped. That you flip it 39 times and get 24 heads is
>>> not sufficient to support the claim at a 95% confidence level. If
>>> you lower your standard or do a lot more flips and still get 61%,
>>> the conclusion will change.
>>
>> Ok.
>>
>>> I'm sure there are audible differences. The issue is whether they
>>> are enough to make consistent determinations. A bigger issue for
>>> those of us who just listen to music is whether the differences are
>>> detectable when you are emotionally involved in the music and not
>>> just playing "golden ears".
>>
>> Well then, you agreed that subtle differences do exist.
>
> Sometimes even large ones, like the MP3s vs CD. That does not mean
> that 61% success in a sample of 39 supports the claim that people can
> tell the difference between the two cables used in the test.
Ok.
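[Editor's note: Costich's "lower your standard or do a lot more flips" point checks out numerically. An illustrative sketch: the same roughly-61% hit rate that fails at n=39 becomes comfortably significant at n=100.]

```python
from math import comb

def exact_p(n, k):
    # One-sided p-value: P(X >= k) for X ~ Binomial(n, 1/2)
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

p_39 = exact_p(39, 24)    # 61.5% of 39: ~0.10, not significant at .05
p_100 = exact_p(100, 61)  # 61% of 100: ~0.018, significant at .05
print(f"24/39: p={p_39:.4f}   61/100: p={p_100:.4f}")
```

Same hit rate, opposite conclusion: sample size, not just the percentage, drives significance.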
JBorg, Jr.[_2_]
January 20th 08, 04:55 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>> Well yes, Mr. Costich, the test results aren't
>> scientifically valid but it didn't disproved that the
>> sound differences heard by participants did not physically exist.
>
> That was another potential flaw in the tests. I see no controls that
> ensured that the listeners heard the identically same selections of
> music. Therefore, the listeners may have heard differences that did
> physically exist - unfortunately they were due to random choices by
> the experimenter, not audible differences that were inherent in the
> cables.
Mr. Costich opined that disproving the sound differences heard by
audiophiles do not physically exist is not in the realm of statistical
analysis.
JBorg, Jr.[_2_]
January 20th 08, 05:15 AM
> Oliver Costich wrote:
>> JBorg, Jr.wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr. wrote:
>>>>> Oliver Costich wrote:
>>>>>> Walt wrote:
>>>>>>> John Atkinson wrote:
>>>>>>> Remind me again how many times Arny Krueger has been
>>>>>>> quoted in the Wall Street Journal?
>>>>>>
>>>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller
>>>>>> and Ken Lay.
>>>>>>
>>>>>> What's your point?
>>>>>
>>>>> So has Osama Bin Laden. The point is that he's devoid of a sound
>>>>> argument.
>>>>
>>>> Mr. Costich, there is no sound argument to improve upon a strawman
>>>> argument. It just doesn't exist.
>>>
>>> Agreed.
>>
>> Ok.
>>
>>>> Incidentally Mr. Costich, how well do you know Arny Krueger if you
>>>> don't mind me asking so.
>>>>
>>> I only know of his existence from the news group, if that's his real
>>> name:-)
>>
>> He made claims that he had submitted peer-reviewed papers in AES.
>> He also claims to be an audio engineer and well educated
>> statistical analysis in well designed audio experiment. To be
>> honest, Mr. Costich, he is the worst offender of common sense and
>> has been pestering this group for a long, long time.
>
>
> That's an opinion, which is in the name of the newsgroup.
This is indeed a newsgroup of opinion, but do you think it is proper,
as Mr. Krueger has done in the not so distant past, to declare false
claims and present them as FACTS?
> There are many offenders here, not only of common sense but of
> scientific method.
I nominate:
1.) Arny Krueger, for reasons given above.
JBorg, Jr.[_2_]
January 20th 08, 06:05 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> Mr.clydeslick wrote:
>>>>> Oliver Costich wrote:
>>>>> snip
>>>>>
>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>> that they can't tell the difference. If the claim is that
>>>>> listeners can tell the better cable more than half the time, then
>>>>> to support that you have to be able to reject that in the
>>>>> population of all audio-interested listeners, the correct guesses
>>>>> occur half the time or less. 61% of 39 doesn't do it. (Null
>>>>> hypothesis is p=.5, alternative hypothesis is p>.5. The null
>>>>> hypothesis cannot be rejected with the sample data given.)
>>>>>
>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>> isn't sufficient evidence that in the general population of
>>>>> listeners more than half can pick the better cable.
>>>>>
>>>>> So, I'd say "that's hardly that".
>>>>
>>>> you seem to be mixing difference with preference, you reference
>>>> both, for the same test.
>>>
>>> For the purpose of statistical analysis it makes no difference.
>>
>> But for the purpose of sensible analysis, shouldn't it make a
>> difference?
>
> I don't think so. I can't see any way the statistical analysis would
> be different.
Preferences are, statistically, immeasurable if the claim is that
listeners can tell the better cable more than half the time.
Agree or Disagree?
>> As you have said that logic is on the side of not making decisions
>> about human behavior. Isn't this required to ensure sufficient
>> testing using well designed experiment and statistical analysis.
>
> I didn't say that.
My response above is what I attempt to convey.
>>>> And just what is the general population of listeners.
>>>
>>> You tell me. I presume that those who attend CES would be
>>> a good one to use.
>>
>> That could very well include someone like Howard Ferstler, a raving
>> lunatic with a well-known hearing loss out to destroy high-end audio
>> and derogate all audiophiles young and young at heart. Provided,
>> of course, he can *afford* the fares.
>
> Obviously you want to weed out people who are absolutely sure you
> can't tell. But leaving out people who are skeptics biases the result
> as well. I doubt that the 39 people who did the test comprised a
> simple random sample, another design flaw. On the other hand I'd like
> to see a well designed test using a simple random sample from the
> population of true believers just to see if they really can. Even if
> some people can tell, I suspect that it's a very small number. I do
> know a couple of people who can really lock onto particular
> characteristics and use them to identify what's playing.
It is difficult to discuss this, at least for me, unless there's a specific
protocol and experimental design to reference. I hope you understand.
>>> What would you use and how would you construct a simple
>>> random sample from it?
>>>
>>>> Are you testing the 99% who don't give a rat's
>>>> ass anyway? If so, so what. Or are you testing people who actually
>>>> care.
>>
>> We need a bias controlled experiment.
>
> Yes, but neither the golden-ear cult nor the nonbelievers would
> accept the results if they didn't agree with them. It's become a
> religious, not a scientific, argument.
As far as I can tell, it's the Objectivists who have turned these into
religious arguments.
Arny Krueger
January 20th 08, 12:52 PM
"John Corbett" > wrote in message
> These calculations apply in the situation where the
> number of correct answers has a binomial distribution.
> You---and other posters---seem to have assumed that is
> appropriate in this example. However, a binomial
> distribution describes the number of successes in a
> *fixed* number of independent and identically-distributed
> binary trials. That would appear to be the case if we
> believed that the experimenter planned to do 39 trials.
> I suspect that he did not pick that number before
> starting his testing. Without knowing the stopping rule,
> we really cannot calculate values needed to do proper
> tests. For instance, if he planned to do tests until he
> had 15 wrong answers, and it happened to take 39 trials,
> then an ordinary binomial distribution would *not*
> apply---we would use a negative binomial distribution in
> that case. If he planned to go until he got 24 correct
> answers, and that happened to take 39 trials, we'd use a
> different negative binomial distribution. If he had some
> other rule (e.g., "stop at 8 pm"), yet another
> distribution would be called for. If we don't know the
> stopping rule, then we ought to be very cautious applying
> the usual simple procedures.
Good points, but you don't need me to tell you that, John.
The key point that I see here is that a valid experiment is intentional. You
set forth what you are going to do, and then you do it.
The opposite of a valid experiment is what often happens - people take data,
analyzing it as they go along, and when they have the analysis they want,
they stop taking data.
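[Editor's note: Krueger's "stop when the analysis looks right" hazard and Corbett's stopping-rule caveat are the same phenomenon. A small Monte Carlo sketch (parameters invented for illustration, not a reconstruction of the CES test) shows how peeking at the data after every trial inflates the false-positive rate of a nominal 5% test even when the null is exactly true.]

```python
import random
from math import comb

random.seed(0)
ALPHA, N_MIN, N_MAX, RUNS = 0.05, 10, 60, 2000

def crit(n):
    # Smallest k whose exact one-sided p-value P(X >= k) is <= ALPHA
    tail, k = 0.0, n
    while k >= 0:
        tail += comb(n, k) / 2**n
        if tail > ALPHA:
            return k + 1
        k -= 1
    return 0

CRIT = {n: crit(n) for n in range(N_MIN, N_MAX + 1)}

peek_rejects = fixed_rejects = 0
for _ in range(RUNS):
    successes, rejected = 0, False
    for n in range(1, N_MAX + 1):
        successes += random.random() < 0.5  # pure guessing: the null is true
        if n >= N_MIN and successes >= CRIT[n]:
            rejected = True  # a peeking experimenter would stop here and "win"
    peek_rejects += rejected
    fixed_rejects += successes >= CRIT[N_MAX]  # honest fixed-n test

print(f"peeking  false-positive rate: {peek_rejects / RUNS:.3f}")
print(f"fixed-n  false-positive rate: {fixed_rejects / RUNS:.3f}")
```

The fixed-n rate stays near the nominal 5%, while the peeking rate comes out several times larger. Without knowing the stopping rule, a reported 24-of-39 cannot be assigned a meaningful p-value at all.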
Arny Krueger
January 20th 08, 01:01 PM
"Harry Lavo" > wrote in message
> Except that nobody has presented any evidence that this
> was a badly designed test.
Nobody other than the experimenter himself. He's the only guy who can
present evidence. Everything else is inference based on the data he
presented.
> On the face of it it was
> apparently a decently-designed single-blind test.
That would be an oxymoron, at least in this case. There was no reason why an
knowlegeable person would suppose that this test was improved by being
single blind, or avoid doing a double blind test.
If the experimenter actually paid for the rooms at CES, then he had put a
lot of money into it. As things stand, a lot of time was put into setting
up the experiment, such as it was. Why risk it all by avoiding making the
test double blind?
> And single blind tests are not automatically invalid.
There are times when single blind is all you can do - but this wasn't one of
them.
> They just have a potential weakness that must diligently be
> guarded against.
I see no signs of that kind of diligence in the evidence that has been
presented.
I can speculate about a different story or maybe the same story but with
more details.
Eyewitness accounts say that the high end part of the 2008 CES show was just
about dead, compared to past similar events. Exhibitions that would have
maybe 200 people on-site at a time only had about 20. Those are facts -
based on actual eyewitness accounts.
My speculation is that the rooms were idle or perhaps rented by someone
promoting high end cables, and the so-called experiments were set up as a
sort of publicity stunt.
IOW, it's all a joke - just like the rest of cable mysticism. A bad joke, one
that at best cheats people out of good sound that they might obtain if they
spent their money in more reasonable ways, say acoustic treatments for their
listening room. Just one man's opinion! ;-)
Arny Krueger
January 20th 08, 01:03 PM
"Oliver Costich" > wrote in
message
> On Fri, 18 Jan 2008 14:24:45 -0800 (PST), John Atkinson
> > wrote:
>
>> On Jan 18, 4:51 pm, Walt >
>> wrote:
>>> Maybe JA can really hear the difference between $2k
>>> Monster cable and 14 gauge zipcord. If that's actually
>>> the case, I'm interested.
>>
>> If it was printed in a newspaper, it must be true,
>> right? :-)
>>
>> John Atkinson
>> Editor, Stereophile
>> "Well-informed" - The Wall Street Journal"
>
> Like the WSJ, where you can get contradictory opinions on
> the stock market on any given day?
LOL!
Just another example of what happens when journalists stray too far from
their core areas of expertise. Sort of like most of the writers in
Stereophile! ;-)
Arny Krueger
January 20th 08, 01:04 PM
"JBorg, Jr." > wrote in message
> He made claims that he had submitted peer-reviewed papers
> in AES.
False claim.
> He also claims to be an audio engineer and well educated
> concerning statistical analysis in well designed audio
> experiments.
True.
Arny Krueger
January 20th 08, 01:09 PM
"John Atkinson" > wrote in
message
> As you may have noted, whenever it is mentioned on
> Usenet that someone has done something of note, Arny
> Krueger inevitably jumps in with the claim that he had
> done the same or similar at an earlier date. "Been there,
> done that" is his constant refrain.
This would be a false claim. For example, John Atkinson claims to be the
editor of Stereophile, which would be an accomplishment of note. I've never
ever claimed that I did that at an earlier date. In fact I've made only a
few claims like that, and they are all true. If Atkinson would provide a
specific claim where he thinks I incorrectly did such a thing, I would be
happy to explain it.
> In this instance, I am
> merely forestalling the expected claim by emphasizing
> that he has not yet been quoted in that newspaper. :-)
I considered the possibility that if I did a really good job of
deconstructing the WSJ article, perhaps documented in a letter to the
editor, I might indeed be published in the WSJ. Not worth the trouble. In
fact almost everything that I have done that might be reasonably considered
to be of note was either done with considerable assistance by others,
and/or without any such intention on my part. If they exist, they were
accidents.
Arny Krueger
January 20th 08, 01:27 PM
"JBorg, Jr." > wrote in message
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>> Well yes, Mr. Costich, the test results aren't
>>> scientifically valid but it didn't disproved that the
>>> sound differences heard by participants did not
>>> physically exist.
>>
>> That was another potential flaw in the tests. I see no
>> controls that ensured that the listeners heard the
>> identically same selections of music. Therefore, the
>> listeners may have heard differences that did physically
>> exist - unfortunately they were due to random choices by
>> the experimenter, not audible differences that were
>> inherent in the cables.
>
>
> Mr. Costich opined that disproving the sound differences
> heard by audiophiles do not physically exist is not in
> the realm of statistical analysis.
I read all his posts, and saw no such thing. I did see him correct other
such misrepresentations of what he said.
Basically borglet, you're not a reliable analyst in matters like these.
George M. Middius
January 20th 08, 02:36 PM
Cheapskateborg moans and groans.
> I would be a moron, at least in this case.
Now you're getting it, Turdy.
> an knowlegeable person
<snicker>
> If the experimenter actually paid for the rooms at CES,
Well, that lets you out.
> I see no signs
Arnii, with all due ;-) respect, you're not exactly known for your
perspicacity. In fact, pretty much the opposite. If you look up "obtuse"
in the dictionary, it says "see Krooborg, Arnii".
George M. Middius
January 20th 08, 02:38 PM
Time for some "debating trade" sewage.
> > As you may have noted, whenever it is mentioned on
> > Usenet that someone has done something of note, Arny
> > Krueger inevitably jumps in with the claim that he had
> > done the same or similar at an earlier date. "Been there,
> > done that" is his constant refrain.
>
> This would be a false claim. For example, John Atkinson claims to be the
> editor of Stereophile which would[sic] be an accomplishment of note. I've never
> ever claimed that I did that at an earlier date.
Wow. Arnii, you've outdone yourself. You've proven not only that JA is a
"liar", but also that everybody else who attributes those four little
words to you is also a "liar".
Is it lonely up there on top of your...er... "mountain"?
Shhhh! I'm Listening to Reason!
January 20th 08, 06:02 PM
On Jan 20, 7:03*am, "Arny Krueger" > wrote:
> LOL!
>
> Just another example of what happens when journalists stray too far from
> their core areas of expertise.
Hm. The only "core expertise" that I'm aware of for journalists is an
expertise in reporting what happened.
Are you aware of another "core expertise", GOIA?
> Sort of like most of the writers in Stereophile! ;-)
So we can add yet another area you consider yourself an "expert" in:
journalism.
I agree: LOL!
Shhhh! I'm Listening to Reason!
January 20th 08, 06:07 PM
On Jan 20, 7:04*am, "Arny Krueger" > wrote:
> "JBorg, Jr." > wrote in message
> > He made claims that he had submitted peer-reviewed papers
> > in AES.
>
> False claim.
I think the claim was that he had been published in JAES, and IIRC he
would not disclose which issue or issues that he was published in.
As it turns out, he would not back up his claim.
> > He also claims to be an audio engineer and well educated
> > concerning statistical analysis in well designed audio
> > experiments.
>
> True.
Where is your degree in engineering from? What was the exact degree
conferred? As I recall, you said that you took a few classes at some
community college.
Shhhh! I'm Listening to Reason!
January 20th 08, 06:10 PM
On Jan 20, 6:52*am, "Arny Krueger" > wrote:
> The opposite of a valid experiment is what often happens - people take data
> analyzing it as they go along, and when they have the analysis they want,
> they stop taking data.
Are you claiming that the WSJ reporter did this, or are you making
things up?
In the world of journalism this would be a *very* serious charge.
JBorg, Jr.[_2_]
January 21st 08, 05:15 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>> Well yes, Mr. Costich, the test results aren't
>>>> scientifically valid but it didn't disproved that the
>>>> sound differences heard by participants did not
>>>> physically exist.
>>>
>>> That was another potential flaw in the tests. I see no
>>> controls that ensured that the listeners heard the
>>> identically same selections of music. Therefore, the
>>> listeners may have heard differences that did physically
>>> exist - unfortunately they were due to random choices by
>>> the experimenter, not audible differences that were
>>> inherent in the cables.
>>
>>
>> Mr. Costich opined that disproving the sound differences
>> heard by audiophiles do not physically exist is not in
>> the realm of statistical analysis.
>
>
> I read all his posts, and saw no such thing. I did see him correct
> other such misrepresentions of what he said.
Along this subthread, I said the following to Mr. Costich concerning
the test:
" ... Mr. Costich, the test results aren't scientifically valid
but it didn't disproved that the sound differences heard by
participants did not physically exist."
Mr. Costich replied:
" Of course not. Certainty is not in the realm of statistical analysis...."
He is saying that "certainty" about disproving a claim that the
sound differences heard by participants do not physically
exist -- is not in the realm of statistical analysis.
Further, Mr. Costich supported his claim by saying:
" Because statistical analysis is the only tool we have to test claims
about the behavior of populations from samples. Any such test has
error and the likelihood of error needs to be set in advance. The
significance level is precisely the probability of rejecting a true null
hypothesis."
***
You seem to suggest that Mr. Costich misspoke, but how so?
Are you inferring that Mr. Costich was untruthful about what he
said and that he spoke too hastily?
> Basically borglet, you're not a reliable analyst in matters like
> these.
JBorg, Jr.[_2_]
January 21st 08, 05:25 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>> He made claims that he had submitted peer-reviewed papers
>> in AES.
>
> False claim.
Mr. Krueger, could you please provide the date of publication
of the paper(s) you claimed to have submitted to the JAES?
>> He also claims to be an audio engineer and well educated
>> concerning statistical analysis in well designed audio
>> experiments.
>
> True.
Mr. Krueger, could you please provide information concerning
your engineering credentials, as well as specific information
regarding your studies of statistics at an accredited institution?
Thank You.
JBorg, Jr.[_2_]
January 21st 08, 06:28 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> Mr.clydeslick wrote:
>>>>> Oliver Costich wrote:
>>>>> snip
>>>>>
>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>> that they can't tell the difference. If the claim is that
>>>>> listeners can tell the better cable more than half the time, then
>>>>> to support that you have to be able to reject that in the
>>>>> population of all audio-interested listeners, the correct guesses
>>>>> occur half the time or less. 61% of 39 doesn't do it. (Null
>>>>> hypothesis is p=.5, alternative hypothesis is p>.5. The null
>>>>> hypothesis cannot be rejected with the sample data given.)
>>>>>
>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>> isn't sufficient evidence that in the general population of
>>>>> listeners more than half can pick the better cable.
>>>>>
>>>>> So, I'd say "that's hardly that".
>>>>
>>>> you seem to be mixing difference with preference, you reference
>>>> both, for the same test.
>>>
>>> For the purpose of statistical analysis it makes no difference.
>>
>> But for the purpose of sensible analysis, shouldn't it make a
>> difference?
>
> I don't think so. I can't see any way the statistical analysis would
> be different.
Preferences are, statistically, immeasurable if the claim is that
listeners can tell the better cable more than half the time.
Agree or Disagree?
>> As you have said that logic is on the side of not making decisions
>> about human behavior. Isn't this required to ensure sufficient
>> testing using well designed experiment and statistical analysis.
>
> I didn't say that.
My response above is what I attempt to convey.
------
Hello Mr. Costich, I would like to know if it is correct for me to assume
that mixing a difference with preference in a well-designed audio test
would make no difference for the purpose of statistical analysis.
JBorg, Jr.[_2_]
January 21st 08, 06:49 AM
> John Corbett wrote:
> Well, I am a statistician.
> You seem to be so confused about statistics that you can neither
> perform the calculations nor understand what they mean.
Hello Mr. Corbett, I would like to know if it is appropriate to
assume that disproving the sound differences heard by audiophiles --
which I presume physically exist -- is a certainty not in the
realm of statistical analysis.
Arny Krueger
January 21st 08, 12:32 PM
"JBorg, Jr." > wrote in message
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>>> Arny Krueger wrote:
>>>>> JBorg, Jr. wrote
>>>>> Well yes, Mr. Costich, the test results aren't
>>>>> scientifically valid but it didn't disproved that the
>>>>> sound differences heard by participants did not
>>>>> physically exist.
>>>>
>>>> That was another potential flaw in the tests. I see no
>>>> controls that ensured that the listeners heard the
>>>> identically same selections of music. Therefore, the
>>>> listeners may have heard differences that did
>>>> physically exist - unfortunately they were due to
>>>> random choices by the experimenter, not audible
>>>> differences that were inherent in the cables.
>>>
>>>
>>> Mr. Costich opined that disproving the sound differences
>>> heard by audiophiles do not physically exist is not in
>>> the realm of statistical analysis.
>> I read all his posts, and saw no such thing. I did see
>> him correct other such misrepresentations of what he said.
> Along this subthread, I said the following to Mr. Costich
> concerning the test:
>
>
> " ... Mr. Costich, the test results aren't scientifically valid but it
> didn't disproved that the sound differences heard by participants did not
> physically exist."
There's your first anti-logic attack, borglet. You're forcing your opinion to
be disproved with a proof of a negative hypothesis. Everybody knows that
negative hypotheses are difficult or impossible to prove. Yet, you are
demanding that a negative hypothesis be proved.
Mr. Costich replied:
> " Of course not. Certainty is not in the realm of
> statistical analysis...."
The man speaks truth, but that's not what you are interested in, right
borglet?
> He is saying that "certainty" about disproving a claim that the sound
> differences heard by participants do not physically
> exist -- is not in the realm of statistical analysis.
What is certain in this life, Borglet, except that you will trash logic and
reason to justify your religious belief that just about everything --
including the most perfected and inert of audio components, namely cables --
sounds different?
> The significance level is precisely the
> probability of rejecting a true null hypothesis.
IOW borglet, the truism that proving a negative hypothesis is difficult or
impossible. You're just obfuscating the fact that you can't prove the
corresponding positive hypothesis.
> You seem to suggest that Mr. Costich misspoke, but how so?
Again Borglet, you are distorting what others said to justify your
religious belief that cables sound different.
Costich didn't misspeak, borglet; he obviously talked way over your head.
> Are you inferring that Mr. Costich was untruthful about
> what he said and that he spoke too hastily ?
No borglet, I'm inferring that rational discussion with you is impossible
because you distort everything you hear to fit into your own erroneous and
illogical thinking.
>> Basically borglet, you're not a reliable analyst in
>> matters like these.
To say the least!
Arny Krueger
January 21st 08, 12:33 PM
"JBorg, Jr." > wrote in message
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>> He made claims that he had submitted peer-reviewed
>>> papers in AES.
>>
>> False claim.
>
>
> Mr. Krueger, could you please provide the date of
> publication of the paper(s) you claimed to have submitted to JAES ?
Irrelevant and stupid.
>>> He also claims to be an audio engineer and well educated
>>> concerning statistical analysis in well designed audio
>>> experiments.
>>
>> True.
>
>
> Mr. Krueger, could you please provide information
> concerning your engineering credential, as well as,
> specific information regarding your studies of statistic
> from a credited institution ?
Been there, done that.
Shhhh! I'm Listening to Reason!
January 21st 08, 12:40 PM
Good Morning, GOIA.
On Jan 21, 6:33*am, "Arny Krueger" > wrote:
> "JBorg, Jr." > wrote in message
> > Mr. Krueger, could you please provide the date of
> > publication of the paper(s) you claimed to have submitted to JAES ?
>
> Irrelevant and stupid.
IOW, "They do not exist".
The "papers" appear to be in the form of letters to the editor, club
meeting announcements, or the like.
> > Mr. Krueger, could you please provide information
> > concerning your engineering credential, as well as,
> > specific information regarding your studies of statistic
> > from a credited institution ?
>
> Been there, done that.
IOW, "They do not exist".
GOIA took a few classes at a local community college. There's no
evidence that he passed any of them, or what the subject matter was.
John Atkinson[_2_]
January 21st 08, 12:52 PM
On Jan 21, 12:25 am, "JBorg, Jr." > wrote:
> > Arny Krueger wrote:
> >> JBorg, Jr. wrote
> >> He made claims that he had submitted peer-reviewed papers
> >> in AES.
>
> > False claim.
>
> Mr. Krueger, could you please provide the date of publication
> of the paper(s) you claimed to have submitted to JAES ?
Mr. Krueger's exact words were "The JAES has published a
number of works that I authored or co-authored." The context
for this statement was a discussion involving peer-reviewed
technical papers.
I responded to this claim:
"Not that can be retrieved using the search engine at
www.aes.org, Mr. Krueger, using all the alternative spellings
of your name, and searching both the index of published papers
and the preprint index. Could you supply the references, please."
Mr. Krueger could not substantiate his claim with any
specific references, nor did he clarify whether by "a
number of works," he actually meant one of the reports
from local sections that the JAES regularly publishes or
a letter to the editor.
Ludovic Mirabel also did a search, asking the Toronto
University Library to search their publications
database. This was the response he got from a
J. Wang:
"Hello, Your message was forwarded to me.
I searched many databases but only found one article
which is close to what you requested:
Amplifier-loudspeaker interfacing Krueger, A. B.
Published in "DB, The Sound Engineering Magazine",
Vol. 18, No. 7, Aug. Sept. 1984."
So, given that Mr. Krueger has never "authored or
co-authored a number of works" in the JAES, it
must be concluded that he must have mis-remembered
the facts of the matter.
John Atkinson
Editor, Stereophile
Arny Krueger
January 21st 08, 01:21 PM
"John Atkinson" > wrote in
message
> On Jan 21, 12:25 am, "JBorg, Jr."
> > wrote:
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>> He made claims that he had submitted peer-reviewed
>>>> papers in AES.
>>
>>> False claim.
>>
>> Mr. Krueger, could you please provide the date of
>> publication of the paper(s) you claimed to have
>> submitted to JAES ?
>
> Mr. Krueger's exact words were "The JAES has published a
> number of works that I authored or co-authored." The
> context
> for this statement was a discussion involving
> peer-reviewed technical papers.
And thus you find my name in at least one paper that was published in the
JAES.
Of course such things do not appear in the AES online index.
The paper in question would be the original JAES article about ABX.
> I responded to this claim:
> "Not that can be retrieved using the search engine at
> www.aes.org, Mr. Krueger, using all the alternative
> spellings
> of your name, and searching both the index of published
> papers
> and the preprint index. Could you supply the references,
> please."
Mr Atkinson seems to have me confused with his research department. I guess
economic cut-backs have affected the staffing at Stereophile and instead of
relying on paid staff, Mr Atkinson has been forced to go begging for help on
Usenet. :-(
> This was the response he got from a
> J. Wang:
>
> "Hello, Your message was forwarded to me.
> I searched many databases but only found one article
> which is close to what you requested:
> Amplifier-loudspeaker interfacing Krueger, A. B.
> Published in "DB, The Sound Engineering Magazine",
> Vol. 18, No. 7, Aug. Sept. 1984."
OK, so I'll let out a little secret. The dB magazine article was very closely
related to an article that I had previously submitted to the AES. After what
I recalled to be a very long wait, the AES sent me a letter that asked a
number of questions about the article, presumably from the review board. By
then I had despaired of any response from the AES and sold the related
article to dB Magazine. Regrettably, dB stiffed me and I was never paid. So
I lost both ways - I neither had any money, nor did I have the corresponding
line for my resume that would have come from the JAES publication.
By then I had figured out that my career in data processing was generating
much better economic and other professional benefits than would be possible
in Audio. So I wasn't overly concerned about any of those events.
> So, given that Mr. Krueger has never "authored or
> co-authored a number of works" in the JAES, it
> must be concluded that he must have mis-remembered
> the facts of the matter.
As usual Atkinson twists real events around his ego-centric view of the
world. In that world, he's a god and I'm trash. Somehow, I have no problem
living with that! ;-)
George M. Middius
January 21st 08, 03:22 PM
Shhhh! said:
> GOIA took a few classes at a local community college. There's no
> evidence that he passed any of them, or what the subject matter was.
Krooger once klaimed to have 90% of the knowledge that a real engineer has
about digital audio. That klaim was so absurd that Dr. Z had to drag
Turdborg to school for a whuppin'.
http://groups.google.com/group/rec.audio.opinion/msg/b48f3c98d122a9d2
This thread tells a newbie quite a lot about the disease we know as Arnii
Krooger.
Jenn
January 21st 08, 04:30 PM
In article
>,
"Shhhh! I'm Listening to Reason!" > wrote:
>
> GOIA took a few classes at a local community college. There's no
> evidence that he passed any of them, or what the subject matter was.
IIRC, Arny has stated that he holds a degree from Oakland University in
Michigan.
Oliver Costich
January 21st 08, 05:37 PM
On Sat, 19 Jan 2008 20:35:27 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Very little of the claims about people being able to discern
>>>>>> differences in cables is supported by such testing.
>>>>>
>>>>> I take it you don't recommend testing for such purposes.
>>>>> Ok then...
>>>>
>>>> I don't recommend badly designed tests and I don't recommend
>>>> making statistically invalid claims based on any kind of test.
>>>>
>>>> But the only way to statistically support (or reject) claims about
>>>> human behavior is through well designed experiments and real
>>>> statistical analysis.
>>>
>>> May I interject then based on what you said above that audio
>>> testing such as SBT and ABX/DBT are poorly designed experiments
>>> and will fail to disprove that sound differences heard by
>>> audiophiles do not physically exist.
>>>
>>>
>>>
>>>
>> Check the poster. I never said that.
>
>
>Well, I did not say that you did, but basing on what you said about
>making proper statistic claims, do you think that audio testing such
>as SBT and ABX/DBT are insufficient experiments to determine
>whether the subtle sound differences heard by audiophiles physically
>exist ?
>
>
>
>
No, not if designed and analyzed correctly.
Oliver Costich
January 21st 08, 05:48 PM
On Sat, 19 Jan 2008 11:06:27 -0500, "Harry Lavo" >
wrote:
>
>"Oliver Costich" > wrote in message
...
>> On Fri, 18 Jan 2008 15:53:05 -0500, "Harry Lavo" >
>> wrote:
>>
>>>
>>>"Oliver Costich" > wrote in message
...
>>>> On Fri, 18 Jan 2008 09:10:59 -0500, "Harry Lavo" >
>>>> wrote:
>>>>
>>>>>
>>>>>"Arny Krueger" > wrote in message
...
>>>>>> "Oliver Costich" > wrote in
>>>>>> message
>>>>>>
>>>>>>> On Thu, 17 Jan 2008 07:32:07 -0500, "Arny Krueger"
>>>>>>> > wrote:
>>>>>>
>>>>>>>> "Harry Lavo" > wrote in message
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>>>> Somewhere in
>>>>>>>>> your college education, you skipped the class in logic,
>>>>>>>>> I guess.
>>>>>>
>>>>>>> In my several years of graduate school in mathematics, I
>>>>>>> skipped neither the logic nor the statistics classes.
>>>>>>
>>>>>> Nor did I. I did extensive undergraduate and postgraduate work in math
>>>>>> and
>>>>>> statistics. One of the inspirations for the development of double
>>>>>> blind
>>>>>> testing was my wife who has a degree in experimental psychology.
>>>>>> Another
>>>>>> was a friend with a degree in mathematics.
>>>>>>
>>>>>>> Logic is on the side of not making decisions about human
>>>>>>> behavior without sufficient testing using good design of
>>>>>>> experiment method and statistical analysis.
>>>>>>
>>>>>> 4 of the 6 ABX partners had technical degrees ranging from BS to PhD.
>>>>>>
>>>>>>> Very little of the claims about people being able to
>>>>>>> discern differences in cables is supported by such
>>>>>>> testing.
>>>>>>
>>>>>> When it comes to audible differences between cables that is not
>>>>>> supported
>>>>>> by science and math, which is what this thread is about, none of it is
>>>>>> supported by well-designed experiments.
>>>>>>
>>>>>
>>>>>Well, then rather than "braying and flaying" why don't you communicate
>>>>>the
>>>>>statistics.
>>>>>
>>>>>As reported 61% of 39 people chose the correct cable. That according to
>>>>>my
>>>>>calculator was 24 people.
>>>>>
>>>>>According to my Binomial Distribution Table, that provides less than a
>>>>>5%
>>>>>chance of error...in other words the percentage is statistically
>>>>>significant. In fact, it is significant at the 98% level....a 2% chance
>>>>>of
>>>>>error.
>>>>
>>>> I did in other posts but here's a summary. Hypothesis test of claim
>>>> that p>.5 (p is the probability that more the half of listeners can do
>>>> better than guessing). Null hypothesis is p=.5. The P-value is .0748
>>>> but would need to be below .05 to support the claim at the 95%
>>>> Confidence Level.
>>>>
>>>> You rounded off .054 to .05. You would need to get a probability of
>>>> less than .05 to assert the claim, and NO, .054 isn't "close enough"
>>>> for statistical validity. I don't know where you got the 98% from.
>>>
>>>I saw your previous post, found it hard to believe with a sample of 39,
>>>and
>>>so checked it myself. I used a professionally published 100x100 Binomial
>>>Distribution Table with the correct P value for every combination of
>>>right/total sample. Without checking your math, I'm not about to yield to
>>>your numbers.
>>
>> First, binomial tables aren't what one uses to test claims or
>> hypotheses. Don't bring a knife to a gunfight.
>
>It certainly is for samples above 100, which is where I spent my
>professional life using market research. However, I must admit that I
>forgot that it gets a little more complicated with small sample sizes.
>However, the table I used had P-values specifically calculated adjusting for
>the small sample sizes.....a fact that I confirmed with my own manual
>calculations (see
>my reply to your other post).
The proportion of "correct" responses needed for a fixed statistical
significance increases as sample size decreases.
>
>> Binomial distributions are related to confidence levels in a much more
>> sophisticated way than just looking at values in a table. But tell me how
>> you used the tables. Did you use a density function table or a
>> cumulative density function table? And why is that the right way to
>> test a hypothesis?
>>
>> The math can be found in any elementary statistics book. Look for the
>> chapter on hypothesis testing. You can also use the one on confidence
>> intervals (or levels).
>
>The table I used was calculated for small sample sizes, as my "refresher
>course" with the books reminded me was necessary. But I have also manually
>calculated using the small sample size formula for estimating standard
>error and obtained the same result...see my reply to your other post.
The finite population adjustment is considered appropriate to use
when the sample represents more than 5% of the population size.
Unless there are fewer than 780 people in the population of people who
care about cable determination, it doesn't come into play. If there
are fewer than 780, then what's the fuss about?
>
>
>> Here's the gist of it. The claim is that people in the chosen
>> population can identify the more expensive cable more than half the
>> time, so the claim is that p>.5. The hypothesis you have to reject is
>> that they can't do better than just guessing, so you assume the
>> population proportion is .5. You compute a test statistic by
>> subtracting the assumed value of .5 from the sample data of .61 and
>> dividing by the standard error of the sampling distribution of
>> proportions, which is the square root of [(.5*.5)/39] in this situation.
>> You get what's called the test statistic for the experiment, in this
>> case 1.37. To support the claim that p>.5, you need the test
>> statistic to be bigger than the z-alpha for the confidence
>> level, and for 95% that's 1.645.
>>>
>>>And I didn't round off anything...the probabilities are right out of the
>>>table....020 for 24/39 and .008 for 25/39.
>>
>> But it wasn't 24/39 was it? And 24/39 doesn't support the claim at the
>> 95% confidence level.
>
>Actually it was 24....what number did you use?
>
>
>>>
>>>>>Had one more chosen correctly, the error probability would have been
>>>>>less
>>>>>than 1%, or "beyond a shadow of a doubt".
>>>>
>>>> If it had been 25 instead of 24 it would have supported the claim at
>>>> the 95% level but not at 97% or higher. But that's the point. You
>>>> don't get to wiggle around the numbers so you get what you want. If it
>>>> had been one less, would you still make the claim? What about if 39 more
>>>> people did the experiment and only 20 got it right. You can only draw
>>>> so much support for a claim from a single sample.
>>>>
>>>> And nothing that can only be tested statistically is "beyond a shadow
>>>> of a doubt" unless you mean "supported at a very high level of
>>>> confidence" which isn't the case here, even with another correct
>>>> "guess". Statistics can only be used to support a claim up to the
>>>> probability (1-confidence level) of falsely supporting an invalid
>>>> conclusion.
>>>>
>>>> The underlying model for determining whether binary selection is
>>>> random is tossing a coin. Tossing a coin 39 times and getting 24 heads
>>>> doesn't mean the coin is biased towards heads.
>>>
>>>I understand all that...I will hope you intended this for others.
>>>
>>>
>>>>
>>>>>
>>>>>So presumably John and Michael did at least this well to be singled out
>>>>>by
>>>>>the reporter.
>>>>
>>>> Who obviously was deeply knowledgable about statistics.
>>>
>>>Perhaps not, and so he could be wrong. But presumably the test designer
>>>would have corrected him if he were wildly so.
>>
>> Presuming the test designer had a clue about proper statistical
>> experiment and analysis which appears questionable.
>
>Why is it questionable, other than that he didn't do it double-blind. There
>are lots of practical reasons why a double-blind audio test can be extremely
>difficult to execute, and the comparator boxes used for them have been much
>the subject of criticism.
>
>If it was single blind, and the test was run to avoid the pitfalls of same,
>it can still be a valid test.
>
>I haven't read the article, but nothing revealed here suggests that the test
>was flawed other than being single-blind.
>
>
>>snip<
>
>
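Costich's point above (that the proportion of correct responses needed for a fixed significance level rises as the sample shrinks) is easy to check numerically. The sketch below is an editorial illustration, not part of the original thread: it finds the smallest number of correct answers whose exact right-tail binomial p-value under p=.5 drops below .05.

```python
import math

def min_correct_for_significance(n, alpha=0.05):
    """Smallest x whose exact right-tail p-value P(X >= x | p = 0.5) is < alpha."""
    tail = 0.0
    # Accumulate P(X = k) from k = n downward; the last x reached before
    # the tail hits alpha is the critical count.
    for x in range(n, -1, -1):
        tail += math.comb(n, x) / 2**n
        if tail >= alpha:
            return x + 1

# The required fraction of correct answers falls toward .5 as n grows
for n in (10, 39, 100, 1000):
    x = min_correct_for_significance(n)
    print(n, x, round(x / n, 3))
```

For n=39 this gives a critical count of 26, consistent with Corbett's exact calculation later in the thread: P(X>=24)=.0998 and P(X>=25)=.054 both exceed .05.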
Oliver Costich
January 21st 08, 06:18 PM
On Sat, 19 Jan 2008 11:06:39 -0500, "Harry Lavo" >
wrote:
>
>"Oliver Costich" > wrote in message
...
>> On Fri, 18 Jan 2008 16:06:21 -0500, "Harry Lavo" >
>> wrote:
>>
>>>
>>>"Oliver Costich" > wrote in message
...
>>>> On Fri, 18 Jan 2008 07:43:13 -0600, MiNe 109
>>>> > wrote:
>>>>
>>>>>In article >,
>>>>> "Arny Krueger" > wrote:
>>>>>
>>>>>> "MiNe 109" > wrote in message
>>>>>>
>>>>>> > In article >,
>>>>>> > Oliver Costich > wrote:
>>>>>> >
>>>>>> >> On Thu, 17 Jan 2008 12:56:23 -0500, Walt
>>>>>> >> > wrote:
>>>>>> >>
>>>>>> >>> wrote:
>>>>>> >>>> On Jan 16, 10:52 am, John Atkinson
>>>>>> >>>> > wrote:
>>>>>> >>>>> http://online.wsj.com/article/SB120044692027492991.html?mod=hpp_us_in...
>>>>>> >>>>>
>>>>>> >>>>> Money quote: "I was struck by how the best-informed
>>>>>> >>>>> people at the show -- like John Atkinson and Michael
>>>>>> >>>>> Fremer of Stereophile Magazine -- easily picked the
>>>>>> >>>>> expensive cable."
>>>>>> >>>>
>>>>>> >>>> So will you be receiving your $1 million from Randi
>>>>>> >>>> anytime soon?
>>>>>> >>>
>>>>>> >>> Don't count on it. From TFA: "But of the 39 people who
>>>>>> >>> took this test, 61% said they preferred the expensive
>>>>>> >>> cable." Hmmm. 39 trials. 50-50 chance. How
>>>>>> >>> statistically significant is 61%? You do the math.
>>>>>> >>> (HINT: it ain't.)
>>>>>> >>
>>>>>> >> Here's the math: Claim is p (proportion of correct
>>>>>> >> answers) >.5. Null hypothesis is p=.5. The null
>>>>>> >> hypothesis cannot be rejected (and the claim cannot be
>>>>>> >> supported) at the 95% significance level.
>>>>>> >
>>>>>> > Welcome to the group! Out of curiosity, what significance
>>>>>> > level does 61% support?
>>>>>>
>>>>>> You haven't formed the question properly. 61% is statistically
>>>>>> significant or
>>>>>> not, depending on the total number of trials.
>>>>>
>>>>>Okay, in 39 trials, what level of significance does 61% indicate?
>>>>>
>>>>>Stephen
>>>>
>>>>
>>>> About 92%
>>>
>>>This is wrong, according to my binomial table...should be 98% instead.
>>>
>>
>> You simply don't know anything about statistical testing of claims.
>> You've pulled the numbers off a binomial table of some sort and
>> interpreted the result by some method you thought would tell you
>> something.
>
>Don't be so frigging sarcastic. I know a great deal about statistics,
>particularly their practical use, although I am not a statistician. But I
>spent many years of my life working with market researchers, most of whom
>had a strong statistics background, and I've had statistics courses at both
>the graduate and undergraduate level. I did forget, however, that you have
>to make adjustments for samples under 100, since in my work we never used
>samples smaller than that. And above that level standard practice is to
>assume normal distribution and use the simplified standard error
>calculation. However, the Binomial Table of P-Values I used was specifically
>calculated to take these small sample sizes into account, so I still get the
>same answer calculating them manually (see below).
The sample size for which the sampling distribution of a proportion is
considered to be approximately normal is usually smaller than 100,
depending on the proportion being tested. For p=.5, the requirement is
that half the sample size is >5, and 19.5>5.
>
>>
>> Get a statistics book to understand how hypotheses are tested and buy a
>> Texas Instruments TI-83 calculator to do all the computation without any
>> serious manipulation. You put the numbers in (in this case .5, 24, 39)
>> and push the button to get a number to do the test comparison.
>
>I didn't have to get one, two were sitting on my shelf. So I gave myself a
>refresher course. Used the normal table as recommended by the book to
>estimate distribution for samples 30-100 (below that the "t" table), used
>the formulas to calculate the standard error of
>"r" (in this case, .04876), double-checked my work, and still come out with
>the 95% confidence level at 23.22 (1.96 standard deviations). In other
>words, 24 of 39 *is* significant at the 95% level, just as I had previously
>stated, according to my calculations. Even if you want to use two standard
>deviations, it works out to 23.3.
>
>Are you absolutely certain that you didn't put 23 into your calculations by
>mistake? That would give an error measure approximately in-line with these
>calculations.
>
I just ran it again and got a P-value of .07477 (>.05). Here's the
model: the claim is population proportion p>.5, the null hypothesis is
p=.5, the alternative is p>.5.
I ran it using a TI-84 graphing calculator, with n=39, x=24, and got the
P-value above and a test statistic of 1.4412, certainly less than
1.6449, the z-value for a right-tailed hypothesis test at the 95%
level.
The result is that the null hypothesis cannot be rejected, so the
official conclusion is that "there is not sufficient evidence to
support the claim that more than half of the population can correctly
choose the more expensive cable."
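Costich's TI-84 numbers can be reproduced with a few lines of Python using only the standard library. This is an editorial sketch, not part of the original thread: a right-tailed one-proportion z-test without continuity correction.

```python
import math

def one_proportion_ztest(successes, n, p0=0.5):
    """Right-tailed one-proportion z-test, no continuity correction."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)            # standard error under H0
    z = (p_hat - p0) / se                        # test statistic
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # right-tail area of N(0,1)
    return z, p_value

z, p = one_proportion_ztest(24, 39)
print(round(z, 4), round(p, 4))   # 1.4412 0.0748, matching the TI-84 output
```

Since 1.4412 < 1.6449 (equivalently, .0748 > .05), the null hypothesis stands, exactly as stated above.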
Oliver Costich
January 21st 08, 06:24 PM
On Sat, 19 Jan 2008 20:29:21 -0600, (John
Corbett) wrote:
>In article >, "Harry Lavo"
> wrote:
>
>
>> I know a great deal about statistics, particularly their practical use,
>> although I am not a statistician.
>
>Well, I am a statistician.
>You seem to be so confused about statistics that you can neither perform
>the calculations nor understand what they mean.
>
>
>Before looking at your calculations, we need to consider something that's
>been overlooked so far in this thread.
>
>These calculations apply in the situation where the number of correct
>answers has a binomial distribution. You---and other posters---seem to
>have assumed that is appropriate in this example. However, a binomial
>distribution describes the number of successes in a *fixed* number of
>independent and identically-distributed binary trials. That would appear
>to be the case if we believed that the experimenter planned to do 39
>trials. I suspect that he did not pick that number before starting his
>testing. Without knowing the stopping rule, we really cannot calculate
>values needed to do proper tests. For instance, if he planned to do tests
>until he had 15 wrong answers, and it happened to take 39 trials, then an
>ordinary binomial distribution would *not* apply---we would use a negative
>binomial distribution in that case. If he planned to go until he got 24
>correct answers, and that happened to take 39 trials, we'd use a different
>negative binomial distribution. If he had some other rule (e.g., "stop at
>8 pm"), yet another distribution would be called for. If we don't know the
>stopping rule, then we ought to be very cautious applying the usual simple
>procedures.
>
>So, assuming the binomial model is applicable, let's check some
>calculations ...
>
>
>
>> So I gave myself a refresher course. Used the normal table as
>> recommended by the book to estimate distribution for samples 30-100
>> (below that the "t" table),
>
>I really doubt that your book says to use "t" for a *binomial* problem.
>The t distribution is needed when you have independent estimates of the
>mean and variance, but the variance is a function of the mean for a
>binomial distribution, so the t is not appropriate. If you are using a
>normal approximation for a binomal distribution, you should use a "z"
>table, no matter what the sample size is. (Of course, that assumes the
>sample is large enough to use a normal approximation in the first place.)
>
>
>> used the formulas to calculate the standard error of
>> "r" (in this case, .04876),
>
>What is "r" here, and what formulas did you use to get .04876?
>
>
>> double-checked my work, and still come out with the 95% confidence level at
>> 23.22 (1.96 standard deviations).
>
>95% _is_ a confidence level; 23.22 is _not_ a confidence level.
>(Do you know what a confidence level is?)
>
>1.96 standard deviations would apply if you were forming a two-sided
>confidence interval, or if you were performing a two-tailed test; it is
>the z value corresponding to an upper tail area of .025. Since you are
>looking at a one-tailed test here, you need the cutoff z value for an
>upper tail area of .05, which is 1.645.
>
>
>
>> In other words, 24 of 39 *is* significant at the 95% level, just as I had
>> previously stated, according to my calculations. Even if you want to use
>> two standard deviations, it works out to 23.3.
>
>No.
>
>If you use the normal approximation to the binomial, and use the
>continuity correction, you should get that 24 of 39 has a p-value of
>0.1001. If you don't use the continuity correction, you get .07477;
>that's what Oliver Costich did. If you do the exact calculation, based
>directly on the binomial instead of an approximating normal, you get
>.0998; you can see that the properly-used normal approximation is very
>good. So 24 of 39 is *not* significant at the .05 level, although it is
>significant at the .10 level. (Elsewhere you have indicated that you're
>okay with significance at the 10% level.)
>
>Earlier in this thread, you wrote
>
>> The fact is, there is nothing magical about 95%, except that it has been
>> widely accepted in the scientific community to meet their standards of
>> "probably so".
>
>Yes, 95% is widely accepted primarily because it is widely accepted. ;-)
>
>> It gives odds of 19:1 that the null hypothesis is invalid.
>
>No, that is NOT what it means.
>
>In hypothesis testing, calculations are done
>_under_the_assumption_that_the_null_hypothesis_is_true_.
>
>Stop and read that again. Got it yet?
>
>P-values and significance levels involve probabilities of results that may
>occur if the null hypothesis is true; they are not probabilities that the
>null hypothesis was true in the first place. (BTW, 95% is a typical
>*confidence* level. Hypothesis tests involve *significance* levels, and
>.05 is a commonly used value. Although there is a connection between
>these concepts, it is _not_ as simple as saying that 95% confidence is the
>same as 5% significance.)
>
>Doing real statistics is not about just doing arithmetic as an excuse to
>avoid having to think about the data.
Agreed. I was trying to keep it simple by using ordinary binomial
methods (which make the calculations easy with a TI calculator).
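Corbett's three p-values (.0748 without continuity correction, .1001 with it, and .0998 exact) can likewise be verified with the standard library. This is an editorial sketch added for illustration, not from the thread:

```python
import math

n, x = 39, 24   # 39 listeners, 24 picked the expensive cable

# Exact right-tail binomial p-value: P(X >= 24) under H0: p = 0.5
exact = sum(math.comb(n, k) for k in range(x, n + 1)) / 2**n

# Normal approximation with continuity correction: P(X >= x - 0.5)
mu, sigma = n * 0.5, math.sqrt(n * 0.25)
z = (x - 0.5 - mu) / sigma
approx = 0.5 * math.erfc(z / math.sqrt(2))   # right-tail area of N(0,1)

print(round(exact, 4), round(approx, 4))   # 0.0998 0.1001
```

As Corbett says, the continuity-corrected approximation (.1001) tracks the exact value (.0998) closely, while skipping the correction (.0748) understates the p-value.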
Oliver Costich
January 21st 08, 06:26 PM
On Sun, 20 Jan 2008 10:10:26 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 20, 6:52 am, "Arny Krueger" > wrote:
>
>> The opposite of a valid experiment is what often happens - people take data
>> analyzing it as they go along, and when they have the analysis they want,
>> they stop taking data.
>
>Are you claiming that the WSJ reporter did this, or are you making
>things up?
>
>In the world of journalism this would be a *very* serious charge.
The Stephen Glass of audio? :-)
Oliver Costich
January 21st 08, 06:27 PM
On Sun, 20 Jan 2008 22:49:16 -0800, "JBorg, Jr."
> wrote:
>> John Corbett wrote:
>>
>>
>>
>>
>>
>> Well, I am a statistician.
>> You seem to be so confused about statistics that you can neither
>> perform the calculations nor understand what they mean.
>
>
>
>
>Hello Mr. Corbett, I would like to know if it is appropriate to
>assume that disproving sound differences heard by audiophiles
>that I presume physically exist is -- a certainty not in the
>realm of statistical analysis.
>
Then what is it in the realm of? Religion?
Oliver Costich
January 21st 08, 06:30 PM
On Sat, 19 Jan 2008 20:55:47 -0800, "JBorg, Jr."
> wrote:
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>
>>
>>
>>
>>
>>
>>> Well yes, Mr. Costich, the test results aren't
>>> scientifically valid but it didn't disprove that the
>>> sound differences heard by participants did not physically exist.
>>
>> That was another potential flaw in the tests. I see no controls that
>> ensured that the listeners heard the identically same selections of
>> music. Therefore, the listeners may have heard differences that did
>> physically exist - unfortunately they were due to random choices by
>> the experimenter, not audible differences that were inherent in the
>> cables.
>
>
>Mr. Costich opined that disproving the sound differences heard by
>audiophiles do not physically exist is not in the realm of statistical
>analysis.
>
>
I did not say that.
Oliver Costich
January 21st 08, 06:38 PM
On Sun, 20 Jan 2008 21:15:40 -0800, "JBorg, Jr."
> wrote:
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>>> Arny Krueger wrote:
>>>>> JBorg, Jr. wrote
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> Well yes, Mr. Costich, the test results aren't
>>>>> scientifically valid but it didn't disprove that the
>>>>> sound differences heard by participants did not
>>>>> physically exist.
>>>>
>>>> That was another potential flaw in the tests. I see no
>>>> controls that ensured that the listeners heard the
>>>> identically same selections of music. Therefore, the
>>>> listeners may have heard differences that did physically
>>>> exist - unfortunately they were due to random choices by
>>>> the experimenter, not audible differences that were
>>>> inherent in the cables.
>>>
>>>
>>> Mr. Costich opined that disproving the sound differences
>>> heard by audiophiles do not physically exist is not in
>>> the realm of statistical analysis.
>>
>>
>> I read all his posts, and saw no such thing. I did see him correct
>> other such misrepresentations of what he said.
>
>
>
>Along this subthread, I said the following to Mr. Costich concerning
>the test:
>
>
>" ... Mr. Costich, the test results aren't scientifically valid
>but it didn't disprove that the sound differences heard by
>participants did not physically exist."
>
>
>Mr. Costich replied:
>
>" Of course not. Certainty is not in the realm of statistical analysis...."
>
>
>He is saying that "certainty" about disproving a claim that the
>sound differences heard by participants do not physically
>exist -- is not in the realm of statistical analysis.
>
>
>Further, Mr. Costich supported his claim by saying:
>
>
>" Because statistical analysis is the only tool we have to test claims
>about the behavior of populations from samples. Any such test has
>error and the likelihood of error needs to be set in advance. The
>significance level is precisely the probability of rejecting a true null
>hypothesis."
>
> ***
>
>You seem to suggest that Mr. Costich misspoke, but how so ?
>
>Are you inferring that Mr. Costich was untruthful about what he
>said and that he spoke too hastily ?
>
>
>
>> Basically borglet, you're not a reliable analyst in matters like
>> these.
>
>
Unbelievable! Your interpretation amazes me. Take a philosophy of
science course to learn that there is no 100% certainty in any science
save mathematics, if that is even a science.
Physics is models based on observation of events repeating, or as
David Hume called it, "bad habit". The level of "certainty" or
probability of error can be reduced to infinitesimal levels, but
uncertainty is still there even if we don't behave like it is. The
less controllable experiments are, and the farther the "science"
deviates from having rigorous mathematical models, the less certainty
you have. In particular, you seem to be willing to abandon all use of
statistics since, to you, the results don't provide certainty (and they
never do).
Oliver Costich
January 21st 08, 06:45 PM
On Sat, 19 Jan 2008 20:40:21 -0800, "JBorg, Jr."
> wrote:
>Oliver Costich > wrote:
>> On Fri, 18 Jan 2008 18:40:22 -0800, "JBorg, Jr."
>> > wrote:
>>
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Here's the math: Claim is p (proportion of correct answers) >.5.
>>>>>> Null hypothesis is p=.5. The null hypothesis cannot be rejected
>>>>>> (and the claim cannot be supported) at the 95% significance level.
>>>>>
>>>>>
>>>>> Well yes, Mr. Costich, the test results aren't scientifically valid
>>>>> but it didn't disprove that the sound differences heard by
>>>>> participants did not physically exist.
>>>>>
>>>>
>>>> Of course not. Certainty is not in the realm of statistical
>>>> analysis.
>>>
>>>
>>> Right. Why then Arny and his ilk consistently assert using
>>> statistical analysis during audio testing claiming to proved that
>>> the sound differences heard by audiophiles did so based on their
>>> fevered imagination.
>>
>> Because statistical analysis is the only toll we have to test claims
>> about the behavior of populations from samples. Any such test has
>> error and the likelihood of error needs to be set in advance. The
>> significance level is precisely the probabiliy of rejecting a true nul
>> hypothesis
>
>
>Well now! Disproving that the sound differences heard by
>audiophiles do not physically exist is -- certainty not in the
>realm of statistical analysis.
Disproving that the sound differences BELIEVED to be heard by
audiophiles actually exist is not provable or disprovable by
statistical methods if your standard is 100% certainty. Nothing is,
other than 1+1=2 and its ilk.
That's what the argument is about - some claim to hear things that
allow them to distinguish but can't (at least in this test)
demonstrate it.
>
>
>
>>>> Let's say you want to claim that a certain coin is biased to produce
>>>> heads when flipped. That you flip it 39 times and get 24 heads is
>>>> not sufficient to support the claim at a 95% confidence level. If
>>>> you lower your standard or do a lot more flips and still get 61%,
>>>> the conclusion will change
>>>
>>> Ok.
>>>
>>>> I'm sure there are audible differences. The issue is whether they
>>>> are enough to make consistent determinations. A bigger issue for
>>>> those of us who just listen to music is whether the differences are
>>>> detectable when you are emotionally involved in the music and not
>>>> just playing "golden ears".
>>>
>>> Well then, you agreed that subtle differences do exist.
>>
>> Sometimes even large ones, like the MP3s vs CD. That does not mean
>> that 61% success in a sample of 39 supports the claim that people can
>> tell the difference between the two cables used in the test.
>
>Ok.
>
Oliver Costich
January 21st 08, 06:46 PM
On Sat, 19 Jan 2008 21:15:01 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr.wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>>> Walt wrote:
>>>>>>>> John Atkinson wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Remind me again how many times Arny Krueger has been
>>>>>>>> quoted in the Wall Street Journal?
>>>>>>>
>>>>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller
>>>>>>> and Ken Lay.
>>>>>>>
>>>>>>> What's your point?
>>>>>>
>>>>>> So has Osama Bin Laden. The point is that he's devoid of a sound
>>>>>> argument.
>>>>>
>>>>> Mr. Costich, there is no sound argument to improve upon strawman
>>>>> arguments. It just doesn't exist.
>>>>
>>>> Agreed.
>>>
>>> Ok.
>>>
>>>>> Incidentally Mr. Costich, how well do you know Arny Krueger if you
>>>>> don't mind me asking so.
>>>>>
>>>> I only know of his existence from the news group, if that's his real
>>>> name:-)
>>>
>>> He made claims that he had submitted peer-reviewed papers in AES.
>>> He also claims to be an audio engineer and well educated concerning
>>> statistical analysis in well designed audio experiments. To be
>>> honest, Mr. Costich, he is the worst offender of common sense and
>>> has been pestering this group for a long, long time.
>>
>>
>> That's an opinion, which is in the name of the newsgroup.
>
>
>This is indeed a newsgroup of opinion but do you think it is proper,
>as Mr. Krueger has done in not so distant past, to declare false
>claims and present it as FACTS ?
I'm not a judge or a censor.
>
>
>> There are many offenders here, not only of common sense but of
>> scientific method.
>
>
>I nominate:
>
>1.) Arny Krueger, for reasons given above.
>
>
>
>
>
>
>
>
>
>
>
>
>
Oliver Costich
January 21st 08, 06:50 PM
On Sat, 19 Jan 2008 08:43:39 -0500, "Harry Lavo" >
wrote:
>
>"Oliver Costich" > wrote in message
...
>> On Fri, 18 Jan 2008 19:59:03 -0800, "JBorg, Jr."
>> > wrote:
>>
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr.wrote:
>>>>>> Shhhh! wrote:
>>>>>>> Oliver Costich wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>>>> isn't sufficient evidence that in the general population of
>>>>>>> listeners more than half can pick the better cable.
>>>>>>>
>>>>>>> So, I'd say "that's hardly that".
>>>>>>
>>>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>>> fail just to be contrary, you, and so on, and get different results.
>>>>>
>>>>>
>>>>>
>>>>> Well asked.
>>>>>
>>>>
>>>> What population of listeners was the claim made for and how was it
>>>> defined? My guess is that however it's constructed, it a lot bigger
>>>> than 39.
>>>
>>>
>>>No information were provided for that. Still, valid parameter for such
>>>test should exclude participants with personal biases and preferences
>>>and those lacking extended listening experience, as examples.
>>>
>>
>> Personal bias can be filtered out with well designed double blind
>> experiments. That's the whole point of that method. If neither the
>> tester or those tested know what they are listening to. People with
>> listening experience is still a large, but shrinking, population.
>>
>> The golden ear cult would like to define the population to be those
>> among that have a good enough run of guesses to get a statistically
>> significant outcome:-)
>
>While I agree with this in general, one of the criticisms that can't be
>refuted is that, if the bias is "there are no differences", a participant
>can simply go into the test and choose randomly, thus weighting the test
>toward "no difference". There is no built in safeguard against that, even
>in a DBT. The only safeguard is to know who truly holds that opinion and
>exclude them. Such a test should only be among those people who are open to
>the idea that there may *be* differences...so if a null results, it goes
>against their biases.
>
But then you have limited your population to those who have some
potential to support the claim.
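[Editor's note: the "good enough run of guesses" objection quoted above is a selection effect, and it is easy to simulate. A hypothetical sketch (numbers invented for illustration): draw 1000 listeners who genuinely cannot tell, then keep only those who happened to score 9 or better out of 10.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible
TRIALS = 10

# 1000 listeners who genuinely cannot tell: every answer is a coin flip.
scores = [sum(random.random() < 0.5 for _ in range(TRIALS))
          for _ in range(1000)]

# Whole population: average score is about 5/10, i.e. chance.
print(sum(scores) / len(scores))

# "Golden ears" selected after the fact for a lucky run of 9+ correct:
# a handful of people, averaging 9 or better, all of them coin-flippers.
golden = [s for s in scores if s >= 9]
print(len(golden), sum(golden) / len(golden))
```

Defining the tested population by the outcome you hope to see manufactures "significant" listeners out of pure chance, which is Costich's point.]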
Oliver Costich
January 21st 08, 06:57 PM
On Sat, 19 Jan 2008 09:34:51 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr.wrote:
>>>>>> Shhhh! wrote:
>>>>>>> Oliver Costich wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>>>> isn't sufficient evidence that in the general population of
>>>>>>> listeners more than half can pick the better cable.
>>>>>>>
>>>>>>> So, I'd say "that's hardly that".
>>>>>>
>>>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>>> fail just to be contrary, you, and so on, and get different
>>>>>> results.
>>>>>
>>>>> Well asked.
>>>>>
>>>> What population of listeners was the claim made for and how was it
>>>> defined? My guess is that however it's constructed, it a lot bigger
>>>> than 39.
>>>
>>> No information were provided for that. Still, valid parameter for
>>> such test should exclude participants with personal biases and
>>> preferences and those lacking extended listening experience, as
>>> examples.
>>>
>>
>> Personal bias can be filtered out with well designed double blind
>> experiments. That's the whole point of that method. If neither the
>> tester or those tested know what they are listening to. People with
>> listening experience is still a large, but shrinking, population.
>
>
>Mr. Costich, how do you filter out from DBT experiments the listeners
>personal biases and preferences in sound acquired over time through
>extended listening experience. As an example, a person with strong
>affinity and craves the sound produced by jazz ensemble tends to be
>receptive to the subtle nuance produce and articulated by those sets of
>instruments. Is hiding the components during DBT removed this
>adulation out ?
In other words, this is a religious argument for you, not a scientific
one. Would you decide if most people believe in God by taking a sample
of members of the Baptist Church?
>
>What if the subject for the test is someone like Howard Ferstler who
>admitted to having deeply held personal vendetta towards high-end
>establishment going back in the late '70s, how would you go about
>explaining that a no-difference Ferstler test result is valid ?
>
>
>
>
>> The golden ear cult would like to define the population to be those
>> among that have a good enough run of guesses to get a statistically
>> significant outcome:-)
>
OK, fine, leave him out, just like you almost always do with outliers
in statistical data analysis.
>
>Mr. Costich, please don't be so frigging sarcastic towards audiophiles.
>Audiophiles who had honed and increased their listening sensitivity from
>listening to live, unamplified, and reproduced music over extended period
>of time.
I guess that having been labeled an audiophile for over 40 years, and
having listened to countless hours of music, plus auditioning (and yes,
comparing) various components over the same period, doesn't qualify me
because I won't accept a false conclusion from an experiment.
>
>
>
>
>
><bbl...>
>
>
>
>
>
>
Oliver Costich
January 21st 08, 07:00 PM
On Sat, 19 Jan 2008 10:54:17 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 19, 3:10 am, Oliver Costich > wrote:
>> On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
>> Reason!" > wrote:
>> >Oliver Costich wrote
>
>> >> By the way, I don't use lamp cord or Home Depot interconnects in my
>> >> system.
>>
>> >I do not use expensive wires or cables in my system. I just don't
>> >really care if others do.
>>
>> I don't care what they use. I do care that they want to justify it
>> with sloppy logic and BS.
>
>I haven't seen any justifications, though I haven't read all the posts
>in this thread.
>
>As a matter of curiosity, what would happen to the results if, out of
>a sample of 100 participants, 50 selected a certain product correctly
>100% of the time and the other 50 selected incorrectly 100% of the
>time?
>
Would it be unusual to get 50 heads when flipping a coin 100 times?
Getting 50 correct answers out of 100 participants is exactly what you
would expect from random guessing (flipping coins).
>That's what I was trying to get at when I wondered how often the "well-
>informed" selected correctly.
Oliver Costich
January 21st 08, 07:15 PM
On Sat, 19 Jan 2008 11:15:57 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 19, 3:13 am, Oliver Costich > wrote:
>> On Fri, 18 Jan 2008 13:08:05 -0800 (PST), "Shhhh! I'm Listening to
>>
>> Reason!" > wrote:
>> >On Jan 17, 6:36*pm, Eeyore >
>> >wrote:
>>
>> >> No, 61% is as good as proof that there's NO difference.
>>
>> >That's not true, of course. I'd have to believe that even good old
>> >insane Arns would disagree with this statement.
>>
>> >For one thing, if a test design is not valid to prove a difference
>> >exists, it is certainly not valid to prove one doesn't.
>>
>> In this sample of 39, 61% is not sufficient to reject the claim that
>> they are just guessing or flipping a coin to decide. It does not mean
>> that people can't really tell, it just means that it's very unlikely.
>
>I have no problem with putting it that way. I *do* have a problem
>saying that something "is as good as proof that there's NO
>difference".
>
>I may not be looking at this logically, or I may not be as good
>communicating as Graham is, but to me "likelihoods" are not absolute
>"proof".
Only mathematics is "for sure". Everything else, especially "factual"
claims about economics and human social behavior, is based on
statistical analysis. Otherwise you are in the realm of religion.
>
>> That the design is bad is another issue, but it has no effect on the
>> analysis of the data.
>
>A poorly-designed test will "likely" give incorrect (or certainly not
>valid) results. If the test is not valid, then arguing over or
>analyzing the results seems silly to me. Since the test design was
>questioned, with the implication that the results were skewed, the
>actual question at hand is whether or not the test design is valid. If
>the test design is not valid, the results must be discarded. If the
>test results must be discarded, then they are not valid as "proof" of
>either hypothesis.
The argument over the test design is moot when the data collected
don't support the claim. The claim is not supported. The next sample
of 39 may have enough correct to support the claim that they can
choose correctly. BUT, this isn't certainty either. All hypothesis test
results have a level of uncertainty. From a single sample it is
possible to be wrong about the true population proportion. You can
reject the null hypothesis when it is true (Type I error) or fail to
reject it when it's false (Type II).
>
>Therefore, as I said, Graham's claim "No, 61% is as good as proof that
>there's NO difference" is incorrect. He is taking the results of a
>test which he may have even questioned the design of and claiming
>"proof".
>
>So are we, IYO, discussing invalid conclusions people have drawn from
>the valid results of a well-designed test?
How would you know if that were the case? Sample results can lead to
false conclusions, but you won't know unless you do a population
census.
>
>BTW, I am sure that I'm exactly like the vast majority of people in
>the audio world: if Stereophile says it, it must be true. (It's funny
>to me that some people get all balled up if SP says something, yet if
>Limbaugh or Hannity or Glenn Beck say something that's IMO outrageous,
>it's just "entertainment" to them. Talk about poor thinking and
>reasoning skills!)
I read these magazines. They have useful measurements, but beyond that
they are entertainment or, at best, unsubstantiated opinion. A review
by a single person is near useless. It's no different than movie
reviews - depending on the critic, it can be wonderful or awful.
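[Editor's note: the hypothesis test Costich keeps describing is easy to verify with an exact one-sided binomial tail. A minimal sketch, assuming 24 of 39 correct (the nearest whole number to the reported 61%; the raw counts are inferred, not stated in the WSJ article):

```python
from math import comb

# 39 listeners, ~61% correct -> assume 24 of 39.
# Null hypothesis: p = 0.5 (the listeners are guessing).
n, k = 39, 24

# Exact one-sided tail probability P(X >= k) for X ~ Binomial(n, 0.5).
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Roughly 0.10: fails to reject guessing at the usual 0.05 level.
print(round(p_value, 3))
```

This is the arithmetic behind "61% of 39 doesn't do it": the sample is simply too small for that proportion to be distinguishable from coin flipping.]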
Oliver Costich
January 21st 08, 07:17 PM
On Sat, 19 Jan 2008 22:05:00 -0800, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> Mr.clydeslick wrote:
>>>>>> Oliver Costich wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> snip
>>>>>>
>>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>>> that they can't tell the difference. If the claim is that
>>>>>> listeners can tell the better cable more the half the time, then
>>>>>> to support that you have to be able to reject that the in the
>>>>>> population of all audio interested listeners, the correct guesses
>>>>>> occur half the time or less. 61% of 39 doesn't do it. (Null
>>>>>> hypothesis is p=.5, alternative hypothesis is p>.5. The null
>>>>>> hypthesis cannot be rejected with the sample data given.)
>>>>>>
>>>>>> In other words, that 61% of a sample of 39 got the correct result
>>>>>> isn't sufficient evidence that in the general population of
>>>>>> listeners more than half can pick the better cable.
>>>>>>
>>>>>> So, I'd say "that's hardly that".
>>>>>
>>>>> you seem to be mixing difference with preference, you reference
>>>>> both, for the same test.
>>>>
>>>> For the purpose of statistical analysis it makes no difference.
>>>
>>> But for the purpose of sensible analysis, shouldn't it makes a
>>> difference.
>>
>> I don't think so. I can't see any way the statistical analysis would
>> be different.
>
>
>Preferences are, statistically, immeasureable if the claim is that
>listeners can tell the better cable more the half the time.
>
>Agree or Disagree ?
>
>
I have no idea what you are asking.
>>> As you have said that logic is on the side of not making decisions
>>> about human behavior. Isn't this required to ensure sufficient
>>> testing using well designed experiment and statistical analysis.
>>
>> I didn't say that.
>
>
>My response above is what I attempt to convey.
>
>
>
>>>>> And just what is the general population of listeners.
>>>>
>>>> You tell me. I presume that those who attend CES would be
>>>> a good one to use.
>>>
>>> That could very well include someone like Howard Ferstler, a raving
>>> lunatic with a well-known hearing loss out to destroy high-end audio
>>> and derogate all audiophiles young and young at heart. Provided,
>>> of course, he can *afford* the fares.
>>
>> Obviously you want to weed out people who are absolutely sure you
>> can't tell. But leaving out people who are skeptics biases the result
>> as well. I doubt that the 39 people who did the test comprised a
>> simple random sample, another design flaw. On the other hand I'd like
>> to see a well designed test using a simple random sample from the
>> population of true believers just to see if they can really. Even if
>> some people can tell, I suspect that it's a very small number. I do
>> know a couple of people who can really lock onto particular
>> characteristics and use then to identify what's playing.
>
>
>
>It is difficult to discuss this, at least for me, unless there's a specific
>protocol and experimental design to reference. I hope you understand.
>
>
>
>>>> What would you use and how would you construct a simple
>>>> random sample from it?
>>>>
>>>>> Are you testing the 99% who don't give a rat's
>>>>> ass anyway? If so, so what. Or are you testing people who actually
>>>>> care.
>>>
>>> We need a bias controlled experiment.
>>
>> Yes but neither the golden ear cult or the nonbeleivers would
>> accept the results if they didn't agree with them. It's become a
>> religious, not a scientific, argument.
>
>
>As far as I can tell, it's the Objectivist that has turned these into
>religious arguments.
>
Arny Krueger
January 21st 08, 08:13 PM
"Oliver Costich" > wrote in
message
> On Sat, 19 Jan 2008 20:55:47 -0800, "JBorg, Jr."
> > wrote:
>
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>
>>>
>>>
>>>
>>>
>>>
>>>> Well yes, Mr. Costich, the test results aren't
>>>> scientifically valid but it didn't disproved that the
>>>> sound differences heard by participants did not
>>>> physically exist.
>>>
>>> That was another potential flaw in the tests. I see no
>>> controls that ensured that the listeners heard the
>>> identically same selections of music. Therefore, the
>>> listeners may have heard differences that did
>>> physically exist - unfortunately they were due to
>>> random choices by the experimenter, not audible
>>> differences that were inherent in the cables.
>>
>>
>> Mr. Costich opined that disproving the sound differences
>> heard by audiophiles do not physically exist is not in
>> the realm of statistical analysis.
>>
>>
> I did not say that
I've got more years of experience corresponding with borglet than I care to
remember. He's never shown any symptoms of reading comprehension when the
item being read disagrees with his rather narrow and disbelief-suspended
view of life. He's a hysterical golden ear. To him, just about everything
sounds different.
Arny Krueger
January 21st 08, 08:14 PM
"JBorg, Jr." > wrote in message
> As far as I can tell, it's the Objectivist that has
> turned these into religious arguments.
?????????????
George M. Middius
January 21st 08, 08:49 PM
The Krooborg is konfused.
> > As far as I can tell, it's the Objectivist that has
> > turned these into religious arguments.
> ?????????????
Bratty Eddie actually means 'borgs are religious nuts.
Lest it be forgotten, society's downtrodden are entitled to pray for
equalization in the "afterlife". Usually, giving them that outlet shuts
them up, since they can cuddle with their bibles and tell themselves that
"God" will sort it all out. You, however, have shown yourself to be highly
resistant to pacification. You can't do science and you can't do
christianity, Arnii. You're a damn failure at all the things you boast
about. Is it any wonder that calls for your suicide ring out from the four
corners of the earth?
Shhhh! I'm Listening to Reason!
January 21st 08, 08:51 PM
On Jan 21, 7:21 am, "Arny Krueger" > wrote:
> "John Atkinson" > wrote in
>
> > On Jan 21, 12:25 am, "JBorg, Jr."
> > > wrote:
> >>> Arny Krueger wrote:
> >>>> JBorg, Jr. wrote
> >>>> He made claims that he had submitted peer-reviewed
> >>>> papers in AES.
>
> >>> False claim.
>
> >> Mr. Krueger, could you please provide the date of
> >> publication of the paper(s) you claimed to have
> >> submitted to JAES ?
<snip long justification about why they don't exist>
IOW, "They do not exist".
Shhhh! I'm Listening to Reason!
January 21st 08, 09:01 PM
On Jan 21, 1:00 pm, Oliver Costich > wrote:
> On Sat, 19 Jan 2008 10:54:17 -0800 (PST), "Shhhh! I'm Listening to
> Reason!" > wrote:
> >On Jan 19, 3:10 am, Oliver Costich > wrote:
> >> On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
> >> Reason!" > wrote:
> >> >Oliver Costich wrote
>
> >> >> By the way, I don't use lamp cord or Home Depot interconnects in my
> >> >> system.
>
> >> >I do not use expensive wires or cables in my system. I just don't
> >> >really care if others do.
>
> >> I don't care what they use. I do care that they want to justify it
> >> with sloppy logic and BS.
>
> >I haven't seen any justifications, though I haven't read all the posts
> >in this thread.
>
> >As a matter of curiosity, what would happen to the results if, out of
> >a sample of 100 participants, 50 selected a certain product correctly
> >100% of the time and the other 50 selected incorrectly 100% of the
> >time?
>
> Would it be unusual to get 50 heads when flipping a coin 100 times?
> Getting 50 correct answers out of 100 participants is exactly what you
> would expext from random guessing (flipping coins).
Sorry, I didn't state my question clearly.
Assume a test with 100 participants and 10 trials. 50 of the
participants score 100% on all 10 trials (or correct at a
statistically significant level). 50 score 0% (or at some
statistically insignificant level) on all 10 trials. Would not the
overall results still show "random guessing"? If so, could you still
reasonably attribute the results of those 50 that got it correct 100%
of the time to random guessing?
I'm not suggesting that this was the case here, or relating this in
any way to the WSJ article. I'm just curious. It seems to me that for
issues of perception a truly "random" population is counterproductive.
dizzy
January 21st 08, 11:14 PM
Arny Krueger wrote:
>> I responded to this claim:
>> "Not that can be retrieved using the search engine at
>> www.aes.org, Mr. Krueger, using all the alternative
>> spellings
>> of your name, and searching both the index of published
>> papers
>> and the preprint index. Could you supply the references,
>> please."
>
>Mr Atkinson seems to have me confused with his research department. I guess
>economic cut-backs have affected the staffing at Stereophile and instead of
>relying on paid staff, Mr Atkinson has been forced to go begging for help on
>Usenet. :-(
That's quite the illogical (and snotty) remark, Arny.
Would it be expected, or even ethical, for him to have his employees
doing research regarding your USENET claims, Arny, considering that
they have nothing to do with Stereophile?
It sounds like he made a reasonable effort to find the works, Arny.
Conversely, it does not sound like "begging for help" to ask you to
provide references to works that you claim to have authored, Arny.
You lose, Arny.
George M. Middius
January 21st 08, 11:24 PM
dippy said:
> Arny.
> Arny,
> Arny.
> Arny.
> You lose, Arny.
Is somebody in love? ;-)
John Atkinson[_2_]
January 22nd 08, 01:09 AM
On Jan 21, 8:21 am, "Arny Krueger" > wrote:
> "John Atkinson" > wrote in
> > On Jan 21, 12:25 am, "JBorg, Jr."
> > > wrote:
> >>> Arny Krueger wrote:
> >>>> JBorg, Jr. wrote
> >>>> He made claims that he had submitted peer-reviewed
> >>>> papers in AES.
> >>>
> >>> False claim.
> >>
> >> Mr. Krueger, could you please provide the date of
> >> publication of the paper(s) you claimed to have
> >> submitted to JAES ?
> >
> > Mr. Krueger's exact words were "The JAES has published
> > a number of works that I authored or co-authored." The
> > context for this statement was a discussion involving
> > peer-reviewed technical papers.
>
> And thus you find my name in at least one paper that was
> published in the JAES.
But not as author or co-author, which was the specific
claim you made, Mr. Krueger.
> Of course such things do not appear in the AES online index.
No, because that index lists authors and co-authors, and as
you now admit, you were neither.
> The paper in question would be the origional JAES article about
> ABX.
So you were mentioned, Mr. Krueger. How does that justify
your claim that "The JAES has published a number of works
that [you] authored or co-authored"? Even if the "number"
you coyly referred to now turns out to be _one_, you still
weren't the author or co-author of that technical paper.
>> I responded to this claim: "Not that can be retrieved using
>> the search engine at www.aes.org, Mr. Krueger, using all
>> the alternative spellings of your name, and searching both
>> the index of published papers and the preprint index.
>> Could you supply the references, please."
>
> Mr Atkinson seems to have me confused with his research
> department. I guess economic cut-backs have affected the
> staffing at Stereophile and instead of relying on paid staff,
> Mr Atkinson has been forced to go begging for help on Usenet. :-(
Don't know how you infer that, Mr. Krueger. The more
straightforward explanation for your absence from the
AES author index is that you were not an author, as you
now admit.
> > This was the response [Ludovic Mirabel] got from
> > a J. Wang [at the Toronto University library]:
>
> > "Hello, Your message was forwarded to me.
> > I searched many databases but only found one article
> > which is close to what you requested:
> > Amplifier-loudspeaker interfacing Krueger, A. B.
> > Published in "DB, The Sound Engineering Magazine",
> > Vol. 18, No. 7, Aug. Sept. 1984."
>
> OK, so I'll out a little secret. The dB magazine article was
> very closely related to an article that I had previously submitted
> to the AES. After what I [recalled] to be a very long wait, the
> AES sent me a letter that asked a number of questions about
> the article, presumably from the review board. By then I had
> despaired of any response from the AES and sold the related
> article to dB Magazine. Regrettably, dB stiffed me and I was
> never paid. So I lost both ways - I neither had any money, nor
> did I have the corresponding line for my resume that would have
> come from the JAES publication.
It is indeed a touching story, Mr. Krueger, and one that
confirms that, contrary to your original claim, you were
never an author or co-author of "a number of works
that [you] authored or co-authored." Thank you for coming clean
on this matter after years of evasion.
> > So, given that Mr. Krueger has never "authored or
> > co-authored a number of works" in the JAES, it
> > must be concluded that he must have mis-remembered
> > the facts of the matter.
>
> As usual Atkinson twists real events around his ego-centric
> view of the world. In that world, he's a god and I'm trash.
You must have hit the crackpipe a little early this evening,
Mr. Krueger, as I don't see any words from me that are
equivalent to my making the claim that I am a "god."
> Somehow, I have no problem living with that! ;-)
:-) (Laughing at you, Mr. Krueger, not with you.)
John Atkinson
Editor, Stereophile
Oliver Costich
January 22nd 08, 02:26 AM
On Mon, 21 Jan 2008 17:09:04 -0800 (PST), John Atkinson
> wrote:
><snip: John Atkinson's post of January 22nd 08, 01:09 AM, quoted in full above>
Could you take this unrelated drivel someplace else? Publications,
degrees, etc. are not correlated with wisdom to the degree most
people assume.
Oliver Costich
January 22nd 08, 02:51 AM
On Mon, 21 Jan 2008 13:01:21 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 21, 1:00 pm, Oliver Costich > wrote:
>> On Sat, 19 Jan 2008 10:54:17 -0800 (PST), "Shhhh! I'm Listening to
>
>> Reason!" > wrote:
>> <snip earlier exchange>
>> Would it be unusual to get 50 heads when flipping a coin 100 times?
>> Getting 50 correct answers out of 100 participants is exactly what you
>> would expext from random guessing (flipping coins).
>
>Sorry, I didn't state my question clearly.
>
>Assume a test with 100 participants and 10 trials. 50 of the
>participants score 100% on all 10 trials (or correct at a
>statistically significant level). 50 score 0% (or at some
>statistically insignificant level) on all 10 trials. Would not the
>overall results still show "random guessing"? Is so, could you still
>reasonably attribute the results of those 50 that got it correct 100%
>of the time to random guessing?
What exactly are you testing? If it is whether individuals can
correctly determine what you are testing, then for those who got them
all correct, you can support the claim that they are right more than
half the time (not just guessing). For the others you can support that
they are wrong more than half the time. Alternatively, if you were
testing the whole population over 1000 trials and 50 people got all of
theirs right while 50 others got all of theirs wrong, you'd toss the
experiment as simply too bizarre. What do you think is the likelihood
of a randomly selected sample giving that outcome? You'd look for
other factors to explain the results. Maybe a statistics course is in
order.
>
>I'm not suggesting that this was the case here, or relating this in
>any way to the WSJ article. I'm just curious. It seems to me that for
>issues of perception a truly "random" population is counterproductive.
It is unless you are looking to home in on the truth. I don't know of
any statistical method for drawing conclusions about population
parameters from sample statistics that doesn't require that samples be
simple random samples. Randomness alone is not enough. It has to be
simple random, which in this particular case means that every group of
39 has an equally likely chance of being selected. One of the problems
with this test is that the "respondents" were self-selected or
otherwise not randomly selected. It's like taking a poll on the death
penalty by asking people who walk by your front door. If the test was
sponsored by anyone who has an interest in speaker cable differences
being heard, then again the test is suspect. Virtually every
elementary statistics text gives similar examples of faulty data
collection.
The population need not be the whole world but just limiting it to
people who think you can tell would still give a huge population
relative to 39. It's interesting that even in this population, which I
assume contained the 39 tested, the results are insufficient to
support that hypothesis.
Better yet, make the population those who have not decided that you
can't tell.
The condition is that you take a simple random sample from the
population of interest. Then the result will hold only for that
population.
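[Editor's note: the bimodal 100-participant case debated above can be made concrete with exact binomial tails. A hypothetical sketch using the numbers from the example (10 trials each, 50 perfect scorers, 50 who miss everything), not data from the WSJ test:

```python
from math import comb

def tail_p(k, n):
    """Exact one-sided P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# One participant who scores 10/10: p = 1/1024, far beyond chance.
p_individual = tail_p(10, 10)

# Pooling 50 perfect scorers with 50 who got everything wrong:
# 500 correct out of 1000 trials, indistinguishable from coin flipping.
p_pooled = tail_p(500, 1000)

print(p_individual)        # 0.0009765625
print(round(p_pooled, 2))  # about 0.51
```

Which answers the question as posed: yes, the pooled total would look exactly like random guessing, and no, that aggregate would not license calling the perfect scorers lucky; their individual runs demand a separate explanation (or a replication).]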
George M. Middius
January 22nd 08, 03:07 AM
McInturd whined:
> >:-) (Laughing at you, Mr. Krueger, not with you.)
> Could you take this unrelated drivel someplace else.
Ollie, your endless prattling about statistics is equally irrelevant to
RAO. Have you even read the charter? I doubt it very much. You come off
like a pompous, self-important poseur who has no opinions about audio but
an endless supply of envy for those who make their livings in the field.
JBorg, Jr.[_2_]
January 22nd 08, 03:24 AM
> Arny Krueger wrote:
>> JBorg, Jr. wrote
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>>> Arny Krueger wrote:
>>>>>> JBorg, Jr. wrote
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Well yes, Mr. Costich, the test results aren't
>>>>>> scientifically valid but it didn't disproved that the
>>>>>> sound differences heard by participants did not
>>>>>> physically exist.
>>>>>
>>>>> That was another potential flaw in the tests. I see no
>>>>> controls that ensured that the listeners heard the
>>>>> identically same selections of music. Therefore, the
>>>>> listeners may have heard differences that did
>>>>> physically exist - unfortunately they were due to
>>>>> random choices by the experimenter, not audible
>>>>> differences that were inherent in the cables.
>>>>
>>>>
>>>> Mr. Costich opined that disproving the sound differences
>>>> heard by audiophiles do not physically exist is not in
>>>> the realm of statistical analysis.
>
>>> I read all his posts, and saw no such thing. I did see
>>> him correct other such misrepresentations of what he said.
>
>> Along this subthread, I said the following to Mr. Costich
>> concerning the test:
>>
>>
>> " ... Mr. Costich, the test results aren't scientifically valid but
>> it didn't disproved that the sound differences heard by participants
>> did not physically exist."
>
> There's your first anti-logic attack borglet. You're forcing your
> opinion to be disproved with a proof of a negative hypothesis.
> Everybody knows that negative hypothesis are difficult or impossible
> to prove. Yet, you are demanding that a negative hypothesis be proved.
>
My opinion? The truth that such a test fails to prove that sound
differences heard by participants don't physically exist -- is a FACT.
A fact even supported by Mr. Costich in this thread, stating that
disproving sound differences heard by audiophiles is -- a certainty
not in the realm of statistical analysis.
> Mr. Costich replied:
>
>> " Of course not. Certainty is not in the realm of
>> statistical analysis...."
>
> The man speaks truth, but that's not what you are interested in, right
> borglet?
??
What are you talking about ?
>> He is saying that "certainty" about disproving a claim that the sound
>> differences heard by participants do not physically
>> exist -- is not in the realm of statistical analysis.
>
> What's is certain in this life Borglet, except that you will trash
> logic and reason to justify your religious belief that just about
> everything, including the most perfected and inert of audio
> components, being cables also sounds different?
??
With all due respect, you sound like Ferstler.
> IOW borglet, the truism that proving a negative hypothesis is
> difficult or impossible. You're just obfuscating the fact that you
> can't prove the corresponding positive hypothesis.
>> You seem to suggest that Mr. Costich misspoke, but how so ?
>
> Again Borglet, you are distorting what other's said to justify your
> religious belief that cables sound different.
>
> Costlich didn't misspeak borglet, he obviously talked way over your
> head.
So then, you agree with what he said, that disproving sound
differences heard by audiophiles is -- a certainty not in the
realm of statistical analysis ?
>> Are you inferring that Mr. Costich was untruthful about
>> what he said and that he spoke too hastily ?
>
> No borglet, I'm inferring that rational discussion with you is
> impossible because you distort everything you hear to fit into your
> own erroneous and illogical thinking.
Could you please demonstrate which part I distorted and explain why.
>>> Basically borglet, you're not a reliable analyst in
>>> matters like these.
>
> To say the least!
JBorg, Jr.[_2_]
January 22nd 08, 03:37 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> John Corbett wrote:
>>>
>>>
>>>
>>>
>>>
>>> Well, I am a statistician.
>>> You seem to be so confused about statistics that you can neither
>>> perform the calculations nor understand what they mean.
>>
>> Hello Mr. Corbett, I would like to know if it is appropriate to
>> assume that disproving sound differences heard by audiophiles
>> that I presume physically exist is -- a certainty not in the
>> realm of statistical analysis.
>
>
> Then what is it in the realm of? Religion?
No, Mr. Costich. Disproving the presence of subtle sound differences
heard by audiophiles is not in the realm of religion.
As a statistician, how could you say that?
JBorg, Jr.[_2_]
January 22nd 08, 03:38 AM
> Oliver Costich wrote:
>
>
>
>
>
>
>
>
>
>
>
> I, for one, do care about the music and would rather just listen to it
> than sit around doing badly designed tests. When you are involved in
> the music, subtle differences, even if they exist, aren't really
> discernable.
I wonder how it is possible to ascertain that subtle differences
aren't discernible when one is involved in the music and free from
the task of knowing whether subtle differences exist or not.
Is it because one does not care about the music ?
JBorg, Jr.[_2_]
January 22nd 08, 03:48 AM
> Oliver Costich wrote:
>> JBorg, Jr.wrote:
>>
>>
>>
>>
>> Well now! Disproving that the sound differences heard by
>> audiophiles do not physically exist is -- certainty not in the
>> realm of statistical analysis.
>
> Disproving that the sound differences BELIEVED to be heard by
> audiophiles actually exist is not provable or disprovable by
> statistical methods if your standard is 100% certainty. Nothing is,
> other than 1+1=2 and its ilk.
If that is the case, what are the reason(s) you persistently refer to
audiophiles as *golden ear cult*, and why?
> That's what the argument is about - some claim to hear things that
> allow them the distinguish but can't (at least in this test)
> demonstrate it.
But the test did not prove that the subtle differences did not exist.
JBorg, Jr.[_2_]
January 22nd 08, 03:56 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>>> Arny Krueger wrote:
>>>>>> JBorg, Jr. wrote
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Well yes, Mr. Costich, the test results aren't
>>>>>> scientifically valid but it didn't disproved that the
>>>>>> sound differences heard by participants did not
>>>>>> physically exist.
>>>>>
>>>>> That was another potential flaw in the tests. I see no
>>>>> controls that ensured that the listeners heard the
>>>>> identically same selections of music. Therefore, the
>>>>> listeners may have heard differences that did physically
>>>>> exist - unfortunately they were due to random choices by
>>>>> the experimenter, not audible differences that were
>>>>> inherent in the cables.
>>>>
>>>> Mr. Costich opined that disproving the sound differences
>>>> heard by audiophiles do not physically exist is not in
>>>> the realm of statistical analysis.
>>>>
>>> I read all his posts, and saw no such thing. I did see him correct
>>> other such misrepresentions of what he said.
>>
>> Along this subthread, I said the following to Mr. Costich concerning
>> the test:
>>
>> " ... Mr. Costich, the test results aren't scientifically valid
>> but it didn't disproved that the sound differences heard by
>> participants did not physically exist."
>>
>> Mr. Costich replied:
>>
>> " Of course not. Certainty is not in the realm of statistical
>> analysis...."
>>
>> He is saying that "certainty" about disproving a claim that the
>> sound differences heard by participants do not physically
>> exist -- is not in the realm of statistical analysis.
>>
>> Further, Mr. Costich supported his claim by saying:
>>
>> " Because statistical analysis is the only tool we have to test
>> claims about the behavior of populations from samples. Any such test
>> has error and the likelihood of error needs to be set in advance. The
>> significance level is precisely the probability of rejecting a true
>> null hypothesis."
>>
>> ***
>> You seem to suggest that Mr. Costich misspoke, but how so ?
>>
>> Are you inferring that Mr. Costich was untruthful about what he
>> said and that he spoke too hastily ?
>>
>>> Basically borglet, you're not a reliable analyst in matters like
>>> these.
>
>
>
> Unbeleivable! Your interpretation amazes me. Take a philosophy of
> science course to learn that there is no 100% certainty in any science
> save mathematics, if that is even a science.
>
> Physics is models based on observation of events repeating, or as
> David Hume calls it "bad habit". The level of "certainty" or
> probability of error can be reduced to infinitesimal levels but
> uncertainty is still there even if we don't behave like it is. The
> less controllable experiments are, and the farther the "science"
> deviates from having rigorous mathematical models, the less certainty
> you have. In particular you seem to be willing to abandon all use of
> statistics as to you the results don't provide certainty (and they
> never do).
But clearly, such a test obviously did not prove that subtle differences
did not exist, and as you said, disproving the subtle differences that
audiophiles hear is a certainty NOT in the realm of statistical analysis.
You seem to be saying still that statistical analysis or some other method
can be used, but statistical analysis is not it.
What am I supposed to do, Mr. Costich ?
JBorg, Jr.[_2_]
January 22nd 08, 03:59 AM
> Oliver Costich wrote:
>> JBorg, Jr.wrote:
>>> Arny Krueger wrote:
>>>> JBorg, Jr. wrote
>>>
>>>
>>>
>>>
>>>
>>>
>>>> Well yes, Mr. Costich, the test results aren't
>>>> scientifically valid but it didn't disproved that the
>>>> sound differences heard by participants did not physically exist.
>>>
>>> That was another potential flaw in the tests. I see no controls that
>>> ensured that the listeners heard the identically same selections of
>>> music. Therefore, the listeners may have heard differences that did
>>> physically exist - unfortunately they were due to random choices by
>>> the experimenter, not audible differences that were inherent in the
>>> cables.
>>
>> Mr. Costich opined that disproving the sound differences heard by
>> audiophiles do not physically exist is not in the realm of
>> statistical analysis.
>
>
> I did not say that
I left out words in haste to write, that's all.
Here it is again:
Mr. Costich opined that disproving the sound differences heard by
audiophiles do not physically exist is -- a certainty not in the realm
of statistical analysis.
JBorg, Jr.[_2_]
January 22nd 08, 04:02 AM
> Arny Krueger wrote:
>
> I've got more years of experience corresponding with borglet than I
> care to remember. He's never shown any symptoms of reading
> comprehension when the item being read disagrees with his rather
> narrow and disbelief-suspended view of life. He's a hysterical golden
> ear. To him, just about everything sounds different.
Do tell about the love poems and short notes I sent to Rao with you
in mind.
Tenderly Yours...................
JBorg, Jr.[_2_]
January 22nd 08, 04:16 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr.wrote:
>>>>> Oliver Costich wrote:
>>>>>> JBorg, Jr. wrote:
>>>>>>> Oliver Costich wrote:
>>>>>>>> Walt wrote:
>>>>>>>>> John Atkinson wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> Remind me again how many times Arny Krueger has been
>>>>>>>>> quoted in the Wall Street Journal?
>>>>>>>>
>>>>>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller
>>>>>>>> and Ken Lay.
>>>>>>>>
>>>>>>>> What's your point?
>>>>>>>
>>>>>>> So has Osama Bin Laden. The point is that he's devoid of a
>>>>>>> sound argument.
>>>>>>
>>>>>> Mr. Costich, there is no sound argument to improve upon a
>>>>>> strawman arguments. It just doesn't exist.
>>>>>
>>>>> Agreed.
>>>>
>>>> Ok.
>>>>
>>>>>> Incidentally Mr. Costich, how well do you know Arny Krueger if
>>>>>> you don't mind me asking so.
>>>>>>
>>>>> I only know of his existence from the news group, if that's his
>>>>> real name:-)
>>>>
>>>> He made claims that he had submitted peer-reviewed papers in AES.
>>>> He also calim to be audio engineer and well educated concerning
>>>> statistical analysis in well designed audio experiment. To be
>>>> honest, Mr. Costich, he is the worst offender of common sense and
>>>> has been pestering this group for a long, long time.
>>>
>>>
>>> That's an opinion, which is in the name of the newsgroup.
>>
>>
>> This is indeed a newsgroup of opinion but do you think it is proper,
>> as Mr. Krueger has done in not so distant past, to declare false
>> claims and present it as FACTS ?
>
> I'm not a judge or a censor.
But certainly, you are quick to judge and indict whether one is devoid of
sound argument, are you not?
JBorg, Jr.[_2_]
January 22nd 08, 04:38 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr. wrote:
>>>>> Oliver Costich wrote:
>>>>>> JBorg, Jr.wrote:
>>>>>>> Shhhh! wrote:
>>>>>>>> Oliver Costich wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> In other words, that 61% of a sample of 39 got the correct
>>>>>>>> result isn't sufficient evidence that in the general
>>>>>>>> population of listeners more than half can pick the better
>>>>>>>> cable.
>>>>>>>>
>>>>>>>> So, I'd say "that's hardly that".
>>>>>>>
>>>>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>>>> fail just to be contrary, you, and so on, and get different
>>>>>>> results.
>>>>>>
>>>>>> Well asked.
>>>>>>
>>>>> What population of listeners was the claim made for and how was it
>>>>> defined? My guess is that however it's constructed, it a lot
>>>>> bigger than 39.
>>>>
>>>> No information were provided for that. Still, valid parameter for
>>>> such test should exclude participants with personal biases and
>>>> preferences and those lacking extended listening experience, as
>>>> examples.
>>>
>>> Personal bias can be filtered out with well designed double blind
>>> experiments. That's the whole point of that method. If neither the
>>> tester or those tested know what they are listening to. People with
>>> listening experience is still a large, but shrinking, population.
>>
>> Mr. Costich, how do you filter out from DBT experiments the listeners
>> personal biases and preferences in sound acquired over time through
>> extended listening experience. As an example, a person with strong
>> affinity and craves the sound produced by jazz ensemble tends to be
>> receptive to the subtle nuance produce and articulated by those sets
>> of instruments. Is hiding the components during DBT removed this
>> adulation out ?
>
> In other words, this is a religious argument for you, not a scientific
> one. Would you decide if most people believe in God by taking a sample
> of members of the Baptist Church?
YOU are out of order!
With regards to personal biases, please answer the question.
>> What if the subject for the test is someone like Howard Ferstler who
>> admitted to having deeply held personal vendetta towards high-end
>> establishment going back in the late '70s, how would you go about
>> explaining that a no-difference Ferstler test result is valid ?
>>
>>> The golden ear cult would like to define the population to be those
>>> among that have a good enough run of guesses to get a statistically
>>> significant outcome:-)
>
>
> OK, fine, leave him out, just like you almost always do with outliers
> in statistical data analysis.
You are missing the point, Mr. Costich. How do you exclude
participants with hidden motives from skewing the data and test
results?
>> Mr. Costich, please don't be so frigging sarcastic towards
>> audiophiles. Audiophiles who had honed and increased their listening
>> sensitivity from listening to live, unamplified, and reproduced
>> music over extended period of time.
>
> I guess that I have been labeled an audiophile for over 40 years and
> listened to countless hours of music plus auditioning, and yes
> comparing, various components over the same period doesn't qualify me
> because I won't accept a false conclusion from an experiment.
Mr. Costich, why are you being sarcastic about yourself and referring
to yourself as none other than a *golden ear cult* follower ?
JBorg, Jr.[_2_]
January 22nd 08, 06:25 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>> Oliver Costich wrote:
>>>> JBorg, Jr. wrote:
>>>>> Oliver Costich wrote:
>>>>>> Mr.clydeslick wrote:
>>>>>>> Oliver Costich wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> snip
>>>>>>>
>>>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>>>> that they can't tell the difference. If the claim is that
>>>>>>> listeners can tell the better cable more the half the time, then
>>>>>>> to support that you have to be able to reject that the in the
>>>>>>> population of all audio interested listeners, the correct
>>>>>>> guesses occur half the time or less. 61% of 39 doesn't do it.
>>>>>>> (Null hypothesis is p=.5, alternative hypothesis is p>.5. The
>>>>>>> null hypthesis cannot be rejected with the sample data given.)
>>>>>>>
>>>>>>> In other words, that 61% of a sample of 39 got the correct
>>>>>>> result isn't sufficient evidence that in the general population
>>>>>>> of listeners more than half can pick the better cable.
>>>>>>>
>>>>>>> So, I'd say "that's hardly that".
>>>>>>
>>>>>> you seem to be mixing difference with preference, you reference
>>>>>> both, for the same test.
>>>>>
>>>>> For the purpose of statistical analysis it makes no difference.
>>>>
>>>> But for the purpose of sensible analysis, shouldn't it makes a
>>>> difference.
>>>
>>> I don't think so. I can't see any way the statistical analysis would
>>> be different.
>>
>> Preferences are, statistically, immeasureable if the claim is that
>> listeners can tell the better cable more the half the time.
>>
>>
>> Agree or Disagree ?
>
>
> I have no idea what you are asking.
You are admitting that, for the purpose of statistical analysis,
it would make no difference whether the participants determine
or discern subtle differences based on sound differences
or sound preferences during audio testing.
Mr. Costich, do you still mean to say that mixing differences with
preferences during testing would make no difference for the purpose
of statistical analysis ?
Yes or No ?
> snip
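To put a number on the "61% of 39 doesn't do it" argument quoted above: a minimal sketch (assuming the same one-sided binomial setup, n = 39 trials and null p = 0.5) finding the smallest number of correct picks that would let the null hypothesis be rejected at the 5% significance level.

```python
from math import comb

# One-sided binomial test: null p = 0.5, alternative p > 0.5, n = 39.
# Find the smallest count k whose exact tail probability
# P(X >= k) falls at or below the 0.05 significance level.
n, alpha = 39, 0.05

def tail(k: int, n: int) -> float:
    """Exact P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

critical = next(k for k in range(n + 1) if tail(k, n) <= alpha)
print(critical, tail(critical, n))
# 26 correct (about 67%) would be needed; 24 of 39 (61%) falls short,
# since P(X >= 24) is roughly 0.10.
```

So the observed result sits between "pure chance" and the roughly 67% that would have been statistically significant at the 5% level, which is why neither side of the thread can claim the test settled anything.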
JBorg, Jr.[_2_]
January 22nd 08, 06:32 AM
> Oliver Costich wrote:
>> JBorg, Jr. wrote:
>>
>>
>>
>>
>>
>> snip ...
>>
>>
>>
>>
>>
>>
>> Well, I did not say that you did, but basing on what you said about
>> making proper statistic claims, do you think that audio testing such
>> as SBT and ABX/DBT are insufficient experiments to determinine
>> whether the subtle sound differences heard by audiophiles physically
>> exist ?
>>
>>
>>
>>
> No, not if designed and analyzed correctly.
Very well. I have nothing further on "this" subthread.
Oliver Costich
January 22nd 08, 06:51 AM
On Tue, 22 Jan 2008 04:38:25 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>>> JBorg, Jr.wrote:
>>>>>>>> Shhhh! wrote:
>>>>>>>>> Oliver Costich wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> In other words, that 61% of a sample of 39 got the correct
>>>>>>>>> result isn't sufficient evidence that in the general
>>>>>>>>> population of listeners more than half can pick the better
>>>>>>>>> cable.
>>>>>>>>>
>>>>>>>>> So, I'd say "that's hardly that".
>>>>>>>>
>>>>>>>> I'm curious what percent of the "best informed" got. I mean, you
>>>>>>>> could mix in hot dog vendors, the deaf, people who might try to
>>>>>>>> fail just to be contrary, you, and so on, and get different
>>>>>>>> results.
>>>>>>>
>>>>>>> Well asked.
>>>>>>>
>>>>>> What population of listeners was the claim made for and how was it
>>>>>> defined? My guess is that however it's constructed, it a lot
>>>>>> bigger than 39.
>>>>>
>>>>> No information were provided for that. Still, valid parameter for
>>>>> such test should exclude participants with personal biases and
>>>>> preferences and those lacking extended listening experience, as
>>>>> examples.
>>>>
>>>> Personal bias can be filtered out with well designed double blind
>>>> experiments. That's the whole point of that method. If neither the
>>>> tester or those tested know what they are listening to. People with
>>>> listening experience is still a large, but shrinking, population.
>>>
>>> Mr. Costich, how do you filter out from DBT experiments the listeners
>>> personal biases and "preferences in sound acquired over time through
>>> extended listening experience". As an example, a person with strong
>>> affinity and craves the sound produced by jazz ensemble tends to be
>>> receptive to the subtle nuance produce and articulated by those sets
>>> of instruments. Is hiding the components during DBT removed this
>>> adulation out ?
>>
>> In other words, this is a religious argument for you, not a scientific
>> one. Would you decide if most people believe in God by taking a sample
>> of members of the Baptist Church?
>
>
>YOU are out of order!
>
>
>With regards to personal biases, please answer the question.
Your question is nonsense. You are assuming that there are
"preferences in sound acquired over time through extended listening
experience". Show me evidence of this outside of your belief.
>
>
>
>>> What if the subject for the test is someone like Howard Ferstler who
>>> admitted to having deeply held personal vendetta towards high-end
>>> establishment going back in the late '70s, how would you go about
>>> explaining that a no-difference Ferstler test result is valid ?
>>>
>>>> The golden ear cult would like to define the population to be those
>>>> among that have a good enough run of guesses to get a statistically
>>>> significant outcome:-)
>>
>>
>> OK, fine, leave him out, just like you almost always do with outliers
>> in statistical data analysis.
>
>
>You are missing the point, Mr. Costich. How do you exclude
>participants with hidden motives from skewing the data and test
>results.
You mean like lining up 39 people off the CES high end floor? By
designing the test so that there is a RANDOM selection from the
population.
>
>
>
>>> Mr. Costich, please don't be so frigging sarcastic towards
>>> audiophiles. Audiophiles who had honed and increased their listening
>>> sensitivity from listening to live, unamplified, and reproduced
>>> music over extended period of time.
>>
>> I guess that I have been labeled an audiophile for over 40 years and
>> listened to countless hours of music plus auditioning, and yes
>> comparing, various components over the same period doesn't qualify me
>> because I won't accept a false conclusion from an experiment.
>
>
>
>Mr. Costich, why are you being sarcastic to yourself and refering to
>yourself to be none other a *golden cult* follower ?
Is English your ninth language? Where did I say such a thing? Go take
a statistics course. Take one in design of experiments. Take one in
Philosophy of Science and come back when you have a modicum of
knowledge about such things.
As of now you come off as a moron.
Oliver Costich
January 22nd 08, 06:56 AM
On Tue, 22 Jan 2008 03:37:05 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> John Corbett wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Well, I am a statistician.
>>>> You seem to be so confused about statistics that you can neither
>>>> perform the calculations nor understand what they mean.
>>>
>>> Hello Mr. Corbett, I would like to know if it is appropriate to
>>> assume that disproving sound differences heard by audiophiles
>>> that I presume physically exist is -- a certainty not in the
>>> realm of statistical analysis.
>>
>>
>> Then what is it in the realm of? Religion?
>
>
>No, Mr. Costich. Disproving presence of subtle sound differences
>heard by audiophiles is not in the realm of religion.
>
>As a statician, how could you say that.
>
>
Your premise is "sound differences heard by audiophiles that I presume
physically exist". This is more mealy mouthed golden ears bull****.
Some things sound different, some don't. When the experiments say they
don't the true believers come up with convoluted nonsense based on
assumptions with no basis other than religion-like belief.
Oliver Costich
January 22nd 08, 07:00 AM
On Tue, 22 Jan 2008 03:48:18 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr.wrote:
>>>
>>>
>>>
>>>
>>> Well now! Disproving that the sound differences heard by
>>> audiophiles do not physically exist is -- certainty not in the
>>> realm of statistical analysis.
>>
>> Disproving that the sound differences BELIEVED to be heard by
>> audiophiles actually exist is not provable or disprovable by
>> statistical methods if your standard is 100% certainty. Nothing is,
>> other than 1+1=2 and its ilk.
>
>
>If that is the case, what are the reason(s) you persistently refer to
>audiophiles as *golden ear cult*, and why?
Because they always fall back on bull**** like this when they fail to
produce evidence.
>
>
>> That's what the argument is about - some claim to hear things that
>> allow them the distinguish but can't (at least in this test)
>> demonstrate it.
>
>
>But the test did not proved that the subtle difference did not exist.
>
Of course not absolutely. But then again disproving that something
exists when no one has observed it is pretty hard, like for
leprechauns.
Oliver Costich
January 22nd 08, 07:02 AM
On Tue, 22 Jan 2008 04:16:28 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr.wrote:
>>>>>> Oliver Costich wrote:
>>>>>>> JBorg, Jr. wrote:
>>>>>>>> Oliver Costich wrote:
>>>>>>>>> Walt wrote:
>>>>>>>>>> John Atkinson wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Remind me again how many times Arny Krueger has been
>>>>>>>>>> quoted in the Wall Street Journal?
>>>>>>>>>
>>>>>>>>> Ok. So you've been quoted in the WSJ. So have Uri Geller
>>>>>>>>> and Ken Lay.
>>>>>>>>>
>>>>>>>>> What's your point?
>>>>>>>>
>>>>>>>> So has Osama Bin Laden. The point is that he's devoid of a
>>>>>>>> sound argument.
>>>>>>>
>>>>>>> Mr. Costich, there is no sound argument to improve upon a
>>>>>>> strawman arguments. It just doesn't exist.
>>>>>>
>>>>>> Agreed.
>>>>>
>>>>> Ok.
>>>>>
>>>>>>> Incidentally Mr. Costich, how well do you know Arny Krueger if
>>>>>>> you don't mind me asking so.
>>>>>>>
>>>>>> I only know of his existence from the news group, if that's his
>>>>>> real name:-)
>>>>>
>>>>> He made claims that he had submitted peer-reviewed papers in AES.
>>>>> He also calim to be audio engineer and well educated concerning
>>>>> statistical analysis in well designed audio experiment. To be
>>>>> honest, Mr. Costich, he is the worst offender of common sense and
>>>>> has been pestering this group for a long, long time.
>>>>
>>>>
>>>> That's an opinion, which is in the name of the newsgroup.
>>>
>>>
>>> This is indeed a newsgroup of opinion but do you think it is proper,
>>> as Mr. Krueger has done in not so distant past, to declare false
>>> claims and present it as FACTS ?
>>
>> I'm not a judge or a censor.
>
>
>But certainly, you are quick to judge and indict whether one is devoid of
>sound argument, did you not?
>
>
>
I was referring to the particular post. On the other hand, based on
your posts and ability to frame a question or provide a rational
response, there is sufficient evidence to support the claim that
you're an idiot at the 99% confidence level.
Oliver Costich
January 22nd 08, 07:03 AM
On Tue, 22 Jan 2008 03:38:14 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> I, for one, do care about the music and would rather just listen to it
>> than sit around doing badly designed tests. When you are involved in
>> the music, subtle differences, even if they exist, aren't really
>> discernable.
>
>
>I wonder how it is possible to ascertain that subtle differences
>aren't discernable when one is involve in the music and free from
>the task of knowing whether subtle differences exist or not.
>
>Is it because one do not care about the music ?
>
>
>
Another nonsensical "ponderance"
Oliver Costich
January 22nd 08, 07:10 AM
On Tue, 22 Jan 2008 03:24:18 GMT, "JBorg, Jr."
> wrote:
>> Arny Krueger wrote:
>>> JBorg, Jr. wrote
>>>> Arny Krueger wrote:
>>>>> JBorg, Jr. wrote
>>>>>> Arny Krueger wrote:
>>>>>>> JBorg, Jr. wrote
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Well yes, Mr. Costich, the test results aren't
>>>>>>> scientifically valid but it didn't disproved that the
>>>>>>> sound differences heard by participants did not
>>>>>>> physically exist.
>>>>>>
>>>>>> That was another potential flaw in the tests. I see no
>>>>>> controls that ensured that the listeners heard the
>>>>>> identically same selections of music. Therefore, the
>>>>>> listeners may have heard differences that did
>>>>>> physically exist - unfortunately they were due to
>>>>>> random choices by the experimenter, not audible
>>>>>> differences that were inherent in the cables.
>>>>>
>>>>>
>>>>> Mr. Costich opined that disproving the sound differences
>>>>> heard by audiophiles do not physically exist is not in
>>>>> the realm of statistical analysis.
>>
>>>> I read all his posts, and saw no such thing. I did see
>>>> him correct other such misrepresentations of what he said.
>>
>>> Along this subthread, I said the following to Mr. Costich
>>> concerning the test:
>>>
>>>
>>> " ... Mr. Costich, the test results aren't scientifically valid but
>>> it didn't disproved that the sound differences heard by participants
>>> did not physically exist."
>>
>> There's your first anti-logic attack borglet. You're forcing your
>> opinion to be disproved with a proof of a negative hypothesis.
>> Everybody knows that negative hypothesis are difficult or impossible
>> to prove. Yet, you are demanding that a negative hypothesis be proved.
>>
>
>My opinion? The truth that such test fails to proved that sound
>differences heard by participants don't physically exist -- is a FACT.
The same FACT that no statistical analysis proves anything with complete
certainty. That everything you have dropped out the window has fallen
down is not proof to a certainty that everything you ever drop out the
window will fall down. The belief that it will is based on huge
amounts of data from googols of trials and the fact that there are
mathematical models.
How do you show something physically exists unless you can somehow
provide a meaningful description, usually measurements? The golden ear
terminology is inadequate for cables, to say the least.
>A fact even supported by Mr. Costich in these thread stating that
>disproving sound differences heard by audiophiles is -- certainty
>not in the realm of statistical analysis
>
>
>
>
>> Mr. Costich replied:
>>
>>> " Of course not. Certainty is not in the realm of
>>> statistical analysis...."
>>
>> The man speaks truth, but that's not what you are interested in, right
>> borglet?
>
>
>??
>
>What are you talking about ?
>
>
>>> He is saying that "certainty" about disproving a claim that the sound
>>> differences heard by participants do not physically
>>> exist -- is not in the realm of statistical analysis.
>>
>> What's is certain in this life Borglet, except that you will trash
>> logic and reason to justify your religious belief that just about
>> everything, including the most perfected and inert of audio
>> components, being cables also sounds different?
>
>
>??
>
>With all due respect, you sound like Ferstler.
>
>
>> IOW borglet, the truism that proving a negative hypothesis is
>> difficult or impossible. You're just obfuscating the fact that you
>> can't prove the corresponding positive hypothesis.
>
>
>
>
>
>
>
>>> You seem to suggest that Mr. Costich misspoke, but how so ?
>>
>> Again Borglet, you are distorting what other's said to justify your
>> religious belief that cables sound different.
>>
>> Costich didn't misspeak, borglet; he obviously talked way over your
>> head.
>
>
>So then, you agreed with what he said that disproving sound
>differences heard by audiophiles is -- a certainty not in the
>realm of statistical analysis ?
>
>
>
>
>>> Are you inferring that Mr. Costich was untruthful about
>>> what he said and that he spoke too hastily ?
>>
>> No borglet, I'm inferring that rational discussion with you is
>> impossible because you distort everything you hear to fit into your
>> own erroneous and illogical thinking.
>
>
>Could you please demonstrate which part I distorted and explain why.
>
>
>
>
>>>> Basically borglet, you're not a reliable analyst in
>>>> matters like these.
>>
>> To say the least!
>
>
>
>
>
>
>
>
>
>
Oliver Costich
January 22nd 08, 07:12 AM
On Tue, 22 Jan 2008 03:56:29 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Arny Krueger wrote:
>>>>> JBorg, Jr. wrote
>>>>>> Arny Krueger wrote:
>>>>>>> JBorg, Jr. wrote
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Well yes, Mr. Costich, the test results aren't
>>>>>>> scientifically valid but they didn't disprove that the
>>>>>>> sound differences heard by participants did not
>>>>>>> physically exist.
>>>>>>
>>>>>> That was another potential flaw in the tests. I see no
>>>>>> controls that ensured that the listeners heard the
>>>>>> identically same selections of music. Therefore, the
>>>>>> listeners may have heard differences that did physically
>>>>>> exist - unfortunately they were due to random choices by
>>>>>> the experimenter, not audible differences that were
>>>>>> inherent in the cables.
>>>>>
>>>>> Mr. Costich opined that disproving the sound differences
>>>>> heard by audiophiles do not physically exist is not in
>>>>> the realm of statistical analysis.
>>>>>
>>>> I read all his posts, and saw no such thing. I did see him correct
>>>> other such misrepresentations of what he said.
>>>
>>> Along this subthread, I said the following to Mr. Costich concerning
>>> the test:
>>>
>>> " ... Mr. Costich, the test results aren't scientifically valid
>>> but they didn't disprove that the sound differences heard by
>>> participants did not physically exist."
>>>
>>> Mr. Costich replied:
>>>
>>> " Of course not. Certainty is not in the realm of statistical
>>> analysis...."
>>>
>>> He is saying that "certainty" about disproving a claim that the
>>> sound differences heard by participants do not physically
>>> exist -- is not in the realm of statistical analysis.
>>>
>>> Further, Mr. Costich supported his claim by saying:
>>>
>>> " Because statistical analysis is the only tool we have to test
>>> claims about the behavior of populations from samples. Any such test
>>> has error, and the likelihood of error needs to be set in advance.
>>> The significance level is precisely the probability of rejecting a
>>> true null hypothesis."
>>>
>>> ***
>>> You seem to suggest that Mr. Costich misspoke, but how so ?
>>>
>>> Are you inferring that Mr. Costich was untruthful about what he
>>> said and that he spoke too hastily ?
>>>
>>>> Basically borglet, you're not a reliable analyst in matters like
>>>> these.
>>
>>
>>
>> Unbelievable! Your interpretation amazes me. Take a philosophy of
>> science course to learn that there is no 100% certainty in any science
>> save mathematics, if that is even a science.
>>
>> Physics is models based on observation of events repeating, or as
>> David Hume calls it "bad habit". The level of "certainty" or
>> probability of error can be reduced to infinitesimal levels but
>> uncertainty is still there even if we don't behave like it is. The
>> less controllable experiments are, and the farther the "science"
>> deviates from having rigorous mathematical models, the less certainty
>> you have. In particular you seem to be willing to abandon all use of
>> statistics as to you the results don't provide certainty (and they
>> never do).
>
>
>But clearly, such a test obviously did not prove that subtle differences
>did not exist, and as you said, disproving the subtle difference that
>audiophiles hear is a certainty NOT in the realm of statistical analysis.
>
>You seem to be saying still that statistical analysis or some other
>method can be used, but statistical analysis is not it.
>
>What am I supposed to do, Mr. Costich?
Go learn something about science and measurement, statistical design
and analysis. If test after test fails to reject the hypothesis that
listeners cannot identify cables, then most scientifically minded,
rational people would conclude that the claim that they can is
unlikely to be true.
>
>
Oliver Costich
January 22nd 08, 07:18 AM
On Mon, 21 Jan 2008 22:07:03 -0500, George M. Middius <cmndr _ george
@ comcast . net> wrote:
>
>
>McInturd whined:
>
>> >:-) (Laughing at you, Mr. Krueger, not with you.)
>
>> Could you take this unrelated drivel someplace else.
>
>Ollie, your endless prattling about statistics is equally irrelevant to
>RAO. Have you even read the charter? I doubt it very much. You come off
>like a pompous, self-important poseur who has no opinions about audio but
>an endless supply of envy for those who make their livings in the field.
>
>
>
I have lots of opinions about audio. I've been at it a long time. How
is debunking the prattle of those ignorant of science and analysis of
data pompous?
Let's do it like the public schools do: when people are just plain
wrong, don't correct them. God forbid they learn something.
I smell a golden ear. The thread is about audio, and so are my comments
on the analysis of an audio experiment.
Oliver Costich
January 22nd 08, 07:21 AM
On Tue, 22 Jan 2008 06:25:34 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>>> Mr.clydeslick wrote:
>>>>>>>> Oliver Costich wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> snip
>>>>>>>>
>>>>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>>>>> that they can't tell the difference. If the claim is that
>>>>>>>> listeners can tell the better cable more than half the time,
>>>>>>>> then to support that you have to be able to reject that, in the
>>>>>>>> population of all audio-interested listeners, correct
>>>>>>>> guesses occur half the time or less. 61% of 39 doesn't do it.
>>>>>>>> (Null hypothesis is p=.5, alternative hypothesis is p>.5. The
>>>>>>>> null hypothesis cannot be rejected with the sample data given.)
>>>>>>>>
>>>>>>>> In other words, that 61% of a sample of 39 got the correct
>>>>>>>> result isn't sufficient evidence that in the general population
>>>>>>>> of listeners more than half can pick the better cable.
>>>>>>>>
>>>>>>>> So, I'd say "that's hardly that".
>>>>>>>
>>>>>>> you seem to be mixing difference with preference, you reference
>>>>>>> both, for the same test.
>>>>>>
>>>>>> For the purpose of statistical analysis it makes no difference.
>>>>>
>>>>> But for the purpose of sensible analysis, shouldn't it make a
>>>>> difference?
>>>>
>>>> I don't think so. I can't see any way the statistical analysis would
>>>> be different.
>>>
>>> Preferences are, statistically, immeasurable if the claim is that
>>> listeners can tell the better cable more than half the time.
>>>
>>>
>>> Agree or Disagree ?
>>
>>
>> I have no idea what you are asking.
>
>
>You are admitting that, for the purpose of statistical analysis,
>it would make no difference whether the participant determine
>or discern subtle differences based on sound differences
>or sound preferences during audio testing.
Look! If they could discern these differences then they would make
correct choices. Since enough didn't make correct choices, you have no
support for the existence of the subtle differences.
>
>Mr. Costich, do you still mean to say that mixing differences with
>preferences during testing would make no difference for the purpose of
>statistical analysis?
What the hell are you trying to ask?
>
>
>Yes or No ?
>
>
>
>
>> snip
>
>
>
>
>
>
>
>
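The binomial arithmetic behind the "61% of 39" passage quoted above is easy to check. Here is a minimal Python sketch (assuming "61% of 39" means 24 correct picks, since 24/39 ≈ 61.5%) that computes the one-sided p-value under the null hypothesis p = 0.5:

```python
from math import comb

def upper_tail_p(successes, n, p=0.5):
    # P(X >= successes) for X ~ Binomial(n, p): the chance of scoring
    # at least this well by pure guessing.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# 61% of 39 listeners is roughly 24 correct picks.
p_value = upper_tail_p(24, 39)
print(f"one-sided p-value: {p_value:.3f}")
```

The p-value comes out near 0.1, well above the conventional 0.05 significance level, which is exactly why the null hypothesis (random guessing) cannot be rejected from this sample.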
Oliver Costich
January 22nd 08, 07:25 AM
On Mon, 21 Jan 2008 22:07:03 -0500, George M. Middius <cmndr _ george
@ comcast . net> wrote:
>
>
>McInturd whined:
>
>> >:-) (Laughing at you, Mr. Krueger, not with you.)
>
>> Could you take this unrelated drivel someplace else.
>
>Ollie, your endless prattling about statistics is equally irrelevant to
>RAO. Have you even read the charter? I doubt it very much. You come off
>like a pompous, self-important poseur who has no opinions about audio but
>an endless supply of envy for those who make their livings in the field.
>
>
>
By the way this is from the charter: "This newsgroup is for discussing
scientific data, industry standards, testing procedures, engineering
and technical designs, scrutinizing claims, and related topics as it
pertains to audio."
Where's your problem?
Arny Krueger
January 22nd 08, 11:57 AM
"JBorg, Jr." > wrote in message
> Mr. Costich opined that disproving the sound differences
> heard by audiophiles do not physically exist is -- a
> certainty not in the realm of statistical analysis.
(1) Statistical analyses do not prove or disprove anything with absolute
certainty.
(2) Negative hypotheses are practically impossible to prove.
Following borglet's thinking - we should all sell everything we own and
spend it all on a wild night in Las Vegas, because we cannot prove with
absolute certainty that the world will end tomorrow.
Arny Krueger
January 22nd 08, 11:58 AM
"Oliver Costich" > wrote in
message
>
> I was referring to the particular post. On the other
> hand, based on your posts and ability to frame a question
> or provide a rational response, there is sufficient
> evidence to support the claim that you're an idiot at the
> 99% confidence level.
No, he's just a RAO troll, following in the footsteps of his spiritual
leader, Middius.
Arny Krueger
January 22nd 08, 12:01 PM
"dizzy" > wrote in message
> Arny Krueger wrote:
>
>>> I responded to this claim:
>>> "Not that can be retrieved using the search engine at
>>> www.aes.org, Mr. Krueger, using all the alternative
>>> spellings
>>> of your name, and searching both the index of published
>>> papers
>>> and the preprint index. Could you supply the references,
>>> please."
>> Mr Atkinson seems to have me confused with his research
>> department. I guess economic cut-backs have affected the
>> staffing at Stereophile and instead of relying on paid
>> staff, Mr Atkinson has been forced to go begging for
>> help on Usenet. :-(
>
> That's quite the illogical (and snotty) remark, Arny.
How snotty and hypocritical of you dizzy!
> Would it be expected, or even ethical, for him to have
> his employees doing research regarding your USENET
> claims, Arny, considering that they have nothing to do
> with Stereophile?
Given that Atkinson spent Stereophile money on my all-expenses paid trip to
New York regarding my USENET claims, one might say yes.
BTW it was a fun trip,
I won! ;-)
John Atkinson[_2_]
January 22nd 08, 12:37 PM
On Jan 21, 9:26 pm, Oliver Costich > wrote:
> Publications, degrees, etc. are not correlated with
> wisdom to the degree most people assume.
I agree. But please note that I am not the one claiming to
possess authority based on a non-existent publications
resume.
John Atkinson
Editor, Stereophile
"Well-informed" - The Wall Street Journal
Clyde Slick
January 22nd 08, 02:19 PM
On 22 Jan, 01:25, "JBorg, Jr." > wrote:
> > Oliver Costich wrote:
> >> JBorg, Jr. wrote:
> >>> Oliver Costich wrote:
> >>>> JBorg, Jr. wrote:
> >>>>> Oliver Costich wrote:
> >>>>>> Mr.clydeslick wrote:
> >>>>>>> Oliver Costich wrote:
>
> >>>>>>> snip
>
> >>>>>>> Back to reality: 61% correct in one experiment fails to reject
> >>>>>>> that they can't tell the difference. If the claim is that
> >>>>>>> listeners can tell the better cable more than half the time,
> >>>>>>> then to support that you have to be able to reject that, in the
> >>>>>>> population of all audio-interested listeners, correct
> >>>>>>> guesses occur half the time or less. 61% of 39 doesn't do it.
> >>>>>>> (Null hypothesis is p=.5, alternative hypothesis is p>.5. The
> >>>>>>> null hypothesis cannot be rejected with the sample data given.)
>
> >>>>>>> In other words, that 61% of a sample of 39 got the correct
> >>>>>>> result isn't sufficient evidence that in the general population
> >>>>>>> of listeners more than half can pick the better cable.
>
> >>>>>>> So, I'd say "that's hardly that".
>
> >>>>>> you seem to be mixing difference with preference, you reference
> >>>>>> both, for the same test.
>
> >>>>> For the purpose of statistical analysis it makes no difference.
>
> >>>> But for the purpose of sensible analysis, shouldn't it make a
> >>>> difference?
>
> >>> I don't think so. I can't see any way the statistical analysis would
> >>> be different.
>
> >> Preferences are, statistically, immeasurable if the claim is that
> >> listeners can tell the better cable more than half the time.
>
> >> Agree or Disagree ?
>
> > I have no idea what you are asking.
>
> You are admitting that, for the purpose of statistical analysis,
> it would make no difference whether the participant determine
> or discern subtle differences based on sound differences
> or sound preferences during audio testing.
>
> Mr. Costich, do you still mean to say that mixing differences with
> preferences during testing would make no difference for the purpose of
> statistical analysis?
>
> Yes or No ?
>
Hehehe, to be fair, give him the option to answer
"I don't know"!!!!
Clyde Slick
January 22nd 08, 02:21 PM
On 22 Jan, 01:56, Oliver Costich > wrote:
> On Tue, 22 Jan 2008 03:37:05 GMT, "JBorg, Jr."
>
>
>
>
>
> > wrote:
> >> Oliver Costich wrote:
> >>> JBorg, Jr. wrote:
> >>>> John Corbett wrote:
>
> >>>> Well, I am a statistician.
> >>>> You seem to be so confused about statistics that you can neither
> >>>> perform the calculations nor understand what they mean.
>
> >>> Hello Mr. Corbett, I would like to know if it is appropriate to
> >>> assume that disproving sound differences heard by audiophiles
> >>> that I presume physically exist is -- a certainty not in the
> >>> realm of statistical analysis.
>
> >> Then what is it in the realm of? Religion?
>
>No, Mr. Costich. Disproving the presence of subtle sound differences
> >heard by audiophiles is not in the realm of religion.
>
>As a statistician, how could you say that?
>
> Your premise is "sound differences heard by audiophiles that I presume
> physically exist". This is more mealy mouthed golden ears bull****.
> Some things sound different, some don't. When the experiments say they
> don't the true believers come up with convoluted nonsense based on
> assumptions with no basis other than religion-like belief.
>
"When the experiments say they
don't" --- no such thing!!!!
It says that 'those particular people' may not have heard differences.
Clyde Slick
January 22nd 08, 02:23 PM
On 22 Jan, 02:00, Oliver Costich > wrote:
> On Tue, 22 Jan 2008 03:48:18 GMT, "JBorg, Jr."
>
>
>
>
>
> > wrote:
> >> Oliver Costich wrote:
> >>> JBorg, Jr.wrote:
>
> >>> Well now! Disproving that the sound differences heard by
> >>> audiophiles do not physically exist is -- certainty not in the
> >>> realm of statistical analysis.
>
> >> Disproving that the sound differences BELIEVED to be heard by
> >> audiophiles actually exist is not provable or disprovable by
> >> statistical methods [if] your standard is 100% certainty. Nothing is,
> >> other than 1+1=2 and its ilk.
>
> >If that is the case, what are the reason(s) you persistently refer to
> >audiophiles as *golden ear cult*, and why?
>
> Because they always fall back on bull**** like this when they fail to
> produce evidence.
>
>
>
> >> That's what the argument is about - some claim to hear things that
> >> allow them the distinguish but can't (at least in this test)
> >> demonstrate it.
>
> >But the test did not prove that the subtle difference did not exist.
>
> Of course not absolutely. But then again disproving that something
> exists when no one has observed it is pretty hard, like for
> leprechauns.
>
>You are not proving whether or not differences exist.
>They may exist for some people, but not exist for others.
>We are talking about perceptions.
>There is no "THING" to exist, or not exist.
Oliver Costich
January 22nd 08, 04:04 PM
On Tue, 22 Jan 2008 06:25:34 GMT, "JBorg, Jr."
> wrote:
>> Oliver Costich wrote:
>>> JBorg, Jr. wrote:
>>>> Oliver Costich wrote:
>>>>> JBorg, Jr. wrote:
>>>>>> Oliver Costich wrote:
>>>>>>> Mr.clydeslick wrote:
>>>>>>>> Oliver Costich wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> snip
>>>>>>>>
>>>>>>>> Back to reality: 61% correct in one experiment fails to reject
>>>>>>>> that they can't tell the difference. If the claim is that
>>>>>>>> listeners can tell the better cable more than half the time,
>>>>>>>> then to support that you have to be able to reject that, in the
>>>>>>>> population of all audio-interested listeners, correct
>>>>>>>> guesses occur half the time or less. 61% of 39 doesn't do it.
>>>>>>>> (Null hypothesis is p=.5, alternative hypothesis is p>.5. The
>>>>>>>> null hypothesis cannot be rejected with the sample data given.)
>>>>>>>>
>>>>>>>> In other words, that 61% of a sample of 39 got the correct
>>>>>>>> result isn't sufficient evidence that in the general population
>>>>>>>> of listeners more than half can pick the better cable.
>>>>>>>>
>>>>>>>> So, I'd say "that's hardly that".
>>>>>>>
>>>>>>> you seem to be mixing difference with preference, you reference
>>>>>>> both, for the same test.
>>>>>>
>>>>>> For the purpose of statistical analysis it makes no difference.
>>>>>
>>>>> But for the purpose of sensible analysis, shouldn't it make a
>>>>> difference?
>>>>
>>>> I don't think so. I can't see any way the statistical analysis would
>>>> be different.
>>>
>>> Preferences are, statistically, immeasurable if the claim is that
>>> listeners can tell the better cable more than half the time.
>>>
>>>
>>> Agree or Disagree ?
>>
>>
>> I have no idea what you are asking.
>
>
>You are admitting that, for the purpose of statistical analysis,
>it would make no difference whether the participant determine
>or discern subtle differences based on sound differences
>or sound preferences during audio testing.
>
>Mr. Costich, do you still mean to say that mixing differences with
>preferences during testing would make no difference for the purpose of
>statistical analysis?
>
>
>Yes or No ?
>
>
Yes. Preference is a one-tailed test. Difference is a two-tailed test.
But who cares if they can tell a difference if you're trying to sell
expensive cable.
>
>
>> snip
>
>
>
>
>
>
>
>
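Costich's one-tailed/two-tailed distinction can be made concrete with the same hypothetical numbers used earlier in the thread (24 correct out of 39). For the symmetric null p = 0.5, the usual convention puts the two-sided p-value at twice the upper tail:

```python
from math import comb

def tail(successes, n, p=0.5):
    # P(X >= successes) under Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

n, correct = 39, 24
one_tailed = tail(correct, n)          # H1: p > 0.5 ("preference")
two_tailed = min(1.0, 2 * one_tailed)  # H1: p != 0.5 ("difference")
print(f"one-tailed: {one_tailed:.3f}  two-tailed: {two_tailed:.3f}")
```

Because the two-tailed p-value is double the one-tailed one here, a "difference" claim is strictly harder to support than a "preference" claim from the same data.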
Oliver Costich
January 22nd 08, 04:07 PM
On Tue, 22 Jan 2008 06:23:01 -0800 (PST), Clyde Slick
> wrote:
>On 22 Jan, 02:00, Oliver Costich > wrote:
>> On Tue, 22 Jan 2008 03:48:18 GMT, "JBorg, Jr."
>>
>>
>>
>>
>>
>> > wrote:
>> >> Oliver Costich wrote:
>> >>> JBorg, Jr.wrote:
>>
>> >>> Well now! Disproving that the sound differences heard by
>> >>> audiophiles do not physically exist is -- certainty not in the
>> >>> realm of statistical analysis.
>>
>> >> Disproving that the sound differences BELIEVED to be heard by
>> >> audiophiles actually exist is not provable or disprovable by
>> >> statistical methods [if] your standard is 100% certainty. Nothing is,
>> >> other than 1+1=2 and its ilk.
>>
>> >If that is the case, what are the reason(s) you persistently refer to
>> >audiophiles as *golden ear cult*, and why?
>>
>> Because they always fall back on bull**** like this when they fail to
>> produce evidence.
>>
>>
>>
>> >> That's what the argument is about - some claim to hear things that
>> >> allow them the distinguish but can't (at least in this test)
>> >> demonstrate it.
>>
>> >But the test did not prove that the subtle difference did not exist.
>>
>> Of course not absolutely. But then again disproving that something
>> exists when no one has observed it is pretty hard, like for
>> leprechauns.
>>
>
>You are not proving whether or not differences exist.
>They may exist for some people, but not exist for others.
>We are talking about perceptions.
>There is no "THING" to exist, or not exist.
Then these "perceptions" should be good enough to get statistically
valid results. Testing an individual is different from testing a
population.
George M. Middius
January 22nd 08, 04:37 PM
Ollie Not-So-Jolly said:
> >Ollie, your endless prattling about statistics is equally irrelevant to
> >RAO. Have you even read the charter? I doubt it very much. You come off
> >like a pompous, self-important poseur who has no opinions about audio but
> >an endless supply of envy for those who make their livings in the field.
> By the way this is from the charter: "This newsgroup is for discussing
> scientific data, industry standards, testing procedures, engineering
> and technical designs, scrutinizing claims, and related topics as it
> pertains to audio."
> Where's your problem?
How 'borgish of you to excerpt the tiniest, out-of-context rationalization
for your pollution of RAO. Why don't you review the *entire* charter? Get
back to me when you have figured out what "opinion" means.
Oliver Costich
January 22nd 08, 05:42 PM
On Tue, 22 Jan 2008 11:37:42 -0500, George M. Middius <cmndr _ george
@ comcast . net> wrote:
>
>
>Ollie Not-So-Jolly said:
>
>> >Ollie, your endless prattling about statistics is equally irrelevant to
>> >RAO. Have you even read the charter? I doubt it very much. You come off
>> >like a pompous, self-important poseur who has no opinions about audio but
>> >an endless supply of envy for those who make their livings in the field.
>
>> By the way this is from the charter: "This newsgroup is for discussing
>> scientific data, industry standards, testing procedures, engineering
>> and technical designs, scrutinizing claims, and related topics as it
>> pertains to audio."
>> Where's your problem?
>
>How 'borgish of you to excerpt the tiniest, out-of-context rationalization
>for your pollution of RAO. Why don't you review the *entire* charter? Get
>back to me when you have figured out what "opinion" means.
>
>
>
How is this out of context?
Evidently your definition of "opinions" excludes subjecting them to
standard scientific method.
George M. Middius
January 22nd 08, 07:04 PM
McInturd said:
> >How 'borgish of you to excerpt the tiniest, out-of-context rationalization
> >for your pollution of RAO. Why don't you review the *entire* charter? Get
> >back to me when you have figured out what "opinion" means.
> How is this out of context?
Apparently you don't read very well, or maybe you only read the parts that
appeal to your 'borgish nature.
> Evidently your definition of "opinions" excludes subjecting them to
> standard scientific method.
BWAHAHAHAHAHA!!!! "Scientific method" on a Usenet chat group! LOLOL!
Shhhh! I'm Listening to Reason!
January 22nd 08, 09:25 PM
On Jan 22, 5:58 am, "Arny Krueger" > wrote:
> No, he's just a RAO troll, following in the footsteps of his spiritual
> leader, Middius.
I must have missed where you'd "been there, done that".
Where are those articles you authored or coauthored for the JAES?
Publication dates? Subjects? Coauthors names?
Where is your degree from? Which discipline was it conferred in?
I so much want you to be my spiritual leader, GOIA, but I will not
follow a liar. In fact, I think I'd rather follow a troll than a liar.
Trolls are generally harmless and funny. Liars tend to be mean and
evil and filled with the devil's spirit.
Why not come clean, GOIA? The truth shall set you free.
Shhhh! I'm Listening to Reason!
January 22nd 08, 09:30 PM
On Jan 22, 6:01 am, "Arny Krueger" > wrote:
> "dizzy" > wrote in message
>
>
>
>
>
>
>
> > Arny Krueger wrote:
>
> >>> I responded to this claim:
> >>> "Not that can be retrieved using the search engine at
> >>>www.aes.org, Mr. Krueger, using all the alternative
> >>> spellings
> >>> of your name, and searching both the index of published
> >>> papers
> >>> and the preprint index. Could you supply the references,
> >>> please."
> >> Mr Atkinson seems to have me confused with his research
> >> department. I guess economic cut-backs have affected the
> >> staffing at Stereophile and instead of relying on paid
> >> staff, Mr Atkinson has been forced to go begging for
> >> help on Usenet. :-(
>
> > That's quite the illogical (and snotty) remark, Arny.
>
> How snotty and hypocritical of you dizzy!
Why would you say that?
Am I "snotty and hypocritical" for asking you to back up a claim you
made?
You're a sick individual.
> > Would it be expected, or even ethical, for him to have
> > his employees doing research regarding your USENET
> > claims, Arny, considering that they have nothing to do
> > with Stereophile?
>
> Given that Atkinson spent Stereophile money on my all-expenses paid trip to
> New York regarding my USENET claims, one might say yes.
One might, but one isn't.
Translation: "I lied. I have never been published in JAES. I do not
have an engineering degree except the one I conferred upon myself in
my head."
> BTW it was a fun trip,
New York is a fun city.
> I won! ;-)
So you 'won' plane fare and a hotel room. A total of $1000? $1500?
And now you use this as justification for your lies. It is said that
every man has a price at which he will sell his integrity. Your price
was exceedingly low, GOIA. You should have held out for $1750.
George M. Middius
January 22nd 08, 09:42 PM
Shhhh! said:
> Where are those articles you authored or coauthored for the JAES?
> Publication dates? Subjects? Coauthors names?
I believe Turdy is konfused again. It wasn't the JAES but a different
periodical whose initials are JOSM. You can actually order it on Amazon:
http://tinyurl.com/2yxcnd
Shhhh! I'm Listening to Reason!
January 22nd 08, 09:57 PM
On Jan 21, 8:51 pm, Oliver Costich > wrote:
> On Mon, 21 Jan 2008 13:01:21 -0800 (PST), "Shhhh! I'm Listening to
> Reason!" > wrote:
> >On Jan 21, 1:00 pm, Oliver Costich > wrote:
> >> On Sat, 19 Jan 2008 10:54:17 -0800 (PST), "Shhhh! I'm Listening to
>
> >> Reason!" > wrote:
> >> >On Jan 19, 3:10 am, Oliver Costich > wrote:
> >> >> On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
> >> >> Reason!" > wrote:
> >> >> >Oliver Costich wrote
>
> >> >> >> By the way, I don't use lamp cord or Home Depot interconnects in my
> >> >> >> system.
>
> >> >> >I do not use expensive wires or cables in my system. I just don't
> >> >> >really care if others do.
>
> >> >> I don't care what they use. I do care that they want to justify it
> >> >> with sloppy logic and BS.
>
> >> >I haven't seen any justifications, though I haven't read all the posts
> >> >in this thread.
>
> >> >As a matter of curiosity, what would happen to the results if, out of
> >> >a sample of 100 participants, 50 selected a certain product correctly
> >> >100% of the time and the other 50 selected incorrectly 100% of the
> >> >time?
>
> >> Would it be unusual to get 50 heads when flipping a coin 100 times?
> >> Getting 50 correct answers out of 100 participants is exactly what you
> >> would expect from random guessing (flipping coins).
>
> >Sorry, I didn't state my question clearly.
>
> >Assume a test with 100 participants and 10 trials. 50 of the
> >participants score 100% on all 10 trials (or correct at a
> >statistically significant level). 50 score 0% (or at some
> >statistically insignificant level) on all 10 trials. Would not the
> >overall results still show "random guessing"? Is so, could you still
> >reasonably attribute the results of those 50 that got it correct 100%
> >of the time to random guessing?
>
> What exactly are you testing? If it is whether individuals can
> correctly determine what you are testing, then for those who got them
> all correct, you can support the claim that they are right more than
> half the time (rather than guessing). For the others you can support
> that they are wrong more than half the time. Alternatively, you could
> be testing the whole population with 1000 trials; if 50 people got
> their 500 right and the other 50 got theirs all wrong, you'd toss the
> experiment as simply too bizarre. What do you think is the likelihood
> of a randomly selected sample giving that outcome? You'd look for
> other factors to explain the results. Maybe a statistics course is in
> order.
I've taken statistics.
I think a true "random" population is counterproductive for perception
tests, as I said. In a true random sample of which painting someone
preferred, I'd expect the distribution of the random population sample
to approximate the percentages of colorblind, or totally blind, people
found in the general population, for example. One or two of that
sample may even know something about art.
> >I'm not suggesting that this was the case here, or relating this in
> >any way to the WSJ article. I'm just curious. It seems to me that for
> >issues of perception a truly "random" population is counterproductive.
>
> It is unless you are looking to home in on the truth. I don't know of
> any statistical method for drawing conclusions about population
> parameters from sample statistics that doesn't require that samples be
> simple random samples. Randomness alone is not enough. It has to be
> simple random, which in this particular case means that every group of
> 39 has an equally likely chance of being selected. One of the problems
> with this test is that the "respondents" were self-selected or
> otherwise not randomly selected.
You are going down a road I just specifically excluded. Why?
> It's like taking a poll on the death
> penalty by asking people who walk by your front door. If the test was
> sponsored by anyone who has an interest in speaker cable differences
> being heard, then again the test is suspect. Virtually every
> elementary statistics text gives similar examples of faulty data
> collection.
Tell that to the opponents of global warming here. They do not
understand that. One of those people is even now claiming "proofs" in
this very thread. Isn't that ironic?
I understand that. Critical listening is not something people are born
with. Arny, for example, has stated that several times. So have
several others who are actually involved in audio testing. So you
necessarily have to select from a group of those who are interested in
the thing being tested if you use audio or some other related area of
perception as an example.
Otherwise, I would expect the test results to show "random guessing"
100% of the time.
Again, I don't really care. If cables were important to me, I'd buy
what I liked regardless. It's just not that big of a deal to me.
> The population need not be the whole world but just limiting it to
> people who think you can tell would still give a huge population
> relative to 39. It's interesting that even in this population, which I
> assume contained the 39 tested, the results are insufficient to
> support that hypothesis.
Perhaps Arny should go down this path. He could take out an ad in
Stereophile looking for people who claim to be able to hear the
differences in cables. Once he got, say, 250 people that he trained as
critical listeners he could randomly select 50 of them to take the
test. He could hold the test at one of the hotels around CES. He could
hold a drawing out of the entire population for a set of really
expensive cables as an ironic twist to generate interest (he should
buy these cables at retail so that nobody can cast aspersions).
This test would, unfortunately, cost Arny more than the $1000 that he
recently sold his integrity for.
> Better yet, make the population those who have not decided that you
> can't tell.
I think Arny's contest would settle the issue. I hope he follows
through with it.
> The condition is that you take a simple random sample from the
> population of interest. Then the result will hold only for that
> population.
I think Arny should go for it. Don't you?
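The 100-participant thought experiment quoted in this post (50 scoring 100% on all 10 trials, 50 scoring 0%) is easy to work through numerically. A sketch with those hypothetical numbers shows how pooling masks two sharply different subgroups:

```python
from math import comb

def upper_tail_p(successes, n, p=0.5):
    # P(X >= successes) under Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

trials = 10
scores = [10] * 50 + [0] * 50   # 50 perfect scorers, 50 who miss every trial

# Pooled: 500 correct out of 1000 trials -- exactly what guessing predicts,
# so the pooled p-value sits near 0.5 and the aggregate looks like chance.
pooled_p = upper_tail_p(sum(scores), trials * len(scores))

# Per individual: 10/10 under guessing has probability (1/2)**10.
individual_p = upper_tail_p(10, trials)

print(f"pooled p-value: {pooled_p:.3f}")
print(f"individual 10/10 p-value: {individual_p:.6f}")
```

Pooled, the data are indistinguishable from coin-flipping; per individual, every result is wildly non-random. Which analysis applies depends on whether the claim is about the population or about individuals, which is the distinction being drawn in this exchange.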
George M. Middius
January 22nd 08, 10:30 PM
Shhhh! said:
> I think a true "random" population is counterproductive for perception
> tests, as I said.
Sorry, but this notion runs counter to borgma. All persons are the same,
utterly interchangeable, with no differences in perceptual ability.
"Testing" one is the same as "testing" a thousand.
Oliver Costich
January 22nd 08, 11:02 PM
On Tue, 22 Jan 2008 13:57:41 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
>On Jan 21, 8:51*pm, Oliver Costich > wrote:
>> On Mon, 21 Jan 2008 13:01:21 -0800 (PST), "Shhhh! I'm Listening to
>
>> Reason!" > wrote:
>> >On Jan 21, 1:00*pm, Oliver Costich > wrote:
>> >> On Sat, 19 Jan 2008 10:54:17 -0800 (PST), "Shhhh! I'm Listening to
>>
>> >> Reason!" > wrote:
>> >> >On Jan 19, 3:10*am, Oliver Costich > wrote:
>> >> >> On Fri, 18 Jan 2008 13:19:23 -0800 (PST), "Shhhh! I'm Listening to
>> >> >> Reason!" > wrote:
>> >> >> >Oliver Costich wrote
>>
>> >> >> >> By the way, I don't use lamp cord or Home Depot interconnects in my
>> >> >> >> system.
>>
>> >> >> >I do not use expensive wires or cables in my system. I just don't
>> >> >> >really care if others do.
>>
>> >> >> I don't care what they use. I do care that they want to justify it
>> >> >> with sloppy logic and BS.
>>
>> >> >I haven't seen any justifications, though I haven't read all the posts
>> >> >in this thread.
>>
>> >> >As a matter of curiosity, what would happen to the results if, out of
>> >> >a sample of 100 participants, 50 selected a certain product correctly
>> >> >100% of the time and the other 50 selected incorrectly 100% of the
>> >> >time?
>>
>> >> Would it be unusual to get 50 heads when flipping a coin 100 times?
>> >> Getting 50 correct answers out of 100 participants is exactly what you
>> >> would expect from random guessing (flipping coins).
>>
>> >Sorry, I didn't state my question clearly.
>>
>> >Assume a test with 100 participants and 10 trials. 50 of the
>> >participants score 100% on all 10 trials (or correct at a
>> >statistically significant level). 50 score 0% (or at some
>> >statistically insignificant level) on all 10 trials. Would not the
>> >overall results still show "random guessing"? If so, could you still
>> >reasonably attribute the results of those 50 that got it correct 100%
>> >of the time to random guessing?
>>
>> What exactly are you testing? If it is whether individuals can
>> correctly determine what you are testing, then for those who got them all
>> correct, you can support the claim they are right more than half the
>> time (versus guessing). For the others you can support that they are wrong
>> more than half the time. Alternatively, if you were testing the whole
>> population with 1000 trials, and 50 people got their 500 right and 50
>> got theirs wrong, you'd toss the experiment as simply too bizarre. What do you
>> think is the likelihood of a randomly selected sample giving that
>> outcome? You'd look for other factors to explain the results. Maybe a
>> statistics course is in order.
>
>I've taken statistics.
>
>I think a true "random" population is counterproductive for perception
>tests, as I said. In a true random sample of which painting someone
>preferred, I'd expect the distribution of the random population sample
>to approximate the percentages of colorblind, or totally blind, people
>found in the general population, for example. One or two of that
>sample may even know something about art.
First, for testing hypotheses, a random sample isn't enough. It needs
to be a SIMPLE random sample. There is a difference between a random
population (whatever that is, but I get the gist) and a random sample
from a population. You should have learned from that course that a
population need not include everyone.
What would you use to test perceptions?
>
>> >I'm not suggesting that this was the case here, or relating this in
>> >any way to the WSJ article. I'm just curious. It seems to me that for
>> >issues of perception a truly "random" population is counterproductive.
>>
>> It is unless you are looking to home in on the truth. I don't know of
>> any statistical method for drawing conclusions about population
>> parameters from sample statistics that doesn't require that samples be
>> simple random samples. Randomness alone is not enough. It has to be
>> simple random, which in this particular case means that every group of
>> 39 has an equally likely chance of being selected. One of the problems
>> with this test is that the "respondents" were self-selected or
>> otherwise not randomly selected.
>
>You are going down a road I just specifically excluded. Why?
Because a "random population" is not a term used in statistics.
Populations are the whole collection of entities for which you want to
test (or estimate) a parameter. Samples can be random but populations
can't. It's not clear what you are talking about.
If you are suggesting that the population of concern is not everyone
who can hear, fine. Is it people who listen to music? How narrow do
you want to make it? It depends what you are out to test.
>
>> It's like taking a poll on the death
>> penalty by asking people who walk by your front door. If the test was
>> sponsored by anyone who has an interest in speaker cable differences
>> being heard, then again the test is suspect. Virtually every
>> elementary statistics text gives similar examples of faulty data
>> collection.
>
>Tell that to the opponents of global warming here. They do not
>understand that. One of those people is even now claiming "proofs" in
>this very thread. Isn't that ironic?
>
>I understand that. Critical listening is not something people are born
>with. Arny, for example, has stated that several times. So have
>several others who are actually involved in audio testing. So you
>necessarily have to select from a group of those who are interested in
>the thing being tested if you use audio or some other related area of
>perception as an example.
You can restrict the population that way if you choose, but then you
can't extend the conclusion of the test to larger ones. Everyone wants
to eliminate people who firmly believe you cannot distinguish between
the cables. You are left with people who believe you can tell and
those that don't know. You could further narrow it to people who
don't know and toss everyone with prejudices.
On the other hand, maybe you just want to make the population those who
claim that they can choose the more expensive cable. Is that the one
we're interested in? BTW, does anyone know how the sample at CES was
selected?
>
>Otherwise, I would expect the test results to show "random guessing"
>100% of the time.
>
>Again, I don't really care. If cables were important to me, I'd buy
>what I liked regardless. It's just not that big of a deal to me.
>
>> The population need not be the whole world but just limiting it to
>> people who think you can tell would still give a huge population
>> relative to 39. It's interesting that even in this population, which I
>> assume contained the 39 tested, the results are insufficient to
>> support that hypothesis.
>
>Perhaps Arny should go down this path. He could take out an ad in
>Stereophile looking for people who claim to be able to hear the
>differences in cables. Once he got, say, 250 people that he trained as
>critical listeners he could randomly select 50 of them to take the
>test. He could hold the test at one of the hotels around CES. He could
>hold a drawing out of the entire population for a set of really
>expensive cables as an ironic twist to generate interest (he should
>buy these cables at retail so that nobody can cast aspersions).
>
>This test would, unfortunately, cost Arny more than the $1000 that he
>recently sold his integrity for.
>
>> Better yet, make the population those who have not decided that you
>> can't tell.
>
>I think Arny's contest would settle the issue. I hope he follows
>through with it.
>
>> The condition is that you take a simple random sample from the
>> population of interest. Then the result will hold only for that
>> population.
>
>I think Arny should go for it. Don't you?
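[Editorial aside: for the 100-participant, 10-trial hypothetical quoted above, the binomial arithmetic shows why individual scores matter even when the aggregate looks like chance. Under pure guessing, perfect 10-for-10 scores should be vanishingly rare, so 50 of them would rule guessing out. A minimal sketch, using only the numbers from the thread's hypothetical:]

```python
from math import comb

n_trials, n_people = 10, 100          # the hypothetical from the thread
p_perfect = 0.5 ** n_trials           # chance one guesser goes 10-for-10
expected = n_people * p_perfect       # expected perfect scorers among 100 guessers

# Probability that 50 or more of the 100 guessers are all perfect:
p_fifty = sum(comb(n_people, k) * p_perfect**k * (1 - p_perfect)**(n_people - k)
              for k in range(50, n_people + 1))

print(f"{p_perfect:.6f} {expected:.3f} {p_fifty:.1e}")
```

[So an aggregate 50% hit rate produced by 50 perfect scorers and 50 perfect missers is nothing like coin flipping, which is why the individual-level pattern, not the pooled total, is what gets analyzed.]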
Oliver Costich
January 22nd 08, 11:12 PM
On Tue, 22 Jan 2008 13:57:41 -0800 (PST), "Shhhh! I'm Listening to
Reason!" > wrote:
[snip]
Sorry, I sent the response to the first part prematurely.
>Otherwise, I would expect the test results to show "random guessing"
>100% of the time.
Why? Do you think people who have no interest can't hear?
>
>Again, I don't really care. If cables were important to me, I'd buy
>what I liked regardless. It's just not that big of a deal to me.
That's fine, but a different issue. I also buy what I like. I have
heard cables that sound different from one another, and it's usually
due to some measurable characteristic of the cable. Better/worse is
harder to establish.
>
>> The population need not be the whole world but just limiting it to
>> people who think you can tell would still give a huge population
>> relative to 39. It's interesting that even in this population, which I
>> assume contained the 39 tested, the results are insufficient to
>> support that hypothesis.
>
>Perhaps Arny should go down this path. He could take out an ad in
>Stereophile looking for people who claim to be able to hear the
>differences in cables. Once he got, say, 250 people that he trained as
>critical listeners he could randomly select 50 of them to take the
>test. He could hold the test at one of the hotels around CES. He could
>hold a drawing out of the entire population for a set of really
>expensive cables as an ironic twist to generate interest (he should
>buy these cables at retail so that nobody can cast aspersions).
How would you select the 250 people? If they volunteer, biased data
once again. In spite of that, why not test all 250 people and use the
bigger sample? Results from that would allow you to make statistical
conclusions about ALL people who claim to be able to choose correctly
even if you have no idea who they all are.
>
>This test would, unfortunately, cost Arny more than the $1000 that he
>recently sold his integrity for.
>
>> Better yet, make the population those who have not decided that you
>> can't tell.
>
>I think Arny's contest would settle the issue. I hope he follows
>through with it.
>
>> The condition is that you take a simple random sample from the
>> population of interest. Then the result will hold only for that
>> population.
>
>I think Arny should go for it. Don't you?
If he wants to, but I don't think the results would influence the
"true believers" in any case.
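[Editorial aside: the panel-selection step Costich discusses above can be sketched in a few lines. Python's `random.sample` implements a simple random sample — every 50-person subset of the pool is equally likely — though, as he notes, the volunteer pool itself would still be self-selected. The names and seed here are hypothetical.]

```python
import random

volunteers = [f"listener_{i:03d}" for i in range(250)]  # hypothetical sign-ups
rng = random.Random(2008)                               # fixed seed for reproducibility
panel = rng.sample(volunteers, 50)                      # simple random sample:
                                                        # every 50-subset equally likely
print(len(panel), len(set(panel)))                      # 50 distinct listeners
```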
Oliver Costich
January 22nd 08, 11:14 PM
On Tue, 22 Jan 2008 14:04:57 -0500, George M. Middius <cmndr _ george
@ comcast . net> wrote:
>
>
>McInturd said:
>
>> >How 'borgish of you to excerpt the tiniest, out-of-context rationalization
>> >for your pollution of RAO. Why don't you review the *entire* charter? Get
>> >back to me when you have figured out what "opinion" means.
>
>> How is this out of context?
>
>Apparently you don't read very well, or maybe you only read the parts that
>appeal to your 'borgish nature.
>
>> Evidently your definition of "opinions" excludes subjecting them to
>> standard scientific method.
>
>BWAHAHAHAHAHA!!!! "Scientific method" on a Usenet chat group! LOLOL!
>
>
Sorry. I didn't realize we were limited here to the "pull it out of
your ass" approach.
Clyde Slick
January 22nd 08, 11:14 PM
On 22 Jan, 02:18, Oliver Costich > wrote:
> I smell a gold ear.
Prove it!!!!
Clyde Slick
January 22nd 08, 11:16 PM
On 22 Jan, 06:57, "Arny Krueger" > wrote:
>
> Following borglet's thinking - we should all sell everything we own and
> spend it all on a wild night in Las Vegas, because we cannot prove with
> absolute certainty that the world will end tomorrow.
Nor can you prove that you will actually have fun in Las Vegas.
Clyde Slick
January 22nd 08, 11:20 PM
On 22 Jan, 07:01, "Arny Krueger" > wrote:
>
> BTW it was a fun trip,
>
> I won! ;-)-
http://www.golf-products.co.uk/prodimages/booby%20prize.JPG
Clyde Slick
January 22nd 08, 11:26 PM
On 22 Jan, 11:07, Oliver Costich > wrote:
> On Tue, 22 Jan 2008 06:23:01 -0800 (PST), Clyde Slick
>
>
>
>
>
> > wrote:
> >On 22 Ian, 02:00, Oliver Costich > wrote:
> >> On Tue, 22 Jan 2008 03:48:18 GMT, "JBorg, Jr."
>
> >> > wrote:
> >> >> Oliver Costich wrote:
> >> >>> JBorg, Jr.wrote:
>
> >> >>> Well now! *Disproving that the sound differences heard by
> >> >>> audiophiles do not physically exist is -- certainly not in the
> >> >>> realm of statistical analysis.
>
> >> >> Disproving that the sound differences BELIEVED to be heard by
> >> >> audiophiles actually exist is not provable or disprovable by
> >> >> statistical methods [if] your standard is 100% certainty. Nothing is,
> >> >> other than 1+1=2 and its ilk.
>
> >> >If that is the case, what are the reason(s) you persistently refer to
> >> >audiophiles as *golden ear cult*, and why?
>
> >> Because they always fall back on bull**** like this when they fail to
> >> produce evidence.
>
> >> >> That's what the argument is about - some claim to hear things that
> >> >> allow them to distinguish but can't (at least in this test)
> >> >> demonstrate it.
>
> >> >But the test did not prove that the subtle difference did not exist.
>
> >> Of course not absolutely. But then again disproving that something
> >> exists when no one has observed it is pretty hard, like for
> >> leprechauns.
>
> >You are not proving whether or not differences exist.
> >they may exist for some people, but not exist for others.
> >we are talking about perceptions.
> >there is no "THING" to exist, or not exist.
>
> Then these "perceptions" should be good enough to get statistically
> valid results. Testing an individual is different than that for a
> population.
The population, or at least most of
the populations, are irrelevant.
As for individual perception, for
what other consumer preferences do you blind test yourself
and make statistical analyses?
What do you do about choosing Swiss cheese, steak, ice cream, underarm
deodorant, toilet paper,
strawberry jam, automobiles, pencil sharpeners, toasters, your wife?
Clyde Slick
January 22nd 08, 11:32 PM
On 22 Jan, 18:02, Oliver Costich > wrote:
> If you are suggesting that the population of concern is not everyone
> who can hear, fine. Is it people who listen to music? How narrow do
> you want to make it? It depends what you are out to test.
>
I want it to be people just like me. Identical to me, in every way,
shape and form.
>
> You can restrict the population that way if you choose but then you
> can't extend the conclusion of the test to larger ones.
You can't extend the conclusion to anyone
other than those who took the test.
Clyde Slick
January 22nd 08, 11:33 PM
On 22 Jan, 18:12, Oliver Costich > wrote:
>
> How would you select the 250 people?
I wouldn't.
George M. Middius
January 22nd 08, 11:52 PM
McInturd said:
> If you are suggesting that the population of concern is not everyone
> who can hear, fine. Is it people who listen to music? How narrow do
> you want to make it? It depends what you are out to test.
[snip]
> You can restrict the population that way if you choose but then you
> can't extend the conclusion of the test to larger ones. Everyone wants
> to eliminate people who firmly believe you cannot distinguish between
> the cables. You are left with people who believe you can tell and
> those that don't know. You could further narrow it to people who
> don't know and toss everyone with prejudices.
I nominate Ollie the Collie for this month's RAO Obtuseness Award.
According to Ollie's illogic, haute cuisine should be judged by people who
never dine at fine restaurants. And art should be judged by people who can
barely read their comic books. And jewelry should be judged by those who
never purchase it and never wear it, and fine wine by those who
customarily knock back boilermakers and Thunderbird.
Let's hear it for the uninitiated, says Ollie the Molly; their opinions
are every bit as valuable as those of people who have spent years
appreciating the best goods on the market.
George M. Middius
January 22nd 08, 11:55 PM
McInturd said:
> >> >How 'borgish of you to excerpt the tiniest, out-of-context rationalization
> >> >for your pollution of RAO. Why don't you review the *entire* charter? Get
> >> >back to me when you have figured out what "opinion" means.
> >
> >> How is this out of context?
> >
> >Apparently you don't read very well, or maybe you only read the parts that
> >appeal to your 'borgish nature.
No "rebuttal" from the statistics-lover?
> >> Evidently your definition of "opinions" excludes subjecting them to
> >> standard scientific method.
> >
> >BWAHAHAHAHAHA!!!! "Scientific method" on a Usenet chat group! LOLOL!
> Sorry. I didn't realize we were limited her to the "pull it out of
> your ass" approach.
That's what Normals call an "excluded middle argument". Krooger kalls it
"abuse". Are you proud of yourself for abusing the Krooborg?
In seriousness, the notion that statistical prediction is part of the
scientific method used by real scientists is new to me. Did you misspeak,
or is a huge leap of logic invisible to me?
George M. Middius
January 22nd 08, 11:57 PM
Clyde Slick said:
> > You can restrict the population that way if you choose but then you
> > can't extend the conclusion of the test to larger ones.
> You can't extend the conclusion to anyone,
> other than those who took the test.
Another violation of borgma. Are you trying to set off a jihad on RAO?
Oliver Costich
January 23rd 08, 01:32 AM
On Tue, 22 Jan 2008 15:26:04 -0800 (PST), Clyde Slick
> wrote:
[snip]
>
>The population, or at least most of
>the populations, are irrelevant.
>As for indiviual perception, for
>waqht other consumer preferences do you blind test yourself for
>and make statistiacal analyses?
>What do you do about choosing Swiss cheese, steak, ice cream, undearm
>deoderant, toilet paper,
>strawberry jam, automobiles, pencil sharpeners, toasters, your wife?
>
You are missing the point. This was about a test that purported to
show something.