  #1
Since Quaaludeovic is so fond of Sean Olive

I had to share this from RAHE.

wrote:
Harry Lavo wrote:


In this test. That's all you can say for sure. However, it is not an
uncommon phenomenon in ABX testing. Sean Olive reportedly has to screen out
the majority of potential testers because they cannot discriminate when he
starts training for his ABX tests, even when testing for known differences
in sound.


Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.


This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you cease referring to it until you learn
something about it.


In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.
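That 'perfectly wrong' result is just a Pearson correlation of -1 between the listener's two sets of preference ratings. A minimal sketch, with hypothetical ratings (not data from the paper):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical preference ratings for the same four loudspeakers by one
# listener in two separate tests; the second order is exactly reversed.
four_way = [8.0, 6.5, 5.0, 3.5]
three_way = [3.5, 5.0, 6.5, 8.0]
print(round(pearson(four_way, three_way), 6))  # -1.0: perfectly 'wrong'
```

A listener who was merely guessing would drift toward a correlation of zero; a systematic reversal like this points at something else, which is what the audiometry turned up.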


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistency of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that is,
least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.
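Olive's actual metric comes out of an analysis of variance on the ratings; the formula below is not his, just a toy illustration of the same idea -- between-speaker separation divided by within-speaker repeat-rating scatter -- with made-up numbers:

```python
def performance_metric(ratings):
    """Toy listener-performance score: variance of the listener's mean
    ratings across speakers, divided by the mean variance of their
    repeated ratings of each speaker. High = discriminating and
    consistent. Illustrative only; NOT Olive's published formula."""
    means = [sum(r) / len(r) for r in ratings]
    grand = sum(means) / len(means)
    between = sum((m - grand) ** 2 for m in means) / len(means)
    within = sum(
        sum((x - m) ** 2 for x in r) / len(r)
        for r, m in zip(ratings, means)
    ) / len(ratings)
    return between / within

# Hypothetical repeat ratings of three loudspeakers by two listeners:
sharp = [[8, 8, 7], [5, 5, 6], [2, 2, 3]]   # separates speakers, repeats agree
vague = [[6, 4, 8], [5, 7, 3], [6, 4, 5]]   # overlapping, inconsistent ratings
print(performance_metric(sharp) > performance_metric(vague))  # True
```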


I quote: "The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeaker had the smoothest, flattest and most extended frequency
response, maintained uniformly off axis, in anechoic acoustic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid, and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year' by the same audiophile mag. (I wonder which?)
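Agreement on rank order, as opposed to agreement on absolute ratings, is what a Spearman rank correlation measures. A minimal tie-free sketch over hypothetical group means (not the paper's data):

```python
def spearman(x, y):
    """Spearman rank correlation for two lists without tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical mean preference ratings of four speakers from two
# different listener groups (say, retailers vs. trained listeners).
# The absolute numbers differ, but the ordering is identical.
group_a = [7.1, 5.9, 4.2, 3.0]
group_b = [6.8, 6.1, 3.9, 2.5]
print(spearman(group_a, group_b))  # 1.0: identical rank order
```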


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors that have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person's
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."


  #2   Arny Krueger

" wrote in
message

[Olive 2003 summary snipped; quoted in full in post #1]


Mike, you mean that Ludovic and Harry have been
misrepresenting DBTs in general and Sean Olive's work again?

All those who are surprised, please raise your hand and I'll
put a nice dunce cap in it for you to wear! ;-)


  #3   Harry Lavo


" wrote in message
ink.net...
[Olive 2003 summary snipped; quoted in full in post #1]


You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own dumb
mistake.

Next I was challenged by Bob that Sean didn't use ABX testing, to which I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet posts
on the subject.

At which point Bob replied that, well, Sean wasn't Harman and those other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman International
testing department.

More than a little crap going down here.


  #4   Arny Krueger

"Harry Lavo" wrote in message


At which point Bob replied that, well, Sean wasn't Harman
and those other references don't count.

Oh no, he just works with Floyd Toole as the entire
Harman International testing department.


More than a little crap going down here.


A lot of crap, and generally from the golden ears.

Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over the
idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move them
around a bit in the room.
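For what it's worth, an ABX run is scored against the null hypothesis of guessing: the one-sided binomial probability of getting at least that many trials right when each trial is a coin flip. A minimal sketch with a hypothetical run:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value for an ABX run: the probability of
    getting at least `correct` right out of `trials` purely by
    guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical run: 14 correct identifications in 16 trials.
p = abx_p_value(14, 16)
print(round(p, 4))  # 0.0021: very unlikely to be guessing
```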

Monadic testing of speakers is also a red herring for
similar reasons and then some. Since there's no controversy
over the idea that speakers sound different, the ABX test
would be a poor choice. Speaker tests by so-called
objectivists have been monadic for one or more decades.
Check out the AES22 standard, including its speaker
evaluation form, which can be downloaded from the web site
belonging to that well-known coven of objectivists - the AES.

So, when Lavo tries to claim some kind of victory when
so-called objectivists do monadic tests of speakers, it's
really very old news. It is yet another example of Harry
speaking out of the back of his neck with a forked tongue.
:-(


  #5


"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
[Olive 2003 summary snipped; quoted in full in post #1]


You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own
dumb mistake.

Harry, I didn't post this here to embarrass you; I'm not needed for that.
That I was sure Mr. Olive used ABX testing is in fact my error; that ABX is
one of the standards for audio testing is still a fact that many, including
you, seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to which I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman International
testing department.

More than a little crap going down here.

More than a little dissembling on the part of those who don't like what ABX
keeps demonstrating.




  #7   George M. Middius



Sillybot said:

Actually, I was the one


Who cares? You have no knowledge and no discrimination. Your fixation on
"tests" has nothing to do with listening to music. You're a pervert.




  #8   Harry Lavo


"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


[Arny's reply snipped; quoted in full in post #4]



Nice little rant, Arny, but your reply has nothing whatsoever to do with
the RAHE quotes or my response, because my post was incontrovertible to
anybody who read the exchange on RAHE.


  #9   Robert Morein


" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I havd to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is not

an
uncommon phenomenon in abx testing. Sean Olive reportedly has to
screen out
the majority of potential testers because they cannot discriminate
when he
starts training for his abx tests, even when testing for known
differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of

hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you ceasse referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistence of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that is
, least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.


I quote: 'The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeakers had the smoothest, flattest and most extended frequency
responses maintained uniformly off axis, in acoustic anaechoic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid , and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year', by the same audiophile mag (I wonder which?)


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors tha have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person;s
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."


You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own
dumb mistake.

Harry, I didn't post this here to embarrass you, I'm not needed for that.
That I was sure Mr. Olive used ABX testing is in fact my error, that ABX

is
one of the standards for audio testing is still a fac that many including
you seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to which

I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those

other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman International
testing department.

More than a little crap going down here.

More than a little disembling on the part of those who don't like what ABX
keeps demonstrating.

Thanks for admitting you have an inferior mind.


  #10   Arny Krueger

"George M. Middius" cmndr [underscore] george [at] comcast
[dot] net wrote in message

Sillybot said:

Actually, I was the one


Who cares?


People who are interested in accuracy.


You have no knowledge and no discrimination.


George, that's particularly ironic coming from one of the all-time
audio know-nothings of RAO. So far your only demonstrated talent
relates to making up childish nicknames.

Your fixation on "tests" has nothing to do with listening
to music.


If there's anybody on RAO that's fixated on tests, it has to
be George Middius.

You're a pervert.


Have you stopped beating your mother, George?




  #11   Arny Krueger

"Harry Lavo" wrote in message

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


At which point Bob replied that, well, Sean wasn't
Harman and those other references don't count.

Oh no, he just works with Floyd Toole as the entire
Harman International testing department.


More than a little crap going down here.


A lot of crap, and generally from the golden ears.

Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is
a completely different game. There's no controversy over
the idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move
them around a bit in the room. Monadic testing of
speakers is also a red herring for
similar reasons and then some. Since there's no
controvery over the idea that speakers sound different,
the ABX test would be a poor choice. Speaker tests by
so-called objectivists have been monadic for one or more
decades. Check out the AES22 standard including speaker
evaluation form which can be downloaded from the web
site belonging to that well-known coven of objectivists
- the AES. So, when Lavo tries to claim some kind of
victory when
so-called objectivists do monadic tests of speakers, its
really very old news. It is yet another example of
Harry speaking out of the back of his neck with a forked
tongue. :-(


Nice little rant, Arny, but your reply has nothing
whatsoever to the RAHE quotes or my response.


Balderdash.

Because my post was incontrovertible to anybody who read
the exchange on RAHE.


Just because a post is internally incontrovertible doesn't
mean that it isn't a red herring in the larger context.


  #12   Robert Morein


"Arny Krueger" wrote in message
. ..
"George M. Middius" cmndr [underscore] george [at] comcast
[dot] net wrote in message

Sillybot said:

Actually, I was the one


Who cares?


People who are interested in accuracy.


You have no knowledge and no discrimination.


George, particularly ironic coming from one of the all-time
audio know-nothings of RAO like you. So far your ownly
demonstrated talent relates to making up childish nicknames.

Your fixation on "tests" has nothing to do with listening
to music.


If there's anybody on RAO that's fixated on tests, it has to
be George Middius.

You're a pervert.


Have you stopped beating your mother, George?

Thanks for admitting you're a child molester.


  #13   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"Robert Morein" wrote in message
...

" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I had to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is not an
uncommon phenomenon in abx testing. Sean Olive reportedly has to screen out
the majority of potential testers because they cannot discriminate when he
starts training for his abx tests, even when testing for known differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you cease referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistency of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that is,
least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.
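The 'performance' metric described above has two ingredients: how widely a listener's ratings separate the loudspeakers, and how reliably the listener repeats a rating on the same loudspeaker. Olive's exact formula isn't reproduced in this thread, so the sketch below is only one plausible way of combining those two ingredients, with made-up ratings data (plain Python, standard library only):

```python
from statistics import mean, pvariance

def listener_performance(ratings: dict[str, list[float]]) -> float:
    """Illustrative performance score for one listener.

    `ratings` maps each loudspeaker to that listener's repeated
    preference ratings of it. Discrimination = spread of the
    per-speaker mean ratings; inconsistency = average spread of the
    repeats on the same speaker. A high score means the listener
    separates the speakers widely AND repeats ratings reliably.
    (Not Olive's published formula -- just one way to combine the
    two ingredients he names.)
    """
    means = [mean(r) for r in ratings.values()]
    discrimination = pvariance(means)
    inconsistency = mean(pvariance(r) for r in ratings.values())
    return discrimination / (inconsistency + 1e-9)  # guard div-by-zero

# Hypothetical listener: rates three speakers twice each on a 0-10 scale.
scores = {"A": [8.0, 8.5], "B": [4.0, 4.5], "C": [6.0, 6.5]}
print(f"{listener_performance(scores):.1f}")
```

A listener who rates every speaker about the same, or who gives the same speaker wildly different ratings on repeat trials, scores near zero under any such construction.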


I quote: 'The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeaker had the smoothest, flattest and most extended frequency
response maintained uniformly off axis, in acoustic anechoic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid, and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year', by the same audiophile mag (I wonder which?)


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors that have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person's
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."

You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own
dumb mistake.

Harry, I didn't post this here to embarrass you; I'm not needed for that.
That I was sure Mr. Olive used ABX testing is in fact my error; that ABX is
one of the standards for audio testing is still a fact that many including
you seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to which I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman International
testing department.

More than a little crap going down here.

More than a little dissembling on the part of those who don't like what ABX
keeps demonstrating.

Thanks for admitting you have an inferior mind.

Thanks for demonstrating you are unable to quit stalking those you feel
aren't as smart as you are. It's nice to see you come clean about your own
character flaws.

It would be better however if you get over your admitted laziness when it
comes to doing bias controlled testing of things like amps.


  #14   Report Post  
surf
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

"Arny Krueger" wrote :

Just because a post is internally incontrovertible doesn't mean that it
isn't a red herring in the larger context.



Thanks for admitting you were wrong again.

Don't you get tired of demonstrating your ineptitude?


  #15   Report Post  
Robert Morein
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


" wrote in message
ink.net...

"Robert Morein" wrote in message
...

" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I havd to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is

not
an
uncommon phenomenon in abx testing. Sean Olive reportedly has to
screen out
the majority of potential testers because they cannot

discriminate
when he
starts training for his abx tests, even when testing for known
differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of

hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even

with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr.

Olive's
work, Harry. I suggest you ceasse referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it

turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT

magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group

(each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of

the
listener's ability to discriminate between loudspeakers, combined

with
the consistence of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that

is
, least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.


I quote: 'The reviewers' performance is something of a surprise

given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below

the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeakers had the smoothest, flattest and most extended frequency
responses maintained uniformly off axis, in acoustic anaechoic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid , and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year', by the same audiophile mag (I wonder which?)


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our

industry
are confounded by the influence of nuisance factors tha have nothing
to do with the product itself. These include differences in

listening
rooms, loudspeaker positions, and personal prejudices (such as

price,
brand, and reputation) known to strongly influence a person;s
judgement of sound quality (Toole & Olive, 1994). This study has

only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."

You might want to continue reading the posts over there. In the

first
place, I wasn't talking about this specific test...that was NYOB's

own
dumb mistake.

Harry, I didn't post this here to embarrass you, I'm not needed for

that.
That I was sure Mr. Olive used ABX testing is in fact my error, that

ABX
is
one of the standards for audio testing is still a fac that many

including
you seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to

which
I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those

other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman
International
testing department.

More than a little crap going down here.

More than a little disembling on the part of those who don't like what
ABX
keeps demonstrating.

Thanks for admitting you have an inferior mind.

Thanks for demonstrating you are unable to quit stalking those you feel
aren't as smart as you are. It's nice to see you come clean about your

own
character flaws.

It would be better however if you get over your admitted laziness when it
comes to doing bias controlled testing of things like amps.

Thanks for admitting you feel you have an inferior mind.




  #16   Report Post  
Steven Sullivan
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

Arny Krueger wrote:
"George M. Middius" cmndr [underscore] george [at] comcast
[dot] net wrote in message

Sillybot said:

Actually, I was the one


Who cares?


People who are interested in accuracy.



You have no knowledge and no discrimination.


George, particularly ironic coming from one of the all-time
audio know-nothings of RAO like you. So far your only
demonstrated talent relates to making up childish nicknames.


Your fixation on "tests" has nothing to do with listening
to music.


If there's anybody on RAO that's fixated on tests, it has to
be George Middius.


You're a pervert.


Have you stopped beating your mother, George?



My goodness, what an awful *lot* of perverts there are in audiophile-land:

'Our polling also reveals that audiophiles are increasingly willing to
go out on a sonic limb to find components, with a whopping
68% of respondents saying that they have already bought something
without first hearing it.'
Jon Iverson, Stereophile, Oct 2005, p 18.






--
-S
"The most appealing intuitive argument for atheism is the mindblowing stupidity of religious
fundamentalists." -- Ginger Yellow
  #17   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"Robert Morein" wrote in message
...

" wrote in message
ink.net...

"Robert Morein" wrote in message
...

" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I havd to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is

not
an
uncommon phenomenon in abx testing. Sean Olive reportedly has
to
screen out
the majority of potential testers because they cannot

discriminate
when he
starts training for his abx tests, even when testing for known
differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of
hundred
listeners. What he has done is assembled an expert listening
panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even

with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr.

Olive's
work, Harry. I suggest you ceasse referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it

turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT

magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group

(each
group, AFAICT, was 'monotypic' - only one 'type' of listener in
each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of

the
listener's ability to discriminate between loudspeakers, combined

with
the consistence of their ratings -- was calculated for the
different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that

is
, least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.


I quote: 'The reviewers' performance is something of a surprise

given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below

the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeakers had the smoothest, flattest and most extended
frequency
responses maintained uniformly off axis, in acoustic anaechoic
measurements. This speaker had received a 'class A' rating for
three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid , and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year', by the same audiophile mag (I wonder which?)


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the
differences
in opinion about the sound quality of audio product(s) in our

industry
are confounded by the influence of nuisance factors tha have
nothing
to do with the product itself. These include differences in

listening
rooms, loudspeaker positions, and personal prejudices (such as

price,
brand, and reputation) known to strongly influence a person;s
judgement of sound quality (Toole & Olive, 1994). This study has

only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."

You might want to continue reading the posts over there. In the

first
place, I wasn't talking about this specific test...that was NYOB's

own
dumb mistake.

Harry, I didn't post this here to embarrass you, I'm not needed for

that.
That I was sure Mr. Olive used ABX testing is in fact my error, that

ABX
is
one of the standards for audio testing is still a fac that many

including
you seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to

which
I
replied by pulling Stewart and JJ's remarks at random from 109
Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those
other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman
International
testing department.

More than a little crap going down here.

More than a little disembling on the part of those who don't like what
ABX
keeps demonstrating.

Thanks for admitting you have an inferior mind.

Thanks for demonstrating you are unable to quit stalking those you feel
aren't as smart as you are. It's nice to see you come clean about your

own
character flaws.

It would be better however if you get over your admitted laziness when it
comes to doing bias controlled testing of things like amps.

Thanks for admitting you feel you have an inferior mind.

Thanks for admitting you are delusional.


  #18   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


Arny Krueger wrote:
"Harry Lavo" wrote in message


[snip]


Arny says:
"There's no controversy over the
idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move them
around a bit in the room."

You're repeating the old mantras endlessly in the hope that you'll
outlast and outbore any skeptics.
Olive named his paper: "Differences In PERFORMANCE AND preference
....."
On p.808 he defined his performance "metric:".:"This metric
accounts for the listeners' ability to DISCRIMINATE between
loudspeakers as well as their ability to repeat their ratings...".
And in the preamble he said: Significant differences in PERFORMANCE....
were found among the different categories of listeners"
Finally he did not use ABX protocol because he found it
"unsuitable" for his task.
I see nothing wrong with using a common sense precaution of
double-blinding. I see a lot wrong with trumpeting certainties about a
never researched, never validated ABX protocol APPLIED TO COMPARING
MUSICAL REPRODUCTION OF AUDIO COMPONENTS. First research it: What kind
of panel, how selected to represent a listener variety from boom box
carriers to virtuoso flute players, how widely representative of
gender, age , training and experience, what statistical criteria are
you using., what degree of physical difference between the components
for study you'll allow?
Once you've done this field work come back and present you results
for independent review
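On the "what statistical criteria" question, the conventional starting point for an ABX run is the exact one-sided binomial test: if the listener truly cannot hear a difference, every trial is a 50/50 guess, and the p-value is the probability of scoring at least as well by luck alone. A minimal sketch (plain Python; the 12-of-16 run is a hypothetical example, not data from Olive's paper):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value for an ABX run.

    Null hypothesis: the listener is guessing, so each trial is an
    independent 50/50 coin flip. The p-value is the chance of getting
    at least `correct` answers right out of `trials` by luck alone.
    """
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical run: 12 correct out of 16 trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # below the conventional 0.05 criterion
```

That answers only the per-run significance question; panel composition, listener selection, and the size of the physical difference under test are separate design choices, as the paragraph above points out.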
Arny, either we've been reading two different papers, or you two
learned gentlemen -- Sean Olive of the National Research Council of
Canada, McGill Univ. Ph.D. candidate and AES Fellow, and you of the
RAO -- are in serious disagreement. If I were you I'd take it up with
him instead of lecturing me and this captive audience.
A little personal lesson. I learned in my professional life as a
consultant cardiologist to check my sources carefully before sounding
off in matters of life and death. I apply the same habit to any of my
written statements.
Check carefully before you take up the dueling sword. Thankfully you
don't take refuge in childish obscenities like your faithful
Sancho Panza NYOB.
Regards, Ludovic M.

  #19   Report Post  
Arny Krueger
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

wrote in message
oups.com

Arny Krueger wrote:



Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over
the idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move
them around a bit in the room.


Monadic testing of speakers is also a red herring for
similar reasons and then some. Since there's no
controversy over the idea that speakers sound different,
the ABX test would be a poor choice. Speaker tests by
so-called objectivists have been monadic for one or more
decades. Check out the AES22 standard including speaker
evaluation form which can be downloaded from the web
site belonging to that well-known coven of objectivists
- the AES.


So, when Lavo tries to claim some kind of victory when
so-called objectivists do monadic tests of speakers, it's
really very old news. It is yet another example of Harry
speaking out of the back of his neck with a forked
tongue. :-(


Arny says:


There's no controversy over the
idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move
them around a bit in the room."


You're repeating the old mantras endlessly in the hope
that you'll outlast and outbore any skeptics.


What I'm doing Ludovic is countering your constant
repetition of old mantras.

Olive named his paper: "Differences In PERFORMANCE AND
preference ...."


Thanks for showing that even providing the complete name of
the paper would demolish your posturing, Ludovic.

On p.808 he defined his performance "metric:".:"This
metric
accounts for the listeners' ability to DISCRIMINATE
between loudspeakers as well as their ability to repeat
their ratings...".


Like I said, loudspeakers.

And in the preamble he said: Significant differences in
PERFORMANCE.... were found among the different categories
of listeners"


....in a context where it is a given that the alternatives
sound significantly different.

Finally he did not use ABX protocol because he found it
"unsuitable" for his task.


Completely understandable since there is no controversy as
to whether speakers sound different from each other. BTW
just in case you forgot again Ludovic, speakers generally
sound different so ABXing them to see if they sound
different is a waste of time.

I see nothing wrong with using a common sense precaution
of double-blinding. I see a lot wrong with trumpeting
certainties about a never researched, never validated
ABX protocol APPLIED TO COMPARING MUSICAL REPRODUCTION OF
AUDIO COMPONENTS.


You're lying again Ludovic, or maybe you're just too
hysterical to know that you aren't telling the truth.

First research it: What kind of panel,
how selected to represent a listener variety from boom
box carriers to virtuoso flute players, how widely
representative of gender, age , training and experience,
what statistical criteria are you using., what degree of
physical difference between the components for study
you'll allow?


If you want an answer to that question, Ludovic, first get back
into a reasonable context for asking it.

Once you've done this field work come back and present
you results for independent review
Arny either we've been reading two different papers, or
you two learned gentlemen Sean Olive of the Nat. Research
Ccil of Canada. , McGill Univ. PH.D. candidate. And AES
Fellow . and you of the RAO are in serious disagreement.


LOL!

If I were you I'd take it up with him
instead of lecturing me and this captive audience.


Sean and I generally agree about subjective testing. Some
evidence of this is in the Usenet archives.


A little personal lesson. I learned in my professional
life as a consultant cardiologist to check my sources
carefully before sounding off in matters of life and
death. I apply the same habit to any of my written
statements.


Not in this life you don't Ludovic. You're a classic loose
cannon.

Check carefully before you take up the dueling sword.
Thankfully you don't elect the refuge in childish
obscenities like your faithful Sancho Panza NYOB.


Fact is Ludovic, the way you trash the truth is obscene.


  #20   Report Post  
Arny Krueger
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

"Steven Sullivan" wrote in message


'Our polling also reveals that audiophiles are
increasingly willing to
go out on a sonic limb to find components, with a whopping
68% of respondents saying that they have already bought
something without first hearing it."


Jon Iverson, Stereophile, Oct 2005, p 18.


No doubt they based their purchases on glowing testimonials
found in high end audio publications.




  #21   Report Post  
Ruud Broens
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

: a
: disc innate
: hounded

: with
: Olive's ,
: is
: given
: the
: industry
: listening
: price,

: only
: first ::
: ABX
: More than a little crap going down here.
: Thanks for admitting you have an inferior mind.
:
: Thanks for demonstrating you are unable to quit stalking those you feel
: aren't as smart as you are. It's nice to see you come clean about your
: own
: character flaws.
:
: It would be better however if you get over your admitted laziness when it
: comes to doing bias controlled testing of things like amps.
:
: Thanks for admitting you feel you have an inferior mind.
:
: Thanks for admitting you are delusional.
:
R.
wraparound & snip artiste


  #22   Report Post  
surf
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

" wrote ...

"Quaalude"


You have experience with them, don't you?


  #23   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

Arny Krueger wrote:
wrote in message
oups.com

Arny Krueger wrote:



Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over
the idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move
them around a bit in the room.


Monadic testing of speakers is also a red herring for
similar reasons and then some. Since there's no
controversy over the idea that speakers sound different,
the ABX test would be a poor choice. Speaker tests by
so-called objectivists have been monadic for one or more
decades. Check out the AES22 standard including speaker
evaluation form which can be downloaded from the web
site belonging to that well-known coven of objectivists
- the AES.


So, when Lavo tries to claim some kind of victory when
so-called objectivists do monadic tests of speakers, its
really very old news. It is yet another example of Harry
speaking out of the back of his neck with a forked
tongue. :-(


Arny says:


There's no controversy over the
idea that speakers sound different. ABX testing can
distinguish speakers from themselves if you just move
them around a bit in the room."


You're repeating the old mantras endlessly in the hope
that you'll outlast and outbore any skeptics.


What I'm doing Ludovic is countering your constant
repetition of old mantras.

Olive named his paper: "Differences In PERFORMANCE AND
preference ...."


Thanks for showing that even providing the complete name of
the paper would demolish your posturing, Ludovic.

On p.808 he defined his performance "metric:".:"This
metric
accounts for the listeners' ability to DISCRIMINATE
between loudspeakers as well as their ability to repeat
their ratings...".


Like I said, loudspeakers.

And in the preamble he said: Significant differences in
PERFORMANCE.... were found among the different categories
of listeners"


...in a context where it is a given that the alternatives
sound significantly different.

Finally he did not use ABX protocol because he found it
"unsuitable" for his task.


Completely understandable since there is no controversy as
to whether speakers sound different from each other. BTW
just in case you forgot again Ludovic, speakers generally
sound different so ABXing them to see if they sound
different is a waste of time.

I see nothing wrong with using a common sense precaution
of double-blinding. I see a lot wrong with trumpeting
certainties about a never researched, never validated
ABX protocol APPLIED TO COMPARING MUSICAL REPRODUCTION OF
AUDIO COMPONENTS.


You're lying again Ludovic, or maybe you're just too
hysterical to know that you aren't telling the truth.

First research it: What kind of panel,
how selected to represent a listener variety from boom
box carriers to virtuoso flute players, how widely
representative of gender, age , training and experience,
what statistical criteria are you using., what degree of
physical difference between the components for study
you'll allow?
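For what it's worth, the "what statistical criteria" question has a
conventional textbook answer for an ABX run, even though neither poster
spells one out here: score each trial right or wrong, take p = 0.5 under
the null hypothesis that the listener hears no difference, and reject
guessing only when the one-sided binomial tail probability falls below
0.05. A minimal sketch (illustrative only; the function names are mine,
not part of any published protocol):

```python
# Conventional ABX acceptance criterion (illustrative sketch):
# under the null hypothesis "the listener is guessing", each trial is a
# fair coin flip, so guessing is rejected only when the probability of
# scoring this well or better by chance drops below alpha.
from math import comb

def tail_prob(correct: int, trials: int) -> float:
    """P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def min_correct(trials: int, alpha: float = 0.05) -> int:
    """Smallest score that reaches significance at the given alpha."""
    for k in range(trials + 1):
        if tail_prob(k, trials) < alpha:
            return k
    return trials + 1  # no passing score exists (very short runs)

print(min_correct(16))  # 12: a listener must get 12 of 16 trials right
```

So a 16-trial ABX run demands at least 12 correct answers before chance
can be ruled out at the 5% level; 11 of 16 (tail probability about 0.105)
would not qualify.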


If you want an answer to that question Ludovic, first get back
into a reasonable context for asking it.

Once you've done this field work come back and present
your results for independent review.
Arny, either we've been reading two different papers, or
you two learned gentlemen, Sean Olive of the National
Research Council of Canada, McGill Univ. Ph.D. candidate
and AES Fellow, and you of the RAO, are in serious disagreement.


LOL!

If I were you I'd take it up with him
instead of lecturing me and this captive audience.


Sean and I generally agree about subjective testing. Some
evidence of this is in the Usenet archives.


A little personal lesson. I learned in my professional
life as a consultant cardiologist to check my sources
carefully before sounding off in matters of life and
death. I apply the same habit to any of my written
statements.


Not in this life you don't Ludovic. You're a classic loose
cannon.

Check carefully before you take up the dueling sword.
Thankfully you don't take refuge in childish
obscenities like your faithful Sancho Panza NYOB.


Fact is Ludovic, the way you trash the truth is obscene.

_________________________________________________________
_________________________________________________________
What I'm doing Ludovic is countering your constant
repetition of old mantras.
Olive named his paper: "Differences In PERFORMANCE AND
preference ...."

You answer: Thanks for showing that even providing the complete name of
the paper would demolish your posturing, Ludovic.

This is typical. In a normal, decent debate respectful of your
audience you'd now quote this "complete name" that
"demolishes" me.

But this is not a decent debate. This is only A. Krueger and this is
only RAO. So one slyly insinuates a sinister motive for the omission of
an insubstantial part of the typing chore. One hopes of course that no
one will check. So here, damn you for forcing me into your
time-wasting nonsense games, is the complete title: "Differences in
PERFORMANCE AND (my capitals, L.M.) preference of trained versus
untrained listeners in loudspeaker tests: a case study".
Did I omit a comma somewhere, Arny? That would be because I'm so
demolished that I can no longer distinguish real discussion from your
degrading version of it.

On p.808 he defined his performance "metric": "This metric
accounts for the listeners' ability to DISCRIMINATE
between loudspeakers as well as their ability to repeat
their ratings...".


Like I said, loudspeakers.

And in the preamble he said: Significant differences in
PERFORMANCE.... were found among the different categories
of listeners"

Arny adds: "..in a context where it is a given that the alternatives
sound significantly different".

Would you please translate this gobbledygook into the normal language of
communication between literate humans? What on earth are you saying?
Sean Olive sounds clear enough to me without your pompous
pseudo-scientific parody of his clear statement: he found and reported
SIGNIFICANT DIFFERENCES IN PERFORMANCE COMPARING LOUDSPEAKERS. Twist
and turn, that is what he says.
Finally he did not use ABX protocol because he found it
"unsuitable" for his task.

You answer: Completely understandable since there is no controversy as
to whether speakers sound different from each other. BTW
just in case you forgot again Ludovic, speakers generally
sound different so ABXing them to see if they sound
different is a waste of time.

Arny, you can keep on repeating that speakers "generally" sound
different till you're blue in the face. But you cannot say that
Olive agrees with you: his paper examines "the differences in
performance ...COMPARING LOUDSPEAKERS". Please don't forget to
harrumph that I did not repeat the complete title this time for the
sinister reason of making you sound ridiculous. Because that was my
reason.

I see nothing wrong with using a common sense precaution
of double-blinding. I see a lot wrong with trumpeting
certainties about a never researched, never validated
ABX protocol APPLIED TO COMPARING MUSICAL REPRODUCTION OF
AUDIO COMPONENTS.

Krueger shrills: "You're lying again Ludovic, or maybe you're just
too
hysterical to know that you aren't telling the truth. "

Here we go again. In the end when up against it you couldn't resist
reaching for the classical last-resort weapon of your tribe: assaulting
your opponent's character and accusing him of bad faith. Obscenity
cannot be far behind.
For me this stops any reasoned argument dead. I have no taste for
exchanging insults.
For your information, in other places arguments are about truth.
"Winning" your way does not interest me. I can say a thousand times
that you're wrong but I wouldn't call you a liar. The fact that you
think that winning matters at any price is a reflection on you. For
myself I'd never say you're lying. Not even that you're
hysterical. I think you're naturally prejudiced in favour of your
brain-child to the point of using everything, e.g. twisting your
opponent's meaning, often in a rather silly manner, as for example in
the "complete title" exchange.

First research it: What kind of panel,
how selected to represent a listener variety from boom
box carriers to virtuoso flute players, how widely
representative of gender, age , training and experience,
what statistical criteria are you using., what degree of
physical difference between the components for study
you'll allow?

Krueger says: "If you want an answer to that question Ludovic, first get
back into a reasonable context for asking it".

More meaningless gobbledygook. What "reasonable context" will you
allow? What's yours? Define it and answer: Where is your basic
research?
Once you've done this field work come back and present
your results for independent review.
Arny, either we've been reading two different papers, or
you two learned gentlemen, Sean Olive of the National
Research Council of Canada, McGill Univ. Ph.D. candidate
and AES Fellow, and you of the RAO, are in serious disagreement.

LOL!
If I were you I'd take it up with him
instead of lecturing me and this captive audience.

He assures me: "Sean and I generally agree about subjective testing.
Some evidence of this is in the Usenet archives."

We're not talking about "subjective testing", whatever that may
mean. How do you "test" "subjectively"? Subjectively you can
only voice your opinion. We're talking about using ABX to compare
components for their musical reproduction characteristics.
Repeat: no quarrel with double-blind comparison. It removes one source
of bias. But to imagine that just because you listened double blind
you're entitled to lay down a rule for me is nonsense. You're you,
with your preferences, experience etc., and I am I. There are many people
whose sighted opinion I'd prefer to many others' double-blinded.
As for the ABX protocol, it is another matter again.
Research it. Validate it. Show when and to whom it will get positive
results. Then come back.
Ludovic Mirabel
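Mirabel's demand to "show when and to whom it will get positive results"
is, in statistical language, a question about power, and it can at least
be illustrated with arithmetic. As a hedged sketch (my own assumed
numbers, not anyone's published protocol): suppose a listener genuinely
hears a difference but only resolves it on some trials, so their
per-trial success probability is 0.7 rather than 0.5. The chance that
they clear the usual 5%-level threshold for a 16-trial run (12 correct)
follows from the binomial distribution:

```python
# Power sketch for a 16-trial ABX run (assumed numbers, illustration only):
# a listener with per-trial success probability p clears a threshold of
# `threshold` correct answers with probability equal to the binomial
# upper tail.
from math import comb

def power(trials: int, threshold: int, p: float) -> float:
    """P(X >= threshold) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(threshold, trials + 1))

# p = 0.7: hears the difference most of the time, yet...
print(round(power(16, 12, 0.7), 2))  # 0.45: fails the run more often than not
```

On these assumptions a real but subtle difference yields a "negative"
ABX result more than half the time, which is why a short run can only
support "not demonstrated here", never "inaudible".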

  #24   Report Post  
Arny Krueger
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

wrote in message
oups.com

Arny adds: "..in a context where it is a given that the
alternatives sound significantly different".


Would you please translate this gobbledygook into normal
language of communication between literate humans?


It already is, at least for people with a high school
reading level.

Ever go to a good college, Ludovic - a good college in an
English-speaking country?

What on earth are you saying?


Get a translator.

Sean Olive sounds clear enough
to me without your pompous pseudo-scientific parody of
his clear statement: He found and reported SIGNIFICANT
DIFFERENCES IN PERFORMANCE COMPARING LOUDSPEAKERS. Twist
and turn that is what he says.



OK Ludovic, so I blew your little mind by using the word
"significantly" instead of "significant". Other than that
we're pretty much saying the same thing.

Finally he did not use ABX protocol because he found it
"unsuitable" for his task.


Which is hardly news because he was doing quantitative tests
for audible differences that were known to exist, not
go/no-go tests to see if an audible difference exists.

You answer: Completely understandable since there is no
controversy as to whether speakers sound different from
each other.


Ah so you can read English after all, Ludovic! Huzzah!

BTW
just in case you forgot again Ludovic, speakers generally
sound different so ABXing them to see if they sound
different is a waste of time.


Arny- you can keep on repeating that "speakers
"generally"sound different" till you're blue in the face.


Right, and it's pretty clear that Ludovic can't get that
simple concept.

But you can not say that Olive agrees with you: his
papers examines "the differences in performance
...COMPARING LOUDSPEAKERS"


Which differs from what I said, how?


I see nothing wrong with using a common sense precaution
of double-blinding.



I see a lot wrong with trumpeting
certainties about a never researched, never validated
ABX protocol APPLIED TO COMPARING MUSICAL REPRODUCTION OF
AUDIO COMPONENTS.


Krueger shrills: "You're lying again Ludovic, or maybe
you're just too hysterical to know that you aren't
telling the truth. "



Here we go again. In the end when up against it you
couldn't resist reaching for the classical last resort
weapon of your tribe: assaulting your opponent's
character


Admittedly, when I assault your character Ludovic, I'm
picking on something that is very weak. It's not really an
assault, it's more like a finger flick.

...and accusing him of bad faith. Obscenity can
not be far behind.


Ludovic, in your dreams, little man

For me this stops any reasoned argument dead. I have no
taste for exchanging insults.
For your information- in other places arguments are about
truth. "Winning" your way does not interest me.


Neither does rationality or truth seem to interest you,
Ludovic.

I can say
a thousand times that you're wrong but I wouldn't call
you a liar.


That has something to do with the fact that I try very hard
not to tell lies.

Of course with you Ludovic, there is a serious question
about whether you're lying or whether you're just that far
out of it.

The fact that you think that winning matters
at any price is a reflection on you.


Ludovic I'm not concerned with winning because such battle
as ever was is long over and your side lost. Remember that
you came into this situation with a bogus listening test
that was based on sighted evaluation of two different pieces
of equipment playing at the same time!


zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz!


  #25   Report Post  
George Middius
 
Posts: n/a
Default Arnii Krooborg's black magic "ssicicccnece"




The Krooborg plays the Inferiority Card.

Ever go to a good college, Ludovic


Do you mean one as good as a small community college in the hinterlands of
Michigan?



  #26   Report Post  
Arny Krueger
 
Posts: n/a
Default Arnii Krooborg's black magic "ssicicccnece"

"George Middius" wrote in
message
The Krooborg plays the Inferiority Card.

Ever go to a good college, Ludovic


Do you mean one as good as a small community college in
the hinterlands of Michigan?


To do better than Ludovic currently is, not even that.

George, why not give me an example of a small community
college in
the hinterlands of Michigan?

Let's see how much you know about Michigan schools.


  #27   Report Post  
George Middius
 
Posts: n/a
Default Arnii Krooborg's black magic "ssicicccnece"




Move over, Wilma. A cosmic snotstorm is brewing up north.

Do you mean one as good as a small community college in
the hinterlands of Michigan?


George, why not give me an example of a small community
college in the hinterlands of Michigan?


Thanks Mr. Kroofeces for admitting that you are nearly uneducated.



  #28   Report Post  
Steven Sullivan
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

Arny Krueger wrote:
"Steven Sullivan" wrote in message


'Our polling also reveals that audiophiles are
increasingly willing to
go out on a sonic limb to find components, with a whopping
68% of respondents saying that they have already bought
something without first hearing it."


Jon Iverson, Stereophile, Oct 2005, p 18.


No doubt they based their purchases on glowing testimonials
found in high end audio publications.


Quite possibly -- the perverted poll-ee quoted afterwards
mentions 30-day return policies and 'good reviews' as impetus
enough to buy before hearing.


--
-S
"The most appealing intuitive argument for atheism is the mindblowing stupidity of religious
fundamentalists." -- Ginger Yellow
  #29   Report Post  
Arny Krueger
 
Posts: n/a
Default Arnii Krooborg's black magic "ssicicccnece"

"Ge0rge Middius" wrote in
message

Watch Ge0rge bob and weave, now that with my nudging he
finally figured out that calling "Oakland University" "a
small community college" sounds very stupid on the face of
it.

Do you mean one as good as a small community college in
the hinterlands of Michigan?


Ge0rge, why not give me an example of a small community
college in the hinterlands of Michigan?


Thanks Mr. Kroofeces for admitting that you are nearly
uneducated.


IOW Ge0rge just dug himself another hole.


  #30   Report Post  
George Middius
 
Posts: n/a
Default Arnii Krooborg's black magic "ssicicccnece"



The Beast smears himself with you-know-what.

Thanks Mr. Kroofeces for admitting that you are nearly
uneducated.


IOW Ge0rge[sic] just dug himself another hole.


Thanks Mr. Kroo**** for admitting you aspire to live in the hole beneath an
outhouse.





  #31   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


wrote in message
oups.com...
Arny Krueger wrote:
wrote in message
oups.com

[full text of the preceding exchange, quoted verbatim, snipped]
We're not talking about "subjective testing" whatever that may
mean.


Of course we are, you freaking idiot. Using objective methods to do so is
the part you don't like.

How do you "test" "subjectively". Subjectively you can
only voice your opinion.. We're talking about using ABX to compare
components for their musical reproduction characteristics.


Which are ultimately determined by the person or persons taking the test.
One could assume after so many of them that they apply universally, but that
would not necessarily be true. ABX is still an objective way to get more
reliable subjective details.

Repeat : no quarrel with double blind comparison. It removes one source
of bias. But to imagine that just because you listened double blind
you're entitled to lay down a rule for me is nonsense. You're you
with your preferences, experience etc and I am I. There are many people
whose opinion sighted I'd prefer to many others double blinded.


Because you are not as smart as you ought to be.

As for ABX protocol it is another matter again..
Research it. Validate it. Show when and to whom it will get positive
results. Then come back.
Ludovic Mirabel

Why is it you won't admit that it's all been done? ABX is one of the AES
standards for determining differences, along with others.


  #32   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"Robert Morein" wrote in message
...

" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I had to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is not an
uncommon phenomenon in abx testing. Sean Olive reportedly has to
screen out
the majority of potential testers because they cannot discriminate
when he
starts training for his abx tests, even when testing for known
differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you cease referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistency of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that is,
least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.
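Olive's actual metric is defined in the paper in ANOVA terms; purely as
a loose, hypothetical sketch of the idea described above (none of this
is his published formula), one can combine "discrimination" and
"consistency" into a single ratio: the spread of a listener's mean
ratings across loudspeakers, divided by the scatter of their repeated
ratings of the same loudspeaker.

```python
# Hypothetical listener-performance sketch (NOT Olive's published metric):
# discrimination = variance of the listener's mean rating across speakers;
# noise = average variance of repeated ratings of the same speaker.
# The ratio is F-like: it is high when a listener both separates the
# speakers widely and repeats their own ratings closely.
from statistics import mean, pvariance

def performance(ratings: dict[str, list[float]]) -> float:
    """ratings maps each loudspeaker to one listener's repeated ratings."""
    speaker_means = [mean(r) for r in ratings.values()]
    discrimination = pvariance(speaker_means)
    noise = mean(pvariance(r) for r in ratings.values())
    return discrimination / noise if noise else float("inf")

# A decisive, repeatable listener versus an erratic one:
sharp = {"A": [8, 8, 7.5], "B": [3, 3.5, 3], "C": [6, 6, 6.5]}
noisy = {"A": [8, 3, 6],   "B": [3, 7, 5],   "C": [6, 2, 8]}
print(performance(sharp) > performance(noisy))  # True
```

With ratings like these, a listener who rates the same speakers but with
large trial-to-trial scatter scores far lower, even though both saw the
same products.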


I quote: 'The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeakers had the smoothest, flattest and most extended frequency
responses maintained uniformly off axis, in acoustic anechoic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid, and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year', by the same audiophile mag (I wonder which?)


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors that have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person's
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."

You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own
dumb mistake.

Harry, I didn't post this here to embarrass you; I'm not needed for that.
That I was sure Mr. Olive used ABX testing is in fact my error; that ABX is
one of the standards for audio testing is still a fact that many, including
you, seem to try and ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to which I
replied by pulling Stewart and JJ's remarks at random from 109 Usenet
posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those other
references don't count.

Oh no, he just works with Floyd Toole as the entire Harman
International
testing department.

More than a little crap going down here.

More than a little dissembling on the part of those who don't like what
ABX keeps demonstrating.

Thanks for admitting you have an inferior mind.

Thanks for being so crushingly predictable.
Roll over.


  #33   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"Robert Morein" wrote in message
...

" wrote in message
ink.net...

"Robert Morein" wrote in message
...

" wrote in message
link.net...

"Harry Lavo" wrote in message
...

" wrote in message
ink.net...
I havd to share this from RAHE.

wrote:
Harry Lavo wrote:

In this test. That's all you can say for sure. However it is

not
an
uncommon phenomenon in abx testing. Sean Olive reportedly has
to
screen out
the majority of potential testers because they cannot

discriminate
when he
starts training for his abx tests, even when testing for known
differences
in sound.

Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of hundred
listeners. What he has done is assembled an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.

This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you cease referring to it until you learn
something about it.

In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.

The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistency of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that
is, least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners) students tended to perform worst. In both tests trained
listeners performed best.

I quote: 'The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups -- the various
groups of listeners tended to converge on the same ideas of 'best' and
'worst' sound when they didn't know the brand and appearance of the
speaker. And the 'best' (most preferred) loudspeaker had the
smoothest, flattest and most extended frequency response, maintained
uniformly off axis in anechoic acoustic measurements. This speaker had
received a 'class A' rating for three years running in one audiophile
magazine. The least-preferred loudspeaker was an electrostatic hybrid,
and it also measured the worst. This speaker had *also* received a
class A rating for three years running, and better still had been
declared 'product of the year', by the same audiophile mag (I wonder
which?)
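The "similar rank order across groups" finding is the kind of claim a rank correlation makes concrete. As a minimal sketch (not Olive's actual analysis; the ratings below are invented for illustration), two groups whose mean preference ratings differ in scale but agree in ordering yield a Spearman coefficient of 1:

```python
def rank(xs):
    """Ranks (1 = highest rating); assumes no ties, for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def spearman(a, b):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    formula, valid when there are no tied ratings."""
    n = len(a)
    ra, rb = rank(a), rank(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical mean preference ratings for four speakers from two groups:
trained = [7.8, 6.1, 5.0, 3.2]
retailers = [7.1, 6.5, 4.4, 3.9]
assert spearman(trained, retailers) == 1.0  # identical rank order
```

The numbers differ, but the ordering agrees perfectly, which is all the "consensus" claim requires.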


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors that have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person's
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."

You might want to continue reading the posts over there. In the first
place, I wasn't talking about this specific test...that was NYOB's own
dumb mistake.

Harry, I didn't post this here to embarrass you; I'm not needed for
that. That I was sure Mr. Olive used ABX testing is in fact my error.
That ABX is one of the standards for audio testing is still a fact
that many, including you, seem to try to ignore.

Next I was challenged by Bob that Sean didn't use ABX testing, to
which I replied by pulling Stewart's and JJ's remarks at random from
109 Usenet posts on the subject.


At which point Bob replied that, well, Sean wasn't Harman and those
other references don't count.

Oh no, he just works with Floyd Toole as the entire Harman
International
testing department.

More than a little crap going down here.

More than a little dissembling on the part of those who don't like what
ABX keeps demonstrating.

Thanks for admitting you have an inferior mind.

Thanks for demonstrating you are unable to quit stalking those you feel
aren't as smart as you are. It's nice to see you come clean about your
own character flaws.

It would be better, however, if you got over your admitted laziness when
it comes to doing bias-controlled testing of things like amps.

Thanks for admitting you feel you have an inferior mind.

Good doggie.


  #34   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"surf" wrote in message
...
" wrote ...

"Quaalude"


You have experience with them, don't you?



  #35   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


"surf" wrote in message
...
" wrote ...

"Quaalude"


You have experience with them, don't you?

Only in witnessing a few people I no longer associate with use them.
Seemed like a big waste. Take a drug that had not been legitimately
manufactured for years and then get goofy for 30 minutes and sleep for
several hours. Boring.




  #36   Report Post  
surf
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

" wrote...

"surf" wrote...


" wrote ...

"Quaalude"

You have experience with them, don't you?


Only in witnessing a few people I no longer associate with use them.
Seemed like a big waste. Take a drug that had not been legitimately
manufactured for years and then get goofy for 30 minutes and sleep for
several hours. Boring.


Interesting. A 20-year-old, second-hand experience.


  #37   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


Arny Krueger wrote:
wrote in message
oups.com

Arny adds: "..in a context where it is a given that the
alternatives sound significantly different".


Would you please translate this gobbledygook into the normal
language of communication between literate humans?


It already is, at least for people with a high school
reading level.

Ever go to a good college, Ludovic - a good college in an
English-speaking country?

What on earth are you saying?


Get a translator.

Sean Olive sounds clear enough
to me without your pompous pseudo-scientific parody of
his clear statement: He found and reported SIGNIFICANT
DIFFERENCES IN PERFORMANCE COMPARING LOUDSPEAKERS. Twist
and turn, that is what he says.



OK Ludovic, so I blew your little mind by using the word
"significantly" instead of "significant". Other than that
we're pretty much saying the same thing.

Finally he did not use ABX protocol because he found it
"unsuitable" for his task.


Which is hardly news, because he was doing quantitative tests
for audible differences that were known to exist, not
go/no-go tests to see if an audible difference exists.

You answer: Completely understandable since there is no
controversy as to whether speakers sound different from
each other.


Ah so you can read English after all, Ludovic! Huzzah!

BTW
just in case you forgot again Ludovic, speakers generally
sound different so ABXing them to see if they sound
different is a waste of time.


Arny - you can keep on repeating that speakers "generally"
sound different till you're blue in the face.


Right, and it's pretty clear that Ludovic can't get that
simple concept.

But you cannot say that Olive agrees with you: his
paper examines "the differences in performance
...COMPARING LOUDSPEAKERS"


Which differs from what I said, how?


I see nothing wrong with using a common sense precaution
of double-blinding.



I see a lot wrong with trumpeting
certainties about a never researched, never validated
ABX protocol APPLIED TO COMPARING MUSICAL REPRODUCTION OF
AUDIO COMPONENTS.


Krueger shrills: "You're lying again Ludovic, or maybe
you're just too hysterical to know that you aren't
telling the truth. "



Here we go again. In the end when up against it you
couldn't resist reaching for the classical last resort
weapon of your tribe: assaulting your opponent's
character


Admittedly, when I assault your character Ludovic, I'm
picking on something that is very weak. It's not really an
assault, it's more like a finger flick.

...and accusing him of bad faith. Obscenity cannot
be far behind...


Ludovic, in your dreams, little man

For me this stops any reasoned argument dead. I have no
taste for exchanging insults.
For your information- in other places arguments are about
truth. "Winning" your way does not interest me.


Neither does rationality or truth seem to interest you,
Ludovic.

I can say
a thousand times that you're wrong but I wouldn't call
you a liar.


That has something to do with the fact that I try very hard
not to tell lies.

Of course with you Ludovic, there is a serious question
about whether you're lying or whether you're just that far
out of it.

The fact that you think that winning matters
at any price is a reflection on you.


Ludovic I'm not concerned with winning because such battle
as ever was is long over and your side lost. Remember that
you came into this situation with a bogus listening test
that was based on sighted evaluation of two different pieces
of equipment playing at the same time!


zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz!


Quotes from Krueger's former appearances in this thread:
"Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over the
idea that speakers sound different".
"Since there's no controversy over the idea that speakers
sound different, the ABX test would be a poor choice"
Differences are clear to Krueger. They are so clear that ABXing is not
needed. Apparently we have a new position from Arny the Resourceful:
ABX is for testing only if there is controversy about difference. It is
not necessary when testing for preference.
Silly me, I thought that one of the ABX articles of faith was that you
cannot detect a preference unless you first detect a difference. Or is
it my English again?
Silly Sean Olive did not consult him, though, and went ahead testing
1) how different groups of listeners discriminate (perform) and 2) what
they prefer, when listening to loudspeakers.
He went as far as to say in the very summary at the very beginning:
"Significant differences in performance expressed in terms of
magnitude of the loudspeaker F statistic Fl were found among the
different categories of listeners."
And he devotes three pages (818-820), two graphs (figures 7 and 8) and
two paragraphs (par. 3.11, "Performance among different listening
groups"; par. 3.12, "Occupation as a factor in listener performance")
to it.
In the discussion he distinguishes unequivocally between the listeners'
Performance and Preference and stresses that they did NOT go together
(p. 820): "The differences between trained and untrained listeners
("untrained" meant some 80+% of the listeners - like in real life,
L.M.) are mostly related to differences in performance." And: "However
the loudspeaker rank ordering and the relative differences in
preference between them were quite similar for both trained and
untrained listeners."
Mr. Krueger, without ifs, ands, buts and clever-clever comments with no
relation to the matter at hand:
1) Did Olive test for difference between loudspeakers or did he not?
2) Did he use ABX or did he not?
3) Are you turning and twisting or just plain lying?
4) Is that what they taught you in your "good College" that you think I
should have learnt as well?
5) Have you found yet that one, single published report of an ABX
component comparison, any component whatsoever, with a positive
outcome? (Published means submitted to an editorial judgement in a
journal or a mag, not a private web site.)
Regards, Ludovic Mirabel
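For reference, the "loudspeaker F statistic" cited from Olive's summary is, in the textbook one-way ANOVA sense, the ratio of between-loudspeaker to within-loudspeaker variance in the preference ratings. This is only the generic formula with made-up ratings (Olive's Fl is computed within his own experimental design), but it shows what the number measures:

```python
def f_statistic(groups):
    """One-way ANOVA F: between-group variance over within-group
    variance, with one list of preference ratings per loudspeaker.
    (Assumes at least two groups and nonzero within-group spread.)"""
    k = len(groups)                       # number of loudspeakers
    n = sum(len(g) for g in groups)       # total number of ratings
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two hypothetical speakers, three ratings each: a large F means the
# speakers were rated very differently relative to rating noise.
f = f_statistic([[8, 7, 8], [3, 4, 3]])
assert f > 10  # listeners clearly discriminated between the speakers
```

A listener group that rates all speakers alike (or inconsistently) drives F toward zero, which is why Olive could use the magnitude of Fl as a discrimination-performance measure.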

  #38   Report Post  
Arny Krueger
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive

wrote in message
oups.com

Quotes from Krueger's former appearances in this thread:


"Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over the
idea that speakers sound different".


If you've got a problem with that, then deal with it as it
sits, Ludovic.

"Since there's no controversy over the idea that speakers
sound different, the ABX test would be a poor choice"


If you've got a problem with that, then deal with it as it
sits, Ludovic.

Differences are clear to Krueger. They are so clear that
ABXing is not needed.


If you've got a problem with that, then deal with it as it
sits, Ludovic.

Apparently we have a new position
from Arny the Resourceful.. ABX is for testing only if
there is controversy about difference.


If you think this is a new position Ludovic, then it speaks
to your ignorance.

Here's one of many examples of me saying essentially the
same thing in the distant past:

http://groups.google.com/group/rec.a...4999d89ed35602

Date: Tue, 29 May 2001 10:53:08 GMT

"DBTs are required if you want to talk about subtle or controversial
differences. There are a wealth of audible differences that are not
subtle and for which there is no controversy. For example, most things
involving speakers other than speaker wire are generally regarded as
not being subtle."

So Ludovic, where the heck were you when this all happened
years ago?

It is not
necessary when testing for preference.


Incorrectly stated.

ABX is not a test of preference, it is a go/no-go test for
the presence of audible differences.

As my 2001 post says, if you want to do a qualitative test,
not a simple yes/no test, use ABC/hr:

"Please let me introduce you to the ABC/hr DBT that produces numbers
comparing the relative *impairment* of the two devices being compared.
This retrieval will provide many examples of its use, and additional
details about different ways that it is implemented:
http://www.google.com/search?hl=en&lr=&safe=off&q=abc+hidden+reference"

Note, ABC/hr is not the same as testing for preference. If
you want to test for preference, do a public opinion survey.
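The go/no-go character of ABX described here comes down to a binomial significance test: over n forced-choice trials, does the listener identify X as A or B more often than guessing would explain? A minimal sketch (the function name is illustrative, not from any actual ABX software):

```python
import math

def abx_p_value(correct: int, trials: int) -> float:
    """Exact one-sided binomial p-value for an ABX run: the probability
    of getting at least `correct` hits in `trials` attempts under the
    null hypothesis of random guessing (p = 0.5)."""
    hits = sum(math.comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2 ** trials

# A listener scoring 14 of 16 is hard to explain by guessing alone:
assert abx_p_value(14, 16) < 0.05   # difference detected
# 9 of 16 is entirely consistent with chance:
assert abx_p_value(9, 16) > 0.05    # no difference demonstrated
```

The test only answers "is there an audible difference?", which is exactly why it says nothing about which device sounds *better*.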


Silly me I thought that one of the ABX articles of faith
was that you cannot detect a preference unless you first
detect a difference. Or is it my English again?


It's your lack of being properly informed, Ludovic.

Silly Sean Olive did not consult him though and went
ahead testing how different groups of listeners 1)
discriminate (perform) 2) how they prefer..when
listening to loudspeakers.


Sean Olive knows all about the ABC/hr test.

He went as far as to say in the very summary at the very
beginning: "Significant differences in performance
expressed in terms of magnitude of the loudspeaker F
statistic Fl were found among the different categories of
listeners"
And devotes three pages (818-820), two graphs (figures 7
and 8) and two paragraphs (par. 3.11, "Performance among
different listening groups"; par. 3.12, "Occupation as a
factor in listener performance").
In the discussion he distinguishes unequivocally between
the listeners' Performance and Preference and stresses
that they did NOT go together (p. 820): "The
differences between trained and untrained listeners (
"untrained" meant some 80+ % of the listeners - like in
real life L.M.) are mostly related to differences in
performance." And "However the loudspeaker rank ordering
and the relative differences
in preference between them were quite similar for both
trained and untrained listeners"
Mr. Krueger without ifs, and ands, buts and clever-clever
comments with no relation to the matter at hand:


1) Did Olive test for difference between loudspeakers or
did he not?


He did a qualitative test, not a simple yes/no test.

2) Did he use ABX or did he not?


Clearly Olive didn't use ABX for these tests, and clearly
he shouldn't have.

3) Are you turning and twisting or just plain lying?


I'm being consistent with the accepted state of the art, and
saying the same thing here I said here years ago.

4) Is that what they taught you in your "good College"
that you think I should have learnt as well?


Yawn, Ludovic is trying to be snotty again.

5) Have you found yet that one,single published report of
ABX component comparison, any component whatsoever, with
a positive outcome. ( Published means submitted to an
editorial judgement in a journal or a mag. not private
web site..)?


Asked and answered.

Yawn, Ludovic is trying to be snotty again.


  #39   Report Post  
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


Arny Krueger wrote:
wrote in message
oups.com

[snip - quoted text repeated verbatim from message #38 above]

__________________________________________________ ____

Quotes from Krueger's former appearances in this thread:
"Speaker testing is a red herring in a discussion of
listening tests involving digital formats because it is a
completely different game. There's no controversy over the
idea that speakers sound different".

If you've got a problem with that, then deal with it as it
sits, Ludovic.

No Arny, I have no problem with this - I'm glad you agree. Speakers
do sound different.
"Since there's no controversy over the idea that speakers sound
different, the ABX test would be a poor choice"

If you've got a problem with that, then deal with it as it
sits, Ludovic.

Yes, I have a problem with that. There is no disagreement that speakers
do sound different. Therefore they constitute an ideal object for
demonstrating the validity of any proposed component testing method -
especially the one in question, which has as yet failed, repeat FAILED,
to show differences between any other audio component class to the
majority of panelists in any trial reported in any mag, anywhere. The
controversy is not about obvious facts - it is about the best way of
investigating them. You say it is ABX. Show it! Investigating means
answering detailed questions such as: what level of difference between
what speakers is perceivable by what class of people. And others like
them.
I notice you did not answer my question Nr. 1: DID OLIVE TEST FOR
DIFFERENCE BETWEEN LOUDSPEAKERS OR DID HE NOT? In capitals so that you
do not miss it this time.
As I showed, he did not think: no controversy - no investigation
necessary. He researched it double blind because ABX was "not
suitable". And you know what? The majority of his panelists could not
distinguish the loudspeakers from one another, even under plain
double-blind conditions. Just imagine the hash they would make of
ABXing.
But when asked a straightforward like-or-dislike question, the same
majority selected the better speakers. This is what Olive's
investigation made of that other objectivist mantra: "You can't tell
the difference - then you can't have a preference". Remember the loud
huzzahs about the Zipper/Singh failure to differentiate under ABX?
And please do not start dragging your red herrings down this trail.
Whether ABC/hr is useful in research or not I don't know and I don't
care. This is not a research forum and no one is asking research
questions.
You promote ABX as a "don't leave home without it" audiophile tool.
Stick to it. Do not reject the easy bits, like loudspeakers, just
because you're scared that what happened in Olive's test would happen
to you, and ABXing your panel would fail to tell the loudspeakers from
each other. Maybe even, Lord forbid, you yourself.
Arny, let me now say something in sorrow rather than anger. You are an
inventive guy, a cut above average; you're bright and articulate. Well,
you forgot more about electronics than I will ever know. I am told that
your ABX or its derivatives are used daily by researchers. This should
be plenty satisfying to you. You don't need to extend the ABX empire to
where it does not fit.
Believe me, your not-so-smart dodges, red herrings, answers with no
relevance to the subject at hand, or street-wise ripostes like "Asked
and answered" when there was no answer, ever, do not add to your
stature - on the contrary. When I argue I always hope, with one part of
my mind, to be persuaded by a clever argument - you succeed only in
irritating, and not me alone.
Ludovic Mirabel

  #40   Report Post  
Robert Morein
 
Posts: n/a
Default Since Quaaludeovic is so fond of Sean Olive


" wrote in message
ink.net...

[snip - quoted text repeated verbatim from messages #32 and #33 above]

Mikey, you have an inferior mind.

