#9
Steven Sullivan

DVD audio vs. SACD

wrote:
Harry Lavo wrote:


In this test. That's all you can say for sure. However, it is not an
uncommon phenomenon in ABX testing. Sean Olive reportedly has to screen out
the majority of potential testers because they cannot discriminate when he
starts training for his ABX tests, even when testing for known differences
in sound.


Sean Olive doesn't do ABX tests. He doesn't "screen out" potential
testers, either; the article Sully referred to used a couple of hundred
listeners. What he has done is assemble an expert listening panel,
specially trained to identify specific differences in frequency
response. That's a tough task, and not everyone can do it, even with
training. But it has nothing to do with either ABX or preference
testing.


This is the second time in a week you have misrepresented Mr. Olive's
work, Harry. I suggest you cease referring to it until you learn
something about it.


In the work reported in the 2003 paper, Olive 'screened out' one
listener -- part of the group that underwent training at Harman to
become 'expert' listeners -- because his results were perfectly
'wrong' -- that is, they showed a perfect *negative* correlation
between loudspeaker preferences in 4-way and 3-way tests. As it turned
out, he suffered from broad-band hearing loss in one ear. All the
other listeners were audiometrically normal.


The various listeners, btw, consisted of audio retailers (n=250),
university students enrolled in engineering or music/recording
industry studies (14), field marketing and salespeople for Harman
(21), professional audio reviewers for popular audio and HT magazines
(6), and finally a set of Harman-trained 'expert' listeners (12),
divided into 36 groups ranging from 3 to 23 listeners per group (each
group, AFAICT, was 'monotypic' - only one 'type' of listener in each
group). Retailers, reviewers, and trained listeners took the 4-way
speaker comparison test; the 3-way comparison was performed by
retailers, trained listeners, marketers, and students.


Amusingly, when the 'listener performance' metric -- a measure of the
listener's ability to discriminate between loudspeakers, combined with
the consistency of their ratings -- was calculated for the different
listener occupations participating in the four-way loudspeaker test
(retailers, reviewers, and trained listeners), audio magazine
reviewers were found to have performed the *worst* on average (that is,
least discriminating and least reliable). In the three-way
loudspeaker tests (retailers, marketing people, students, trained
listeners), students tended to perform worst. In both tests trained
listeners performed best.
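To picture what a metric like that measures, here is a minimal sketch in Python. This is purely illustrative: Olive's 2003 paper defines its own statistic, and this F-ratio-style version (between-speaker variance of a listener's mean ratings over their average within-speaker variance) is just one way to combine discrimination with consistency. The function name and the toy ratings are my own invention, not from the paper.

```python
# Hypothetical "listener performance" score: an F-like ratio of how much a
# listener's ratings spread ACROSS speakers (discrimination) to how much
# they wobble on REPEATS of the same speaker (inconsistency).
from statistics import mean, pvariance

def listener_performance(ratings):
    """ratings: dict mapping speaker -> list of that listener's repeated
    ratings of it. Higher score = more discriminating and more consistent."""
    means = [mean(r) for r in ratings.values()]
    between = pvariance(means)                              # discrimination
    within = mean(pvariance(r) for r in ratings.values())   # inconsistency
    return between / within if within else float('inf')

# A listener who separates the speakers and repeats their own ratings
# scores far higher than one who rates almost at random.
sharp = {"A": [8, 8, 9], "B": [3, 3, 2], "C": [6, 6, 6]}
noisy = {"A": [8, 2, 5], "B": [7, 1, 4], "C": [2, 9, 6]}
```

On these toy inputs the 'sharp' listener scores well above 1 and the 'noisy' one well below it, which is the sense in which the reviewers in the four-way test came out "least discriminating and least reliable".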


I quote: "The reviewers' performance is something of a surprise given
that they are all paid to audition and review products for various
audiophile magazines. In terms of listening performance, they are
about equal to the marketing and sales people, who are well below the
performance of audio retailers and trained listeners."


That said, the other take-home message was that even with the
difference in performance, the rank order of the speakers by
preference was similar across all 36 listening groups -- the
various groups of listeners tended to converge on the same ideas of
'best' and 'worst' sound when they didn't know the brand and
appearance of the speaker. And the 'best' (most preferred)
loudspeaker had the smoothest, flattest, and most extended frequency
response, maintained uniformly off axis, in anechoic acoustic
measurements. This speaker had received a 'class A' rating for three
years running in one audiophile magazine. The least-preferred
loudspeaker was an electrostatic hybrid, and it also measured the
worst. This speaker had *also* received a class A rating for three
years running, and better still had been declared 'product of the
year' by the same audiophile mag (I wonder which?).


Another quote from Olive 2003, from the conclusion of the results
section: "It is the author's experience that most of the differences
in opinion about the sound quality of audio product(s) in our industry
are confounded by the influence of nuisance factors that have nothing
to do with the product itself. These include differences in listening
rooms, loudspeaker positions, and personal prejudices (such as price,
brand, and reputation) known to strongly influence a person's
judgement of sound quality (Toole & Olive, 1994). This study has only
reinforced this view. The remarkable consensus in loudspeaker
preference among these 268 listeners was only possible because the
judgements were all made under controlled double-blind listening
conditions."