  #1   Mark DeBellis
subjectivism vindicated, adopted by consumer reports

From the Sept. 2005 Consumer Reports:

"Which dishwasher is quieter: the Quiet Guard 7 or the Quiet Partner
III? Now shoppers can tell for themselves, at least at Sears stores.
The retailer is requiring that every dishwasher it sells ... bear a
sign indicating its noise level ... based on the average 'A-weighted'
decibels (dBA) measured during a dishwasher's run....

"Although the information can help buyers, it isn't ideal. Steve
Orfield, president of Orfield Laboratories ... says there are better
ways than dBA to judge a product's loudness.... Better, in his
opinion, are units of measure known as sones....

"But the best measure of appliance noise and sound quality, Orfield
says, is a human evaluation that goes beyond dBA or sones. For
Consumer Reports noise Ratings, panelists listen to models we've rated
in the past, then compare their noise levels to those of new models as
they run through their cycle. We use similar methods to judge the
noise of refrigerators and air conditioners."
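
[Aside, for context: dBA is a logarithmic measure of sound pressure with a frequency weighting applied, while the sone is a loudness unit scaled so that a doubling in sones corresponds roughly to a doubling in perceived loudness. Below is a rough sketch of the standard loudness-level-to-sone rule of thumb, assuming a level of at least 40 phons; this is generic psychoacoustics, not Orfield's or Consumer Reports' actual procedure.]

# Rough loudness-level (phon) to sone conversion, per the common rule of
# thumb: above 40 phons, loudness in sones doubles for every 10-phon
# increase (Stevens' rule).  Only valid at or above 40 phons here.
def phons_to_sones(phons: float) -> float:
    if phons < 40:
        raise ValueError("this approximation only holds at or above 40 phons")
    return 2 ** ((phons - 40) / 10)

for level in (40, 50, 60, 70):  # hypothetical appliance loudness levels
    print(f"{level} phons ~= {phons_to_sones(level):.1f} sones")
# 40 -> 1.0, 50 -> 2.0, 60 -> 4.0, 70 -> 8.0 sones: equal dB-like steps are
# not equal steps in perceived loudness, which is the argument for sones.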

Mark


  #2   Gary Eickmeier

Mark DeBellis wrote:

[Consumer Reports excerpt snipped]


I think the key questions are whether they listen sighted or blind, and
whether they listen in short snippets or through the full cycle.

GETTING EVEN

Gary Eickmeier

  #3

Mark DeBellis wrote:
[Consumer Reports excerpt snipped]


So what? They listen to loudspeakers, too.

When things sound different, listening is always better than just
measuring. When they sound the same, however...

bob

  #4

Hi Mark,

Fascinating. Although I think that the "objectivists" will respond as
follows:

- Given two dishwashers which measure identically, within 0.1 dB from
20 Hz to 20 kHz, it is predicted that a blind test will not allow the
listener to distinguish them by sound alone.
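
[A sketch of how such a blind comparison is commonly scored, assuming a simple forced-choice (ABX-style) protocol with a fixed number of trials; the trial counts below are made up for illustration.]

from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided binomial probability of getting at least `correct` right
    out of `trials` purely by guessing (chance = 1/2 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical outcomes for a 16-trial blind comparison:
print(p_value(12, 16))  # ~0.038: unlikely to be guessing, i.e. an audible difference
print(p_value(9, 16))   # ~0.40: consistent with guessing, supporting the prediction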

Mike



Mark DeBellis wrote:
[Consumer Reports excerpt snipped]


  #5   dodecatheon

But surely the test is still double-blind?

Mark DeBellis wrote:
[Consumer Reports excerpt snipped]




  #6

Not enough information to support your conclusion. One would think the
particular pattern of sounds, their frequency spectrum, the regularity of
percussion-like sounds, etc., would be just as important to each person as
pure SPL. Were the tests blind? If done on a research basis, they could
probably soon determine which particular sound events in the cycle went
with which "sound" score, and continued testing could be abridged. Now
what I really want to know is whether the wire in the power cord made a
difference, or whether various "stones" in the right place improved
scores. In audio, the category most like this is loudspeakers.
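
[A sketch of the kind of analysis described above: correlating per-segment measurements from a wash cycle against panel scores to see how much of the rating is driven by level alone. All numbers are hypothetical.]

import numpy as np

# Hypothetical data: six segments of one wash cycle (fill, wash, drain, ...),
# each with a measured A-weighted level and an averaged panel annoyance
# score (1 = unobtrusive, 10 = very annoying).
segment_dba  = np.array([48.0, 55.0, 61.0, 52.0, 58.0, 45.0])
panel_scores = np.array([2.0,  4.5,  8.0,  3.0,  7.5,  1.5])

# Energy-average the segment levels, as a cycle-average label would
# (dB values must be averaged on a power basis, not arithmetically).
avg_dba = 10 * np.log10(np.mean(10 ** (segment_dba / 10)))
print(f"cycle average: {avg_dba:.1f} dBA")

# How much of the panel score is explained by level alone?
r = np.corrcoef(segment_dba, panel_scores)[0, 1]
print(f"level vs. score correlation: r = {r:.2f}")
# A high r would say dBA alone predicts the rating; a low r would support
# the point that spectrum and sound character matter beyond pure SPL.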


[Mark DeBellis's Consumer Reports excerpt snipped]


  #7   Harry Lavo

"Mark DeBellis" wrote in message
...
[Consumer Reports excerpt snipped]



Son of a gun! And not even double-blind! How are those poor people ever to
make a choice?



  #9   Steven Sullivan

http://www.orfieldlabs.com/Articles/...ions_May94.pdf

from Orfield's publication in Sound & Communications,
"Sound Quality Part VI: Associative Responses", May 1994

While there are many academic and engineering experts who have become
quite well grounded in the concepts of psychoacoustics which underlie much
of sound quality and specifically Zwicker's work (see Sound &
Communications, May 1992, Psychoacoustics: Facts and Models), there is
still great difficulty in explaining, even to many of those experts, the
fact that acoustic associative variables may have a far greater impact on
acoustic responses than the absolute value of the sound itself.

///


In evaluating an acoustical product, be it a consumer product or an audio
product, there are some simple methods of assessing its performance. The
two most common are direct acoustical measurement and informal listening
experiments. The measurements may demonstrate an analytical attribute,
such as frequency response or decibel level; listening will suggest an
initial response on the part of a consumer. The problem with these methods
is that they fail to assess one particular variable in the consumer's mind
which may govern the response to either of these sets of information, and
that is the user association set.

With regard to the above variables, the consumer may have been trained
via advertising to expect 'flat frequency response' on the measurement
continuum. On the listening continuum, the user may expect that more
expensive audio components have 'more bass'. Innumerable components have
been sold claiming flat response and extended bass response. Researchers
in the audio field know that the listener's response to both these issues
often suggests that they have been biased, by marketing efforts, to prefer
the purchase of audio components which claim a certain specification and
sound quite bass [sic]. A large number of these products do not reproduce
sound accurately and distort the audio signal by overdriving the bass
response and providing poor mid-frequency response or masking
mid-frequency clarity. The specification claims give comfort to the
buyer, and the bass response adds a level of satisfaction (vibration) to
the experience.

By walking into the audio retailer with these two associations in mind,
the user feels confident that he has reasonable criteria for system
selection, although the criteria have no correlation with high quality
audio. The consumer who purchases based on this view may also conclude
that he is quite pleased with the results, regardless of what many of us
would call a low quality audio system. There are a number of major audio
manufacturers who play very heavily on this associative marketing
knowledge and sell very poor products quite successfully.

In the above example, we must conclude that a decision to purchase an
acoustic product has been made based on product associative response
which, in and of itself, is far more influential to the consumer than the
product performance. Associative response is particularly influential in
the acoustics field because of a number of facts regarding this market.
First, the consumer is not technically knowledgeable and therefore has
little confidence in his judgements in the presence of 'audiophiles'.
Second, his resulting criteria are often neither relevant to sound quality
nor very high. Any sound system with modest performance will generally
satisfy the consumer, and brand name distinctions can often succeed where
sound quality has not. It must be remembered that the object of marketing
is not to sell good products but to satisfy the consumer. This is often
more easily accomplished by marketing than by engineering."

[Orfield goes on to discuss semantic differential and forced-choice
protocols for marketing research, and for discrimination of difference,
respectively. Orfield does not make this explicit, but if one were to
control for bias, one would do the tests randomized and blind, as per e.g.

Sensory Evaluation Basics
by Harry T. Lawless
http://www.nysaes.cornell.edu/fst/fa...oryprimer.html ]

"Most questions about perception of flavors or products will fall into
three categories. First, people want to know, "Are these two products
different?" This calls for the overall difference test, also referred to
as a discrimination test. These tests usually take the form of a
forced-choice procedure, where participants are asked to select one choice
from among a set of products in which only one is physically different
from some standard sample. The second common question is, "How are they
different?" In other words, the goal is to specify, in perceptual terms,
how products differ, in what qualities have they changed and to what
extent. This set of procedures is referred to as descriptive analysis. In
its most common form, a group of trained individuals examines the products
and provides numerical ratings for the perceived intensity of each
attribute.

"Since these methods involve a controlled stimulus and response
measurement scenario using human participants, sensory evaluation borrows
some practices from the behavioral sciences. In order to minimize biases
that may affect the validity or accuracy of a test, blind coding and
control of presentation order are critical. "Blind" coding is usually
achieved by labeling each sample with a meaningless name such as a
randomly chosen three digit number. Participants are provided with only
enough information about the sample to insure that it is viewed in an
appropriate frame of reference or category."
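
[A minimal sketch of the blind-coding and order-randomization step Lawless describes, as it might be set up for a listening panel; the sample and panelist names are placeholders.]

import random

samples = ["dishwasher_A", "dishwasher_B", "reference"]  # placeholder names

# Assign each sample a meaningless three-digit blind code, as Lawless describes.
codes = random.sample(range(100, 1000), k=len(samples))
blind_labels = dict(zip(samples, codes))
print(blind_labels)  # e.g. {'dishwasher_A': 472, 'dishwasher_B': 815, 'reference': 203}

# Give every panelist an independently randomized presentation order so
# that order effects average out across the panel.
panelists = ["P01", "P02", "P03", "P04"]
for p in panelists:
    order = random.sample(list(blind_labels.values()), k=len(blind_labels))
    print(p, order)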




--

-S
  #10   Harry Lavo

"Steven Sullivan" wrote in message
...
[Orfield excerpt and Steven Sullivan's comments snipped; see post #9]


Orfield's assertions are just that ... assertions. To the degree they are
true (and I believe they are), they are true of the mass market; witness
the "boombox subwoofer" of the one-box HTV systems.

But that is a far cry from a group of experienced, discriminating
audiophiles listening to good quality audio gear. Can we be fooled? Of
course. But can we also discriminate small but important differences in
sound? For the experienced audiophile, the answer is most certainly
"often".

This excerpt says absolutely nothing about the value of short-snippet
comparative tests, such as ABX.



[Lawless excerpt on sensory evaluation snipped; see post #9]


Very few people here or elsewhere on the net have argued against the value
of blind testing per se. Many of us have argued that it is impractical,
for several reasons, as a tool for selecting home audio equipment.
Moreover, many of us have argued that the pro-ABXers' ridicule of all
sighted testing is overblown, and that sighted testing has its uses along
with its dangers.

As I mentioned before, blind testing was always used in the development
stages of food product research and, as stated above, for discrimination
and descriptive evaluation of product characteristics: sweetness level,
saltiness level, textural differences, etc. And this is how most companies
use ABX testing in the development process of components.

However, as I also stated, these techniques were not used for final
evaluation among end users; instead, monadic testing among samples of
200-300 people was used. Similar descriptive scales were used on a
monadic basis, along with overall levels of satisfaction, evaluated
against either a reference product or reference standards established
through prior research. This, to my understanding, is how Harman
International now does its speaker testing, and it is a superior method in
many respects, especially for final evaluation.

Moreover, such testing can also be used to measure purely subjective
effects, such as the influence of container shape and packaging on product
ratings, as I have also mentioned before. It is useful for measuring real
differences (with externals held as blind as possible) and for measuring
imagined differences (with internals held constant). It is the kind of
testing needed to find out whether perceived differences are real or not,
and only once subtle differences have been confirmed can one determine
whether ABX-type testing can detect the same thing in a "more efficient"
manner, or miss it altogether.
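
[A sketch of how monadic ratings of the kind described above might be compared against a reference product: each respondent rates only one product, and the two groups' mean ratings are then tested for a difference. The numbers are made up and the group sizes are smaller than the 200-300 mentioned.]

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical overall-satisfaction ratings on a 1-10 scale: one group of
# respondents rates the new product, a separate group rates the reference
# (monadic design: nobody evaluates both).
new_product = rng.normal(loc=7.2, scale=1.4, size=60).clip(1, 10)
reference   = rng.normal(loc=6.8, scale=1.4, size=60).clip(1, 10)

# Welch's t-test on the group means: is the new product rated differently
# from the reference, beyond respondent-to-respondent noise?
res = stats.ttest_ind(new_product, reference, equal_var=False)
print(f"mean new = {new_product.mean():.2f}, mean ref = {reference.mean():.2f}")
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
# A small p-value suggests a real difference in satisfaction; a large one
# means the monadic panel cannot distinguish the products on this scale.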
