  #1   Report Post  
watch king
 
Posts: n/a
Default Comments about Blind Testing

First I can say that I don't have much of a bias for or against any
type of speaker wire (or speaker cable system with terminations) or
any low voltage interconnect cable, except that I'd like the former to deliver as much current as possible without changing impedance, and the latter to be well shielded and matched to the components being connected in terms of the capacitive and inductive characteristics the components' designers considered appropriate when the preamp, tuner or other unit was designed and when its specifications
were determined. I have been given dozens of different kinds of
speaker wire and cable systems to use in my CES booth displays and to
be used when I did speaker selling demos in stores. Twenty years of research testing loudspeakers for sellers and buyers showed me that
the distortions in most speakers are so huge that it becomes nearly
impossible to hear any unique characteristics of speaker wire except
its ability to deliver current to cone speakers and voltage to
electrostatics. The very few loudspeakers that might be able to
demonstrate whether there was any audible improvement using one kind
or brand of wire (or even a cable system including unique
terminations), like the Quad 63, have other electrical components such as transformers in their systems, making it seem unlikely that subtle differences in wire can be significant enough to hear.

I believe this to be fact because of testing that I've done when
designing loudspeaker systems for audio companies, testing done as
the loudspeaker system development Imagineer for Disney during the
building of EPCOT and Tokyo Disneyland, and testing I've done for the
AES paper I delivered March 4, 1982 in Montreux and later that April
to the Los Angeles chapter that showed that voice coils in
loudspeakers undergo so many changes of "sound reproduction
capability", that hearing the difference created by different wires
of the same gauge and current carrying capacity would be like finding
the molecules of a specific raindrop after it had fallen into the
ocean. When I gave my paper on loudspeaker compression to the AES
chapter in Los Angeles in April 1982, a few of the engineers in attendance calculated that just the non-constant dynamic impedance and compression changes in loudspeakers that my testing showed would likely show up as rapid voice coil temperature swings in excess of 100 degrees centigrade.
Almost immediately the chief engineer of Cetec-Gauss stood up to
mention that while he didn't think that it had really been important
to mention his company's testing in this arena up to that point in
time, he could verify that fast changes in voice coil temperatures of
well in excess of 100 degrees centigrade had been measured in many of
the tests Cetec-Gauss had done on their own products and those of
their competitors. These kinds of massive forces obliterate "subtle"
differences.

The loudspeakers being discussed at that moment were large voice coil
types with designs that made cooling one of the most important
considerations (although I've tested hundreds of different component
loudspeakers for professional and home hifi use). The loudspeakers
used in home hifi systems do not cool nearly as well as pro
loudspeakers, and the negative effects on sonic characteristics due
to temperature change are much greater than those demonstrated by
professional loudspeakers. And that is just one of the factors
involved with loudspeaker compression. These changes in sonic
characteristics make it nearly impossible to do any real research on
speaker wires that could be relevant to audiophile listening because
the test to make such comparison "fair" would be literally impossible
to design. By the time listeners could focus on the sound playback of one of the test wire products, the second product in the test would
already be unfairly tested because the test loudspeaker system
(acoustic microscope equivalent) will not likely "sound" the same as
it did 30 seconds ago. This means that the test passage would need to
be made longer and restarted after a specific cooling off time and by
then the human acoustic memory is gone. This could be why so many
anecdotal testimonials involve hearing "things" after the time was
taken to disconnect one set of speaker wires and connect a second
set. The speakers probably cooled down and sounded better after the
"wire changing period". Maybe comparitive testing of large voltage
wire should be done using thin film or electrostatic headphones
because these devices are actually better at exposing tiny
differences in audio characteristics. As an aside I might suggest
that the way some speaker cable systems are produced they seem to
want to be part of the crossover for the loudspeaker or maybe adjust
the frequency response. I've always wanted speaker wire to be able to
deliver gobs of current to some of the loudspeakers I've designed or
listened to, except with electrostatics when I want the speaker wire
to conduct the correct voltage from the amp to the speaker's input
irrespective of the changes in the input impedance of the speaker.

If there really was a wire that was substantially better sonically
then "super consumers" like Disney would do the testing (at whatever
cost) so that they could deliver the AES papers that would make
greater prestige for Disney. Disney has bought numerous "home hifi"
products and tested them (and sometimes even used them) because they
have an incredible investment in not only having the best of
"whatever" in their facilities, but in knowing more than any other
audio consumer in the world. Matsushita may be an incredible audio
manufacturer and their leaf tweeter with extended response to 80kHz
is one of the few devices that can be used to test whether human
hearing really processes 20+kHz information when our brain decides on
how much "acoustic reality" is being reproduced. But even Matsu****a
has recognized that no company is as concerned with knowing what is
what in audio as much as Disney. And cost is no object for Disney, so
whether speaker wire was $.50 a foot or $5.00 a foot wouldn't matter
to a Disney engineer because (through AES) Disney promotes their use
of the audio equipment and materials proven to be sonically superior
in their theme parks. The test labs at WED Imagineering are better
outfitted by Bruel and Kjaer than almost any audio manufacturer in
the world. As "super-consumers", Disney is unrivalled and there isn't
really much difference in speaker wire sonically or there would be
Disney engineers giving papers on their "wire" findings at AES
conventions. As it is, Disney has strict rules about having enough
copper to carry the current or voltage to the speakers designed into
any facility or show in their theme parks, but even in the most
critical applications there is no special type of wire specified.

It's also a sure bet that the manufacturers of wire or wiring
harnesses for speakers don't really want there to ever be any one
special kind of wire or wiring system that proves to be superior to
others. In the audiophile marketplace, confusion about which wire is
best increases sales. Otherwise it wouldn't be possible to sell the
same consumers 3 or 4 different sets of wire or interconnect cable at
enormous cost. And the different types of manufacturers all have
their own reasons for not wanting any one type of wire to be
determined "best". These reasons vary even if sometimes when
confronted by an insecure retailer, a manufacturer may SAY their wire
is superior or the most cost effective sonically, but like I say,
"Confusion increases sales" so doing a real comparison test is the
opposite of what these manufacturers want. Higher sales keep
companies alive and that is their prime directive, so confusion in
the marketplace best fulfills that directive. Look at all the
possibilities.

The wire seller who truly believes his product is superior won't want
to bother wasting time, money and effort to prove something he
already believes. The wire seller who knows his product is NOT
superior won't want to be shown up. The wire seller who doesn't care
if his product is superior won't want to bother with something as
unimportant as testing, especially because of the costs involved and
the possibility he MAY be shown up. So it is against the interests of
wire sellers to participate in any tests which would clearly
determine which speaker wire was superior or at least which was best
for which speaker or for the money. Even if one crusading wire maker
were to support a true test of wire it would only allow every other
manufacturer to look at the results and then make a copy of the
product. And that doesn't consider the great unknown that could be
very deadly to any and all of the wire companies. It can happen in
the most bizarre ways but the results can be quite staggering.

In 1978 I was the professional products marketing manager at ESS
(Electrostatic Sound Systems) at a time when they sold professional
versions of the Heil tweeter, some professional amplifiers and had
the rights to import some European professional products. The
company took on a new ad agency for consumer products and this agency
did a month of field research in 20 retail outlets to see what they
had to work with for ESS' new ad/marketing campaign. The ad agency
found that in head to head sales demos using ESS speakers against any
other speaker brand of comparable price, store salespeople were able
to sell the sound superiority of ESS loudspeakers against any other
brand. With the various retailers around the country carrying various
mixes of loudspeaker lines, ESS seemed to be able to sell their
products against any other brand based on sound quality during a
demo. Store salesmen usually take the path of least resistance and
they would have sold more ESS loudspeakers except for one thing.
Sometimes when store salespeople suggested to potential customers
that they listen to speakers, the customers didn't want to listen to
ESS loudspeakers. Buyers would come in predisposed to listen to a JBL
speaker vs maybe an AR speaker, or a Bose speaker vs an Infinity
speaker. If the salespeople suggested listening to a JBL speaker vs an
ESS speaker, the potential customer would often change the second
contender to one of the other brands. ESS was confident they could
sell their product against any other brand but they needed to give
the market a reason to even listen to them, and their technology
didn't seem to be motivating consumers enough.

So at the big meetings with the ad agency about what sort of ad
campaign would convince consumers to at least listen to ESS
loudspeakers, it was determined that just using repetative ads
wouldn't do it. ESS had already had a high frequency of ads. Since
consumers said they gave JBL a listen due to their belief that pros
used JBL so perhaps JBL made the consumer product they wanted.
Consumers also said that as the inventor of the acoustic suspension
or direct reflecting or "whatever" loudspeaker, other companies had
some credibility with consumers. So the ad agency proposed that ESS
go out and do a sextuple blind listening test nationwide with
thousands of consumers under tightly controlled conditions, to get
documented evidence that ESS loudspeakers sounded better in various
price ranges compared to 9 other brands of loudspeakers. This was to
be the "credibility hook" ESS supposedly needed to have consumers
give them a listen.

The Physics, Psychology/psychoacoustics, Audiology and Music
departments at 4 major universities in California, Washington state,
Wisconsin and Georgia would check the test controls and an outside
accounting firm was brought in to tabulate and document the results.
Of course ESS wanted to keep as much of the information gained as
possible to themselves for future product development. But the gamble
was that if their products were sonically superior, they could create
an ad campaign that would convince more consumers to give their
speakers a listen in retail stores. They gave me the job of actually
running the test and managing all the different groups involved. I
was chosen because the test was supposed to use the highest quality
source material of classical, jazz, rock and pop music and some
natural sounds. This meant getting access to original master tapes
used to cut master pressing disks. I had the expertise needed to work
with the studio people to make the part of the test program material
using these tapes, I could manage a touring "road show" and I knew
how to use quality test equipment to replicate the test conditions in
each location.

The musical passages had to be long enough to have some kind of repetitive or sustained characteristics that would allow listeners to
hear differences in loudspeakers. The program was 40 minutes long and
covered every kind of musical material (although there was quite a
bit of well recorded vocals in nearly half of the recordings). The
speakers were all set up behind acoustically transparent screens that
were visually opaque. About half of the listeners were in the near
field. Almost every seat in the listening area was in a "sweet" spot.
The colleges did a lot, in newspapers and on local radio, to promote the participation of their students, staff and anyone who was interested. There were four different programs and there were four
different price levels of speakers, although the programs were
rotated between the different price levels. Consumers could come back
to take the "test" up to three times if they so desired. Prizes were
given out via drawings and the various departments at the
universities were allowed to share some of the data collected at
their facility. The playback system was never driven to clipping; the level of the balancing pre-amps was set for each speaker based on its average output using pink noise and flat, A, B and C weighted measurements in both the near and far fields. The locations of the speakers on their stands were varied, and with 9 speakers of which only 6 could be tested at a time, even the comparison match-ups were
constantly varied. There were tens of thousands of comparisons made
by thousands of listeners. Quite a few audio business theories and
guesses were tested in these circumstances.
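
To make the level-balancing step concrete, here is a minimal sketch of the arithmetic involved, with made-up measured levels rather than the actual 1978 data (the real procedure also averaged flat, A, B and C weighted readings in the near and far fields):

```python
# Minimal sketch of the level-balancing step (hypothetical measured levels,
# not the actual test data).
measured_db = {
    "speaker_1": 87.4,   # average pink-noise level at the listening area, dB SPL (assumed)
    "speaker_2": 84.9,
    "speaker_3": 86.1,
}
target_db = 85.0         # common playback level chosen for the comparisons (assumed)

def trim_db(measured: float, target: float) -> float:
    """Gain trim (dB) the balancing pre-amp must apply to reach the target."""
    return target - measured

def db_to_linear(db: float) -> float:
    """Convert a dB gain into a linear voltage gain factor."""
    return 10 ** (db / 20.0)

for name, level in measured_db.items():
    t = trim_db(level, target_db)
    print(f"{name}: trim {t:+.1f} dB (linear gain {db_to_linear(t):.3f})")
```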

The program material did make a difference in the test results. Two
different loudspeakers that were compared with different program
material might show different audience preferences depending on
whether the program material was pop, jazz or classical, so it seemed
that even the best loudspeakers were not always "best" for all kinds
of music. Of course there were some good loudspeakers that always
seemed to do well in head to head comparisons using any program
material and others that were just bad and were disliked by most
listeners no matter what the program material. Sometimes the room's
acoustics influenced the test results so that if the heat in the room
built up and the windows' opening positions were changed, the test
results could change. ESS was able to show in random blind testing
that their products were sonically superior "enough" (statistically
valid results 5-8x greater than the margin of error) so that the test
results were about the equivalent of the in-store research done by
the ad agency. An interesting device was the "Comparison
Identifier", a large box in the front of the room which displayed a
number for "one" speaker in comparison vs the number for the "other"
speaker being tested. The box would produce a number between 1 and 9
but the speakers could be labeled with any number at any time even
during one listening test session. The test was actually quite well
designed so that no speaker had an advantage. A few of the really
poor sounding speakers were weeded out after the first hundred or so
testing/listening sessions and some cross price-level comparisons
could then be made. The marketing campaign (ESS Wins on Campus) was
introduced before the tour of test locations was completed and so in
Wisconsin during Homecoming week and at Georgia Tech the crowds of
listeners wanting to "take the test" were enormous.
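
As a rough illustration of that statistical criterion, here is a sketch of the "result versus margin of error" arithmetic for a single two-choice pairing; the counts are invented for the example, not taken from the ESS data:

```python
# Rough sketch of the "result vs. margin of error" arithmetic, treating one
# speaker pairing as a two-choice preference count (hypothetical numbers).
import math

n_comparisons = 600      # responses collected for one pairing (assumed)
prefers_a = 420          # votes for speaker A (assumed)

p_hat = prefers_a / n_comparisons                               # observed preference share
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_comparisons)  # ~95% margin of error
effect = p_hat - 0.5                                            # distance from "no preference"

print(f"preference for A: {p_hat:.1%} +/- {margin:.1%}")
print(f"effect is {effect / margin:.1f} times the margin of error")
```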

The information not used for marketing purposes involved how 2
non-ESS branded loudspeakers did compared to each other. But one very
startling fact came to light and it forced the testing to stop and
the ad campaign to change. The most preferred loudspeaker for all
kinds of music was the same at every college. It was the same for any
test and any type of program material. Unfortunately this obviously
"most preferred" loudspeaker was not the most expensive ESS
loudspeaker. It was one of the least expensive ESS loudspeakers. The
design criteria for this model of loudspeaker were actually quite elegant, and as a model it shows how a device could be purposely made
so it might "beat a test". By the time I had watched listeners in
thousands of tests I could have designed a loudspeaker that would
have always won the test by a statistically significant margin. ESS
wasn't happy that their most expensive loudspeaker was only the most preferred by listeners when compared to other companies' expensive
loudspeakers. It seemed that when people made choices without visual
cues as to which speaker "should" be best, their ears guided them in
the direction of the most accurate sound, and that wasn't always the
most expensive or the biggest loudspeaker.

In addition this "up-the-line superiority" wasn't the same for all
companies. Whether by design or accident, some companies produced
speakers that sounded better as the price increased, but other
companies had results like ESS where the tests determined that some
of their less expensive loudspeakers were actually their most
accurate. It turned out that the corporate philosophy of ESS, which was dictated in part by the requirement for all mainline models to use the Heil tweeter, worked against them in some ways.

There was also the fact that if the test could be duplicated because
it was controlled, then any company could eventually figure out the 4 or 5 key factors in listeners' sonic decisions and design a
loudspeaker that would "beat the test". This was an obvious outcome
because there really could only be one set of major sonic priorities
that people use to decide which loudspeaker sounds more accurate than
another. Perhaps the priorities would be slightly different, so my 1, 2, 3, 4 and 5 sonic priorities are your 2, 5, 1, 6 and 3, but
generally there are only so many criteria that determine "realistic
sound". Our brains and ears have evolved over a few millions of years
making life and death decisions based on what our ancestors heard. So
our hearing developed in a certain way prioritizing certain acoustic
criteria. What was also obvious was that no one speaker company in 1978 made speaker systems that incorporated as many of these criteria as possible in their designs. There were also limitations imposed
by the fact that all the tested loudspeakers were bookshelf models
because that is what dominated the market at the time. Strangely
though, larger floor standing cabinets have many, many sonic factors
going against them and so it is less expensive to make a more
realistic sounding speaker if it is a small or bookshelf model.

There is also the "monkey-wrench" factor that dictates that people will be more disposed to spend more money if the loudspeaker they are auditioning looks bigger and more imposing, because that loudspeaker seems visually to be worth more money and our eyes are telling our brain that the larger loudspeaker should sound better than smaller loudspeakers. In point of fact, the larger loudspeaker will likely
sound less accurate than if the same money was put into a smaller
loudspeaker. But the fact that this listening/test research could
have been used against its sponsor and the loudspeaker industry as a
whole points up how a simple sextuple blind test could become so
dangerous for the status quo in loudspeakers in general.

This should all give people pause when discussing comparison testing.
Is it possible to actually design a test so that the second item
being listened to is not immediately disadvantaged (or advantaged) by its position in the test? Is there any reason for the
manufacturers to support such a test or even to acknowledge the
results as being valid for reasons of their own? Is it possible in
certain circumstances to even hear sonic differences without
resorting to a basic change in venue (like the need to use
headphones)? In fact, the loudspeaker and the source material are
acknowledged to be the links of the chain that have the most
problems, so if there is no effort to actually define how to reduce
the magnitude of distortions in these areas, how can any other portion
of the system even be tested? As I said in "the emperor's clothes"
thread, there have been less than perhaps 10 loudspeaker systems ever
made that can realistically reproduce the voice, the piano and
natural sounds "accurately". If there has been so little effort to
make realistic sounding loudspeakers and program material, what does
it matter if the other parts of systems are .1% improved or not?

Finally I have another anecdote that actually applies to me and what
I want out of a listening test for my audiophile system. In the 80s
when the compact disc was getting a firm foothold in audio, AES was
concerned enough about what the 18kHz brick wall filter would sound
like to try to develop an international listening test to see what
kind of sonic impact these filters could have. They developed a test using playback systems with plenty of output up to 50 kHz, the kind of response the Panasonic leaf tweeter could provide, and then, using playback material recorded at a sampling rate well past 80 kHz, they tested AES members to see how easily they could detect the insertion of filters of various types at various frequencies while music or real life sounds were being played.
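
A minimal sketch of that filter-insertion idea, not the actual AES rig: a steep low-pass filter that can be switched in and out of a high-sample-rate signal. The cutoff, filter order and test signal below are assumptions for illustration.

```python
# Minimal sketch of toggling a steep low-pass filter in and out of a
# high-sample-rate signal (illustrative; not the actual AES test apparatus).
import numpy as np
from scipy import signal

fs = 192_000            # high sample rate so a 23 kHz cutoff is meaningful (assumed)
cutoff_hz = 23_000      # one of the cutoff frequencies discussed above
order = 12              # fairly steep "brick wall"-style roll-off (assumed)

# Example source: 2 seconds of wideband noise standing in for the music.
rng = np.random.default_rng(0)
audio = rng.standard_normal(2 * fs)

sos = signal.butter(order, cutoff_hz, btype="low", fs=fs, output="sos")

def with_filter(x: np.ndarray, inserted: bool) -> np.ndarray:
    """Return the signal with the low-pass filter inserted or bypassed."""
    return signal.sosfilt(sos, x) if inserted else x

filtered = with_filter(audio, inserted=True)
bypassed = with_filter(audio, inserted=False)

# Crude check of how much energy the filter removes from this wideband test signal.
removed_db = 10 * np.log10(np.mean((bypassed - filtered) ** 2) / np.mean(bypassed ** 2))
print(f"energy removed by the {cutoff_hz/1000:.0f} kHz filter: {removed_db:.1f} dB relative to the original")
```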

The idea was to push a button when you heard a filter inserted and
release it when the filter was removed. At 3 kHz everything sounded like it was playing over a telephone when the filter was inserted, so that was easy. At 10 kHz almost everybody noticed filter insertion. At 15 and 18 kHz there were still many, many engineers who noticed the filters. But at 23 kHz I, for one, was only able to detect a filter about 20% of the time, and at 25 kHz I didn't hear any differences. But
there was one young European recording engineer who kept getting it
right all the time well past 20KHz. He was able to focus in on the
hiss from the condenser microphone and pre-amp, plus the noise from
the mixing console and the dither in the recording while excluding
the other sounds in other spectra and he really could hear when these
very high frequency filters were inserted and removed. That makes a
total of one guy. I am absolutely certain that I would not want that
one guy to ever make a decision (for me) in a test about any audio
component, because I cannot hear what he hears, so I don't want the decision about what is "best" to be made on the basis of any criteria except the ones I can hear.

I want a test to determine anything in audio to be so repeatable and
available that I can take the test myself because I don't want to
base what will be best for me on what someone else can hear or
someone else's taste in music. This is one of the things I learned by
running a test taken by thousands of people. We all don't hear the
same. We all don't make judgements on what sounds best using the same
program material. Our sonic priorities for phase/square wave
response, frequency response, dynamic reproduction capability, the
intrusion of spurious cabinet noises and constant directivity might
be similar but what is vastly different is what we DON'T hear. If I
can't hear something then it isn't worth me paying for it in a
product. I am a fussy listener compared to most and I am a trained audio engineer who can focus on individual spectra and instruments while excluding others. But even so I don't want to take someone else's word for what's best, even if they are similarly trained,
because someone else will either have better or worse hearing than
me, and I'll be the one spending the money on the equipment. Also
whatever test will be developed to make the decisions about what is
best, I want to be able to take it. But in case I didn't mention it,
I'd rather not pay for the test itself because I know from
experience, just how incredibly expensive this will all be. Paying
for credible comparative listening tests is the part I haven't quite
worked out yet. By the way, I've spoken to Ed Meitner about sonics
many times in the past and I'm sure that he could easily produce the
source player, turn-down/turn-up switches (we wouldn't want to make
full power switches between products would we?) and level balancing
pre-amp circuits needed for the kinds of test people here discuss,
but, of course there would be a price to pay and I wouldn't want to
be the one paying that price. Watchking

Listening isn't a competitive sport, but buying equipment is.

We don't get enough sand in our glass




  #2   Report Post  
chung
 
Posts: n/a
Default Comments about Blind Testing

watch king wrote:
The loudspeakers
used in home hifi systems do not cool nearly as well as pro
loudspeakers, and the negative effects on sonic characteristics due
to temperature change are much greater than those demonstrated by
professional loudspeakers. And that is just one of the factors
involved with loudspeaker compression. These changes in sonic
characteristics make it nearly impossible to do any real research on
speaker wires that could be relevant to audiophile listening because
the test to make such comparison "fair" would be literally impossible
to design. By the time listeners could focus on the sound playback of one of the test wire products, the second product in the test would
already be unfairly tested because the test loudspeaker system
(acoustic microscope equivalent) will not likely "sound" the same as
it did 30 seconds ago. This means that the test passage would need to
be made longer and restarted after a specific cooling off time and by
then the human acoustic memory is gone. This could be why so many
anecdotal testimonials involve hearing "things" after the time was
taken to disconnect one set of speaker wires and connect a second
set. The speakers probably cooled down and sounded better after the
"wire changing period".


So quick A/B switching and using short snippets of sound are the most
effective for discrimination. I also found pink noise to be very
revealing for detecting level and frequency response differences.

  #3   Report Post  
Watch King
 
Posts: n/a
Default Comments about Blind Testing

I'm not sure this will work because Google groups seems to be an
unreliable post portal for this .rec group but here goes.

Are we assuming this is a double blind test with an indicator display
and that you are testing something other than loudspeakers? CD
players, tuners and interconnect cables are the easiest to test. Phono
cartridges and loudspeakers are difficult and headphones are nearly
impossible. Power amps and preamps are in the middle difficulty-wise.

Whenever possible it is best to give the test listeners a sense of the
music when music is the source. So for an easy to test item there
would be a reasonable period of musical lead-in and then a countdown
as the music came up to test level. Then during the sustained passage
(some operatic overtures are good for this and some symphonic passages
as well, and of course much of the quartet music produced is great for
this, as well as some repetitive piano music), the music would be
brought to "test" level after which a number of 8-10 second
comparisons could easily be made. Usually it is best to have 3 or 4
direct head to head comparisons with one passage because that allows
the listener to be absolutely sure they can hear clearly which test
item is better than the other. (Of course Unsure should always be a
choice, but if Unsure is the most common response then there would
likely be no difference between test product X vs test product Y).
With test program material other than music, like pure spoken voice or natural sounds, the listeners need to know what the material really sounds
like or the test isn't really valid. This would also be the case with
material like single classical guitar (eg. Segovia plays Bach) or
single flute, or single "a capella" voice. This is also especially
important for any pipe organ music. Musical memory is "helpful" here.
Go listen to an organ concert, record it binaurally and then use
headphones and when you test the CD player or interconnects, your
musical memory will help you judge.
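
As an illustration only (the durations and event names below are my own, not an actual test script), the per-passage structure described above might be laid out like this:

```python
# Sketch of one test passage: lead-in, countdown, then a handful of 8-10
# second head-to-head comparisons (all durations are assumptions).

def passage_plan(lead_in_s: float = 20.0, countdown_s: float = 5.0,
                 comparisons: int = 4, snippet_s: float = 9.0) -> list[tuple[float, str]]:
    """Return (start_time, event) pairs for one musical passage."""
    events = [(0.0, f"musical lead-in ({lead_in_s:.0f}s)"),
              (lead_in_s, f"countdown to test level ({countdown_s:.0f}s)")]
    t = lead_in_s + countdown_s
    for i in range(comparisons):
        events.append((t, f"comparison {i + 1}: item X vs item Y ({snippet_s:.0f}s each)"))
        t += 2 * snippet_s
    events.append((t, "record response (X / Y / Unsure)"))
    return events

for start, event in passage_plan():
    print(f"{start:6.1f}s  {event}")
```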

Of course speakers shouldn't be tested this way. Speakers should be played together, loudish, to warm up, with the test listeners "not listening" (hands over ears will help), and then the warm speakers can be compared. Alternatively a different speaker not part of that
particular head to head comparison can be played with the musical lead
in and then take a 1 second break and start the comparisons at full
test level between speaker X and speaker Y. Or alternatively with
speakers (only), a "test" can be run at full loudness but the results
not counted just to give the listeners the sense of what's coming
(more like a countdown 8 switch 7 switch 6 switch etc), then after the
first chorus and a return to the main, a real comparison test can be
run for useful results (eg. 4-5, 4-5, 5-4, 5-4 END, with 4 and 5
randomly chosen numbers for the two tested speakers for this one
portion of the test). Follow that with another comparison and another
until the good parts of that song are used up.
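
Here is a small sketch of how the random display numbers and an alternating comparison order such as 4-5, 4-5, 5-4, 5-4 could be generated. The function and names are illustrative, not the software actually used:

```python
# Sketch of assigning random display numbers (1-9) to the two speakers under
# test and building an alternating comparison order (illustrative only).
import random

def make_comparison_plan(n_rounds: int = 4) -> tuple[dict, list[str]]:
    """Assign two random display numbers to speakers X and Y and build an
    alternating presentation order, e.g. 4-5, 4-5, 5-4, 5-4."""
    x_label, y_label = random.sample(range(1, 10), 2)
    labels = {"X": x_label, "Y": y_label}
    order = []
    for i in range(n_rounds):
        first, second = ("X", "Y") if i < n_rounds // 2 else ("Y", "X")
        order.append(f"{labels[first]}-{labels[second]}")
    return labels, order

labels, order = make_comparison_plan()
print("hidden assignment:", labels)           # known only to the test apparatus
print("displayed sequence:", ", ".join(order), "END")
```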

But for CD players and interconnect cables the testing can be very straightforward. The "moderator" cannot, alas, partake of the test, for the sake of unbiased presentation. All lead-ins, song intros and explanations have to be prerecorded and "played" to the listening testers. Once the test starts it must finish or all results are unusable. There are many, many "control restrictions" needed, requiring pre-test documentation of procedures. The musical program
material should be rotated throughout the program during different
tests to reduce the biases that "program material position" in the
test program can create. As often as possible, the order in which any item is tested first should change. For high power switching of items like amps, speaker cable and speakers, make the loudness turndown steps between test items pretty short, on the order of 0.1 seconds from full loudness to zero, then switch, then turn back up in 0.1 seconds. By putting
time code onto a CD and having the switches time code driven this can
be accomplished. We used telephone touchtone signals to activate the
numerical display box. The switch shutdown/turnup can be programmed
right onto the CD material although duplicate disks would need to be
synchronized somehow if 2 CD players were being compared.
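
A sketch of that 0.1-second turndown/switch/turn-up sequence, expressed as timed events a time-code-driven controller could execute. This is illustrative only; the original rig used touchtone cues recorded on the CD itself, and all the times below are assumptions.

```python
# Sketch of the turndown/switch/turn-up changeover as timed gain events
# (illustrative; times, names and the 10-second comparison spacing are assumed).

FADE_SECONDS = 0.1  # full loudness to zero, and zero back to full

def switch_events(switch_time: float, from_item: str, to_item: str) -> list[tuple[float, str]]:
    """Return (time, action) pairs for one changeover between test items."""
    return [
        (switch_time,                    f"begin {FADE_SECONDS}s fade-down on {from_item}"),
        (switch_time + FADE_SECONDS,     f"relay switch: {from_item} -> {to_item}"),
        (switch_time + FADE_SECONDS,     f"begin {FADE_SECONDS}s fade-up on {to_item}"),
        (switch_time + 2 * FADE_SECONDS, "comparison continues at full test level"),
    ]

# Example: alternate items A and B every 10 seconds during a sustained passage.
schedule = []
item_pairs = [("A", "B"), ("B", "A"), ("A", "B"), ("B", "A")]
for i, (src, dst) in enumerate(item_pairs):
    schedule.extend(switch_events(switch_time=30.0 + 10.0 * i, from_item=src, to_item=dst))

for t, action in schedule:
    print(f"{t:7.2f}s  {action}")
```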

The longest that listening testers seem to be able to hold their sonic concentration is between 20 and 40 minutes. It is an intense experience. On
the other hand testers don't seem to be able to fully concentrate
until about 2 comparisons into the test or about 1-2 minutes. 20
minutes gives you barely enough time for one throwaway opener and then 6 bits of test material, and 40 minutes can allow for 12 or so test passages, but people start getting headaches and listening fatigue. If
need be, run the test a number of times with different program
material and with intervals of 30-75 minutes between tests. Don't
drink too many liquids before a test session. Getting up for the
bathroom ends any test with "No valid results". In other words no
distractions should be tolerated (no cellphones, no doorbells, no
chatting or physical communication between test listeners, sadly no crying babies, and especially no "Just listen to this" kind of cueing).
It's either done professionally or it's useless.

This is not to say that the character of test items A & B will not be immediately noticeable after 15 minutes of testing. They may
well be different enough to be immediately recognizable, but keep
concentration so as to provide results which can be used to determine
which item is more accurate or "better". When using one of the very
rare "transparent" test listening speakers to test other items,
between one and three chairs is about all a Quad ESL 63 or Martin
Logan CL-3 can accommodate in the sweet listening spot. Only a very tiny (point source) loudspeaker can produce the kind of superior quality and wide soundstage with pinpoint imaging needed to make tests with perhaps as many as a dozen possible test seats. The requirements for very small loudspeakers with high power handling, very low spurious noise generated by the cabinet, constant directivity, a single driver for the voice band, reasonable bandwidth and phase alignment capability limit the number of loudspeakers that can be used to perhaps 2 or 3 models that have ever been made in the history of audio. Big boxes
will not work for this kind of testing because front row seats will
hear something dramatically different from middle and back seats.
Remove any chairs not full of test listeners. Use preprinted pages
with only 2 columns of numbers on them to allow the two test item
numbers to be circled or a box to be checked. Don't be surprised if
the choice changes with program materials.
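
A sketch of tallying those two-column response sheets, including the "Unsure" option mentioned earlier; the responses and the hidden speaker assignment below are hypothetical:

```python
# Sketch of tallying two-column response sheets (hypothetical responses).
# Each sheet records which displayed number the listener circled, or "Unsure".
from collections import Counter

responses = ["4", "4", "5", "4", "Unsure", "4", "5", "4", "4", "Unsure", "4", "5"]
labels = {"4": "speaker X", "5": "speaker Y"}   # hidden assignment for this session (assumed)

counts = Counter(responses)
decided = counts["4"] + counts["5"]

print("raw counts:", dict(counts))
for number, name in labels.items():
    share = counts[number] / decided if decided else 0.0
    print(f"{name} (displayed as {number}): {share:.0%} of decided votes")

# If "Unsure" dominates, the working conclusion is that the two items are not
# audibly different on this program material.
if counts["Unsure"] >= max(counts["4"], counts["5"]):
    print("Unsure is the most common response: treat as no audible difference")
```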

Listening tests may be exciting but they may not be fun. No matter how far people have travelled, when they might be leaving, or how tight their schedules are, if some component used but not being tested develops a buzz or glitch, or if the test apparatus malfunctions, don't use any of
the results. Use the prerecorded "moderator" intros to cue listeners
as to what they might listen for, (eg. "the following quartet is
composed of flute, cello, violin and trumpet", or "on this recording
the piano is the only acoustic instrument and it is mic'd with 2
overstring and one soundboard mix microphone", or "the test comparison
will be done during the middle of the 3 minute drum solo"), because if there are anomalies to be heard, let the testers know when to concentrate the most closely. Watchking

listening isn't a competitive sport, buying equipment is.

We don't get enough sand in our glass.



  #4   Report Post  
Mkuller
 
Posts: n/a
Default Comments about Blind Testing

(Watch King) wrote:

snip



Once again you have brought up many important variables that may affect the
outcome and validity of any open-ended audio component comparison DBT using
music, particularly the amateur DIY variety that are strongly advocated by some
posters here.

What is your experience in using 'highly experienced listeners" (reviewers,
disc masterers, etc.) versus 'average listeners' for blind tests to determine
whether there is an audible difference between components?

One variable you did not mention is 'control of the switch'. John Atkinson of
Stereophile has said he personally has a lot of problems with any blind test
where he can't control the switch. What are your thoughts on that issue?
Regards,
Mike
  #5   Report Post  
Steven Sullivan
 
Posts: n/a
Default Comments about Blind Testing

Mkuller wrote:
(Watch King) wrote:


snip




Once again you have brought up many important variables that may affect the
outcome and validity of any open-ended audio component comparison DBT using
music, particularly the amateur DIY variety that are strongly advocated by some
posters here.


Any of the provisos he's cited would *also* apply to sighted comparison,
of course...but they certainly don't seem to be applied in the sighted
comparisons I read about every month.

But then again, nothing he's written even remotely supports the idea that *sighted*, 'open ended' comparison, using music (and please, feel free to add whatever new conditions you can conjure up), advocated and practiced by most audiophiles, including the two main audiophile magazines, is a good way to test for difference at all.

And that's because -- and this is the crucial thing -- it can't *ever* be a good method for verifying subtle differences. In other words, in contrast to scientific methods, the method advocated by the main 'voices' of audiophilia, and people like yourself, is *fundamentally and essentially flawed*, as all researchers in the field of perception acknowledge.

DBT for audible difference is 'perfectible' -- sighted listening simply *isn't*.

--

-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director



  #6   Report Post  
Mkuller
 
Posts: n/a
Default Comments about Blind Testing

And that's because -- and this is the crucial thing -- it can't *ever* be a good method for verifying subtle differences. In other words, in contrast to scientific methods, the method advocated by the main 'voices' of audiophilia, and people like yourself, is *fundamentally and essentially flawed*, as all researchers in the field of perception acknowledge.

DBT for audible difference is 'perfectible' -- sighted listening simply *isn't*.


Neither one is perfect as it stands now. You happen to prefer your imperfectly
applied DBT which obscures differences over my method which doesn't provide
"adequate controls" for bias.

Otherwise, provide me an example of a 'perfect' DBT with a sensitivity which has been verified to be, say, around 0.2 dB - two times the difference Pinkerton is demanding for his $4.5K Cable Challenge. (Our money is safe.)
Regards,
Mike

  #7   Report Post  
S888Wheel
 
Posts: n/a
Default Comments about Blind Testing

Any of the provisos he's cited would *also* apply to sighted comparison,
of course...but they certainly don't seem to be applied in the sighted
comparisons I read about every month.


If you don't like them why are you reading them?


But then again, nothing he's written even remotely supports the idea that *sighted*, 'open ended' comparison, using music (and please, feel free to add whatever new conditions you can conjure up), advocated and practiced by most audiophiles, including the two main audiophile magazines, is a good way to test for difference at all.


Equipment reviews are not tests for differences per se. They are subjective
reviews of equipment used by the reviewer in the likely manner that the
consumer would use the product.


And that's because -- and this is the crucial thing -- it can't *ever* be a good method for verifying subtle differences.


Verification is not an issue in subjective review for the most part. Using the product as the consumer would use it is a reasonable way to evaluate equipment if the consumer who reads the magazine evaluates equipment the same way. If you
read the reviews and don't like the fact that they are not scientifically
reliable, I suggest you read the disclaimer that suggests consumers shouldn't
rely on reviews alone and should audition equipment for themselves before
making any purchases.

In other words, in contrast to scientific methods, the method advocated by the main 'voices' of audiophilia, and people like yourself, is *fundamentally and essentially flawed*,


Yes they are. As is the case for any subjective review. Stereophile is not
trying to be a scientific journal. Most journals that do subjective reviews of
hardware in any number of fields are every bit as unscientific.


DBT for audible difference is 'perfectible' -- sighted listening simply *isn't*.


I wouldn't expect such absolute claims from a scientist.

  #8   Report Post  
Bruce Abrams
 
Posts: n/a
Default Comments about Blind Testing

"Mkuller" wrote in message
*snip* quoted text
What is your experience in using 'highly experienced listeners' (reviewers, disc masterers, etc.) versus 'average listeners' for blind tests to determine whether there is an audible difference between components?


Mike,

You have repeatedly brought up the notion that DBTs, particularly of the ABX
variety, only have validity in 'trained' listeners and are useless to the
untrained. So I ask, if we were to set up a double blind cable
discrimination test and, prior to running the test, had each tester engage in
some ABX training and we subsequently charted their sensitivity to known
types of distortions, would you conclude that the ensuing cable test would
be valid even if all testers failed to discriminate between the cables?
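
To make the proposal concrete, here is a minimal sketch of how an ABX training run might be scored, with a simulated listener standing in for real trial data; none of this is taken from an actual published protocol, and the trial count and ability figure are assumptions:

```python
# Minimal sketch of scoring an ABX run (hypothetical trial data).
import math
import random

def run_abx(n_trials: int, p_correct: float = 0.5, seed: int = 1) -> int:
    """Simulate a listener's answers as Bernoulli trials with success
    probability p_correct (0.5 = pure guessing); return the number correct."""
    rng = random.Random(seed)
    return sum(rng.random() < p_correct for _ in range(n_trials))

def binomial_p_value(correct: int, n_trials: int) -> float:
    """One-sided probability of doing at least this well by guessing alone."""
    return sum(math.comb(n_trials, k) for k in range(correct, n_trials + 1)) / 2 ** n_trials

n = 16
correct = run_abx(n, p_correct=0.75)   # assumed listener ability, for illustration
print(f"{correct}/{n} correct, p = {binomial_p_value(correct, n):.3f} under the guessing hypothesis")
```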


  #9   Report Post  
Mkuller
 
Posts: n/a
Default Comments about Blind Testing

Bruce Abrams wrote:
Mike,

You have repeatedly brought up the notion that DBTs, particularly of the ABX
variety, only have validity in 'trained' listeners and are useless to the
untrained.


My point is that to be able to identify audible differences in a blind test, the listeners need to be experienced enough to be able to 1) recognize them and 2) categorize them for memory. When I was first starting out in High End audio, I recall comparing two preamps in an audio store with a friend. I could tell they sounded a little different, but I was unable to tell exactly what was different or decide which one I thought sounded better for my friend to purchase. Today, after 20 or so years of experience comparing differences in audio components, I would be much more likely to be able to identify and qualify the differences.

In research, subjects get training in learning to hear the specific artifact they are supposed to identify. Since the differences with audio equipment can be so many different things (timbre, dynamics, soundstaging, bass, etc.), how do you train a subject adequately?

So I ask, if we were to set up a double blind cable
discrimination test and, prior to running the test, had each tester engage in
some ABX training and we subsequently charted their sensitivity to known
types of distortions, would you conclude that the ensuing cable test would
be valid even if all testers failed to discriminate between the cables?


No, subject training is only one of the many necessary variables. What about demonstrating the actual sensitivity of the test source material (i.e. 0.5 dB or 5.0 dB) before applying it to a situation where the amount of audible difference is unknown? If the audible difference is 2 dB and the test is only sensitive to 5 dB, then you will get null results, as most do.
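
One way to demonstrate that sensitivity, sketched below with made-up numbers, is to build a positive-control pair: the same passage with a deliberate, known level offset (here 0.5 dB) that the setup and listeners must be able to resolve before the test is used on unknowns. The file stand-ins and the 0.5 dB figure are just examples.

```python
# Sketch of a "known difference" calibration pair: one passage plus a copy
# with a deliberate, known gain offset (illustrative values only).
import numpy as np

def apply_known_offset(audio: np.ndarray, offset_db: float) -> np.ndarray:
    """Return a copy of the passage with a known gain offset applied."""
    return audio * (10 ** (offset_db / 20.0))

# Stand-in for a test passage (one second of noise at 48 kHz, assumed).
fs = 48_000
reference = np.random.default_rng(0).standard_normal(fs)

control_pair = {
    "reference": reference,
    "plus_0.5dB": apply_known_offset(reference, 0.5),
}

for name, sig in control_pair.items():
    rms_db = 20 * np.log10(np.sqrt(np.mean(sig ** 2)))
    print(f"{name}: relative RMS = {rms_db:+.2f} dB")
```
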
Regards,
Mike

  #10   Report Post  
Mkuller
 
Posts: n/a
Default Comments about Blind Testing

Thank you for a very interesting and thought-provoking post. I completely agree that "testing" is not as simple as some would have us believe and that experience and 'listening biases' play a large part.

One comment below:

"watch king" wrote:
If there really was a wire that was substantially better sonically
then "super consumers" like Disney would do the testing (at whatever
cost) so that they could deliver the AES papers that would make
greater prestige for Disney.


A couple of years ago, MIT was commissioned to completely rewire the
recording/performance facility at George Lucas' Skywalker Ranch in No. CA with
their cables.

Listening isn't a competitive sport, but buying equipment is.


Right on.

Regards,
Mike



  #12   Report Post  
Michael Scarpitti
 
Posts: n/a
Default Comments about Blind Testing

"watch king" wrote in message ...


Maybe comparative testing of large voltage
wire should be done using thin film or electrostatic headphones
because these devices are actually better at exposing tiny
differences in audio characteristics.

Guess how I do my listening tests?