#113
Harry Lavo

wrote in message ...
Harry Lavo wrote:
wrote in message

What you're suggesting here is to connect some component whose identity is
unknown to you and then rate the sound you hear. You describe it as carefully
as you can using the English language. You then have someone else connect a
competing component, and you once again describe the sound you hear. You
repeat this process for each different component that you have available. Do
I understand correctly?

If so, the test will only be meaningful if you can draw some conclusion from
the descriptions, and that of course means comparison. There has to be enough
info to decide which component sounds the best, the worst, and so on. IOW,
the descriptions have to be useful enough to allow you to rank-order your
preference on their basis. Frankly, I don't think you can do this.

It would be even tougher--and probably more embarrassing--if there were the
possibility of repetition, if the units under test were chosen completely at
random.


Norman, this is done all the time.


Not really. It's done in your field (product testing), but only in
cases where you already have strong reason to believe that perceptions
will at least be *different*, and you want to know in what ways they
are different. No one in his right mind would go to the expense of such
a test unless he were damn sure the things he was comparing at least
tasted different.


This is a fallacious argument. The method can be used to determine whether
perceptions are actually "real" just as easily as it can be used to
characterize other differences. It is a subjective rating, and is used to
report subjective results (such as taste characteristics). If there are real
differences and enough trials are done, the ratings will show a statistically
significant difference; if there are no differences, they will not. It is
that simple.
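That claim can be sketched with a small simulation (the numbers here are hypothetical, not from any actual listening or taste test): draw 1-to-5 ratings for two components, one pair with a real underlying difference and one without, and compare them with Welch's t-statistic.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def simulate_ratings(n, true_mean, sd, rng):
    """Draw n ratings on a 1-5 scale around a true mean (rounded, clipped)."""
    return [min(5, max(1, round(rng.gauss(true_mean, sd)))) for _ in range(n)]

rng = random.Random(1)

# Hypothetical: component A truly rates around 3.0, component B around 3.5.
a = simulate_ratings(30, 3.0, 1.0, rng)
b = simulate_ratings(30, 3.5, 1.0, rng)
print(f"t with a real 0.5-point difference: {welch_t(b, a):.2f}")

# With no true difference, the t-statistic stays small on average.
c = simulate_ratings(30, 3.0, 1.0, rng)
d = simulate_ratings(30, 3.0, 1.0, rng)
print(f"t with no true difference:          {welch_t(c, d):.2f}")
```

With enough trials, |t| well above ~2 flags a real difference at conventional significance levels; when there is no underlying difference, it hovers near zero.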


This sort of test has never been used, to my knowledge, to do threshold
tests of perception (with the obvious, and hence very dubious,
exception of your Japanese hero).



So dubious that that team's subjective rating results correlated with actual
neurophysiological responses? So much for your having a scientifically open
mind.


It simply means devising a series of meaningful rating scales (usually 1 to
5, low to high) for attributes you consider important, or adapting the ones
developed by others if they seem satisfactory. Then after each listening
session, you rate your impressions of what you just heard. After a few such
sessions with each competing component in the system and all else held
constant, you can begin to get a feel for differences, if any.


An interesting choice of words: "begin to get a feel for difference."
The statistics of demonstrating a significant difference in threshold
perception using such a test would be mind-numbing, if they were
possible at all. For one thing, you'd need to be able to tell whether
the various factors you are testing for are indeed independent. The
statistics start to grow meaningless very fast if the supposedly
independent variables are not independent of each other.
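One quick check on that independence worry can be sketched as follows (the attribute names and ratings here are made up for illustration, not data from any real test): compute the pairwise correlation between rating attributes; attributes that track each other closely are not supplying independent information.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 ratings from ten sessions on three attributes.
clarity = [4, 3, 5, 4, 2, 3, 4, 5, 3, 4]
detail  = [4, 3, 5, 5, 2, 3, 4, 5, 3, 4]   # tracks clarity almost exactly
warmth  = [2, 4, 3, 2, 5, 4, 3, 2, 4, 3]   # moves largely on its own

print(f"clarity vs detail: r = {pearson_r(clarity, detail):.2f}")
print(f"clarity vs warmth: r = {pearson_r(clarity, warmth):.2f}")
```

If "clarity" and "detail" correlate near 1.0, they are effectively one variable, and treating them as two inflates the apparent evidence in any significance calculation.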


They are clearly possible...but best done among groups totaling 150 or 200
people. That's why I said "get a feel for the difference". It would take at
least 30 trials of each variable, spread over a fairly lengthy period of
time, to produce enough data for even moderate differences to be measurable
with statistical significance. But it could be done. And it is the only way,
other than the large-group monadic testing that I proposed over a year ago,
to determine whether the perceptual differences are in fact real. So even
though it is difficult, this is the kind of testing that must be done before
you can possibly claim that ABX-style (comparative, short-snippet) testing is
valid, because it is the closest thing possible to getting the influence of
the comparative test itself out of the equation.
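The arithmetic behind a trial count like that can be sketched with the standard normal-approximation sample-size formula (the effect sizes below are hypothetical, chosen only to illustrate the scaling): to detect a true difference of d standard deviations at two-sided 5% significance with 80% power, each group needs roughly n = 2 * ((z_alpha + z_beta) / d)^2 trials.

```python
import math

def trials_per_group(effect_sd, alpha_z=1.96, power_z=0.84):
    """Normal-approximation sample size for a two-sample comparison.

    effect_sd: true mean difference expressed in standard deviations.
    alpha_z:   z-value for two-sided 5% significance.
    power_z:   z-value for 80% power.
    """
    return math.ceil(2 * ((alpha_z + power_z) / effect_sd) ** 2)

# A moderate difference (~0.7 SD) needs on the order of 30 trials per group;
# a subtle one (~0.3 SD) needs several times that many.
for d in (0.7, 0.5, 0.3):
    print(f"effect {d:.1f} SD -> about {trials_per_group(d)} trials per group")
```

A moderate (~0.7 SD) effect lands right around 30 trials per group, which is why subtle differences push the required trial counts, and hence the panel sizes, up so quickly.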

On the other hand, this is a perfectly logical approach when you know
two things taste different, and you want to know whether your future
customers will find one sweeter than the other, or smoother than the
other, etc.


It is also a perfectly logical approach if two things might taste different.
The test indicates yea or nay; your a priori assumptions do not.


Of course this is best done blind, but even
sighted it can help quantify perceived differences that are arrived at
monadically and wholly subjectively, with no forced comparison.


If it's not done blind, it can tell you absolutely nothing about the
*sound* of the equipment, because it would fail to exclude some very
obvious and powerful non-sonic influences on those perceptions.


It would help quantify a sighted reaction, which is what Norm claimed was
impossible, giving rise to my original response.