#201
S888Wheel

Subjectivist and Objectivist -- Are these misnomers? (WAS:

Objectivist -- Are these misnomers? (WAS:
From: chung
Date: 5/18/2004 9:55 PM Pacific Standard Time
Message-id: LMBqc.22804$gr.1936664@attbi_s52

Bromo wrote:

On 5/18/04 8:43 PM, in article D4yqc.22117$gr.1808882@attbi_s52, "Harry
Lavo" wrote:

Oh you've made your case very clear. As long as you can avoid dealing with other bias controlled experimental results you'll be quite happy to continue debating.

Nope, I'm not happy debating. I'd like to start setting up a test. But so far I can't even get serious suggestions as to what to test (i.e. two component DUT's with fairly universal concurrence on both sides, i.e. objectivists universally accept that there will be no difference; subjectivists universally believe that there will be a difference.)


Except I think most Audiophiles would fall in a spectrum between the two
camps - the extremes being those that think that we have learned and can
measure everything there is to know - and that system integration is not
more difficult than comparing specification sheets (What we call
"objectivists")


That's not what we called objectivists. I would postulate that an
objectivist, as far as this newsgroup is concerned, is one who believes
in the validity of (a)standard controlled-bias testing like DBT's, and
(b) measurements.

- and those that feel that spec sheets are not what you hear
- and that testing and analysis is useless unless it is done with listening
to music (we call these folks "subjectivists").


I suggest you get your definitions straight. Check this webpage:

http://www.dself.dsl.pipex.com/ampin...o/subjectv.htm

In particular, pay attention to this:
***
A short definition of the Subjectivist position on power amplifiers
might read as follows:

* Objective measurements of an amplifier's performance are unimportant compared with the subjective impressions received in informal listening tests. Should the two contradict, the objective results may be dismissed out of hand.
* Degradation effects exist in amplifiers that are unknown to engineering science, and are not revealed by the usual measurements.
* Considerable latitude may be used in suggesting hypothetical mechanisms of audio impairment, such as mysterious capacitor shortcomings and subtle cable defects, without reference to the plausibility of the concept, or gathering any evidence to support it.
***








Of course these definitions of subjectivist positions were defined by a self-proclaimed objectivist. You know, it is rarely flattering when an objectivist speaks for a subjectivist or vice versa.

Here is something that was said about all objectivists in a Stereophile article: "For an objectivist, the musical experience begins with the compression and rarefaction of the local atmosphere by a musical instrument and ends with the decay of hydraulic pressure waves in the listener's cochlea; "

http://www.stereophile.com/asweseeit/602/

Maybe both sides would be better served if they were left to speak for
themselves.

#202
Harry Lavo

Does anyone know of this challenge?

"Steven Sullivan" wrote in message
...
Nousaine wrote:
"Harry Lavo" wrote:


"normanstrong" wrote in message
news:khbqc.15167$gr.1357885@attbi_s52...
"Harry Lavo" wrote in message
...


snip not particularly relevant to what follows



Let's try this on for size: Suppose you have 2 speaker cables which appear to have quite different sonic signatures. You have essentially unlimited time to evaluate them in any way you feel necessary. All of this is sighted, of course. (I recommend writing down your thoughts as you evaluate the cables for future reference.) Is it your claim that even this is not enough to be able to identify which cable is connected without seeing it?

At some point, you're going to have to bite the bullet and say, "This is Cable A. I recognize the characteristics that I wrote down during the evaluative period." If not, I think we're wasting everybody's time--Harry's as well--and talking past each other.


In all honesty, it doesn't make a difference. If I did the tests months apart (which would be the best way), I wouldn't even expect to remember my ratings accurately. What I would want to do is to listen once again, the same way, and rate the two components again the same way. Only this time I wouldn't know which was which. And if I did either remember or come up independently with a similar sonic signature, and accurately duplicate my ratings under blind conditions, and the majority of other testees did the same, then statistically you'd have to say that the sighted differences were real and that blinding per se did not invalidate them. If on the other hand, my initial ratings were "random" because the differences did not really exist, then I could not duplicate them except by chance, and over the group of testees the results would be random and would not correlate statistically. And I could do all this without ever making a "choice".


OK; and if your results were not statistically confirmed by your second listening then what would your conclusions be? You'll say that "blinding" caused the difference, whereas most everybody else would conclude that the subject was unreliable (which would be true) and that he/she didn't really "hear" definable acoustical differences the first time.



Also, I wonder if Harry would be willing to make a more modest claim, if his second listening yielded a 'no difference' result: namely, that *his* 'first listening' perception of difference between the two DUTs was probably imaginary. And having done so, would that experience temper/inform his other claims of having heard a difference? Would he, in effect, become more of an 'objectivist'?

I've already answered this to Tom at some length in a reply to his post. In
short the answer is "of course I would; that's exactly what I said above".
That is, if the group as a whole came up with no statistical significance, it would prove that the initial perceived differences were due to sighted bias.
On the other hand, me doing a "one-timer" would have no statistical
significance by itself, unless I did it twenty times. And the test is not
set up that way because you cannot easily do that many repeated
observational tests. Better to have twenty people do it once.
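As an aside, the group-level analysis Harry describes (a panel of listeners rating components sighted, then again blind, with the statistics deciding whether the sighted impressions survive blinding) can be sketched numerically. This is only an illustrative sketch, not his actual protocol: the rating data are invented, and the choice of Pearson correlation with a permutation test for significance is my assumption.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def permutation_p(xs, ys, trials=10_000, seed=0):
    """One-sided p-value: how often does shuffling the blind ratings
    produce a correlation at least as strong as the observed one?"""
    rng = random.Random(seed)
    observed = pearson_r(xs, ys)
    ys = list(ys)  # work on a copy; leave the caller's list intact
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if pearson_r(xs, ys) >= observed:
            hits += 1
    return hits / trials

# Hypothetical panel: each listener's rating (1-10) of one DUT on a single
# attribute, given sighted, then again months later under blind conditions.
sighted = [7, 8, 6, 9, 7, 5, 8, 6, 7, 9, 6, 8, 7, 5, 8, 7, 6, 9, 8, 7]
blind   = [6, 8, 5, 9, 7, 5, 7, 6, 6, 9, 5, 8, 7, 4, 8, 6, 6, 9, 8, 7]

r = pearson_r(sighted, blind)
p = permutation_p(sighted, blind)
# A low p suggests the blind ratings track the sighted ones better than
# chance, i.e. the sighted impressions were not pure bias; a p near 0.5
# would suggest the sighted "differences" were random.
```

If the panel's sighted ratings really were random, the blind ratings would not correlate with them and p would be unremarkable, which is exactly the outcome Harry says would count against the sighted differences.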


You may recall my discussion of food testing. Final testing was always done monadically, or proto-monadically (less frequently). Consumers were not "comparing", they were evaluating. The statistical analysis between the two (or more) sets of testees/variables was what determined if there was in fact a difference/preference. And on individual attributes as well as some overall satisfaction ratings, so the results could be understood in depth.

OK; and what if those subjects were unable to reliably distinguish between the samples? Didn't you have built-in controls to test that too?


It seems to me that any 'monadic' evaluation is also a sort of comparison -- indeed, any sensation that we have to describe involves comparing, in the sense of asking yourself, e.g., does this taste like my *memory* of salty, sweet, bitter, etc. If the evaluative report form is multiple-choice, this 'choosing' is all the more explicit. If the evaluative report form is scalar ('on a scale of 1-10, with 1 being sweet and 10 being bitter') there's still choice involved. There is always some sort of real or virtual reference that one is comparing the sensation to. I would posit that the same is true for a 'monadic' evaluation of, say, a cable. You aren't directly comparing it to another real cable, but you are comparing what you hear to your store of memories of what 'smoothness', 'bass articulation', or whatever, sound like. Otherwise you could not make an 'evaluation'.


Of course, but isn't that how people arrive at the conclusions they do in
this hobby of ours? By designing the test and the scales properly, all
people have to do is make that subjective, right-brain included kind of
response. They don't have to make a "choice". And the statistics will tell
us the rest.
#203

Does anyone know of this challenge?

Harry Lavo wrote:

We're not talking about accuracy here at all. We're talking about different ways of determining if there are statistically significant differences, and in what direction, between two DUT's.


Neither was I. What you missed was that the properties of what is being tested have an influence on what kind of testing is appropriate.




#204
S888Wheel

Does anyone know of this challenge?

From: Howard Ferstler
Date: 5/17/2004 3:44 PM Pacific Standard Time
Message-id: cfbqc.16222$qA.2005223@attbi_s51

S888Wheel wrote:

From: Howard Ferstler

Date: 5/14/2004 11:01 AM Pacific Standard Time
Message-id: wP7pc.49177$xw3.2938368@attbi_s04

I have come into this one a bit late, but I do want to
interject my two-cents worth. I mean it is obvious that the
old "amps have a definable sound" argument lives on in
different forms, with different rationalizations, even if
the amps measure the same and are built by competent
designers, and sound the same during blind comparisons.


Do you know of any examples of amps actually measuring "the same" that are claimed to sound different?


Few amps (well, let's be candid, no amps) measure exactly
the same.


That isn't being candid; that is simply being accurate. Thank you for acknowledging your premise was faulty. That was the gist of the question I asked.

However, they often measure very close, indeed;
close enough to be subjective-performance clones of each
other. Yet reviewers often compare such amps (not level
matched, and not in such a way that they can compare
quickly) and then go off and describe differences that are
simply not going to exist. I believe they do so for two
reasons:


All vague assertions. I'm sure in some cases your assertions may be on the money but your brush is too broad for my liking.



First, they are often psychologically dependent upon audio being an esoteric, mysterious, and fascinating hobby. Going the brass tacks route would undermine those motivations. In other words, they like the mystery of it all as much as their readers.


Attacking the assumed underlying thoughts and intentions of those with whom you don't agree is a mistake IMO. It shows nothing but your own predispositions. It is pure conjecture spiced with prejudice IMO.



Second, they depend upon a magazine readership that shares
the views noted in the first reason, above. If they
delivered a brass-tacks review those readers would be likely
to protest to the editor and possibly cancel subscriptions.


More conjecture. Besides, you are totally mistaken. The magazines you speak of were grass roots underground publications that were born out of a belief that they were telling the truth about audio components. The audience followed after the direction of these magazines had been set.



When I started writing for Fanfare years ago and pretty much
stated my opinions regarding the so-called "sound" of
amplifiers and CD players the editor got a substantial
number of "I'm canceling my subscription" letters from audio
buffs, even though the magazine was one of the best
recording-review publications in business. I mean, the
reason to subscribe to the magazine was to get good record
reviews, not to indulge in audio fantasies.

Unfortunately, the editor (who was not an audiophile) had
for some time been employing one or two equipment reviewers
and equipment-oriented commentators who had managed to build
up a following of true-believer audio enthusiasts. The
introduction of my skepticism into the mix tended to rile
those people, and the editor contacted me and expressed
serious concern about the potential for a "subscription
cancellation" problem to get out of hand.

Incidentally, I left the magazine after a year of working
for the editor for a number of reasons, but the lack of
support was not one of them. One involved just getting tired
of writing rebuttal letters. Another involved the low pay,
since like Mickey Spillane I write for money.


But they did let you write what you believed. And yet you seem to be claiming that this is not what is going on in other magazines.



OK, we read about this amp "sound" thing all the time, both
in manufacturer ads and in enthusiast magazine test reports,
and of course we hear it proclaimed by numerous high-end
boutique sales people.

Certain reviewers are particularly bad. One may get hold of
an esoteric, "super-duper" (and expensive or even
super-expensive) amp, and after discussing its sometimes
arcane features and maybe even doing some rudimentary
measurements (or spouting often bizarre manufacturer
specifications),


What would constitute a "bizarre manufacturer specification?"


It is not unusual at all for a high-end manufacturer to make
statements about soundstaging, depth, focus, etc., although
they usually leave that sort of thing to reviewers. While
such statements are not specifications, per se, they come
across as such to impressionable consumers.


Baloney. Cite one person who has mistaken such subjective descriptions as "specifications"; after that you can take a shot at actually answering the question. What constitutes a "bizarre manufacturer specification?"

Probably, we see
more hyperbole with upscale CD players than we do with
amplifiers.

that reviewer may engage in an almost
poetic monologue regarding its sound quality.


Nothing wrong with "poetry" in the realm of editorial comment. It is a chosen form of communication. Some people enjoy it.


Ah, now we get to the gist of your typical high-end
journalism technique. Yep, many of those who read
fringe-audio publications want a proto-mystical analysis of
products.


Sorry, but poetic license is not a form of mysticism.

They DO NOT want measurements, and they also DO
NOT want some hardware geek discussing precise
listening/comparing techniques.


Who doesn't want measurements? Stereophile? They seem to have a lot of measurements. Who doesn't want measurements?

They want to have the
reviewer discuss how a particular piece of esoteric and
expensive equipment transported him to another realm.


Again, baloney. Cite one magazine that states this goal.

They
most definitely do not want the reviewer to fail to play
what I like to call "the game."


Define "the game" then prove your assertion.



Often, he will do this while comparing it to other units
(expensive and super-expensive) that he has on hand or has
had on hand. He may not "hear" profound differences, but he
rhapsodizes about the subtle ones he does hear.


So? Not everyone wants subjective review to be clinical. Some find that boring.

This is odd. They want to know what gear works well or at
least what gear will do the job with the smallest amount of
cash outlay (at least sharp consumers behave this way), and
yet according to you what really interests some of those
high-end types is a review that does not bore them. So, if
the choice is between a non-boring review that spouts
hyperbole and a brass-tacks review that delivers the goods,
you claim that some guys will opt for the hyperbole?


No. You have decided it is an either/or situation. I haven't. One can be both rigorous in their examination of equipment and poetic in their subjective description of that equipment.

OK, but
in that case they get sub-par sound (or at least par sound
at inflated prices) for their lack of intellectual effort.


Baloney. One does not get subpar sound because of poetic reviews of equipment. That doesn't even make sense.



If they want that, I suppose it is their business. However,
I tend to believe that many such individuals are not
inherently born that way. They are created by certain,
misleading audio-journalism techniques.


You are free to believe what you want. It seems some of your beliefs as you describe them are built on prejudices instead of facts.

There is nothing wrong with different writers writing editorial with different styles.


There is more to it than style. Certainly, it is possible to
deliver a well-written and accurate review that would
satisfy someone who is really interested in knowing the
facts - as opposed to someone who simply wants to be
entertained with a piece of fluff literature.


A subjective review is, by nature, subjective. Even those reviews seem to get the "facts" right about the equipment.



Comparing the test unit to one he "had on hand" in the past
is of course absurd, particularly when it comes to subtle
differences, because there is no way anyone could do a
meaningful comparison between units that were not set up and
listened to at the same time.


That is your opinion. You are entitled to it. But let's get down to the underlying meaning. Would you say this is true in the case of speakers that exhibit subtle differences? Are such observations meaningless?


Speakers tend to be gross enough in their differences for a
non-blind approach to work. I mean, the differences are
there and what it boils down to is a matter of taste to a
degree.


You didn't answer the question.


I have compared wide-dispersion speaker designs to those
that deliver a clean, narrowly focussed first-arrival signal
and have little in the way of overpowering room reverb, and
I can tell you that both approaches will work. Indeed, I
have two AV systems in my house for reviewing purposes, with
one set up making use of wide-dispersion main speakers (and
center and surround speakers, too) and with the other being
captained by a pair of speakers designed to deliver a
phase-coherent first-arrival signal. This second system also
has wide-dispersion surround speakers, however, because you
need that sort of thing with surround sound.


You still didn't answer the question.


Anyway, both sound quite good, but each has its strong
points.

Now, the fact is that when comparing speakers it is still
important to level match and do quick switching. I do this
when I compare for product reviews and it is obvious as hell
that the approach is as vital when comparing speakers as
when comparing anything else.


And just how do you level match speakers with gross dispersion and frequency response differences? Not that it has anything to do with the question that was not answered.


I may measure speakers (doing a room curve) and compare them
that way over a period of time (I did this in issues 94 and
95 of The Sensible Sound a while back), but I make a point
of noting that the curves are only starting points. Rough
curves will indicate spectral-balance problems, but two
systems with very similar room curves (at least in my rooms)
may sound quite different in terms of spaciousness and
soundstaging. They sound very similar in terms of spectral
balance, however, and I rate that parameter very high.

Incidentally, level matching with speakers is rather tricky,
since there may be overlaps with each system's response
curves that make it impossible to set up a balance point at
a single frequency: i.e. doing it at 1 kHz, for example. One
speaker may have a slight dip there and the other may have a
moderate peak. Once level matched that way, they will not
actually be balanced well at all.

My technique involves doing two integrated moving-microphone
RTA curves (one for each speaker) and then adjusting levels
so that the curves overlap as much as possible.
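The curve-overlap step Howard describes can be stated in simple terms: if each speaker's measured room curve is expressed in dB per band, the single gain offset that best overlaps the two curves in a least-squares sense is just the mean level difference across bands. A minimal sketch of that idea (the band levels are invented for illustration, and this is my reading of the technique, not his actual procedure):

```python
def best_offset_db(curve_a, curve_b):
    """Offset (in dB) to add to curve_b that minimizes the sum of squared
    dB differences against curve_a. For a least-squares fit of a constant
    offset, the optimum is simply the mean of the per-band differences."""
    diffs = [a - b for a, b in zip(curve_a, curve_b)]
    return sum(diffs) / len(diffs)

# Example: smoothed band levels in dB SPL for two speakers (made-up numbers)
a = [82.0, 83.5, 84.0, 83.0, 81.5]
b = [79.5, 80.0, 82.0, 80.5, 78.0]

offset = best_offset_db(a, b)        # raise speaker B's level by this many dB
matched_b = [x + offset for x in b]  # B's curve after level matching
```

This also shows why single-frequency matching can mislead: matching at one band where the curves happen to dip or peak differently gives a different offset than the whole-curve fit.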


OK, I'll ask the question again in the hope that you might answer it this time. Would you say it is true in the case of speakers that exhibit subtle differences? Are such observations meaningless?


However, even when he has
another "reference" unit on hand to compare with the device
being reviewed the confrontation may be seriously flawed,
mainly because levels are not matched and the comparison
procedures are such that quick switching is impossible.


One does not have to use quick switching to make relevant observations about what they hear.


What they "think" they hear. They need to do the work level
matched and with quick switching to validate what they
"think" they hear when doing sloppy comparisons.


I agree that level matching is a good idea for direct comparisons. I don't buy your assertion that people "need" to do quick switching to know what they are hearing.

I have done
that sort of thing with speakers and the results were
revealing. Needless to say, with amps (and CD players, too)
the revelations were even more profound.

Actually, few of those reviewers who get involved with amp
reviewing do a blind comparison even once at the beginning
of their reviewing careers - just to see just how revealing
it will be of similarities.


How do you know what other reviewers have and have not done?


Well, when I read reviews that have the reviewers going on
and on about the different sound of two amps they have on
hand (or even with the second amp not on hand, due to its
being reviewed some time back), or the sound of two CD
players, or the sound of two sets of speaker wires or
interconnects, I pretty much conclude that the guy either
does not know what he is talking about or else is in the
business of entertaining readers instead of informing them.


OK so you are speculating based on your own biases on the subject. You really don't know at all.



If they did do some careful, level-matched, quick-switch
comparisons between amps (or between wires or between CD
players) they might change their tune - if their motivations
involved speculation for its own sake. I firmly believe that
some reviewers have done this sort of thing and rejected the
results.


More speculation.

They did so not because they did not believe them,
but because they DID believe them and realized how bad such
news would be for the high-end product-reviewing business.


The lines were drawn many years ago and plenty of finger pointing and posturing has transpired. I believe that this is mostly what your post has to offer: finger pointing at "the other side" and posturing about *their* beliefs as filtered through your beliefs.



There is no audio-fun
romanticism to be had in that kind of brass-tacks behavior.


Are you saying there is no fun to be had in the "objectivist" approach to audio?


There is plenty of fun, but only for those who realize that
brass-tacks thinking about audio can be fun. For those who
romanticize the hobby such behavior may be the very
definition of dull.


I think you will do better to speak about what is in your mind and not what is in the minds of others. It looks less than objective to me.




For a lot of people, audio involves a lot more than sound
quality and accurately reproducing input signals.

Interestingly, some "reviewers" go beyond commenting upon
imagined subtle differences and will instead make
proclamations about the vast differences between the amp
under test and one or more reference units. The comments are
often absurd in the extreme, with the commentary going on
and on about soundstaging, depth, focus, transparency,
dynamics, and the like.


Do you really believe commentary on the characteristics of any given playback system in terms of imaging, soundstage, depth and focus is inherently absurd?

They are when the commentary involves amplifier sound.


Well then why not just cut to the chase and simply say that you object to any claims about amplifier sound. The nature of the claims is obviously irrelevant if you believe there is no such thing.


Actually, if someone did an improper level-match comparison
between two amps (that is, they did a global level match and
did not realize that the two amps might have channel-balance
differences that would make them be balanced differently -
even though the average levels from all channels were the
same) they might hear soundstaging differences.

However, if amps (or CD players, and certainly wires) are
not screwed up in some way, and are balanced and level
matched properly, soundstaging, depth, and focus should not
be an issue. It would be an issue with speakers, of course,
as I noted above. Radiation-pattern differences could have a
huge impact with those items.

Do you think that comments on a playback system's dynamic range and
transparency are absurd?


Well, you can have dynamic-range differences with amps,
because one of two involved in a comparison might hit its
clipping limit before the other. Then, you would hear
differences. However, I have never said that was not
possible. I said that up to their clipping limits, all
properly designed amps (and most mainstream jobs are
designed properly) sound the same.


You said: "The comments are often absurd in the extreme, with the commentary going on and on about soundstaging, depth, focus, transparency, dynamics, and the like." So maybe comments on dynamics are not so absurd.


Of course, I should
probably further qualify that and say that with some wild
and weird speaker loads a weak-kneed amp might have
problems. I think that Pinkerton has pointed that out,
because he has done some comparisons with some pretty
demanding speaker loads.

The problem is that without close,
level-matched comparing, opinions of that kind are not only
a big joke they are also misleading the readers, and
misleading readers, no matter how entertaining the report's
often flowery text, is not the job of a product reviewer.


The job of a product reviewer is determined by the editorial staff of any given publication. Not by you.


The job of a product reviewer is to tell the truth.


Most people think they are telling the truth.



I have done a fair amount of comparing between amps, using
some pretty good ancillary hardware, and with careful level
matching. Let me tell you that although some amps might be
very, very slightly different sounding from the mainstream
(I found one that did, but shortly after the comparison it
went up in smoke), nobody is going to be able to pinpoint
such differences without doing some very close listening and
precise comparing.


You are unfortunately making universal proclamations based on your personal experience here. What may be a slight difference to your sensibilities may be a substantial difference to someone else's sensibilities and what you can or cannot do in terms of comparing things may or may not be universal.


That is why it is best for reviewers to do comparisons level
matched, with quick switching, and probably blind or double
blind if they feel that they are likely to be biased. They
certainly ought to do it that way during the initial part of
their reviewing career, in order to see just how much alike
amps (and CD players and wires, needless to say) sound. They
certainly owe their readers more than poetic claptrap.

What's more, an amp that does sound a tad different from
mainstream models (here I am talking about some tube units,
particularly single-ended versions) is probably going to not
be as accurate an amplifying device as those others.
Ironically, many of those good performing mainstream amps
can be found contained inside of modestly priced AV
receivers, at places like Best Buy and Circuit City.

OK, now fantasy is sometimes fun and I do not begrudge any
reader who wants to fantasize about his audio components.


It seems you do IME.


In what way?


You spend a lot of time complaining about it.

While I have some strong ideas about things
like speaker radiation patterns and the need for a sensible
approach to low-bass reproduction, I certainly do not
attempt to offer up a one-sided slant to those topics when I
write for my readers.

And I do NOT begrudge any reader who wants to fantasize
about his components, even though I kind of wonder about his
motivations for being involved with the hobby.


I think you have said otherwise on other forums.



What I do begrudge are reviewers who capitalize on the naive
approach some enthusiasts have when it comes to purchasing
components. Hell, I do not even begrudge the reviewer who
has prejudices that he keeps to himself. It is when he
allows those prejudices to fool readers that I get a bit up in arms.


Are you so sure you are not guilty of doing that very same thing?



I rather enjoy fantasizing myself when I am off line. However, when reviewing, reviewers should be different. They should deal with brass tacks and not speculations - even if speculations make for more poetic literature.


Again, it is up to the editorial staff of the journal to determine what reviewers should and should not be doing.


Reviewers owe it to their readers to be honest.


Your apparent prejudices in regards to those reviewers are not evidence of any dishonesty on their part.

The editorial staff also owes the same approach to those readers. Sure, some people are happy being suckered.


People on both sides of the line accuse the other side of being suckers and of suckering others. Now if you have any specific proof of any specific acts of the fraud you broadly claim to exist on the "other side" then please offer it up. I am all for the exposure of fraud. Vague finger pointing and posturing doesn't prove anything other than one's own biases. By the way, I do believe such fraud does exist. 1. It is bound to in any market. 2. I have seen it clearly exposed once.

However, those who see that as an OK thing and capitalize on
it by creating more suckers through baloney journalism are
not the kind of journalists that audio needs. Not if audio
wants to be a viable, long-term hobby.


The hobby has lasted a long time despite your objections to certain beliefs.
#205
Bob Marcus

Doing an "evaluation" test, or has it already been done?

"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent amplifiers operating within their power ranges with appropriate speakers sound the same", etc. possibly true *for the measurable variables that they are interested in*, but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for the sound qualities that subjectivists are interested in.


The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements or ABX
tests can detect. There is no evidence that this is true.
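For what it's worth, the statistical logic behind the ABX tests being debated here is simple: each trial is a forced choice with a 50% chance of success by guessing, so the chance of a given score arising purely by luck follows the binomial distribution. A quick sketch (the 12-of-16 run is a hypothetical example, not a result from this thread):

```python
from math import comb

def guess_probability(n_trials, n_correct):
    """Probability of getting at least n_correct of n_trials right
    by pure guessing (p = 0.5 per independent trial)."""
    favorable = sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1))
    return favorable / 2 ** n_trials

# e.g. scoring 12 or more correct out of 16 trials happens by chance
# only about 3.8% of the time
p = guess_probability(16, 12)
```

A small p is what lets a listener claim a real audible difference; a score near half right is indistinguishable from guessing.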

No doubt I'll be
challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either in a showroom or in one's home, one *might* be perceiving sonic qualities neither measured nor easily defined by the usual objectivist standards. For example, Harry has used the word "musicality".


A term with no clear definition. Nor is there any evidence that it means the
same thing to different audiophiles.

And I might use the same term, and others might make reference to the imaging, soundstaging or "depth of field" qualities associated with a particular piece of equipment.


Are these "qualities associated with a particular piece of equipment"?
These are all mental constructs. The imaging isn't "real"--the sound is
being produced at only two points. Our brains construct these images based
on sounds reaching our ears from all directions, as a result of the
interaction between the speakers and the room. The audio system's
contribution to this process is the direct sound--simply changes in air
pressure--radiating from the speakers. And that sound can be fully measured.
After all, beyond frequency and amplitude, what else is there coming out of
a speaker?

That's why objectivists don't buy the notion that there are things they
can't measure, or things that ABX tests can't detect. We don't have to
"measure imaging"; all we have to do is to measure the things that cause our
brains to "image."

(Before anyone jumps on the point, I'll concede that radiation patterns of
loudspeakers and room interactions are extremely complex and certainly not
reduceable to simple measurements. But loudspeakers aren't part of the
obj/subj debate. And components ahead of the speakers have no impact on
these radiation patterns--which is why it's so funny to read reviewers who
talk about certain cables "opening up the sound.")

Still others may simply say "this sounds more realistic to me" (than another
component being compared). While it may be perfectly acceptable to the
objectivists to consider only variables that can be measured in terms of
frequency response or various kinds of distortion, I would be reluctant - as I
think most subjectivists would be - to attribute the various qualities I've
mentioned above to specific, replicable measurements.


What else is there to attribute them to? Sound really is just frequency and
amplitude. Every effect must have a cause, and those are the only possible
causes.

Also, how often, even within the frequency response realm, are complete graphs
presented that *might* account for a particular component being perceived as
relatively analytical, dark, or lean - all terms frequently used by
subjectivists?


I don't know. How often? (And what's your point?)

This is one of the reasons that I feel the 2 "camps" are really operating from
almost totally different frames-of-reference, and the endless challenges and
disputes about the value of double blind testing are, in practical terms,
unlikely to convince anybody of anything they don't already strongly believe.


Can't argue with that!

bob



  #206   Report Post  
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

False dichotomy. The current benchmark for determining if there are
audible differences in wire and amps is a listening-alone test. One is
asked to determine a difference, any difference one chooses to seek or
think exists, by listening alone. The electrical parameters of the item
are not considered as part of the analysis of the listening-alone test.
The question is not whether there are things yet to be discovered, but whether
there are any demonstrable things which could change the audibility of a
difference. All gear has differences, but the benchmark shows that they are
below the threshold of audibility, which is even higher when music is used; and
that is the source for the listening-alone benchmark test. There is a
dichotomy, imo, but not to be discussed now, and it has nothing to do with
spec sheets or measurements.


Except I think most Audiophiles would fall in a spectrum between the two
camps - the extremes being those that think that we have learned and can
measure everything there is to know - and that system integration is not
more difficult than comparing specification sheets (What we call
"objectivists")- and those that feel that spec sheets are not what you hear
- and that testing and analysis is useless unless it is done with listening
to music (we call these folks "subjectivists").

It seems to me that the titles are misnomers to a large degree - and it is
rare that people will be extreme to one degree or other.

It is a little like engineering design - there are people that tend toward
extensive simulation and those that iterate on the bench. The most talented
engineers tend to be able to work in both worlds - since simulations
generally teach you a lot about the principles and tend to show trends rather
well - but more complicated simulations generally fall short of exact
predictions (at least at RF frequencies). At the end, on the bench, you
need to get the circuits or system to behave as per your design targets.
Without one or the other the design is incomplete!

Kind of like getting a sheaf of data on a speaker - and buying it based
solely upon those sheets without listening to it. Like plunking down a
non-refundable $2000/pr or something.

So in a roundabout way -- my question is: Are the terms "subjectivists"
and "objectivists" misnomers? Can a so-called "objectivist" get "lost in
the woods" as badly as a so-called "subjectivist" but in a different way?

  #207   Report Post  
Nousaine
 
Posts: n/a
Default "Evaluation" test or chicken fat test, how to decide?

"Harry Lavo" wrote:

wrote in message ...
"Easy does not equate to useful if they are not accurate. At best, then,
they would be a waste of time. At worst, they would lead to misleading
conclusions.

The decision not to do it is very logical if you have reason to doubt the
validity of the test, until and unless the test is validated."


In another place you said the "evaluation" approach was often used by
reviewers and that is why anything else needs to be "validated" by
"evaluation" first. Leaving aside that dubious bit of logic and
precondition, how do we "evaluate" the "validity" of the "evaluation"
approach? This assumes the "evaluation" experience is in fact a valid
test - or is it just some time spent listening carefully? Frankly, I see your
continuing drumbeat about the "evaluation" test as a bit too much of a strawman.
I offered my chicken fat test as an example of what you propose on this
ng. In it I said any other test must be validated while having one's left
hand submerged in chicken fat, because that is how I listen, and any other
test must first be done thusly to make sure it too is valid. Sure it is
easier not to use chicken fat, but not doing so is just a waste of time if
another test is not valid, and we won't know until and unless it is done
with chicken fat.


Let me put it this way. I didn't say just reviewers. I said reviewers and
most audiophiles. They take equipment home. They listen to it. They say
things like "the soundstage just opened up", etc. The objectivists here
say it is all just imaginary, the result of sighted bias. So how do you
prove that?


You imply that objectivists (your word) don't listen. Don't take equipment
home. Nobody says that all of it is "just imaginary".

But it is true that the urban legends of amp/cable sound do appear to be the
result of sighted bias, simply because no one has ever been able to provide a
single experiment or demonstration where the 'sound' of an amplifier or wire of
nominal competence could be "heard" by a user, manufacturer or merchandiser of
his own gear when even the most modest of bias controls were implemented (cloth
over terminal in personal reference system using personally selected
recordings.)


You can't do it by substituting another test.


As I said the doubt about, to use your term, open-ended evaluation comes from
the fact that it hasn't been validated as a useful tool when it comes to
reliable and valid assessment of acoustical sound quality.

Even so it often has no practical alternative (say with loudspeakers), but even
then few end-users or review staff implement even the slightest field-levelers
(level matching, reference equipment, common program material, rating
categories) for evaluation purposes.
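For what it's worth, the level matching Nousaine lists first is simple arithmetic. A minimal sketch (my illustration; the signals and the 1.5 dB mismatch are hypothetical):

```python
import numpy as np

def trim_db(reference, device):
    """Gain (dB) to apply to `device` so its RMS level matches `reference`."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(reference) / rms(device))

# Hypothetical case: amp B plays the same 1 kHz tone 1.5 dB hotter than amp A
sr = 48000
t = np.arange(sr) / sr
amp_a = 1.0 * np.sin(2 * np.pi * 1000 * t)
amp_b = 10 ** (1.5 / 20) * np.sin(2 * np.pi * 1000 * t)
adjust = trim_db(amp_a, amp_b)
print(round(adjust, 2))  # -1.5: pad amp B down by 1.5 dB before comparing
```

Matching levels to within about 0.1 dB is the usual precondition for any listening comparison, since small loudness differences are reliably heard as "better" sound.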

And yet, the hue and cry is loud and rampant against any credible evidence
that has been used to find the truth in listening. Toole built a career and a
large body of evidence at Canada's NRC. Lipshitz and Vanderkooy, Greiner,
Shanefield and others have done so in academia. Carlstrom, Clark, Muller and
Krueger developed the ABX method specifically to highlight even the smallest
audible differences in ways that enhance them. Yet none of those people have
"found" amp differences that were not directly tied to already known
performance characteristics.

Indeed the codec developers use a similar technique to verify subtle
differences. Amp/cable guys use those? Nope.

Why not? I don't think it's because they are too hard to use or implement. I
think it's because they don't give results that are acceptable to high-enders.
They'd rather hypothesize about decision-making processes and long,
resource-eating experiments that no one (not even them) will undertake.

You have to do it by simply "blinding" the listener leaving the test as
close as possible to what they do naturally.


Those were the Sunshine Trials, my 5-week amp test, "Flying Blind" and "Wired
Wisdom."


That is what the evaluate sighted vs. blind leg of the control test is
designed to do...with just enough analytic rigor to make statistical
analysis possible.

You can't just switch to comparative testing, because that is yet a second
variable in addition to blinding, versus the original sighted listening.


And they like to make up new "variables". IMO it's all listening. The idea here,
it seems to me, is to add enough extra data so that when the results are all in,
one can search through the rubble hoping to find some "data" that can be
twisted around and used to support the original hypothesis.

I say this only because of the unfortunate case of the null capacitor
dielectric experiment, where even within the past couple years it has been
reported that subjects were able to identify cap dielectric by sound alone when
there was NO data that supported even a hope of that being true when considered
in context.

Why is this so hard to understand? Especially for those of you who are
scientists. I'm not saying evaluative listening is "validated". I'm saying
that is what is done. If you want to say it is all imaginary and will go
away with blinding, then *that* is what you add as a variable...blinding.
Not another, completely different type of test.


Well why not just validate it Harry? But as far as "what is done" I do blind
tests. Others have been doing them right along. The first published one I have
a copy of, about consumer audio equipment, was published in 1976. ABX machines
were first available in the early 80s. Blind tests have been used to validate
all consumer data-reduction techniques dating back to DCC.

They are a genuine part of the accepted evaluation and testing methodology for
everything EXCEPT high-end audio and, I would have to say, parapsychology.
  #208   Report Post  
Bromo
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

On 5/19/04 7:11 PM, in article ,
" wrote:

False dichotomy. The current benchmark for determining if there are
audible differences in wire and amps is a listening-alone test. One is
asked to determine a difference, any difference one chooses to seek or
think exists, by listening alone. The electrical parameters of the item
are not considered as part of the analysis of the listening-alone test.
The question is not whether there are things yet to be discovered, but whether
there are any demonstrable things which could change the audibility of a
difference. All gear has differences, but the benchmark shows that they are
below the threshold of audibility, which is even higher when music is used; and
that is the source for the listening-alone benchmark test. There is a
dichotomy, imo, but not to be discussed now, and it has nothing to do with
spec sheets or measurements.


Okay -

Some questions - please answer if you think this would be Subjectivist or
Objectivist behavior:

1. If someone had a stereo system - and he or she positioned the speakers
so that they "sounded better" in their new positions - and did no
measurements or blind testing -- Objectivist or Subjectivist?

2. A person takes a microphone in their living room - and measures their
speakers with a test tone - and from that says that the speakers are bad for
everyone because of the measurement results.

3. Someone measures the vibration of their turntable and places it on a
damping platform - measuring an improvement in the vibration - thinking that
the system is that much better, but makes no claim as to the new sound?

4. Someone who goes to a Early Music concert - and compares the sound he or
she heard to the same performance on CD in the listening area while taking
measurements only in the listening room with a microphone.
  #209   Report Post  
Bruce J. Richman
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Bob Marcus wrote:

"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in.


The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements or ABX
tests can detect. There is no evidence that this is true.


There is no fallacy because there is no statement that measurements per se
cannot account for various perceptual phenomena experienced by subjectivists
who attribute sonic differences to certain pieces of equipment. However, unlike
the omniscient objectivists who consider the subject closed and not subject to
debate, I suspect that many subjectivists would consider the possibility that
certain variables routinely named in reviews (see my original post) may have
measurement correlates. Indeed, one of the points of John Atkinson's
measurements, for example, which accompany his Stereophile reviews, is to, when
evident, point out certain correlates between various frequency, distortion or
other technical measurements and subjective impressions obtained by reviewers.
Of course, ABX tests are irrelevant in this regard. Once an objectivist has,
of course, ruled out any and all possible measurement variations as possibly
accounting for any perceived differences, the futility of debating those with
different frames of reference becomes even more evident.

No doubt I'll be challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either in
a showroom or in one's home, one *might* be perceiving sonic qualities neither
measured nor easily defined by the usual objectivist standards. For example,
Harry has used the word "musicality".


A term with no clear definition. Nor is there any evidence that it means the
same thing to different audiophiles.


Nor was there a claim made that it did have a clear definition or mean the same
thing to different audiophiles. That said, one can certainly ask audiophiles
to describe more specifically what they mean when they use such terms, or more
precise ones such as "lean", "more body", etc., and then determine empirically
to what extent there is agreement or disagreement amongst different observers.
For subjectivists, I would suspect that a more relevant - and practical -
question would be the extent to which a given component is "preferred" to
another for the same reason by a group of listeners. For example, if 75% of a
group preferred component A to component B, and when asked, were able to
reasonably attribute the same approximate reason for their preference - in
terms of some sonic qualities - this would, of course, never meet objectivist
standards, in which only measurements currently accepted by that group are of
importance, but it might well be relevant to subjectivists who place greater
value on listening experiences in a natural environment than on argument purely
by specifications. Also, unless one is willing to assume that all possible
measurements have already been discovered and enshrined as all there is to
know, it would seem reasonable to assume that some subjective qualities could
be correlated to some extent with specific measurements yet to be tried.

And I might use the same term, and others might make reference to the imaging,
soundstaging or "depth of field" qualities associated with a particular piece
of equipment.


Are these "qualities associated with a particular piece of equipment"?
These are all mental constructs.


On the contrary, these are descriptions of how music is actually experienced by
many listeners. Of course, perceptions are involved, but these perceptions are
influenced by the methods used in recording the music and reproducing it
through the audio system.

The imaging isn't "real"--the sound is being produced at only two points. Our
brains construct these images based on sounds reaching our ears from all
directions, as a result of the interaction between the speakers and the room.
The audio system's contribution to this process is the direct sound--simply
changes in air pressure--radiating from the speakers. And that sound can be
fully measured. After all, beyond frequency and amplitude, what else is there
coming out of a speaker?


It would seem obvious that the ability of a given component to replicate the
intentions of the recording team in producing a given set of instrumentation
and/or vocals - in which instruments and vocalists appear to the listener to
appear in different places in the soundfield - is *not* as simplistic as you
claim. More specifically, it goes without saying that the proportion of the
amplitude of a given instrument, for example, assigned to the 2 channels after
mixdown in the recording will, by design, attempt to "locate" the instrument in
the sound field (e.g. strings on the left, woodwinds in the center, double
basses and cellos on the right in a typical symphony setup). It does not seem
beyond the realm of possibility that some components might be more precise or
accurate (pick whatever adjective you prefer) at transferring the recording
engineer's intentions to the listening room of a subjectivist who appreciates
things such as "imaging" ability.

That's why objectivists don't buy the notion that there are things they
can't measure, or things that ABX tests can't detect. We don't have to
"measure imaging"; all we have to do is to measure the things that cause our
brains to "image."


There was no claim made that certain things can't be measured - just that the
variables sometimes discussed by subjectivists are not usually subject to any
*attempt* to measure them. It might well be possible, for example, to measure
"imaging" if one could measure the relative amplitude of certain single
instruments, or a vocalist's voice, at the speaker sources. One would expect,
for example, that a singer centered between the speakers would have roughly
equal amplitudes coming from both left and right speakers. Other instruments
in the orchestral or band mix would presumably have different proportions from
left and right depending on their locations.
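The measurement Bruce proposes is straightforward to sketch. Assuming a standard constant-power pan law (an assumption on my part - real mixes vary), the apparent position of a source follows directly from the relative channel amplitudes he describes:

```python
import math

def pan_position(rms_left, rms_right):
    """Recover an apparent left-right position from relative channel
    amplitudes, assuming a constant-power pan law (L = cos t, R = sin t).
    Returns -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right)."""
    theta = math.atan2(rms_right, rms_left)   # 0 .. pi/2
    return (theta - math.pi / 4) / (math.pi / 4)

print(round(pan_position(0.7, 0.7), 2))    # equal levels -> 0.0, the centered singer
print(round(pan_position(1.0, 0.0), 2))    # all left -> -1.0
print(round(pan_position(0.5, 0.866), 2))  # -> 0.33, right of center
```

Per-instrument amplitudes would of course have to be isolated first (by frequency band, or from the multitrack), which is where the practical difficulty lies, not in the arithmetic.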

(Before anyone jumps on the point, I'll concede that radiation patterns of
loudspeakers and room interactions are extremely complex and certainly not
reducible to simple measurements.


On this point we agree.

But loudspeakers aren't part of the obj/subj debate.


Of course not. However, since loudspeakers are used by both camps to make their
judgments, I would think that their interaction with the components under test
could certainly be a relevant factor in determining test results. Many
reviewers have commented on the relative synergy or lack of synergy between a
certain product, for example, and a certain speaker. Now, as an objectivist,
you may not accept this line of reasoning, but consider, as you've mentioned,
the variation in radiation patterns, and I'll add in other speaker complexities
such as impedance curves (said as the owner of electrostatics that have wild
impedance swings and definitely *don't* sound the same with every amplifier or
preamplifier), sensitivities, possible crossover effects, etc.

And components ahead of the speakers have no impact on these radiation
patterns--which is why it's so funny to read reviewers who talk about certain
cables "opening up the sound.")


One can always find extremes to ridicule. I lose very little sleep over the
hyperbole of many cable manufacturers. But I don't think they are reified by
too many subjectivists.


Still others may simply say "this sounds more realistic to me" (than another
component being compared). While it may be perfectly acceptable to the
objectivists to consider only variables that can be measured in terms of
frequency response or various kinds of distortion, I would be reluctant - as I
think most subjectivists would be - to attribute the various qualities I've
mentioned above to specific, replicable measurements.


What else is there to attribute them to? Sound really is just frequency and
amplitude. Every effect must have a cause, and those are the only possible
causes.


See my comments above re. imaging. Amplitude differences may be responsible
in some cases. Also, what I had in mind in making my comments was not to
disagree with your argument re. frequency and amplitude as the only salient
measurements, but with *how* they might be measured by an objectivist - or
perhaps more typically, on a specification sheet, in which, for example, a
frequency range with plus and minus dB points is given, but little attention is
paid to how that "range" actually operates into a given speaker load, or how it
might actually vary at different points along the response curve. It would
certainly seem possible that there could be some peaks and valleys in this
curve, for example, that might interact with a given speaker's *own* set of
technical characteristics to produce a certain "character", if you will. I
apologize for using a real-life, subjective term.


Also, how often, even within the frequency response realm, are complete graphs
presented that *might* account for a particular component being perceived as
relatively analytical, dark, or lean - all terms frequently used by
subjectivists?


I don't know. How often? (And what's your point?)


The question was rhetorical. And the point, as illustrated above, is
self-evident, except to those that might assume that all questions have been
answered and are not debatable. Again, more evidence of the total waste of
time in trying to talk about extreme objectivist - subjectivist differences.

This is one of the reasons that I feel the 2 "camps" are really operating from
almost totally different frames-of-reference and the endless challenges and
disputes about the value of double blind testing are, in practical terms,
unlikely to convince anybody of anything they don't already strongly believe.


Can't argue with that!


Really? That's a surprise. On practically everything else, I recommend we
agree to disagree. But given your agreement with my final paragraph, why
monopolize RAHE, to a large extent, with endless discussions of this old
argument?

I plead guilty to injecting my comments here, but generally speaking, I
usually steer clear of entering this endless cycle of retorts.

YMMV.

bob









Bruce J. Richman

  #210   Report Post  
chung
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

S888Wheel wrote:
Objectivist -- Are these misnomers? (WAS:

From: chung
Date: 5/18/2004 9:55 PM Pacific Standard Time
Message-id: LMBqc.22804$gr.1936664@attbi_s52

Bromo wrote:

On 5/18/04 8:43 PM, in article D4yqc.22117$gr.1808882@attbi_s52, "Harry
Lavo" wrote:

Oh you've made your case very clear. As long as you can avoid dealing with
other bias controlled experimental results you'll be quite happy to continue
debating.

Nope, I'm not happy debating. I'd like to start setting up a test. But so
far I can't even get serious suggestions to what to test (i.e. two component
DUT's with fairly universal concurrence on both sides, i.e. objectivists
universally accept that there will be no difference; subjectivists
universally believe that there will be a difference.)

Except I think most Audiophiles would fall in a spectrum between the two
camps - the extremes being those that think that we have learned and can
measure everything there is to know - and that system integration is not
more difficult than comparing specification sheets (What we call
"objectivists")


That's not what we called objectivists. I would postulate that an
objectivist, as far as this newsgroup is concerned, is one who believes
in the validity of (a)standard controlled-bias testing like DBT's, and
(b) measurements.

- and those that feel that spec sheets are not what you hear
- and that testing and analysis is useless unless it is done with listening
to music (we call these folks "subjectivists").


I suggest you get your definitions straight. Check this webpage:

http://www.dself.dsl.pipex.com/ampin...o/subjectv.htm

In particular, pay attention to this:
***
A short definition of the Subjectivist position on power amplifiers
might read as follows:

* Objective measurements of an amplifier's performance are
unimportant compared with the subjective impressions received in
informal listening tests. Should the two contradict the objective
results may be dismissed out of hand.
* Degradation effects exist in amplifiers that are unknown to
engineering science, and are not revealed by the usual measurements.
* Considerable latitude may be used in suggesting hypothetical
mechanisms of audio impairment, such as mysterious capacitor
shortcomings and subtle cable defects, without reference to the
plausibility of the concept, or gathering any evidence to support it .
***



Of course these definitions of subjectivist positions were defined by a
self-proclaimed objectivist. You know it is rarely flattering when an
objectivist speaks for a subjectivist or vice versa.


So, given that you frequent this newsgroup, is there anything in Self's
definition you deem inaccurate?

Here is something that was said about all objectivists in a Stereophile
article: "For an objectivist, the musical experience begins with the
compression and rarefaction of the local atmosphere by a musical instrument and
ends with the decay of hydraulic pressure waves in the listener's cochlea; "


Have you met an objectivist that behaves in such a way?


http://www.stereophile.com/asweseeit/602/

Maybe both sides would be better served if they were left to speak for
themselves.




  #211   Report Post  
Nousaine
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

(Bruce J. Richman)
wrote:

Harry Lavo wrote:

wrote in message news:Udrqc.1168$zw.477@attbi_s01...
Harry, every point you want to make about testing, all of them, can be
accomplished by an abx test. Using it you can know what a and b are and
have them visable at all times. You can "evaluate" and jot notes and do
whatever you wish for as long as you wish listening to either a or b alone
or in comparsion for as many times as you wish. Once you havd down pat
what you think clearly different, preference can be formed but is
irrelevant,
any difference at all you think exist should be clearly there when you hit
the x choice. You can listen to x as long as you wish, consult your notes
etc. by which to identify if it is a or b. What part of the above doesn't
in fact accomplish your goals? In fact it could take less time as the x
choice is done at the end of an "evaluation" period and the entire period
need not be repeated, unless you choose to do so. We can go further,
there have been abx tests where folk took as long as they wished, we can
consult these results to get at least a preliminary look at the test you
are really proposing; as an indication if it is worth redoing. What you
propose is the above without the abx being a part of the "evaluation"
period, but it should make no difference at all if doing the "evaluation"
the abx box is sitting there already for the x choice to be made. Once a
minimal number of "evaluation" and x have been done, you can compare notes
and results without regard to the stats if you wish, but of course the
stats will say if x choices were at a level different then that of random
guessing.
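The statistics referred to here are a simple one-sided exact binomial test: how likely is a given number of correct X identifications if the listener is only guessing? A minimal sketch (my illustration; the 12-of-16 and 10-of-16 scores are hypothetical):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided exact binomial p-value: the probability of getting at least
    `correct` right answers in `trials` ABX presentations by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is commonly taken as significant at the 0.05 level
print(round(abx_p_value(12, 16), 3))  # 0.038
print(round(abx_p_value(10, 16), 3))  # 0.227 -- consistent with guessing
```

Note the test only says whether the identifications beat chance; it says nothing about which presentation sounded "better," which is exactly the separation between difference and preference described above.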


You seem to miss the part about having to make a conscious "choice" (a
left-brain function) versus simply evaluating the musicality of the equipment
(a right-brain function). I am not convinced (because I have never been shown
a shred of evidence) that they measure the same thing. That is why I have
proposed the test the way I have. You cannot "assume" that a test that is
valid for determining if a specific non-musical artifact can be heard is
also valid for open-ended evaluation of audio components. They are two
different things entirely. I personally think the a-b-x test is even more
suspect than a straight a-b.









I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in.


In the Sunshine trials no measurements were ever made. The closest to
measurements were level matching at 100, 1000 and 10,000 Hz. Yet the subject
was unable to reliably distinguish his Pass Aleph monoblocks from a modest
Yamaha integrated amplifier when even the most modest of bias controls were
implemented (cloths placed over I/O terminals) using his personally selected
programs in his reference system.

No doubt I'll be
challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either in
a showroom or in one's home, one *might* be perceiving sonic qualities neither
measured nor easily defined by the usual objectivist standards. For example,
Harry has used the word "musicality". And I might use the same term, and
others might make reference to the imaging, soundstaging or "depth of field"
qualities associated with a particular piece of equipment. Still others may
simply say "this sounds more realistic to me" (than another component being
compared). While it may be perfectly acceptable to the objectivists to
consider only variables that can be measured in terms of frequency response or
various kinds of distortion, I would be reluctant - as I think most
subjectivists would be - to attribute the various qualities I've mentioned
above to specific, replicable measurements.


So? Who cares? If you cannot tell them apart with your eyes closed, who cares
what the measurements are or what "variables" you are listening for?

Also, how often, even within the frequency response realm, are complete graphs
presented that *might* account for a particular component being perceived as
relatively analytical, dark, or lean - all terms frequently used by
subjectivists?

This is one of the reasons that I feel the 2 "camps" are really operating from
almost totally different frames-of-reference and the endless challenges and
disputes about the value of double blind testing are, in practical terms,
unlikely to convince anybody of anything they don't already strongly believe.

Chacun a son gout!

Bruce J. Richman


The 'debates' will be endless because the camp without any credible supporting
evidence has no other resort except "debate", including hypothesizing long,
expensive experiments that will never be done.

  #212   Report Post  
Nousaine
 
Posts: n/a
Default Does anyone know of this challenge?

"Harry Lavo" wrote:

"Steven Sullivan" wrote in message
...
Nousaine wrote:
"Harry Lavo"
wrote:

"normanstrong" wrote in message
news:khbqc.15167$gr.1357885@attbi_s52...
"Harry Lavo" wrote in message
...


snip not particularly relevant to what follows



Let's try this on for size: Suppose you have 2 speaker cables which
appear to have quite different sonic signatures. You have essentially
unlimited time to evaluate them in any way you feel necessary. All of
this is sighted, of course. (I recommend writing down your thoughts
as you evaluate the cables for future reference.) Is it your claim
that even this is not enough to be able to identify which cable is
connected without seeing it?

At some point, you're going to have to bite the bullet and say, "This
is Cable A. I recognize the characteristics that I wrote down during
the evaluative period." If not, I think we're wasting everybody's
time--Harry's as well--and talking past each other.


In all honesty, it doesn't make a difference. If I did the tests months
apart (which would be the best way), I wouldn't even expect to remember my
ratings accurately. What I would want to do is to listen once again, the
same way, and rate the two components again the same way. Only this time I
wouldn't know which was which. And if I did either remember or come up
independently with a similar sonic signature, and accurately duplicate my
ratings under blind conditions, and the majority of other testees did the
same, then statistically you'd have to say that the sighted differences were
real and that blinding per se did not invalidate them. If on the other
hand, my initial ratings were "random" because the differences did not
really exist, then I could not duplicate them except by chance, and over the
group of testees the results would be random and would not correlate
statistically. And I could do all this without ever making a "choice".


OK; and if your results were not statistically confirmed by your second
listening then what would your conclusions be? You'll say that "blinding"
caused the difference, whereas most everybody else would conclude that the
subject was unreliable (which would be true) and that he/she didn't really
"hear" definable acoustical differences the first time.



Also, I wonder if Harry would be willing to make a more modest claim,
if his second listening yielded a 'no difference' result: namely, that *his*
'first listening' perception of difference between the two DUTs was probably
imaginary.
And having done so, would that experience temper/inform his other claims of
having heard a difference? Would he, in effect, become more of an
'objectivist'?


I've already answered this to Tom at some length in a reply to his post. In
short the answer is "of course I would; that's exactly what I said above".
That is, if the group as a whole came up with no statistical significance, it
would prove that the initial perceived differences were due to sighted bias.
On the other hand, me doing a "one-timer" would have no statistical
significance by itself, unless I did it twenty times. And the test is not
set up that way because you cannot easily do that many repeated
observational tests. Better to have twenty people do it once.
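The arithmetic behind Harry's "twenty people do it once" reduces to a one-tailed binomial test: score each listener's blind repeat as matching their sighted rating or not, and ask how often chance alone would produce that many matches. A minimal sketch (mine, not from the thread; `binomial_p_value` is an illustrative helper, not anyone's published method):

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-tailed probability of at least `successes` correct out of
    `trials` if every subject were simply guessing at `p_chance`."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Twenty listeners, one blind repeat each: how many must reproduce their
# sighted ratings before "chance" stops being a plausible explanation?
for correct in (12, 15, 18):
    print(f"{correct}/20 correct: p = {binomial_p_value(correct, 20):.4f}")
# 12/20 is unremarkable (p ~ 0.25); 15/20 clears p < 0.05; 18/20 is
# overwhelming (p ~ 0.0002).
```

The same computation also shows why a single subject's "one-timer" proves nothing: one trial at 50% chance can never reach significance, which is exactly the point about needing the group.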


You may recall my discussion of food testing. Final testing was always done
monadically, or proto-monadically (less frequent). Consumers were not
"comparing", they were evaluating. The statistical analysis between the two
(or more) sets of testees/variables was what determined if there was in fact
a difference/preference. And on individual attributes as well as some
overall satisfaction ratings, so the results could be understood in depth.

OK; and what if those subjects were unable to reliably distinguish between the
samples? Didn't you have built-in controls to test that too?



Seems to me that any 'monadic' evaluation is also a sort of comparison -- indeed,
any sensation that we have to describe involves comparing, in the sense of
asking yourself, e.g., does this taste like my *memory* of salty, sweet,
bitter, etc.
If the evaluative report form is multiple-choice, this 'choosing' is all
the more explicit.
If the evaluative report form is scalar ('on a scale of 1-10, with 1 being
sweet and 10 being bitter') there's still choice involved. There is always
some sort of real or virtual reference that one is comparing the sensation to.
I would posit that the same is true for a 'monadic' evaluation of, say,
a cable. You aren't directly comparing it to another real cable, but you are
comparing what you hear to your store of memories of what 'smoothness', 'bass
articulation', or whatever, sound like. Otherwise you could not make
an 'evaluation'.


Of course, but isn't that how people arrive at the conclusions they do in
this hobby of ours?


Sure they do. They take stuff home (sometimes) and then compare it to what they
are currently using. Otherwise they take the recommendations of friends,
salesmen and magazine reviews.

None of those I know EVER (except for some SMWTMS members) score components on
an evaluative scale and make judgements on that.

By designing the test and the scales properly, all
people have to do is make that subjective, right-brain included kind of
response. They don't have to make a "choice". And the statistics will tell
us the rest.


So how many of the components you now own were purchased based on this
evaluative basis?

You'd like me to tell you how I decided on the dozen amplifiers I now own?
How about my speakers? Media devices?

Exactly none were based on long term open-ended listening evaluation. The
amplifiers were chosen on size (both power capability and physical size),
features (newer ones must have level controls and dual banana outputs) and
price. Yet all of them sound exactly the same under double blind listening
conditions and some of them have sounded exactly like high-end amplifiers in
controlled listening tests in other venues.

My media devices have been chosen primarily on functionality. None of them have
sonic defects that are detectable even under long term evaluation.

Processors are all Lexicon because they are sonically transparent until you use
the surround modes which can transform all my 2-channel media into excellent
surround performance.

Speakers? All based on measured performance. Subwoofer, self designed and
constructed because I couldn't find any commercial unit that would play
currently available programs at reference playback levels.

I'm as advanced an enthusiast as you can find. Of course my position in the
industry helps me evaluate equipment that I will never consider owning and
allows access to accommodation pricing on some things. But my system (not
counting measurement and video gear) could be duplicated by any enthusiast in
2-channel format (with subwoofer) for a few thousand dollars.
I've never felt the need for in-home extensive long term evaluative listening
to make decisions that were sonically satisfying.

Results from published bias controlled tests were very useful. Measurement gear
has been helpful but if I had someone else to do same and publish results I
wouldn't need that either.

So why would I want to assist Harry on his validation study for his technique
(yes, I see that as validating his method)? Why not? I've given everybody else
the same chance.

  #213   Report Post  
normanstrong
 
Posts: n/a
Default "Evaluation" test or chicken fat test, how to decide?

"Nousaine" wrote in message
...

They are a genuine part of the accepted evaluation and testing methodology for
everything EXCEPT high-end audio and, I would have to say, parapsychology.

The parapsychology example is interesting, since J. B. Rhine could
trot out lots of data supporting the existence of ESP. How did he
get such results? It's not too difficult.

1. Throw out unfavorable results, disqualifying them with any reason
you can think of. If a testee guesses wrong 100% of the time--which
is unlikely, but possible--throw out the entire test on the theory
that he must be cheating. You don't have to do this very often to
make a large effect on the data.

2. Commence recording the data at a point where favorable results
start and calling previous unfavorable results 'warm up'. A perfect
example is the guy that claims he can flip a coin heads 4 times in a
row. If the first flip comes up tails, you simply say you were
checking the coin. This immediately doubles your chances of success.

3. There are bound to be occasional individuals who guess right a
surprisingly high percentage of the time. Call those people
"sensitives." Having assumed the existence of ESP, and labelled this
individual as having lots of it, it becomes easy to justify throwing
out bad results as being the result of fatigue. You can't change the
results, but you sure can stop the test and throw out the results up
to that point.

This last is common; I've done it myself. It goes right along with
discarding 'outliers'.
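Norm's item 2 checks out arithmetically: if every leading tail is written off as "checking the coin," the first head comes free, and the odds of four straight heads rise from 1/16 to 1/8 -- an exact doubling. A quick simulation (my sketch, not from the post) bears this out:

```python
import random

def honest_run(flips: int = 4) -> bool:
    """Flip `flips` times; succeed only if every flip is heads."""
    return all(random.random() < 0.5 for _ in range(flips))

def warmup_run(flips: int = 4) -> bool:
    """Same goal, but discard every leading tail as 'checking the coin':
    the streak is only counted from the first head onward."""
    while random.random() >= 0.5:  # tails: discard and re-flip
        pass
    # the first head is now banked; only flips-1 more heads are needed
    return all(random.random() < 0.5 for _ in range(flips - 1))

random.seed(1)
n = 200_000
honest = sum(honest_run() for _ in range(n)) / n  # expected ~1/16 = 0.0625
warmed = sum(warmup_run() for _ in range(n)) / n  # expected ~1/8  = 0.1250
print(honest, warmed)
```

The same mechanism scales: in any forced-choice test, silently restarting after early misses inflates apparent hit rates, which is why protocols have to commit to the trial count up front.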

Norm Strong

  #214   Report Post  
Steven Sullivan
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Bruce J. Richman wrote:
Bob Marcus wrote:


"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in.


The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements or ABX
tests can detect. There is no evidence that this is true.


There is no fallacy because there is no statement that measurements per se
cannot account for various perceptual phenomena experienced by subjectivists who
attribute sonic differences to certain pieces of equipment. However, unlike
the omniscient objectivists who consider the subject closed and not subject to
debate, I suspect that many subjectivists would consider the possibility that
certain variables routinely named in reviews (see my original post) may have
measurement correlates. Indeed, one of the points of John Atkinson's
measurements, for example, which accompany his Stereophile reviews, is to, when
evident, point out certain correlates between various frequency, distortion or
other technical measurements and subjective impressions obtained by reviewers.


Of course, ABX tests are irrelevant in this regard. Once an objectivist has,
of course, ruled out any and all possible measurement variations as possibly
accounting for any perceived differences, the futility of debating those with
different frames of reference becomes even more evident.


Hmm, I wonder, is John Atkinson providing bench test figures for cables and
interconnects these days?

My occasional experience of Stereophile is that when measurements *fail* to
correlate with the sometimes extravagant claims made in the review, they are simply
ignored. When they *can* be made to explain some aspect of the reviewer's
experience, they are cited.

It would seem obvious that the ability of a given component to replicate the
intentions of the recording team in producing a given set of instrumentation
and/or vocals in which instruments and vocalists appear to the listener to
appear in different places in the soundfield is *not* as simplistic as you
claim.


It is also obvious that unless you are familiar with the studio in which
the recording was mixed and mastered, then you simply can't say how closely
the intentions of the recording team were replicated in your home environment.
I suspect this is one reason why the 'Absolute Sound' was posited years ago
as the 'reference standard'...though that, too, is highly variable.
Is it the 'absolute sound' from seventh row center in Carnegie Hall?

--

-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director

  #215   Report Post  
Steven Sullivan
 
Posts: n/a
Default Does anyone know of this challenge?

Harry Lavo wrote:
"Steven Sullivan" wrote in message
OK; and if your results were not statistically confirmed by your second
listening then what would your conclusions be? You'll say that "blinding"
caused the difference, whereas most everybody else would conclude that the
subject was unreliable (which would be true) and that he/she didn't really
"hear" definable acoustical differences the first time.



Also, I wonder if Harry would be willing to make a more modest claim,
if his second listening yielded a 'no difference' result: namely, that *his*
'first listening' perception of difference between the two DUTs was probably
imaginary.
And having done so, would that experience temper/inform his other claims of
having heard a difference? Would he, in effect, become more of an
'objectivist'?


I've already answered this to Tom at some length in a reply to his post. In
short the answer is "of course I would; that's exactly what I said above".
That is, if the group as a whole came up with no statistical significance, it
would prove that the initial perceived differences were due to sighted bias.
On the other hand, me doing a "one-timer" would have no statistical
significance by itself, unless I did it twenty times. And the test is not
set up that way because you cannot easily do that many repeated
observational tests. Better to have twenty people do it once.


Indeed, but that sidesteps the question I asked.

If it requires twenty iterations for *you* to verify a claim about what *you*
hear, then why are you making any unqualified claims of difference at all
about cables, amps, transports?

Shouldn't you be adding some sort of 'I could be wrong' proviso as a matter
of course?

The issue is existing claims of difference.
I question whether routine audiophile claims of difference arise from the
sort of purely 'evaluative' method you describe. I question whether you yourself
even use this 'evaluative' method as described, which you say sidesteps
any 'comparative' cognition. (I'm talking about the 'sighted' evaluative
method, not the 'blinded' version.)

Seems to me that any 'monadic' evaluation is also a sort of comparison -- indeed,
any sensation that we have to describe involves comparing, in the sense of
asking yourself, e.g., does this taste like my *memory* of salty, sweet,
bitter, etc.
If the evaluative report form is multiple-choice, this 'choosing' is all
the more explicit.
If the evaluative report form is scalar ('on a scale of 1-10, with 1 being
sweet and 10 being bitter') there's still choice involved. There is always
some sort of real or virtual reference that one is comparing the sensation to.
I would posit that the same is true for a 'monadic' evaluation of, say,
a cable. You aren't directly comparing it to another real cable, but you are
comparing what you hear to your store of memories of what 'smoothness', 'bass
articulation', or whatever, sound like. Otherwise you could not make
an 'evaluation'.


Of course, but isn't that how people arrive at the conclusions they do in
this hobby of ours?


Yes. Which is one reason why I question whether the purely 'evaluative' mode of
comparison as described by yourself even exists in the hobby.

By designing the test and the scales properly, all
people have to do is make that subjective, right-brain included kind of
response. They don't have to make a "choice". And the statistics will tell
us the rest.


Again, I don't know where you are getting your views on brain lateralization,
but they seem rather simplistic and outdated. Still, I'm trying to find some
literature support for them, particularly as regards auditory comparison, but
failing so far.

--

-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director



  #216   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

On Wed, 19 May 2004 17:37:22 GMT, (Bruce J. Richman)
wrote:

Stewart Pinkerton wrote:

On Tue, 18 May 2004 21:46:37 GMT,
(Bruce J. Richman)
wrote:

I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in. No doubt I'll be
challenged on this view, but let me explain.


First, explain which part of the quoted 'objectivist' standpoint has
*anything* to do with anything *measurable*. These are based on
controlled *listening* tests, and have nothing to do with
measurements.

The "controlled listening tests" obviously involve the listeners determining
whether the DUT's sound the same or different. This is a form of measurement,
although on a dichotomous basis rather than an interval scale. Every data
point recorded in an ABX test or even in a simpler A/B comparison is
obviously a measurement of the observer's ability to differentiate or not
differentiate between the 2 components being evaluated.


Now you are simply playing with semantics. None of that has anything
to do with the 'measurable variables' which you claim are of interest
to objectivists.

When one reads a subjective review, or perhaps does one's own review either in
a showroom or in one's home, one *might* be perceiving sonic qualities either
not measured nor easily defined by the usual objectivist standards. For
example, Harry has used the word "musicality". And I might use the same term,
and others might make reference to the imaging, soundstaging or "depth of field"
qualities associated with a particular piece of equipment. Still others may
simply say "this sounds more realistic to me" (than another component being
compared).


Fine - but does it actually sound *different* from the other
component? If not, then expressions of preference based on sound
quality are hardly relevant................

Again, for those that consider various technical specifications or
bias-controlled testing to be the one and only determinant of differences, I'm
sure that this is *not* relevant.


Measurements are irrelevant here, and bias-controlled listening is the
*only* determinant of differences which has any validity outside the
skull of the individual listener.

Hence, my comments about different
frames-of-reference. However, for those subjectivists who choose to let their
perceptions of difference play a role in choosing what components they use or
purchase - of course, preferences are relevant. The central point is that
these components *may* sound different to *them*, and if asked how or why, they
may describe perceptions that cannot easily be disregarded and/or ridiculed by
traditional frequency or distortion variable measurements.


The trouble is that they no longer hear these 'differences' when they
don't *know* what's connected....................

I don't profess to
know exactly how one would go about measuring, for example, differences in
"imaging" or perceptions of "more body" in the sound of a particular component,
for example, but it may well be that certain types of measurements might be
available that could answer these questions.


Measurement is irrelevant here - can they still hear differences in
'imaging' or 'body' when they don't *know* what's connected?

I suspect that for most
subjectivists, however, following through on their preferences will remain
preferable and all that is needed. Attempts at conversion via derision of
their positions have, at least on RAHE, largely been a waste of time IMHO.

While it may be perfectly acceptable to the objectivists to
consider only variables that can be measured in terms of frequency response or
various kinds of distortion, I would be reluctant - as I think would be most
subjectivists - to attribute the various variables I've mentioned above to
specific, replicable measurements.
Also, how often, even within the frequency response realm, are complete graphs
presented that *might* account for a particular component being perceived as
relatively analytical, dark, or lean - all terms frequently used by
subjectivists?


This is a total strawman. I repeat, the 'objectivist' standpoint is
based on *listening* tests, not measurements. In fact, I have always
preferred the term 'reliable and repeatable subjectivist' for my own
position, but that's rather long-winded! :-)

It is certainly no strawman whatsoever. See my comments above re. listening
tests themselves being binary measurements in which data is collected and used
to support and promote their use.


It is the purest semantic strawman, set up in defence of a lost
position - no one has *ever* before suggested that this is any kind of
'measurement'.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #217   Report Post  
Bruce J. Richman
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Tom Nousaine wrote:

(Bruce J. Richman)
wrote:

Harry Lavo wrote:

wrote in message news:Udrqc.1168$zw.477@attbi_s01...
Harry, every point you want to make about testing, all of them, can be
accomplished by an abx test. Using it you can know what a and b are and
have them visible at all times. You can "evaluate" and jot notes and do
whatever you wish for as long as you wish, listening to either a or b alone
or in comparison as many times as you wish. Once you have down pat
what you think is clearly different, preference can be formed but is
irrelevant; any difference at all you think exists should be clearly there
when you hit the x choice. You can listen to x as long as you wish, consult
your notes, etc., by which to identify if it is a or b. What part of the
above doesn't in fact accomplish your goals? In fact it could take less time,
as the x choice is done at the end of an "evaluation" period and the entire
period need not be repeated, unless you choose to do so. We can go further:
there have been abx tests where folk took as long as they wished; we can
consult these results to get at least a preliminary look at the test you
are really proposing, as an indication of whether it is worth redoing. What
you propose is the above without the abx being a part of the "evaluation"
period, but it should make no difference at all if, while doing the
"evaluation", the abx box is sitting there already for the x choice to be
made. Once a minimal number of "evaluation" and x have been done, you can
compare notes and results without regard to the stats if you wish, but of
course the stats will say if x choices were at a level different than that
of random guessing.


You seem to miss the part about having to make a conscious "choice" (a left
brain function) versus simply evaluating the musicality of the equipment (a
right brain choice). I am not convinced (because I have never been shown a
shred of evidence) that they measure the same thing. That is why I have
proposed the test the way I have. You cannot "assume" that a test that is
valid for determining if a specific non-musical artifact can be heard is
also valid for open-ended evaluation of audio components. They are two
different things entirely. I personally think the a-b-x test is even more
suspect than a straight a-b.









I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in.


In the Sunshine trials no measurements were ever made. The closest to
measurements was level matching at 100, 1,000 and 10,000 Hz. Yet the subject was
unable to reliably distinguish his Pass Aleph monoblocks from a modest Yamaha
integrated amplifier when even the most modest of bias controls were
implemented (cloths placed over I/O terminals) using his personally selected
programs in his reference system.


Measurements take many forms, including the decision to select either "A" or
"B" as being dissimilar either to a reference or each other. Binary
measurements *are* involved in comparative evaluations whether conducted blind
or sighted. All your cited results indicate per se is that Steve Zipser could
not make the discriminations between the DUT's that he claimed he could. As
for other results you will no doubt cite, as you have always done, to support
your position, various posters (not myself) have frequently questioned the
validity of the type of testing you support.

No doubt I'll be challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either in
a showroom or in one's home, one *might* be perceiving sonic qualities either
not measured nor easily defined by the usual objectivist standards. For
example, Harry has used the word "musicality". And I might use the same term,
and others might make reference to the imaging, soundstaging or "depth of field"
qualities associated with a particular piece of equipment. Still others may
simply say "this sounds more realistic to me" (than another component being
compared). While it may be perfectly acceptable to the objectivists to
consider only variables that can be measured in terms of frequency response or
various kinds of distortion, I would be reluctant - as I think would be most
subjectivists - to attribute the various variables I've mentioned above to
specific, replicable measurements.


So? Who cares? If you cannot tell them apart with your eyes closed, who cares
what the measurements are or what "variables" you are listening for?


Obviously, you don't - that's a given. And perhaps you represent the
objectivists who don't respect individual preferences derived from perceptions
of certain qualities that you derogate and minimize.
Again, the frames-of-reference of the 2 camps are so disparate as to make
conversations basically useless.

Also, how often, even within the frequency response realm, are complete graphs
presented that *might* account for a particular component being perceived as
relatively analytical, dark, or lean - all terms frequently used by
subjectivists?

This is one of the reasons that I feel the 2 "camps" are really operating from
almost totally different frames-of-reference and the endless challenges and
disputes about the value of double blind testing, are, in practical terms,
unlikely to convince anybody of anything they don't already strongly believe.

Chacun a son gout!

Bruce J. Richman


The 'debates' will be endless because the camp without any credible supporting
evidence has no other resort except "debate", including hypothesizing long,
expensive experiments that will never be done.




Au contraire, the debates will continue because of some irrational need on the
part of some from the other camp to try and convert audiophiles who prefer to
make their audio equipment decisions in ways of their own choosing.

It should also be noted that the complex experiments have (a) not
been proposed by myself, and (b) should not be opposed at any rate if the
objective is to obtain further information that might be useful.





Bruce J. Richman

  #218   Report Post  
Bruce J. Richman
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Steven Sullivan wrote:

Bruce J. Richman wrote:
Bob Marcus wrote:


"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent
amplifiers operating within their power ranges with appropriate speakers
sound the same", etc. possibly true *for the measurable variables that they
are interested in*, but nonetheless possibly not true - nor measurable by
a-b or a-b-x tests - for the sound qualities that subjectivists are
interested in.

The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements or ABX
tests can detect. There is no evidence that this is true.


There is no fallacy because there is no statement that measurements per se
cannot account for various perceptual phenomena experienced by subjectivists
who attribute sonic differences to certain pieces of equipment. However,
unlike the omniscient objectivists who consider the subject closed and not
subject to debate, I suspect that many subjectivists would consider the
possibility that certain variables routinely named in reviews (see my
original post) may have measurement correlates. Indeed, one of the points
of John Atkinson's measurements, for example, which accompany his
Stereophile reviews, is to, when evident, point out certain correlates
between various frequency, distortion or other technical measurements and
subjective impressions obtained by reviewers.

Of course, ABX tests are irrelevant in this regard. Once an objectivist
has, of course, ruled out any and all possible measurement variations as
possibly accounting for any perceived differences, the futility of debating
those with different frames of reference becomes even more evident.


Hmm, I wonder, is John Atkinson providing bench test figures for cables and
interconnects these days?


Not that I know of.

My occasional experience of Stereophile is that when measurements *fail* to
correlate with the sometimes extravagant claims made in the review, they are
simply ignored. When they *can* be made to explain some aspect of the
reviewer's experience, they are cited.


It might be that some reviewers are less prone to hyperbole or poetic license
than others. Of course, Mr. Atkinson would have to comment on your
observations.

It would seem obvious that the ability of a given component to replicate the
intentions of the recording team in producing a given set of instrumentation
and/or vocals in which instruments and vocalists appear to the listener to
appear in different places in the soundfield is *not* as simplistic as you
claim.


It is also obvious that unless you are familiar with the studio in which
the recording was mixed and mastered, then you simply can't say how closely
the intentions of the recording team were replicated in your home
environment.
I suspect this is one reason why the 'Absolute Sound' was posited years ago
as the 'reference standard'...though that, too, is highly variable.
Is it the 'absolute sound' from seventh row center in Carnegie Hall?

--


Agreed in general. Ideally, it would be helpful to know what the recording
engineers' intentions were in localizing various instruments or singers in the
final production.


-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director









Bruce J. Richman

  #219   Report Post  
Harry Lavo
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

"Steven Sullivan" wrote in message
news:9Q4rc.28812$gr.2644804@attbi_s52...
Bruce J. Richman wrote:
Bob Marcus wrote:


"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent
amplifiers
operating within their power ranges with appropriate speakers sound

the
same",
etc. possibly true *for the measurable variables that they are

interested
in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x

tests -
for
the sound qualities that subjectivists are interested in.

The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements

or ABX
tests can detect. There is no evidence that this is true.


There is no fallacy because there is no statement that measurements per
se cannot account for various perceptual phenomena experienced by
subjectivists who attribute sonic differences to certain pieces of
equipment. However, unlike the omniscient objectivists who consider the
subject closed and not subject to debate, I suspect that many
subjectivists would consider the possibility that certain variables
routinely named in reviews (see my original post) may have measurement
correlates. Indeed, one of the points of John Atkinson's measurements,
for example, which accompany his Stereophile reviews, is to, when
evident, point out certain correlates between various frequency,
distortion or other technical measurements and subjective impressions
obtained by reviewers.

Of course, ABX tests are irrelevant in this regard. Once an objectivist
has, of course, ruled out any and all possible measurement variations as
possibly accounting for any perceived differences, the futility of
debating those with different frames of reference becomes even more
evident.


Hmm, I wonder, is John Atkinson providing bench test figures for cables
and interconnects these days?


I don't recall Stereophile testing cables in years.

My occasional experience of Stereophile is that when measurements *fail*
to correlate with the sometimes extravagant claims made in the review,
they are simply ignored. When they *can* be made to explain some aspect
of the reviewer's experience, they are cited.


John is becoming more and more vocal in these circumstances as he gains
correlative knowledge, particularly in amplifiers.

It would seem obvious that the ability of a given component to replicate
the intentions of the recording team in producing a given set of
instrumentation and/or vocals in which instruments and vocalists appear
to the listener to appear in different places in the soundfield is *not*
as simplistic as you claim.


It is also obvious that unless you are familiar with the studio in which
the recording was mixed and mastered, then you simply can't say how
closely the intentions of the recording team were replicated in your home
environment. I suspect this is one reason why the 'Absolute Sound' was
posited years ago as the 'reference standard'...though that, too, is
highly variable. Is it the 'absolute sound' from seventh row center in
Carnegie Hall?


The fact is when the Abso!ute Sound was still struggling to get off the
ground Harry Pearson invested in a Revox A700 and good professional mics so
that he could record and use the master tapes as a reference. I was already
doing semi-professional recording on a portable Ampex 440B using 3 way
Schoeps and Neumann mikes and Gately mixers...and had been doing so for some
years. So I had those tapes to draw upon and a pretty intimate familiarity
with live music and its recording. That reference has stood me in good
stead ever since.

  #220   Report Post  
Bob Marcus
 
Posts: n/a
Default Does anyone know of this challenge?

Steven Sullivan wrote:

snip

The issue is existing claims of difference.
I question whether routine audiophile claims of difference arise from the
sort of purely 'evaluative' method you describe. I question whether you
yourself even use this 'evaluative' method as described, which you say
sidesteps any 'comparative' cognition. (I'm talking about the 'sighted'
evaluative method, not the 'blinded' version)


It's quite obvious that Harry DOESN'T use this technique at all. After all,
he insists that the first step in his "experiment" is to devise the list of
criteria by which subjectivists *will* evaluate components. Well, if no list
exists yet, how does he do it now? And if you need such a list to conduct
the experiment, how can you then claim that the experiment exactly mirrors
what subjectivists do in their everyday assessment of audio components?

snip

Again, I don't know where you are getting your views on brain
lateralization, but they seem rather simplistic and outdated. Still, I'm
trying to find some literature support for them, particularly as regards
auditory comparison, but failing so far.


Don't hold your breath. So far as I can tell, Harry is hanging his entire
argument on a flagrant overinterpretation of the simple scientific discovery
that our brain (or a part of our brain) reacts differently to music (or some
characteristic of music) than to other sounds.

IOW, he is exhibiting classic behavior:

1. He starts from legitimate scientific findings.

2. He wildly overinterprets and misinterprets those findings to support a
pet theory that has no other scientific backing, and indeed runs counter to
generally accepted (by scientists, not hobbyists) scientific findings.

3. He proclaims that the real scientific findings of real scientists are not
proven valid until they have disproved his pseudoscientific theory.

And then he wonders why none of us seem willing to devote our own
retirements to this exercise.

bob




  #221   Report Post  
Nousaine
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

(Bruce J. Richman)

....large snips......

Tom Nousaine wrote:



I've often considered the objectivist viewpoint that "all competent
amplifiers operating within their power ranges with appropriate speakers
sound the same", etc. possibly true *for the measurable variables that
they are interested in*, but nonetheless possibly not true - nor
measurable by a-b or a-b-x tests - for the sound qualities that
subjectivists are interested in.


In the Sunshine trials no measurements were ever made. The closest to
measurements were level matching at 100, 1000 and 10,000 Hz. Yet the
subject was unable to reliably distinguish his Pass Aleph monoblocks from
a modest Yamaha integrated amplifier when even the most modest of bias
controls were implemented (cloths placed over I/O terminals) using his
personally selected programs in his reference system.
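As a side note on what "reliably" means in such a trial: under the usual null hypothesis that the listener is simply guessing, the chance of scoring k or more correct out of N forced-choice presentations follows a binomial distribution. A minimal sketch of that arithmetic (the trial counts below are illustrative, not figures from the Sunshine tests):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting `correct` or more right out of `trials`
    forced-choice presentations if the listener is purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers: 12 of 16 correct clears the usual 5% significance
# level; 10 of 16 does not.
print(round(abx_p_value(12, 16), 4))  # 0.0384
print(round(abx_p_value(10, 16), 4))  # 0.2272
```

This is why a handful of correct identifications is not, by itself, evidence of audibility; the run has to be long enough that guessing becomes an implausible explanation.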


Measurements take many forms, including the decision to select either "A"
or "B" as being dissimilar either to a reference or each other. Binary
measurements *are* involved in comparative evaluations whether conducted
blind or sighted. All your cited results indicate, per se, is that Steve
Zipser could not make the discriminations between the DUT's that he
claimed he could. As for other results you will no doubt cite, as you
have always done, to support your position, various posters (not myself)
have frequently questioned the validity of the type of testing you
support.


Pardon me for forgetting that all attempts at validating or confirming
claims are considered measurements when they fail to confirm. While it is
true that some ardent subjectivists "question" the methods used for
verifying sound quality assessments and claims, it seems to me that when
the basis for judgements is confined to sound quality and sound quality
alone, and subjectivists still cannot verify the identity of amplifiers
and wires with which they have intimate familiarity, they should be
producing more credible evidence to support their case. But instead
they'll just continue to 'debate.'

No doubt I'll be
challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review
either in a showroom or in one's home, one *might* be perceiving sonic
qualities either not measured nor easily defined by the usual objectivist
standards. For example, Harry has used the word "musicality". And I might
use the same term, and others might make reference to the imaging,
soundstaging or "depth of field" qualities associated with a particular
piece of equipment. Still others may simply say "this sounds more
realistic to me" (than another component being compared). While it may be
perfectly acceptable to the objectivists to consider only variables that
can be measured in terms of frequency response or various kinds of
distortion, I would be reluctant - as I think would be most
subjectivists - to attribute the various variables I've mentioned above
to specific, replicable measurements.


So? Who cares? If you cannot tell them apart with your eyes closed who cares
what measurements are or what "variables" you are listening for?


Obviously, you don't - that's a given. And perhaps you represent the
objectivists that don't respect individual preferences derived from
perceptions
of certain qualities that you derogate and minimize.


Individual preferences have never been disrespected, except by subjectivists.
They are what they are. At least objectivists wrap opinions around data and
items that can be demonstrated and verified. We don't invent unspecified and
undefined terms like Musicality to embrace mystical ideas.

Again, the frames-of-reference of the 2 camps are so disparate as to make
conversations basically useless.


Agreed. Subjectivists need to put their beliefs into experiments that
verify the claims.

The 'debates' will be endless because the camp without any credible
supporting evidence has no other resort except "debate", including
hypothesizing long, expensive experiments that will never be done.




Au contraire, the debates will continue because of some irrational need
on the part of some from the other camp to try and convert audiophiles
who prefer to make their audio equipment decisions in ways of their own
choosing.


Strawman argument. Nobody is suggesting that audiophiles should or should
not buy amplifiers (or whatever) based on whatever basis they feel
necessary. What they need to stop doing is claiming that their amplifiers
have special sound quality attributes based on acoustical characteristics
that cannot be identified when a figurative blindfold is produced.
And they should cease recommending these products to neophytes and newbies
based on these attributes that have never been shown to exist.

Or they should stop carping over the extant evidence and produce some credible
evidence of their own to support their claims.

None of this says that you shouldn't be happy with any decision you've made
about any gear you've acquired or tweaks or modifications you've made.

It should also be noted that the proposal of complex experiments has (a) not
been proposed by myself, and (b) should not be opposed at any rate if the
objective is to obtain further information that might be useful.


I strongly urge that the proposer of that experiment take every effort to
move on with it. Validation of his open-ended evaluation approach (which,
contrary to his claims, is not widely used among audiophiles) would be a
good idea.

  #223   Report Post  
Nousaine
 
Posts: n/a
Default Does anyone know of this challenge?

"Harry Lavo"

....snips...."Nousaine" wrote in message


But that is what I said.

(and I further said that if real differences existed then I
expected they would show up even if blinded in the evaluative test).

2) I further said that if the quick-switch comparative blind test showed
the same results as the blind evaluative test, then I would swing over
and support your test as validated.

Pray tell, what fault can you possibly find with a test that allows those
definitive conclusions to be reached (by me, and presumably by many other
subjectivists here.)

I and others have found numerous faults with it. Purely as a practical
matter, it's impossible to pull off. I even suggested an alternative that
would be far more straightforward, and meets every requirement you have
insisted on, and you rejected that. Under the circumstances, I can
understand Tom's suspicion that you were merely throwing up smoke.


Your test did not "meet...every requirement I have insisted on". For it
continued to be based on quick-switch a-b testing, a technique that is of
itself being questioned as possibly contributing to erroneous
conclusions.


Only by you and perhaps a small group of 'believers.'

That is why I proposed a test that started where the subjectivists
live...with extended evaluative listening, and changed only the condition
of "blindness", not the listening techniques themselves. That is why I
rejected the approach, as I said at the time. It is inviting in its
simplicity but it would not sway subjectivists, including myself, because
of this flaw.


Why not start with you? Don't you have at least one evaluated component
in your possession, with a full evaluation in hand? I have "another" one.


As for the "many other subjectivists here," I think you are being
presumptuous. Finding flaws with bias-controlled tests seems to be part
of what makes one a subjectivist. I see no reason to believe that your
test, even if you could pull it off, would be any different.

bob


That's not how I read it/them at all. They may have some differences with
me / my way of thinking, but the main problem they have is the
*assumption* (unverified) that a test that is good for picking out small
level differences in codec artifacts and other known acoustic anomalies
is a suitable technique for open-ended evaluation of audio components.
Once that criticism is addressed, I think you will find most objections
melt away if your assumptions prove true.


Please. Those folks addressing codec artifacts are dealing with exactly
the same issues..... subtle acoustical differences, no matter what the
cause, are what the more interested parties have interest in.

As far as I can see the "only" reason you don't like bias-controlled
tests is that they do not support your prior held beliefs.

And you hold out the idea that until bias controlled tests can support
your prior held beliefs they will remain "unvalidated."

And your "proposed" validation test won't be considered as conclusive
until the results would be the "same" as those obtained under un-bias
controlled conditions. Excuse me if I wonder why?


Once again you are deliberately and totally misrepresenting what I have
said, Tom. I have said *ABSOLUTELY* no such thing. If you continue this,
I can only conclude that you are deliberately spreading falsehoods.

I have told you *three times* to stop and check what I actually have said.
You have not. I expect an apology!!


No apology coming from here. But tell me: if your evaluative tests (at
least one of which you already hold in your hand) are not confirmed with
a proximal-switched test (or cable swap if you wish) as to identification
of the identity of the amps, what will you say?

And how easy would it be to 'remember' what was being tested and your
prior answers with only 2 possible alternatives? And how would you
propose to retain blindness?

IMO a better way to validate your testing method would be to see if you
could produce positive identification of two amplifiers you deemed to be
sonically different, which I had measured as having no response errors
greater than 0.2 dB over the audible range, in a switched test of
whatever length you want. Or we could use (a la Richard Clarke) an
equalizer for the comparative unit.

It would take 1-2 days max. And then could be carted elsewhere.
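For a sense of scale on that 0.2 dB tolerance: a level difference in dB converts to a voltage (amplitude) ratio via 10^(dB/20), so 0.2 dB is only about a 2.3% amplitude difference. A quick illustrative check:

```python
# Convert a frequency-response error in dB to a voltage (amplitude) ratio.
# Shows how small the 0.2 dB matching tolerance mentioned above really is.

def db_to_ratio(db: float) -> float:
    """Voltage ratio corresponding to a level difference of `db` decibels."""
    return 10 ** (db / 20)

print(f"0.2 dB -> {db_to_ratio(0.2):.4f}x")  # 1.0233x, about 2.3%
print(f"1.0 dB -> {db_to_ratio(1.0):.4f}x")  # 1.1220x, about 12%
```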

You would only have to be able to identify the amplifier you already have
a personal relationship with to prove your point. Indeed you would never
have to even listen to the comparative amplifier.

It seems to me that your method depends on 'learning' the sound of a
device. Does it not? If that were true, then why would you need a long
evaluation for learning another amplifier? Wouldn't the characteristics
of your amplifier be immediately apparent whenever you "hear" it? No
matter what the session length? Special material, special passages? Sure,
have your own.

But it seems that you are arguing that taking an amplifier home and
checking the sound is a more effective method of assessing the "sound" of
the device than careful experimental design.

And that experimental designs which have limited the true acoustical sound
delivered to the loudspeaker terminals are somehow "missing" or masking
important acoustical elements.

This, in spite of the long series of experiments that have tested this.
All this based on a hypothesized experiment which carries such cost and
time that it will never be conducted (certainly not by high-end
companies, who should be the first up to bat) and whose first trial won't
be conducted by the proposer of the experiment.

Interested enthusiasts have been here and done that work already. What I'm
waiting for is the first high-end apologist that will deliver a convincing
replicable experiment that his amplifier/wire creates a "different" sound let
alone a "better" sound.

But since the very idea of the test possibly being flawed is so
threatening that even acknowledging the possibility seems beyond the
objectivist ken, there is little movement on either side.

It seems to me the 'threatening' part is validating open-ended
uncontrolled-bias evaluation. One should be able to do that easily by
showing the ability to come to the same conclusions under listener-bias
controlled conditions following open-ended evaluation. Why won't you? Why
has no one else done so?


Again, Tom, you give evidence that you either don't read or don't want to
understand what I say. That is exactly the purpose of the
sighted-evaluative and blind-evaluative legs of my proposed test. And I
just repeated the reasons for that in the latest post you are just now
responding to. Please re-read paragraph #1 above and tell me what about
believing "blind results" (if they support your position) I am above
believing?


Please read your proposal. "When" bias controls are shown to give the same
results that open tests would give then you'll accept them. No?


Not at all. What my proposal said was if they validate the result, then the
differences existed. If the differences go away under blind but otherwise
identical evaluative testing, then clearly sighted bias was at work. But I
also said that we would know that for sure because you weren't mixing in a
second variable - that being a change in test. I have said this over and
over since I first set out the proposed test, and you simply keep
misrepresenting my view to fit your own bias about my position.


OK then I would assume that you would be able to reliably identify your
personal amplifier when it was replaced by another device with nominally
similar electrical performance in a cable swap test in your listening room?

No? If not, why not? If not why the debate?

What has not been "validated" are the conclusions you gain from
non-controlled open listening as to acoustical cause. To imply that
simply removing the sight of I/O terminals or otherwise hiding the
answers somehow changes the acoustics of the situation (inducing masking)
is a misdirection. If, on the other hand, the conclusions you hold are
simply the result of a myriad of influences, many of which have no
acoustical cause, why should anyone else but you care?

As I have said repeatedly, one casual, undocumented, anecdotal case does
not prove your case, however interesting.


So let's make it 2; Harry. I have a compilation of over 2 dozen experiments
none of which has confirmed amp/cable sound. What's your body of evidence?



How about over two dozen performed prior to May 1990?


Two dozen what? I'm talking about long, extended listening tests without
comparative listening, but rather evaluative listening. You've only talked
about one case that even comes close, and you've never shared details of
that one.


What details would you like?

There is good reason for starting with sighted, open-ended evaluative
listening...because that is where most audiophiles and reviewers start
and make their judgments.


Sure and that's where all Urban Legends start too. So? Of course, I'm not
dismissing all listener observations either. It's just that those about
amp/wire sound have been put to the test.


If you want to prove that it is sighted bias, you have to keep everything
the same, including the way they reach their conclusion, only changing
the "blind". You haven't rigorously done that.

Actually the case that hasn't been confirmed has been Harry's. There is
good reason for starting to examine the comment....

with sighted, open-ended evaluative listening...because that is where
most audiophiles and reviewers start and make their judgments.


I think this is wishful thinking and I see NO evidence of "evaluative method"
in any of the common review text in the sense that Harry has suggested. I do
see evaluative comments but never with scaling and I often see comparative
comments as well. I just don't see the skill-level that Mr Lavo has suggested.

So whatever..... where is the necessity for a method based on "what everybody
does" when that method is not a method but a lack of controls.

Harry has amplifiers he feels meet the criteria of "better" sonics but he
won't put himself to the experimental test of a simple bias controlled
experiment.

Instead he postulates a lengthy, costly experiment that is based on
having the results "conform" to the results he already "knows" to be true
before he'll accept them.


Where did amplifiers enter into this? I talked about a sighted listening
test done seven years ago when I chose an amplifier to replace my D90B. A
choice I have been happy with ever since. So I'm to go out and buy two amps
I don't like seven years later to prove to you that I can do it again,
blind? Get real!


OK, let's get "real": you are postulating an experimental method that
does not yet exist, at least as far as I can tell from my review of the
latest issue of Stereophile magazine, as somehow being "valid" because
everybody uses it. No?

And then you suggest that the majority of audiophiles use it. IME I don't know
of ANY audiophiles that use your method. Nobody I know has the chance to take
home equipment for more than a few days for audition. Nobody I know uses an
evaluative form and does scaled scoring for amps/wires. And I'm talking about
people who have at least a hundred thousand dollars invested.

In other words if a long-listening bias-controlled test doesn't support
amp differences then he'll just say that it was masking real differences.


Nope, I said I would do exactly that and support your conclusion *when*
it is part of a carefully designed control test. The one I outlined.


OK then why not be the first subject? But let's also start with a description
of sound quality categories? Off-line?


This is the classic experimental dodge. Invent a needless experiment that
will be unlikely to be conducted (with pre-held results that have to be
confirmed or the experiment would be "invalid" or "inconclusive") and you
have the idea here.


Who else has done this to make it a classic, Tom? Who else has the
background in test design to have proposed this here on RAHE? I don't
think so.


OK; then let's go. You are the first subject. Do you not have an
amplifier with certain sound quality characteristics with which you are
familiar and which you have scored on an evaluative basis?

When you send me by private e-mail a copy of that score and a description of
the amplification device I'll supply an amplifier of similar capability and
you'll supply your device to me for evaluation and to make sure that you are
un-tempted by comparative evaluation.


So if that is erroneous, you have to *prove* it with rigorous
testing...and you do that by doing the exact same test but "blind"
instead of sighted. You do not do it by changing the test technique as
well as going blind. My case all along is that you have switched two
variables at once, yet you impute the difference in results to only one
of those variables, while in fact the other variable may be at fault. How
much clearer can I make it!!


Oh you've made your case very clear. As long as you can avoid dealing
with other bias controlled experimental results you'll be quite happy to
continue debating.


Nope, I'm not happy debating. I'd like to start setting up a test. But so
far I can't even get serious suggestions to what to test (i.e. two component
DUT's with fairly universal concurrence on both sides, i.e. objectivists
universally accept that there will be no difference; subjectivists
universally believe that there will be a difference.)


Universal is another red herring. Some subjectivists say that "all
amplifiers sound different." So which is it? All, any, a few, maybe some,
none?

Don't YOU have one, Mr Lavo?
  #224   Report Post  
Nousaine
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

(S888Wheel) wrote:

Objectivist -- Are these misnomers? (WAS:

From: chung

Date: 5/18/2004 9:55 PM Pacific Standard Time
Message-id: LMBqc.22804$gr.1936664@attbi_s52

Bromo wrote:

On 5/18/04 8:43 PM, in article D4yqc.22117$gr.1808882@attbi_s52, "Harry
Lavo" wrote:

Oh you've made your case very clear. As long as you can avoid dealing
with other bias controlled experimental results you'll be quite happy to
continue debating.

Nope, I'm not happy debating. I'd like to start setting up a test. But so
far I can't even get serious suggestions to what to test (i.e. two
component DUT's with fairly universal concurrence on both sides, i.e.
objectivists universally accept that there will be no difference;
subjectivists universally believe that there will be a difference.)

Except I think most Audiophiles would fall in a spectrum between the two
camps - the extremes being those that think that we have learned and can
measure everything there is to know - and that system integration is not
more difficult than comparing specification sheets (What we call
"objectivists")


That's not what we called objectivists. I would postulate that an
objectivist, as far as this newsgroup is concerned, is one who believes
in the validity of (a)standard controlled-bias testing like DBT's, and
(b) measurements.

- and those that feel that spec sheets are not what you hear
- and that testing and analysis is useless unless it is done with

listening
to music (we call these folks "subjectivists").


I suggest you get your definitions straight. Check this webpage:

http://www.dself.dsl.pipex.com/ampin...o/subjectv.htm

In particular, pay attention to this:
***
A short definition of the Subjectivist position on power amplifiers
might read as follows:

* Objective measurements of an amplifier's performance are
unimportant compared with the subjective impressions received in
informal listening tests. Should the two contradict the objective
results may be dismissed out of hand.
* Degradation effects exist in amplifiers that are unknown to
engineering science, and are not revealed by the usual measurements.
* Considerable latitude may be used in suggesting hypothetical
mechanisms of audio impairment, such as mysterious capacitor
shortcomings and subtle cable defects, without reference to the
plausibility of the concept, or gathering any evidence to support it .
***








Of course these definitions of subjectivist positions were defined by a
self-proclaimed objectivist. You know it is rarely flattering when an
objectivist speaks for a subjectivist or vice versa.

Here is something that was said about all objectivists in a Stereophile
article: "For an objectivist, the musical experience begins with the
compression and rarefaction of the local atmosphere by a musical instrument
and
ends with the decay of hydraulic pressure waves in the listener's cochlea; "

http://www.stereophile.com/asweseeit/602/

Maybe both sides would be better served if they were left to speak for
themselves.


Actually the definition "For an objectivist, the musical experience
begins with the compression and rarefaction of the local atmosphere by a
musical instrument and ends with the decay of hydraulic pressure waves in
the listener's cochlea;" is pretty good; of course the subjectivist
position would have to be:

"For a subjectivist, the musical experience sometimes begins with the
compression and rarefaction of the local atmosphere by a musical
instrument and even occasionally ends with the decay of hydraulic
pressure waves in the listener's cochlea; but most of it happens in the
listener's imagination; that's why we call it "imagin-in" "

  #225   Report Post  
S888Wheel
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

From: chung
Date: 5/19/2004 9:46 PM Pacific Standard Time
Message-id: iKWqc.82515$iF6.7051868@attbi_s02

S888Wheel wrote:
Objectivist -- Are these misnomers? (WAS:

From: chung

Date: 5/18/2004 9:55 PM Pacific Standard Time
Message-id: LMBqc.22804$gr.1936664@attbi_s52

Bromo wrote:

On 5/18/04 8:43 PM, in article D4yqc.22117$gr.1808882@attbi_s52, "Harry
Lavo" wrote:

Oh you've made your case very clear. As long as you can avoid dealing
with
other bias controlled experimental results you'll be quite happy to
continue
debating.

Nope, I'm not happy debating. I'd like to start setting up a test. But
so
far I can't even get serious suggestions to what to test (i.e. two
component
DUT's with fairly universal concurrence on both sides, i.e. objectivists
universally accept that there will be no difference; subjectivists
universally believe that there will be a difference.)

Except I think most Audiophiles would fall in a spectrum between the two
camps - the extremes being those that think that we have learned and can
measure everything there is to know - and that system integration is not
more difficult than comparing specification sheets (What we call
"objectivists")

That's not what we called objectivists. I would postulate that an
objectivist, as far as this newsgroup is concerned, is one who believes
in the validity of (a)standard controlled-bias testing like DBT's, and
(b) measurements.

- and those that feel that spec sheets are not what you hear
- and that testing and analysis is useless unless it is done with

listening
to music (we call these folks "subjectivists").


I suggest you get your definitions straight. Check this webpage:

http://www.dself.dsl.pipex.com/ampin...o/subjectv.htm

In particular, pay attention to this:
***
A short definition of the Subjectivist position on power amplifiers
might read as follows:

* Objective measurements of an amplifier's performance are
unimportant compared with the subjective impressions received in
informal listening tests. Should the two contradict the objective
results may be dismissed out of hand.
* Degradation effects exist in amplifiers that are unknown to
engineering science, and are not revealed by the usual measurements.
* Considerable latitude may be used in suggesting hypothetical
mechanisms of audio impairment, such as mysterious capacitor
shortcomings and subtle cable defects, without reference to the
plausibility of the concept, or gathering any evidence to support it .
***



Of course these definitions of subjectivist positions were defined by a
self-proclaimed objectivist. You know it is rarely flattering when an
objectivist speaks for a subjectivist or vice versa.


So, given that you frequent this newsgroup, is there anything in Self's
definition you deem inaccurate?


Yes. At least for me each one is either inaccurate or skewed to imply a
misleading meaning. Let's take the first one. IMO if one amplifier
measures with less distortion than another, but the amp with higher
distortion sounds better in a given system, then the one that sounds
better is the preferred amp. The implication, though, is that
subjectivists disregard the measurements altogether. Well, I hope the
designers are paying attention to the relevant measurements and how they
relate to sonic impressions, and moving forward with their designs from
there. But as it stands it is a misleading statement about subjectivists.
It would be just as misleading to say that objectivists will prefer
equipment based on measurements despite how it might *actually* sound. I
think the truth is that objectivists are more interested in the
measurements than the subjectivists are, but that does not mean that
measurements are being dismissed altogether.

The second point: first off, I'm not sure what is meant by "engineering
science." But I do believe there is nothing magical about audio and that
all parameters of audio that can be heard can also be measured.

As for the third point, it simply does not apply to me at all. I know my
limitations when it comes to technology and I know better than to ascribe
hypothetical causes and effects to various designs of audio components.
My hypotheses of any cause and effect are usually born of trial and
error, while carefully altering one variable at a time in my trials.


Here is something that was said about all objectivists in a Stereophile
article: "For an objectivist, the musical experience begins with the
compression and rarefaction of the local atmosphere by a musical instrument
and ends with the decay of hydraulic pressure waves in the listener's
cochlea;"

Have you met an objectivist that behaves in such a way?


I have interacted with some that at first blush seemed to, but upon further
conversation did not. My point was that the misrepresentations go in both
directions. I guess you agree that this was one of those misrepresentations of
an objectivist by a subjectivist. I don't know, and you don't know, that this
author has never met an objectivist who actually meets this description. The
real problem is with a single universal description for a broad group of
people with diverse opinions.




http://www.stereophile.com/asweseeit/602/

Maybe both sides would be better served if they were left to speak for
themselves.











  #226   Report Post  
Harry Lavo
 
Posts: n/a
Default Does anyone know of this challenge?

"Steven Sullivan" wrote in message
news:Xw5rc.85748$iF6.7308354@attbi_s02...
Harry Lavo wrote:
"Steven Sullivan" wrote in message
OK; and if your results were not statistically confirmed by your second
listening then what would your conclusions be? You'll say that "blinding"
caused the difference, whereas most everybody else would conclude that the
subject was unreliable (which would be true) and that he/she didn't really
"hear" definable acoustical differences the first time.


Also, I wonder if Harry would be willing to make a more modest claim,
if his second listening yielded a 'no difference' result: namely, that *his*
'first listening' perception of difference between the two DUTs was
probably imaginary.
And having done so, would that experience temper/inform his other claims
of having heard a difference? Would he, in effect, become more of an
'objectivist'?


I've already answered this to Tom at some length in a reply to his post. In
short the answer is "of course I would; that's exactly what I said above".
That is, if the group as a whole came up with no statistical significance, it
would prove that the initial perceived differences were due to sighted bias.
On the other hand, me doing a "one-timer" would have no statistical
significance by itself, unless I did it twenty times. And the test is not
set up that way because you cannot easily do that many repeated
observational tests. Better to have twenty people do it once.
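As a sketch of the statistics being invoked here: under a null hypothesis of pure guessing, a run of correct/incorrect identifications follows a binomial distribution, and (assuming independent trials) twenty trials from one listener and one trial each from twenty listeners are scored the same way. A minimal Python illustration; the function name and trial counts are mine, chosen only for illustration:

```python
from math import comb

def binomial_p(n, k, p=0.5):
    """One-sided p-value: probability of getting k or more correct
    answers out of n trials by guessing alone (chance level p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A single correct identification proves nothing -- a coin does as well:
print(round(binomial_p(1, 1), 3))    # 0.5
# 15 correct out of 20 trials, whether one listener repeats twenty times
# or twenty listeners each answer once:
print(round(binomial_p(20, 15), 3))  # 0.021
```

This is why a "one-timer" carries no weight by itself: only the accumulated run of trials, from one subject or many, can push the guessing probability low enough to matter.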


Indeed, but that sidesteps the question I asked.

If it requires twenty iterations for *you* to verify a claim about what *you*
hear, then why are you making any unqualified claims of difference at all
about cables, amps, transports?


And you see me doing this where?

Shouldn't you be adding some sort of 'I could be wrong' proviso as a matter
of course?


Unless I have corroborating evidence, yes. Just as you should be making
clear that a null on a double blind test a) doesn't prove a negative, and b)
may be using a test that has not been validated for open-ended evaluation
purposes.


The issue is existing claims of difference.
I question whether routine audiophile claims of difference arise from the
sort of purely 'evaluative' method you describe. I question whether you yourself
even use this 'evaluative' method as described, which you say sidesteps
any 'comparative' cognition. (I'm talking about the 'sighted' evaluative
method, not the 'blinded' version)


I don't use it exclusively, but I use it largely. As I told you before, I
evaluate...when I think I can zero in on an issue, I compare, then I
evaluate again. It's an iterative process. The comparison is always for a
specific "evaluative" effect...similar to listening to a certain side-effect
of a codec. I do not "compare" for overall difference, nor do I make a
formal comparison choice or preference. If any such exists it arises
naturally from identification and examination of audible characteristics
of the product(s). In many cases, the evaluations have been purely
monadic, as I haven't had the product to compare with (e.g. an amp dies).


Seems to me that any 'monadic' evaluation is also a sort of comparison --
indeed, any sensation that we have to describe involves comparing, in the
sense of asking yourself, e.g., does this taste like my *memory* of salty,
sweet, bitter, etc.
If the evaluative report form is multiple-choice, this 'choosing' is all
the more explicit.
If the evaluative report form is scalar ('on a scale of 1-10, with 1 being
sweet and 10 being bitter') there's still choice involved. There is always
some sort of real or virtual reference that one is comparing the sensation
to.
I would posit that the same is true for a 'monadic' evaluation of, say,
a cable. You aren't directly comparing it to another real cable, but you are
comparing what you hear to your store of memories of what 'smoothness',
'bass articulation', or whatever, sound like. Otherwise you could not make
an 'evaluation'.


Of course, but isn't that how people arrive at the conclusions they do in
this hobby of ours?


Yes. Which is one reason why I question whether the purely 'evaluative'
mode of comparison as described by yourself even exists in the hobby.


It is actually the most rigorous research technique you can use...monadic,
evaluative. That's the irony in all this objecting you are doing. I'm
actually proposing a more sophisticated and rigorous test than anybody else
on this forum uses or has used, to the best of my knowledge. And that is
based on studies in behavioral psychology and twenty years of applied
sensory application in the sophisticated world of consumer package goods.


By designing the test and the scales properly, all
people have to do is make that subjective, right-brain-included kind of
response. They don't have to make a "choice". And the statistics will tell
us the rest.


Again, I don't know where you are getting your views on brain lateralization,
but they seem rather simplistic and outdated. Still, I'm trying to find
some literature support for them, particularly as regards auditory
comparison, but failing so far.


I'm postulating it, of course, but it is based on some findings of brain
studies done over the last twenty-five years that find the right brain
generally takes the intuitive and sensory lead, while the left takes the
lead in analytical and logical calculations, such as deciding on a choice.
  #227   Report Post  
Bob Marcus
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

Michael Scarpitti wrote:

1. Measurements of an amplifier's performance are generally
unimportant compared with the impressions received in extended
listening tests informed by familiarity with live music of the same
kind. Comparative listening sessions pitting amp against amp ...


Whoops! So much for Harry's insistence that subjectivists rely on
"evaluative" not "comparative" techniques.

bob

  #228   Report Post  
chung
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Bruce J. Richman wrote:



The "controlled listening tests" obviously involve the listeners determining
whether the DUT's sound the same or different. This is a form of measurement,
although on a dichotomous basis rather than an interval scale. Every data
point recorded in an ABX test or even in a more simple A/B comparison is
obviously a measurement of the observer's ability to differentiate or not
differentiate between the 2 components being evaluated.


That's got to be one of the most convoluted explanations (should I say
excuses?) I have ever seen.

So when you listen to two pieces of equipment, A and B, and you decide A
is better, have you made a measurement? According to your definition,
you have, since the fact that you prefer A over B is obviously a
measurement of your ability to differentiate between A and B.

Seems to me that you, being a subjectivist, based your
selections/preferences on measurements, too! You're sure you're not an
objectivist?
  #229   Report Post  
chung
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

Michael Scarpitti wrote:

chung wrote in message news:LMBqc.22804$gr.1936664@attbi_s52...



I suggest you get your definitions straight. Check this webpage:

http://www.dself.dsl.pipex.com/ampin...o/subjectv.htm

In particular, pay attention to this:
***
A short definition of the Subjectivist position on power amplifiers
might read as follows:

1.* Objective measurements of an amplifier's performance are
unimportant compared with the subjective impressions received in
informal listening tests. Should the two contradict the objective
results may be dismissed out of hand.
2.* Degradation effects exist in amplifiers that are unknown to
engineering science, and are not revealed by the usual measurements.
3.* Considerable latitude may be used in suggesting hypothetical
mechanisms of audio impairment, such as mysterious capacitor
shortcomings and subtle cable defects, without reference to the
plausibility of the concept, or gathering any evidence to support it.
***


May I revise?
1. Measurements of an amplifier's performance are generally
unimportant compared with the impressions received in extended
listening tests informed by familiarity with live music of the same
kind. Comparative listening sessions pitting amp against amp (when
possible, using the acknowledged best available) should reveal the
overall quality level of the product under consideration in relation
to the state of the art. [Reasoning: All that matters is how it sounds
in comparison to 'reality'.]

2. Degradation effects MAY exist in amplifiers that are unknown to
engineering science, which are not revealed by the usual measurements.
[Reasoning: Complex waveforms may behave in ways that are not entirely
described by the usual methods.]

I would not agree with point #3. I would also disagree with the
adjective 'objective' (measurements) in point 1. There is no basis for
claiming measurements are 'objective'.


Now you are trying to justify why you yourself are a subjectivist. I
don't see anything you wrote conflicting with Doug Self's description.
Doug stated the symptoms, and you are trying to justify those symptoms.

Now when you said measurements are not objective, you have totally lost
me and, I'm sure, others. Measurements are repeatable, based on
instruments, and are not subjective. For instance, you run a frequency
response measurement, and the results are what the instruments measure.
If someone else uses the same instrument and runs the same test, the
results are the same. So how are measurements not objective?
  #230   Report Post  
Steven Sullivan
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Bruce J. Richman wrote:
Steven Sullivan wrote:


measurement correlates. Indeed, one of the points of John Atkinson's
measurements, for example, which accompany his Stereophile reviews, is to,
when evident, point out certain correlates between various frequency,
distortion or other technical measurements and subjective impressions
obtained by reviewers.

Of course, ABX tests are irrelevant in this regard. Once an objectivist has,
of course, ruled out any and all possible measurement variations as possibly
accounting for any perceived differences, the futility of debating those with
different frames of reference becomes even more evident.


Hmm, I wonder, is John Atkinson providing bench test figures for cables and
interconnects these days?


Not that I know of.


Which seems odd. Amps and speakers are deemed different enough to merit
bench tests, and certainly are claimed to sound different; cables and
interconnects are claimed to sound different, but aren't worthy of bench
tests?


My occasional experience of Stereophile is that when measurements *fail* to
correlate with the sometimes extravagant claims made in the review, they are
simply ignored. When they *can* be made to explain some aspect of the
reviewer's experience, they are cited.


It might be that some reviewers are less prone to hyperbole or poetic license
than others. Of course, Mr. Atkinson would have to comment on your
observations.


He reads this ng once in awhile, so maybe he will.



--

-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director



  #231   Report Post  
Nousaine
 
Posts: n/a
Default "Evaluation" test or chicken fat test, how to decide?

"normanstrong" wrote:



"Nousaine" wrote in message
...

They are a genuine part of the accepted evaluation and testing methodology
for everything EXCEPT high-end audio and, I would have to say,
parapsychology.

The parapsychology example is interesting, since J. B. Rhine could
trot out lots of data supporting the existence of ESP. How did he
get such results? It's not too difficult.

1. Throw out unfavorable results, disqualifying them with any reason
you can think of. If a testee guesses wrong 100% of the time--which
is unlikely, but possible--throw out the entire test on the theory
that he must be cheating. You don't have to do this very often to
make a large effect on the data.

2. Commence recording the data at a point where favorable results
start and calling previous unfavorable results 'warm up'. A perfect
example is the guy that claims he can flip a coin heads 4 times in a
row. If the first flip comes up tails, you simply say you were
checking the coin. This immediately doubles your chances of success.

3. There are bound to be occasional individuals who guess right a
surprisingly high percentage of the time. Call those people
"sensitives." Having assumed the existence of ESP, and labelled this
individual as having lots of it, it becomes easy to justify throwing
out bad results as being the result of fatigue. You can't change the
results, but you sure can stop the test and throw out the results up
to that point.

This last is common; I've done it myself. It goes right along with
discarding 'outliers'.

Norm Strong


Thanks for recalling the Rhine case. To their credit I haven't seen much
'cooking the books' by subjectivists. The notable exception was the claim
consistently made by one subjectivist about the audibility of capacitor
dielectric, dredged up by searching through the null data and pulling small
bits out that 'might' have supported the original hypothesis if not taken in
context, and later claiming that it did so.

But on the other hand, perhaps there has not been a lot of data-dredging by
subjectivists because they've conducted so few experiments.
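Norm's three book-cooking moves above can be demonstrated numerically: take pure coin-flip data, discard the worst-scoring sessions as 'fatigue' or 'cheating', and the apparent hit rate climbs above chance even though no effect exists anywhere in the data. A hypothetical Python sketch (all session counts and the discard rule are arbitrary choices of mine):

```python
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

def hit_rate(sessions):
    """Overall fraction of 'correct' trials across all sessions."""
    trials = [t for s in sessions for t in s]
    return sum(trials) / len(trials)

# 200 sessions of 20 pure guesses each (p = 0.5, no ESP anywhere)
sessions = [[random.random() < 0.5 for _ in range(20)] for _ in range(200)]

honest = hit_rate(sessions)

# "Data-dredging": drop the worst-scoring half as 'fatigued' or 'cheating'
kept = sorted(sessions, key=sum)[100:]
dredged = hit_rate(kept)

print(f"all sessions: {honest:.2f}, after discarding: {dredged:.2f}")
```

The honest figure hovers near 0.50; the dredged figure is reliably higher, which is exactly the effect of moves 1 and 3 in Norm's list.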
  #232   Report Post  
Harry Lavo
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

"Nousaine" wrote in message
news:B0arc.87227$xw3.4878918@attbi_s04...
(Bruce J. Richman)

...large snips......

Tom Nousaine wrote:




snip, not relevant to below


No doubt I'll be
challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either
in a showroom or in one's home, one *might* be perceiving sonic qualities
either not measured nor easily defined by the usual objectivist standards. For
example, Harry has used the word "musicality". And I might use the same term,
and others might make reference to the imaging, soundstaging or "depth of
field" qualities associated with a particular piece of equipment. Still
others may simply say "this sounds more realistic to me" (than another
component being compared). While it may be perfectly acceptable to the
objectivists to consider only variables that can be measured in terms of
frequency response or various kinds of distortion, I would be reluctant - as
I think would be most subjectivists - to attribute the various variables I've
mentioned above to specific, capable-of-replication measurements.

So? Who cares? If you cannot tell them apart with your eyes closed, who cares
what measurements are or what "variables" you are listening for?


Obviously, you don't - that's a given. And perhaps you represent the
objectivists that don't respect individual preferences derived from
perceptions
of certain qualities that you derogate and minimize.


Individual preferences have never been disrespected, except by subjectivists.
They are what they are. At least objectivists wrap opinions around data and
items that can be demonstrated and verified. We don't invent unspecified and
undefined terms like Musicality to embrace mystical ideas.


No mysticism at all. It's simply a summary term covering a bunch of
attributes that I feel are important to the reproduction of music.

Again, the frames-of-reference of the 2 camps are so disparate as to make
conversations basically useless.


Agreed. Subjectivists need to put their beliefs into experiments that verify
the claims.


And objectivists need to stop assuming the end point.

The 'debates' will be endless because the camp without any credible
supporting evidence has no other resort except "debate", including
hypothesizing long, expensive experiments that will never be done.




Au contraire, the debates will continue because of some irrational need on
the part of some from the other camp to try and convert audiophiles who
prefer to make their audio equipment decisions in ways of their own choosing.


Strawman argument. Nobody is suggesting that audiophiles should or should not
buy amplifiers (or whatever) on whatever basis they feel necessary. What
they need to stop doing is claiming that their amplifiers have special
sound quality attributes based on acoustical characteristics that cannot be
identified when a figurative blindfold is produced.
And they should cease recommending these products to neophytes and newbies
based on these attributes that have never been shown to exist.


Not necessary to stop anything as long as your "universal test" is
unvalidated for the purpose of open-ended evaluation and selection of
equipment. Perhaps you should stop recommending that everything pretty much
sounds the same until you do so.

Or they should stop carping over the extant evidence and produce some credible
evidence of their own to support their claims.


But you won't meet us halfway on a verifying test. So how much progress can
be made?

None of this says that you shouldn't be happy with any decision you've made
about any gear you've acquired or tweaks or modifications you've made.


Why thank you Tom. How gracious!

It should also be noted that the proposal of complex experiments has (a) not
been proposed by myself, and (b) should not be opposed at any rate if the
objective is to obtain further information that might be useful.


I strongly urge that the proposer of that experiment take every effort to move
on with it. Validation of his open-ended evaluation approach (which, unlike
he claims, is not widely used among audiophiles) is a good idea and should be
carried out.


Then I assume you will drop your sniping if I try to move it forward, and
actually cooperate? And urge your fellow objectivists to do the same?

  #233   Report Post  
Bruce J. Richman
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

Stewart Pinkerton wrote:

On Wed, 19 May 2004 17:37:22 GMT, (Bruce J. Richman)
wrote:

Stewart Pinkerton wrote:

On Tue, 18 May 2004 21:46:37 GMT,
(Bruce J. Richman)
wrote:

I've often considered the objectivist viewpoint that "all competent amplifiers
operating within their power ranges with appropriate speakers sound the same",
etc. possibly true *for the measurable variables that they are interested in*,
but nonetheless possibly not true - nor measurable by a-b or a-b-x tests - for
the sound qualities that subjectivists are interested in. No doubt I'll be
challenged on this view, but let me explain.

First, explain which part of the quoted 'objectivist' standpoint has
*anything* to do with anything *measurable*. These are based on
controlled *listening* tests, and have nothing to do with
measurements.

The "controlled listening tests" obviously involve the listeners determining
whether the DUT's sound the same or different. This is a form of measurement,
although on a dichotomous basis rather than an interval scale. Every data
point recorded in an ABX test or even in a more simple A/B comparison is
obviously a measurement of the observer's ability to differentiate or not
differentiate between the 2 components being evaluated.


Now you are simply playing with semantics. None of that has anything
to do with the 'measurable variables' which you claim are of interest
to objectivists.


And you are mischaracterizing the statements I've made. No semantics are
involved in describing double blind testing in which choices are presumably
being made and recorded by a proctor or monitor. A/B choices do indeed involve
measurement of the testee's ability to discriminate between 2 sources. Are you
seriously trying to insinuate that bias-controlled testing does not involve
measurement of the test subjects' ability to discriminate between 2 components?
If so, then it would be you that is playing with semantics, or simply trying
to be argumentative, and not myself.

When one reads a subjective review, or perhaps does one's own review either in
a showroom or in one's home, one *might* be perceiving sonic qualities either
not measured nor easily defined by the usual objectivist standards. For
example, Harry has used the word "musicality". And I might use the same term,
and others might make reference to the imaging, soundstaging or "depth of
field" qualities associated with a particular piece of equipment. Still others
may simply say "this sounds more realistic to me" (than another component
being compared).

Fine - but does it actually sound *different* from the other
component. If not, then expressions of preference based on sound
quality are hardly relevant................

Again, for those that consider various technical specifications or
bias-controlled testing to be the one and only determinant of differences, I'm
sure that this is *not* relevant.


Measurements are irrelevant here, and bias-controlled listening is the
*only* determinant of differences which has any validity outside the
skull of the individual listener.

Hence, my comments about different frames-of-reference. However, for those
subjectivists who choose to let their perceptions of difference play a role
in choosing what components they use or purchase - of course, preferences are
relevant. The central point is that these components *may* sound different to
*them*, and if asked how or why, they may describe perceptions that cannot
easily be disregarded and/or ridiculed by traditional frequency or distortion
variable measurements.


The trouble is that they no longer hear these 'differences' when they
don't *know* what's connected....................


I don't profess to
know exactly how one would go about measuring, for example, differences in
"imaging" or perceptions of "more body" in the sound of a particular
component, but it may well be that certain types of measurements might be
available that could answer these questions.


Measurement is irrelevant here - can they still hear differences in
'imaging' or 'body' when they don't *know* what's connected?


An empirical question that has not, IMHO, been answered, and is, quite
predictably, of little interest to some for whom all data necessary to cement
their position has already been collected.

I suspect that for most
subjectivists, however, following through on their preferences will remain
preferable and all that is needed. Attempts at conversion via derision of
their positions has, at least on RAHE, largely been a waste of time IMHO.

While it may be perfectly acceptable to the objectivists to
consider only variables that can be measured in terms of frequency response
or various kinds of distortion, I would be reluctant - as I think would be
most subjectivists - to attribute the various variables I've mentioned above
to specific, capable-of-replication measurements. Also, how often, even
within the frequency response realm, are complete graphs presented that
*might* account for a particular component being perceived as relatively
analytical, dark, or lean - all terms frequently used by subjectivists?

This is a total strawman. I repeat, the 'objectivist' standpoint is
based on *listening* tests, not measurements. In fact, I have always
preferred the term 'reliable and repeatable subjectivist' for my own
position, but that's rather long-winded! :-)

It is certainly no strawman whatsoever. See my comments above re. listening
tests themselves being binary measurements in which data is collected and used
to support and promote their use.


It is the purest semantic strawman, set up in defence of a lost
position - no one has *ever* before suggested that this is any kind of
'measurement'.
--


That is a ridiculous assertion. Collection of data from trials of listening
comparisons is, of course, a form of measurement. To suggest that it is not is
to insult the intelligence of any experimenter, including this one, that has
ever conducted comparative evaluations, whether it be of audio equipment, or
anything else. Claiming that bias-controlled testing does not involve
measurements would then lead to the conclusion that such tests are worthless.
Neither I nor anybody else I know has *ever* made that assertion. Measurements
of how a person compares 2 products can be obtained in many ways - one of which
is bias-controlled testing. Not all measurements are done with laboratory
equipment.


Stewart Pinkerton | Music is Art - Audio is Engineering









Bruce J. Richman

  #234   Report Post  
Bob Marcus
 
Posts: n/a
Default Doing an "evaluation" test, or has it already been done?

"Bruce J. Richman" wrote:

Bob Marcus wrote:

"Bruce J. Richman" wrote:

I've often considered the objectivist viewpoint that "all competent
amplifiers operating within their power ranges with appropriate speakers
sound the same", etc. possibly true *for the measurable variables that they
are interested in*, but nonetheless possibly not true - nor measurable by
a-b or a-b-x tests - for the sound qualities that subjectivists are
interested in.


The fallacy here is the assumption that "the sound qualities that
subjectivists are interested in" have causes beyond what measurements or ABX
tests can detect. There is no evidence that this is true.


There is no fallacy because there is no statement that measurements per se
cannot account for various perceptual phenomena experienced by subjectivists
who attribute sonic differences to certain pieces of equipment. However,
unlike the omniscient objectivists who consider the subject closed and not
subject to debate,


This is quite disgraceful, sir. If you want to have a debate, the least
you can do is not start out by slandering your opponents.

I suspect that many subjectivists would consider the possibility that
certain variables routinely named in reviews (see my original post) may have
measurement correlates. Indeed, one of the points of John Atkinson's
measurements, for example, which accompany his Stereophile reviews, is to,
when evident, point out certain correlates between various frequency,
distortion or other technical measurements and subjective impressions
obtained by reviewers.
Of course, ABX tests are irrelevant in this regard. Once an objectivist has,
of course, ruled out any and all possible measurement variations as possibly
accounting for any perceived differences, the futility of debating those
with different frames of reference becomes even more evident.


If you want futility, try debating someone who makes no effort even to
understand what you are saying, and merely repeats misstatements about
your positions.

No doubt I'll be
challenged on this view, but let me explain.

When one reads a subjective review, or perhaps does one's own review either
in a showroom or in one's home, one *might* be perceiving sonic qualities
either not measured nor easily defined by the usual objectivist standards.
For example, Harry has used the word "musicality".

A term with no clear definition. Nor is there any evidence that it means the
same thing to different audiophiles.


Nor was there a claim made that it did have a clear definition or mean the
same thing to different audiophiles. That said, one can certainly ask
audiophiles to describe more specifically what they mean when they use such
terms, or more precise ones such as "lean", "more body", etc., and then
determine empirically to what extent there is agreement or disagreement
amongst different observers.
For subjectivists, I would suspect that what would be a more relevant - and
practical - question would be the extent to which a given component is
"preferred" to another for the same reason by a group of listeners. For
example, if 75% of a group preferred component A to component B, and when
asked, were able to reasonably attribute the same approximate reason for
their preference - in terms of some sonic qualities - this would, of course,
never meet objectivist standards in which only measurements currently
accepted by that group are of importance,


Again, a bald-faced misrepresentation. Have you no shame?

but they might well be relevant to subjectivists
who place greater value on listening experiences in a natural
environment than
on argument purely by specifications. Also, unless one is willing to
assume
that all possible measurements have already been discovered and
enshrined as
all there is to know,


And yet again.

it would seem reasonable to assume that some subjective
qualities could be correlated to some extent with specific
measurements yet to
be tried.

And I might use the same term,
and others might make refernce to the imaging, soundstaging or

*depth of
field"
qualities associated with a particular piece of equiopment.Â

Are these "qualities associated with a particular piece of

equiopment"?
These are all mental constructs.


On the contrary, these are descriptions of how music is actually experienced
by many listeners. Of course, perceptions are involved, but these perceptions
are influenced by the methods used in recording the music and reproducing it
through the audio system.


Obviously.

The imaging isn't "real"--the sound is
being produced at only two points. Our brains construct these images based
on sounds reaching our ears from all directions, as a result of the
interaction between the speakers and the room. The audio system's
contribution to this process is the direct sound--simply changes in air
pressure--radiating from the speakers. And that sound can be fully measured.
After all, beyond frequency and amplitude, what else is there coming out of
a speaker?


It would seem obvious that the ability of a given component to replicate the
intentions of the recording team - producing a given set of instrumentation
and/or vocals in which instruments and vocalists appear to the listener to
be in different places in the soundfield - is *not* as simplistic as you
claim.


But you’re about to demonstrate just how simple it is…

More specifically, it goes without saying that the proportion of the
amplitude


Note that word: Amplitude. That’s something we can measure. And it’s
something we can detect differences in using DBTs. So just what is it
about imaging that you think objectivists don’t understand?

of a given instrument, for example, assigned to the 2 channels after
mixdown in the recording will, by design, attempt to "locate" the instrument
in the sound field (e.g. strings on the left, woodwinds in the center, double
basses and cellos on the right in a typical symphony setup). It does not seem
beyond the realm of possibility that some components might be more precise or
accurate (pick whatever adjective you prefer) at transferring the recording
engineer's intentions to the listening room of a subjectivist who appreciates
things such as "imaging" ability.


Of course it’s not beyond the realm of possibility for two components
to differ in their ability to accurately reproduce amplitude
differences between channels. But we can measure those differences, and
we can detect them in DBTs. So just what is it about imaging that you
think objectivists don’t understand?


That's why objectivists don't buy the notion that there are things they can't measure, or things that ABX tests can't detect. We don't have to "measure imaging"; all we have to do is to measure the things that cause our brains to "image."


There was no claim made that certain things can't be measured - just that the variables sometimes discussed by subjectivists are not usually subject to any *attempt* to measure them.


Why should we bother to measure them? You seem to think that
measurements are important here. They aren’t. What we can hear is
important. So is sorting out what we truly hear from what we only
imagine that we hear. But of course, if you can’t mischaracterize
objectivists as people obsessed with measurement, then you haven’t got
a case.

It might well be possible, for example, to measure "imaging" if one could measure the relative amplitude of certain single instruments, or a vocalist's voice, at the speaker sources. One would expect, for example, that a singer centered between the speakers would have roughly equal amplitudes coming from both left and right speakers. Other instruments in the orchestral or band mix would presumably have different proportions from left and right depending on their locations.


And what makes you think we can’t measure this?
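
As a sketch of how this measurement might look in practice--the pan law and the test signal are assumptions for illustration, not a measurement standard--the apparent position of a source can be estimated from nothing more than the relative amplitude in each channel:

```python
# Sketch: estimate where a source "images" between two speakers from the
# inter-channel amplitude ratio, assuming a constant-power pan law
# (L = cos(theta), R = sin(theta)). Signal values are hypothetical.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def pan_position(left, right):
    """Return 0.0 (hard left) .. 1.0 (hard right)."""
    theta = np.arctan2(rms(right), rms(left))   # 0 .. pi/2
    return theta / (np.pi / 2)

# A 440 Hz tone panned 70% of the way toward the right speaker:
t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)
theta = 0.7 * np.pi / 2
left, right = np.cos(theta) * tone, np.sin(theta) * tone

print(round(pan_position(left, right), 2))   # -> 0.7
```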


(Before anyone jumps on the point, I'll concede that radiation patterns of loudspeakers and room interactions are extremely complex and certainly not reducible to simple measurements.


On this point we agree.

But loudspeakers aren't part of the obj/subj debate.


Of course not. However, since loudspeakers are used by both camps to make their judgments, I would think that their interaction with the components under test could certainly be a relevant factor in determining test results.


I’m not much interested in what you *think*. I’m interested in whether
you have any evidence that an amplifier or cable can affect imaging,
other than through easily measured effects on amplitude and frequency
response.

Many reviewers have commented on the relative synergy or lack of synergy between a certain product, for example, and a certain speaker. Now, as an objectivist, you may not accept this line of reasoning, but consider, as you've mentioned, the variation in radiation patterns, and I'll add in other speaker complexities such as resistance curves (said as the owner of electrostatics that have wild resistance swings and definitely *don't* sound the same with every amplifier or preamplifier), sensitivities, possible crossover effects, etc.


What is it that you don’t think I’ve considered? Why should I give any credence to anyone who talks about “synergy” between speakers and other components and doesn’t even offer a coherent definition of the term?

All “synergy” appears to mean is that this speaker sounds better with
this amp than with that amp. Fine. Then you should be able to tell the
two amps apart blind, when driving that speaker. Show me that you can,
and I’ll believe that this “synergy” is real. (Please note: There is no
mention of measurement in this paragraph. That’s your hang-up, not
mine.)
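
The "tell them apart blind" criterion in the paragraph above reduces to a one-sided binomial test: how likely is a listener's score by guessing alone? A minimal sketch, with hypothetical trial counts:

```python
# Sketch: significance of an ABX result under the chance hypothesis
# (p = 0.5 per trial). The 12-of-16 score below is invented.
from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` right answers in `trials` by pure guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(12, 16), 4))   # 12/16 correct -> p = 0.0384
```

By the usual 0.05 convention, 12 of 16 would count as hearing a real difference; 10 of 16 (p ≈ 0.23) would not.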

And components ahead of the speakers have no impact on these radiation patterns--which is why it's so funny to read reviewers who talk about certain cables "opening up the sound.")


One can always find extremes to ridicule. I lose very little sleep
over the
hyperbole of many cable manufacturers. But I don't think they are
reified by
too many subjectivists.


Are you joking? Loads of them buy it hook, line, and sinker. Just read
any high-end discussion site other than this one.


Still others may simply say "this sounds more realistic to me" (than another component being compared). While it may be perfectly acceptable to the objectivists to consider only variables that can be measured in terms of frequency response or various kinds of distortion, I would be reluctant - as I think would be most subjectivists - to attribute the various variables I've mentioned above to specific, replicable measurements.

What else is there to attribute them to? Sound really is just frequency and amplitude. Every effect must have a cause, and those are the only possible causes.


See my comments above re. imaging. Amplitude differences may be responsible in some cases.


They are also measurable. They are also detectable in DBTs. So what are
objectivists missing?

Also, what I had in mind in making my comments was not to disagree with your argument re. frequency and amplitude as the only salient measurements, but in *how* they might be measured by an objectivist - or perhaps more typically, on a specification sheet,


Whoa--who said spec sheets were the be-all and end-all of measurements?

in which, for example, a frequency range with plus and minus dB points is given, but little attention is paid to how that "range" actually operates into a given speaker load, or how it might actually vary at different points along the response curve. It would certainly seem possible that there could be some peaks and valleys in this curve, for example, that might interact with a given speaker's *own* set of technical characteristics, to produce a certain "character", if you will. I apologize for using a real life, subjective term.
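
The amp/speaker interaction described here can be sketched with a simple voltage divider: the amplifier's output impedance against the speaker's impedance curve. The impedance values below are invented to mimic an electrostatic's wide swing, and the two output impedances are hypothetical stand-ins for a tube and a solid-state design:

```python
# Sketch: how output impedance interacting with a speaker's impedance
# curve tilts the frequency response. All impedance values are invented.
import math

def level_drop_db(z_speaker, z_out):
    """Voltage-divider loss (dB) at the amp/speaker interface."""
    return 20 * math.log10(z_speaker / (z_speaker + z_out))

# Hypothetical impedance at three frequencies: 40-ohm bass resonance,
# 4-ohm midrange, 1.5-ohm treble dip -- driven by a 2-ohm-output "tube"
# amp and a 0.05-ohm-output "solid-state" amp.
for z in (40.0, 4.0, 1.5):
    tube = level_drop_db(z, 2.0)
    ss = level_drop_db(z, 0.05)
    print(f"Z={z:>4} ohm  tube {tube:6.2f} dB  solid-state {ss:6.2f} dB")
# The tube case swings roughly 7 dB across the band; the solid-state
# case stays within about 0.3 dB -- an audible, and measurable, "character".
```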


So objectivists can’t measure everything because some measurements
don’t appear on the typical spec sheet? What kind of argument is this?


Also, how often, even within the frequency response realm, are complete graphs presented that *might* account for a particular component being perceived as relatively analytical, dark, or lean - all terms frequently used by subjectivists?


I don't know. How often? (And what's your point?)


The question was rhetorical. And the point, as illustrated above, is
self-evident,


If it were self-evident, I’d have understood it. I have absolutely no
idea what your point was here.

except to those that might assume that all questions have been
answered and are not debatable.


You’ve run out of arguments again, so you’re back to this slander.

Again, more evidence of the total waste of
time in trying to talk about extreme objectivist - subjectivist
differences.

This is one of the reasons that I feel the 2 "camps" are really operating from almost totally different frames-of-reference, and the endless challenges and disputes about the value of double blind testing are, in practical terms, unlikely to convince anybody of anything they don't already strongly believe.


Can't argue with that!


Really? That's a surprise. On practically everything else, I recommend we agree to disagree. But given your agreement with my final paragraph, why monopolize RAHE, to a large extent, with endless discussions of this old argument?


Who’s monopolizing anything? And isn’t the pot calling the kettle black
here?

To answer the more general point, I’m arguing with you not because I
think you are remotely convincible, but because I know there are others
reading this newsgroup whose minds are not made up. Not to respond to
your misrepresentations of the objectivist position would, I fear, lead
them to conclude that your characterizations of us were accurate.

I plead guilty to injecting my comments here, but generally speaking, I usually steer clear of entering this endless cycle of retorts.


I recommend that you steer clear until you’ve made some effort at least
to understand what we are saying.

bob


  #235   Report Post  
Bromo
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

On 5/20/04 6:45 PM, in article vyarc.171$ny.233577@attbi_s53, "Nousaine"
wrote:

Actually the definition "For an objectivist, the musical experience begins with the compression and rarefaction of the local atmosphere by a musical instrument and ends with the decay of hydraulic pressure waves in the listener's cochlea;"

is pretty good; of course the subjectivist position would have to be:

"For a subjectivist, the musical experience sometimes begins with the
compression and rarefaction of the local atmosphere by a musical instrument
and even occasionally ends with the decay of hydraulic pressure waves in the
listener's cochlea; but most of it happens in the listeners imagination that's
why we call it "imagin-in" "


Actually - an "objectivist" (the non-Ayn Rand type) and "subjectivist" tend
to resemble each other in many ways. Both are interested in having good
sounding systems - and both have a method (trial and error or measurement)
of achieving this. They BOTH tend to have systems that sound good.

I would say both extremes are flawed. An extreme "objectivist" would never
feel the need to actually listen to a sound system before purchase since the
specification and measurements would say enough - but would be fully willing
to hire a lab to determine if a piece of equipment is suitable.



  #237   Report Post  
Bromo
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

On 5/20/04 6:49 PM, in article , "S888Wheel"
wrote:

Here is something that was said about all objectivists in a Stereophile article: "For an objectivist, the musical experience begins with the compression and rarefaction of the local atmosphere by a musical instrument and ends with the decay of hydraulic pressure waves in the listener's cochlea;"

Have you met an objectivist that behaves in such a way?


I have interacted with some that at first blush seemed to, but upon further conversation did not. My point was that the misrepresentations go in both directions. I guess you agree that this was one of those misrepresentations of an objectivist by a subjectivist. I don't know, and you don't know, that this author has never met an objectivist who actually meets this description. The real problem is with the single universal description for a broad group of people with diverse opinions.


I have found out that 99% of audio-enthusiast people like music a lot and do
not like a lot of the baloney in marketing high end to people.

I think both Objectivist and Subjectivist like equipment that sounds good to
them. They choose their equipment with slightly different techniques (but I
am pretty sure both would listen to it before purchase to make sure) and
optimize their systems (or not) by different means.

I think a lot of subjectivists who are not grounded in fundamental
principles of science *can* get fooled - but objectivists can as well but
can also spend a lot of time trying to "debunk" the claims made by
unscrupulous marketers, and subjectivists.

At the end of the day - if you can afford it, and you like your systems
sound when playing your music - that should be enough.
  #239   Report Post  
Bob Marcus
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

Bromo wrote:

Actually - an "objectivist" (the non-Ayn Rand type) and "subjectivist" tend to resemble each other in many ways. Both are interested in having good sounding systems - and both have a method (trial and error or measurement) of achieving this. They BOTH tend to have systems that sound good.


I suspect that objectivists in practice pay a lot less attention to
measurements than you think we do. The real difference is that we also don't
pay much attention to subjective impressions that we know could be
imaginary.

When I'm looking for a component (other than speakers, obviously), what I
spend most of my time on are features and other non-sonic attributes. The
next amp I buy, for example, will go in the living room--but outside the
entertainment unit, so it has to fit in the corner on the floor, where the
wife can pretend it's not there. And while I'll glance at the spec sheet,
I'll also give it a listen, just to assure myself that there's nothing
obviously wrong with it. In fact, there's a good chance that the salesman
won't be able to tell whether I'm an objectivist or a subjectivist--until he
tries to sell me a "really great IC" and I tell him what he can do with it.

I would say both extremes are flawed.Â*


I would say both extremes are caricatures. This one sure is:

An extreme "objectivist" would never
feel the need to actually listen to a sound system before purchase since
the
specification and measurements would say enough - but would be fully
willing
to hire a lab to determine if a piece of equipment is suitable.


If I had unlimited time and money to throw into this hobby, I think I'd
learn how to measure equipment for myself. But I don't, so I just wing it
like (most of) the rest of you.

bob


  #240   Report Post  
S888Wheel
 
Posts: n/a
Default Subjectivist and Objectivist -- Are these misnomers? (WAS:

From: "Bob Marcus"
Date: 5/20/2004 9:39 PM Pacific Standard Time
Message-id: _Jfrc.1950$JC5.259524@attbi_s54

Bromo wrote:

Actually - an "objectivist" (the non-Ayn Rand type) and "subjectivist" tend to resemble each other in many ways. Both are interested in having good sounding systems - and both have a method (trial and error or measurement) of achieving this. They BOTH tend to have systems that sound good.


I suspect that objectivists in practice pay a lot less attention to
measurements than you think we do.


I suspect that varies from person to person. Nousaine just posted that
measurements are the primary criteria for his decisions.

The real difference is that we also don't
pay much attention to subjective impressions that we know could be
imaginary.


Really? How do you choose speakers then? Are most objectivists conducting bias-controlled auditions of speakers? I think not.


I would say both extremes are caricatures. This one sure is:

An extreme "objectivist" would never
feel the need to actually listen to a sound system before purchase since
the
specification and measurements would say enough - but would be fully
willing
to hire a lab to determine if a piece of equipment is suitable.



It may be the extreme, but for those at the extreme it seems to be reality, not just a caricature. The same is true for the subjectivists: the extreme does exist and is every bit as wacky.
