Comments regarding: Cables, Hearing, Stuff!!
  #281

"The amps sounded different and consistently so, in a way that I am
completely incapable of producing. I would have had to create and keep
track of 7 different sonic signaures, which is preposterous.

Your claim
to be an exception to listening tests showing results near the level of
guessing first need to be established.


I claim no 'exception' or special ability. I take my time and listen
carefully, and I learned over the years how components differ.

You'll not 'argue' me out of my senses."

Your claim to hear anything in the above context must first be valididated
to see if your reports exclude you from the tests showing a level similar
to guessing. That you make such claims also makes a claim of exception
because they contrast with the tests. I'm not concerned with your mental
state after presentation with the test results, only in your claim to be
an exception to them. Do the test and you remain free to accept them or
not. Only the test will confirm if continued interest in your reports
merit further interest. Your senses are not at issue, we must conclude
they are of the common sort which produce the test results, or otherwise
demonstrated by testing. We must conclude that your senses are subject to
the same perception process that produces all manner of end states which
have no physical reality, to which we all are subject.

  #282   Harry Lavo

"normanstrong" wrote in message
...
As for the right way to test your theory (that long-term

evaluative
listening is more sensitive to sonic differences than an ABX

test), a
simple, double-blind preference test would serve. Wouldn't give

you the
results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's

the
purpose of this whole "control" test...to find out if it is valid

and gives
the same results...with the effects of "blinding" separated from the

change
in test technique.


There's almost no chance of any blind cable test giving the same
results as a sighted one. What is at issue here is what conclusions
can be drawn from that fact. My guess is that Harry will claim that
such results show that blind testing is useless, since it gives null
results. On the other hand, null results is exactly what I would
expect, and what all blind cable tests have produced--so far.

It would be a lot more fun to compare 2 speakers whose differences are
great enough to give a reasonable expectation of interesting results
when tested blind. I would suggest speakers of about the same size
and type of design, but wildly different prices and presumed quality.
The comparison should be done blind first, then sighted. Evaluations
should be written, using language understandable by the public at
large, and with no communication between different listeners. Along
with the evaluation, there should be an opportunity to guess the MSRP;
usually good for a laugh. By eliminating quick switching we make the
test simpler to run and more satisfactory to the subjective
audiophiles in the group.


Thanks for your support trying to resolve the issue, Norm.

Problem with speakers is....almost nobody (even objectivists) would argue
that blind tests of speakers will give a null.

Likewise, there is no real division between the camps on cables...a
minority of subjectivists feel cables may have a sound, but even they
acknowledge that it is sometimes difficult to hear.

We need to test something where there is a fairly clear difference between
the camps...that's why I nominated inexpensive CD players, or a SACD versus
CD test. Most subjectivists feel there are audible differences in these
comparisons (at least among some CD players) and most objectivists (based on
the sample on this forum) feel that there are no differences in these cases,
given identical source material (difficult but not impossible to find for
the SACD vs. CD comparison).
  #283   Harry Lavo

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
Then you haven't been paying attention. Here is what I have said
(repeatedly) in a nutshell.


1) The main problem I have with double-blind is its tremendous
impracticality in actual use in the home.


2) The main problem I have with Tom's DBT a-b and a-b-x tests is that

they
force the ear-brain into a short-term comparative mode, versus the long

term
evaluative mode used by audiophiles (listen at several times under

several
conditions to each unit, sometimes compare rapidly to listen to a

specific
effect, then go back to evaluative listening, etc.). I posit that this

is
allows the right brain as well as the left brain to "weigh in". I

believe
DBT comparisons are okay *if you know what specific sonic artifac you

are
listening for". Not for open ended testing where you don't know going

in
what you are listening for. This, I posit, is when confusion sets in

and
all you can hear are obvious volume or frequency response differences.


But this purely hypothetical 'problem', which has no *positive* evidence
in its support, is in any case of NO CONSEQUENCE if one, such as
yourself, has *already* identified two components as being different,
under *your* preferred, sighted, comparative mode -- which, as you say, is
the *typical* comparative mode for audiophiles. In this case, one has
already identified, and described to oneself, the characteristic 'sound'
of each component. One 'knows' what to listen for.

*All* that is required at this point, therefore, is to present the
two components under blinded conditions, to that person.
If they have 'memorized' a real difference, then there should be no
problem whatever in identifying it under such conditions. If you insist
that the blind comparison be 'long term' and involve 'ratings' or
whatever, fine. Just make sure it's blind.

Tests like the ones Tom Nousaine conducted on Steve Zipser involved a
listener who *already* claimed to 'know' the difference between two
components, from sighted experience. He 'knew' what to listen for. He
'knew' what his preferred amplifier sounded like. Or so he thought.

Given your dogged advocacy of a so far entirely speculative set of
psychological/cognitive problems with 'forced' comparison, I propose again
that you offer *yourself*, and a pair of components you ALREADY believe
sound different, from your experience with them, as a test case for YOUR
hypotheses. From your posts it appears there must be at least two cables
or amps you already have evaluated, and believe to sound different.


I make no claims that I could do it in a quick-switch environment with Tom
standing beside me, or even having coffee in the next room. And I don't
think Tom would like to be my apartment mate for a couple of weeks while I
reached a decision. And even then, since I don't know whether blinding or
quick switching causes the null, I would not predict the outcome. Although
as I have pointed out in my control test proposal, the only way I could
determine this / choose to believe that blinding is the culprit would be
after spending a long and equal time doing the evaluation blinded as well
as sighted.
  #285   Nousaine

"Harry Lavo" wrote:

"Stewart Pinkerton" wrote in message
news:ECegc.151212$gA5.1814292@attbi_s03...
On Sat, 17 Apr 2004 04:56:33 GMT, "Harry Lavo"
wrote:

"Bob Marcus" wrote in message
news:KmVfc.4829$aM4.16670@attbi_s53...


I've criticized your "test" at length before, but here are the highlights:

1) You have no coherent, testable hypothesis.

Sure I do. It is that blinding per se, when done on a relaxed,
longer-term, evaluative basis, is not likely to change the results of
sighted listening done under the same conditions. But that the switch to
blind a-b testing, or a-b-x testing, will tend the results toward null
because of ear-brain confusion. The control test is set up *exactly* to
separate the two things.
separate the two things.


You are contradicting yourself, since there is absolutely nothing to
prevent an ABX test being carried out on a relaxed, long-term basis.


But as a practical matter they are not. In fact Tom purposely restricts his
evaluation disk to 20 second snippets. Also, they are "comparison" tests
rather than evaluative tests, which tend to put the emphasis on switching,
not on listening in depth.


There is no time limit on programs in most ABX testing. It is true that my
evaluation disc has 20-second to 2.5-minute segments of especially chosen,
tough audio selections, precisely because that enables more efficient
evaluation of sonic attributes and allows the use of identical programs
for my personal evaluations.

Also, the ABX technique allows more time-proximate comparisons, which
highlight differences.

The longest experiment I've conducted was 16 weeks in duration. The longest
amplifier comparison had a 5-week in-situ warm-up period.


  #286   Nousaine

"normanstrong"

As for the right way to test your theory (that long-term

evaluative
listening is more sensitive to sonic differences than an ABX

test), a
simple, double-blind preference test would serve. Wouldn't give

you the
results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's

the
purpose of this whole "control" test...to find out if it is valid

and gives
the same results...with the effects of "blinding" separated from the

change
in test technique.


There's almost no chance of any blind cable test giving the same
results as a sighted one. What is at issue here is what conclusions
can be drawn from that fact. My guess is that Harry will claim that
such results show that blind testing is useless, since it gives null
results. On the other hand, null results is exactly what I would
expect, and what all blind cable tests have produced--so far.

It would be a lot more fun to compare 2 speakers whose differences are
great enough to give a reasonable expectation of interesting results
when tested blind. I would suggest speakers of about the same size
and type of design, but wildly different prices and presumed quality.
The comparison should be done blind first, then sighted. Evaluations
should be written, using language understandable by the public at
large, and with no communication between different listeners. Along
with the evaluation, there should be an opportunity to guess the MSRP;
usually good for a laugh. By eliminating quick switching we make the
test simpler to run and more satisfactory to the subjective
audiophiles in the group.

Norm Strong


Toole conducted a similar experiment some time ago, in which a 3-piece
sub/sat system was compared to a set of floor-standing tower speakers,
open and blind, scored on a 1-10 scale by the same subjects. He reported
that subjects scored the two much more closely in sound quality under
blind conditions than they did under open conditions. IOW the towers were
"much better" than the sub/sat when the subjects could see them, and only
somewhat better when the speakers were behind an opaque but acoustically
transparent screen.

  #287   Nousaine

Steven Sullivan wrote:

Harry Lavo wrote:
Then you haven't been paying attention. Here is what I have said
(repeatedly) in a nutshell.


1) The main problem I have with double-blind is its tremendous
impracticality in actual use in the home.


2) The main problem I have with Tom's DBT a-b and a-b-x tests is that they
force the ear-brain into a short-term comparative mode, versus the
long-term evaluative mode used by audiophiles (listen at several times
under several conditions to each unit, sometimes compare rapidly to listen
for a specific effect, then go back to evaluative listening, etc.). I posit
that this allows the right brain as well as the left brain to "weigh in".
I believe DBT comparisons are okay *if you know what specific sonic
artifact you are listening for*. Not for open-ended testing where you
don't know going in what you are listening for. This, I posit, is when
confusion sets in and all you can hear are obvious volume or frequency
response differences.


But this purely hypothetical 'problem', which has no *positive* evidence
in its support, is in any case of NO CONSEQUENCE if one, such as
yourself, has *already* identified two components as being different,
under *your* preferred, sighted, comparative mode -- which, as you say, is
the *typical* comparative mode for audiophiles. In this case, one has
already identified, and described to oneself, the characteristic 'sound'
of each component. One 'knows' what to listen for.

*All* that is required at this point, therefore, is to present the
two components under blinded conditions, to that person.
If they have 'memorized' a real difference, then there should be no
problem whatever in identifying it under such conditions. If you insist
that the blind comparison be 'long term' and involve 'ratings' or
whatever, fine. Just make sure it's blind.

Tests like the ones Tom Nousaine conducted on Steve Zipser involved a
listener who
*already* claimed to 'know' the difference between two components, from
sighted experience. He 'knew' what to listen for. He 'knew' what
his preferred amplifier sounded like. Or so he thought.

Given your dogged advocacy of a so far entirely speculative set of
psychological/cognitive problems with 'forced' comparison,
I propose again that you offer *yourself*, and a pair of components you
ALREADY
believe sound different, from your experience with them, as a test case
for YOUR hypotheses. From your posts it appears there must be at
least two cables or amps you already have evaluated, and believe to
sound different.


Harry's theory also contains the assumption that preferences determined
under open conditions carry some kind of scientific authority.

And that if a subject does not come to identical conclusions under
bias-controlled conditions, that means such controls mask a 'real' sonic
difference, instead of the more logical conclusion that the conclusions
were not sound-based in the first place.

So the experiment was proposed with an eye toward a biased outcome. When
subjects produce different or statistically non-uniform results (which
would be likely when the units actually had identical sound), then Harry
would call the controlled tests "invalid" instead of concluding that the
open tests were not based on sound but on some other mechanism.

And I agree that Harry should be the first subject in any experiment because he
already has equipment the sound of which he "knows". It seems extremely
unlikely that simply putting a blanket over the I/O terminals would stop him
from "hearing" his own equipment no matter what the length of the audition.

To suggest otherwise would mean that no one would ever be able to enjoy a 30
second or 3-minute recording under any conditions. There wouldn't be time to
get into the right listening mode.

And, I continue to wonder how Steve Zipser with long term knowledge of his
reference amplifier could have all those intimate details disappear when
nothing more than a blanket was placed over the I/O terminals in comparison to
a completely different unit.

I can't understand how a simple cloth could 'mask' differences gleaned under
long term conditions AND that a completely unknown and presumed inferior device
could suddenly become sonically equivalent to a well-known device with clearly
identifiable sound.

As for the practicality of either technique, Harry's method requires
lengthy IN-HOME audition of all possible candidates BEFORE any decision
can be made. It seems to me that a double-blind test has no less
practicality. Indeed it may even be more practical, because it does not
require hours/weeks of audition.

  #288   Harry Lavo

"Bob Marcus" wrote in message
...
Harry Lavo wrote:

"Bob Marcus" wrote in message
news:KmVfc.4829$aM4.16670@attbi_s53...
Harry Lavo wrote:

Why don't you contribute constructively rather than destructively.

Why
don't you point out exactly how "this is not one of them" and how

this
"does
not have much to do with good test design" and then propose

althernative
ways to test my theory. Since I posited the test we have heard

nothing
but
negatives from you.

I've criticized your "test" at length before, but here are the highlights:

1) You have no coherent, testable hypothesis.


Sure I do. It is that blinding per se, when done on a relaxed,
longer-term, evaluative basis, is not likely to change the results of
sighted listening done under the same conditions. But that the switch to
blind a-b testing, or a-b-x testing, will tend the results toward null
because of ear-brain confusion. The control test is set up *exactly* to
separate the two things.


Well, no, it's far too complex to do this job. If you want to compare two
tests, with only sightedness as the variable, then you certainly don't need
THREE tests. A bigger problem is that there is no way statistically to
compare the multiplicity of results you would get using the evaluation
approach you propose. That's the virtue of the preference test I
proposed--there are only two possible answers. Whereas, if you ask
audiophiles to "evaluate" components based on, say, ten either-or criteria
(a la Oohashi, who I believe is your model here), each subject has 1,024
possible answers. How do you tell whether his sighted answers match his
blind answers? There's no meaningful statistical standard, nor is there any
way of determining--without a huge amount of research--whether the criteria
are themselves independent, which would be another requirement.
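
For reference, the arithmetic behind that 1,024 figure is easy to verify;
a minimal Python sketch, with purely illustrative variable names:

k = 10                    # ten either-or evaluation criteria
patterns = 2 ** k         # 2^10 = 1,024 possible answer combinations
p_match = 1 / patterns    # chance of reproducing one pattern by guessing
print(patterns, p_match)  # 1024 0.0009765625

Which is why agreement between sighted and blind answers would have to be
defined statistically, per attribute, rather than as an exact pattern match.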


Sorry, doesn't fly. If the comparative test itself is the problem, blind vs
not blind will show no difference. Since there are two variables, there
have to be two tests, controlling one variable in each matched pair.

As to the statistical complexity of evaluative testing, it is not all that
complex. If the pair shows a statistical difference on one or more
characteristics sighted, then that is the standard (and presumably it will,
if the test components are well chosen). Then if the blind test shows
comparable statistical significances on one or more of these variables, it
shows that blinding per se does not invalidate the sighted test (whether on
some or all variables would be interesting in and of itself).

Then, whether or not the comparative, iterative test (blind) supported the
evaluative test (blind) would answer the question of test technique.
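
A minimal Python sketch of the per-attribute comparison Harry describes,
assuming numeric rating scales; the attribute names and scores below are
hypothetical, and the two-sample t-test is just one plausible significance
check, since the thread never fixes a particular statistic:

from scipy import stats

def significant_attributes(ratings_a, ratings_b, alpha=0.05):
    # ratings_a / ratings_b: dict of attribute -> list of panelist scores
    # for components A and B under one condition (sighted or blind).
    return [attr for attr in ratings_a
            if stats.ttest_ind(ratings_a[attr], ratings_b[attr]).pvalue < alpha]

# Hypothetical sighted-pass data: panelists separate the units on
# soundstage but not on bass.
sighted_a = {"soundstage": [7, 8, 8, 9], "bass": [6, 6, 7, 6]}
sighted_b = {"soundstage": [5, 6, 5, 6], "bass": [6, 7, 6, 6]}
print(significant_attributes(sighted_a, sighted_b))  # ['soundstage']

The sighted pass sets the benchmark; blinding per se would be "cleared" if
the blind evaluative pass reproduced the same significant attributes.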


2) You ask your subjects to do the impossible--namely, to conduct two
independent subjective evaluations of the same equipment. Can't be done.
There's no way the first can't affect the second.


Absolutely not, one evaluative sighted test and one evaluative blind test
per subject...that's why several dozen subjects are required.


Can't be done. Subjects will recall their sighted evaluations when they do
their blind ones, so instead of the latter being an independent evaluation,
all they'll be doing is trying to match their previous evaluations to the
two components they are listening to now.


What's wrong with that? It is a necessary part of the test. If, after a few
weeks' time and this time blind, subjects can still identify the components
under test by accurately recording the same subjective evaluation (as
measured by statistical significance), then the blinding has not nulled the
sighted evaluation. That is *exactly* what this stage of the testing is
designed to determine. There is no prejudgement involved...simply the
results (whatever they are) of the first-stage sighted test as a benchmark.

The other advantage of my proposed preference test is that it leaves the
subject free to listen however he wants, just as your theory ought to
demand. Whereas you want to impose an artificial "scorecard evaluation,"
which may be nothing like that subject's actual practice.


Agreed in principle, although I think most audiophiles at least keep a
scorecard in their head ("bass more defined, dynamic", "broader
soundstage", etc.). I would make it explicit here simply to allow
statistical analysis. The subject can still spend most of his time in
completely subjective listening, and only do the "rating" at the end. Of
course, lots of care must be put into the rating factors to make sure that
all here agree nothing significant has been left out and that there is no
undue redundancy.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X
test.


As I said above, if your goal is to compare sighted to blind evaluative
approaches, this step is unnecessary.


Absolutely not. This is another main objective of the test...dividing the
"blind" effect from the "comparative test" effect.

As for the right way to test your theory (that long-term evaluative
listening is more sensitive to sonic differences than an ABX test), a
simple, double-blind preference test would serve. Wouldn't give you the
results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's the
purpose of this whole "control" test...to find out if it is valid and gives
the same results...with the effects of "blinding" separated from the change
in test technique.


I think you'll see that my longer proposal does exactly what you ask--it
compares sighted results to blind results using exactly the same listening
method, to see if they give the same results. And, unlike you, I have
defined statistically what "same" means.


I agree your proposal is similar, but also potentially misleading, since it
relies on lots of dissimilar comparisons of dissimilar equipment that may /
may not actually have differences (a null comparison of units that show no
difference sighted does not mean much). Moreover, doing away with
evaluative ratings is wrong IMO, because this is what *led* audiophiles to
their choices, and it is important to understand which of these evaluative
factors (if any) make the transition from sighted to blind.

  #289   normanstrong

"Michael Scarpitti" wrote in message
news:Ruqgc.8483$hw5.7851@attbi_s53...
chung wrote in message

news:hO2gc.148131$gA5.1797802@attbi_s03...

You still have not answered this. If the differences are so

obvious, why
not do a DBT to prove that the differences are real?


There is nothing to be gained, that's why! The differences are so
dramatic that it is not worth my time.....


That is precisely when a blind test is most needed--when the
differences are "dramatic". Fortuntely, that's also the time when
good results should be obtained.

Norm Strong

  #290   Walter Bushell

In article R_zgc.164105$K91.417638@attbi_s02,
wrote:

"The amps sounded different and consistently so, in a way that I am
completely incapable of producing. I would have had to create and keep
track of 7 different sonic signaures, which is preposterous.

Your claim
to be an exception to listening tests showing results near the level of
guessing first need to be established.


I claim no 'exception' or special ability. I take my time and listen
carefully, and I learned over the years how components differ.

You'll not 'argue' me out of my senses."

Your claim to hear anything in the above context must first be valididated
to see if your reports exclude you from the tests showing a level similar
to guessing. That you make such claims also makes a claim of exception
because they contrast with the tests. I'm not concerned with your mental
state after presentation with the test results, only in your claim to be
an exception to them. Do the test and you remain free to accept them or
not. Only the test will confirm if continued interest in your reports
merit further interest. Your senses are not at issue, we must conclude
they are of the common sort which produce the test results, or otherwise
demonstrated by testing. We must conclude that your senses are subject to
the same perception process that produces all manner of end states which
have no physical reality, to which we all are subject.


Once you begin to understand that your mind plays tricks on you, you
move to another level of self-understanding. If I ever have to face a
trial while falsely accused, I would hope for a jury that knows this,
particularly if the evidence against me is eyewitness testimony.

We know that magicians can appear to do impossible things; we don't
believe they actually can bend the laws of reality. Why do people believe
their ears when presented with the same paradox?

Anyway, can we take this discussion to apply in spades to power cords?
And whether we should expect a $10,000 power cord to improve a CD player
much more than a $1,000 one, for example.


  #291   Stewart Pinkerton

On Sun, 18 Apr 2004 07:54:25 GMT, (Michael
Scarpitti) wrote:

chung wrote in message news:hO2gc.148131$gA5.1797802@attbi_s03...
Michael Scarpitti wrote:

Stewart Pinkerton wrote in message ...
On 15 Apr 2004 04:14:08 GMT,
(Michael
Scarpitti) wrote:

chung wrote in message
news:wL2fc.135909$K91.350047@attbi_s02...

So it should be a slam dunk to tell them apart in a DBT, no? Just do
one!

You obviously have never heard a TA-N88B. This is a digital amp. It
has a completely different kind of presentation, wholly without the
kind of distortion that conventional transistors or tubes have.

It does however have other distortions of its own, which *may* be
audible. Note that no one is saying that it's impossible for you to
have heard differences with this amp, just that you have no way of
*knowing* that this was the case.

I am completely astonished at such a response. If I plug the leads
into this amp, and I hear astonishing clarity, and then switch back to
my old amp, and the 'astonishing clarity' disappears, what am I to
conclude? That some genie has waved a magic wand right when I made the
switch?


You still have not answered this. If the differences are so obvious, why
not do a DBT to prove that the differences are real?


There is nothing to be gained, that's why! The differences are so
dramatic that it is not worth my time.....


We've heard this before, many times. In each and every case, the
claimant failed to prove his case in a blind test. Conviction of
personal infallibility is not evidence.

I can raise the level by a few dB, and you will hear astonishing
clarity. From any competent amp.

It is almost certainly the case that
you had no chance of differentiating 7 amplifiers, but you seem unable
to accept this.

None of the 7 amps sounded the same. Each had a recognizable sonic
signature. In particular, the TA-N88B was so special that my
non-audiophile friend said 'wow'.


Well, anytime you tell someone you have a new amp, and play it a little
louder, your friend will say "wow".


That's not the case. You were not there. I repeat and insist that the
Sony TA-N88B amp is so much clearer than other amps that anyone who
spends more than 3 seconds listening will notice it.


Absolute nonsense! I have set up several 'bypass' tests to compare a
power amp with a straight wire link. In each case, the amplifier
contributed nothing to the sound, hence could be considered to be
sonically transparent. *If* the Sony sounds different, it is because
it *adds* something to the sound, not because it is superior.

BTW, that Sony is *known* to have some quite nasty HF artifacts -
perhaps that is what you are confusing with 'clarity'?
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #292   Harry Lavo

"Nousaine" wrote in message
news:W4Kgc.157303$gA5.1886001@attbi_s03...

Sullivan's commens snipped for clarity, since response is to Tom's points


Harry's theory also contains the assumption that preferences determined

under
open conditions carry some kind of scientific authority.


Nope, the "authority" comes from the fact that this is the normal approach
used by audiophiles, and thus is the most widespread. It is the technique
you are attempting to prove is invalid.

And that if a subject does not come to identical conclusions under
bias-controlled conditions that means that such controls mask 'real' sonic
difference instead of the more logical conclusion that the conclusions

were not
sound-based in the first place.


Nope, all the control test does is show whether under identical conditions
blinding decreases or negates differences rated under sighted conditions.
If it does, you are correct. If it doesn't (but subsequent quick-switch
a--b testing does) then the problem lies in quick-switching and comparing,
versus evaluating. That's all. I simply start where most audiophiles are,
and move to where you are...but controlling blindness and test design
factors as two separate variables that must be isolated to really understand
what is happening.
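
The outcome logic Harry lays out here can be written down compactly; a
minimal Python sketch, in which the three flags and the outcome wordings
are illustrative encodings of the argument, not part of any formal
protocol:

def interpret(sighted_hit, blind_eval_hit, blind_abx_hit):
    # Each flag: did that stage find a statistically significant difference?
    if not sighted_hit:
        return "poor component choice: nothing separated even when sighted"
    if blind_eval_hit and blind_abx_hit:
        return "difference survives all controls: likely real"
    if blind_eval_hit and not blind_abx_hit:
        return "blinding is fine: quick-switch comparison is the culprit"
    return "sighted difference vanished under blinding: bias, not sound"

print(interpret(True, True, False))  # points the finger at quick-switching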

So the experiment was proposed with an eye toward a biased outcome. When
subjects produce different or statistically non-uniform results (which
would be likely when the units actually had identical sound), then Harry
would call the controlled tests "invalid" instead of concluding that the
open tests were not based on sound but on some other mechanism.


You have to prove and validate the test first. Otherwise, your example
above is based on faith, not on science.


And I agree that Harry should be the first subject in any experiment
because he already has equipment the sound of which he "knows". It seems
extremely unlikely that simply putting a blanket over the I/O terminals
would stop him from "hearing" his own equipment no matter what the length
of the audition.


The test is not based on one person, since it cannot be statistically
validated except over a sample of two dozen or so audiophiles. And before
that can happen, we need to agree on what is to be tested, what evaluative
factors are to be included, and on a written set of protocols to be used.
I'm happy to lead that discussion and to draft and modify the protocols
with the group. And I am perfectly happy to be one of, or even the lead,
person in doing the test once all this has been hammered out.


To suggest otherwise would mean that no one would ever be able to enjoy a
30-second or 3-minute recording under any conditions. There wouldn't be
time to get into the right listening mode.


Rhetorical hogwash, Tom.

And, I continue to wonder how Steve Zipser, with long-term knowledge of
his reference amplifier, could have all those intimate details disappear
when nothing more than a blanket was placed over the I/O terminals, in
comparison to a completely different unit.


He didn't do evaluative ratings. He had to make a comparative choice, with
you standing over his shoulder (figuratively if not literally). That
changes things, I believe.


I can't understand how a simple cloth could 'mask' differences gleaned
under long-term conditions AND how a completely unknown and presumed
inferior device could suddenly become sonically equivalent to a well-known
device with clearly identifiable sound.


Again, it depends on the test technique. Did the subject then have
weeks/months to evaluate the two options before having to make an identity
choice? I think not.


As for the practicality of either technique, Harry's method requires
lengthy IN-HOME audition of all possible candidates BEFORE any decision
can be made. It seems to me that a double-blind test has no less
practicality. Indeed it may even be more practical, because it does not
require hours/weeks of audition.


Except that it begs the question we are attempting to resolve. It only
works if you grant the test a priori validity. And I and many others are
not willing to grant that...that's the whole concept of a control test.

  #293   Bob Marcus

Harry Lavo wrote:

"Nousaine" wrote in message
news:W4Kgc.157303$gA5.1886001@attbi_s03...

Sullivan's comments snipped for clarity, since response is to Tom's points


Harry's theory also contains the assumption that preferences determined
under open conditions carry some kind of scientific authority.


Nope, the "authority" comes from the fact that this is the normal approach
used by audiophiles, and thus is the most widespread. It is the technique
you are attempting to prove is invalid.


WE are attempting to prove nothing. The test Tom uses is recognized by all
experts in the field of human hearing perception as an appropriate and
reliable test for sonic differences--ANY sonic differences--and is used
every day by said experts to do just that both in the audio industry and in
academia. It is you who are trying to prove that it is somehow uniquely
inadequate for the specific task of comparing high-end components.

And that if a subject does not come to identical conclusions under
bias-controlled conditions, that means that such controls mask 'real'
sonic differences, instead of the more logical conclusion that the
conclusions were not sound-based in the first place.


Nope, all the control test does is show whether, under identical
conditions, blinding decreases or negates differences rated under sighted
conditions. If it does, you are correct. If it doesn't (but subsequent
quick-switch a-b testing does), then the problem lies in quick-switching
and comparing, versus evaluating. That's all. I simply start where most
audiophiles are, and move to where you are...but controlling blindness and
test design factors as two separate variables that must be isolated to
really understand what is happening.


To be precise, you are controlling test design, as you call it, in order to
determine the difference if any between sighted and blind tests. But I agree
with the conclusions you would draw from the results.

So the experiment was proposed with an eye toward a biased outcome. When
subjects produce different or statistically non-uniform results (which
would be likely when the units actually had identical sound), then Harry
would call the controlled tests "invalid" instead of concluding that the
open tests were not based on sound but on some other mechanism.


You have to prove and validate the test first. Otherwise, your example
above is based on faith, not on science.


As of right now, it is your theory that is based on faith, not science,
because you haven't done a speck of science to back it up. (And because it
runs counter to a whole lot of scientific findings, but we'll let that
pass.)

And I agree that Harry should be the first subject in any experiment
because he already has equipment the sound of which he "knows". It seems
extremely unlikely that simply putting a blanket over the I/O terminals
would stop him from "hearing" his own equipment no matter what the length
of the audition.


The test is not based on one person, since it cannot be statistically
validated except over a sample of two dozen or so audiophiles.


Actually, it could, if that subject had sufficient patience to conduct
multiple blind trials. But then somebody would complain about listener
fatigue!

And before
that can happen, we need to agree on what is to be tested, what evaluative
factors are to be included, and on a written set of protocols to be used.


For your test, yes, we would have to agree on those things. But given that
it is YOUR test, it is incumbent on you to come up with--and justify--a set
of evaluative factors. As I have explained elsewhere, this would be an
extremely difficult undertaking even for an expert in psychoacoustics--which
you ain't. (I'm not sure any regular participant in rahe would be up to the
task, frankly.)

And given that I, for one, believe that such an exercise is neither possible
nor necessary--and would make the test LESS sensitive by imposing a
listening protocol on the subject--I don't see the point. I've proposed an
alternative approach that--except for the time factor, which will be a
problem no matter what test you use--is thoroughly practicable and meets
every condition you've posed.

bob


  #294   Bob Marcus

Harry Lavo wrote:

"Bob Marcus" wrote in message ...

Harry Lavo wrote:

"Bob Marcus" wrote in message
news:KmVfc.4829$aM4.16670@attbi_s53...

Harry Lavo wrote:

Why don't you contribute constructively rather than destructively? Why
don't you point out exactly how "this is not one of them" and how this
"does not have much to do with good test design", and then propose
alternative ways to test my theory. Since I posited the test we have
heard nothing but negatives from you.


I've criticized your "test" at length before, but here are the highlights:

1) You have no coherent, testable hypothesis.


Sure I do. It is that blinding per se, when done on a relaxed,
longer-term, evaluative basis, is not likely to change the results of
sighted listening done under the same conditions. But that the switch to
blind a-b testing, or a-b-x testing, will tend the results toward null
because of ear-brain confusion. The control test is set up *exactly* to
separate the two things.


Well, no, it's far too complex to do this job. If you want to compare two
tests, with only sightedness as the variable, then you certainly don't need
THREE tests. A bigger problem is that there is no way statistically to
compare the multiplicity of results you would get using the evaluation
approach you propose. That's the virtue of the preference test I
proposed--there are only two possible answers. Whereas, if you ask
audiophiles to "evaluate" components based on, say, ten either-or criteria
(a la Oohashi, who I believe is your model here), each subject has 1,024
possible answers. How do you tell whether his sighted answers match his
blind answers? There's no meaningful statistical standard, nor is there any
way of determining--without a huge amount of research--whether the criteria
are themselves independent, which would be another requirement.


Sorry, doesn't fly. If the comparative test itself is the problem, blind vs
not blind will show no difference. Since there are two variables, there
have to be two tests, controlling one variable in each matched pair.


If you get the same results in sighted and blind evaluative tests, then you
know that it is the comparative nature of ABX tests that causes them to be
insensitive. If you get different results, that tells you that the sighted
evaluations are flawed because of biases resulting from the subjects'
knowledge of what they are listening to.

But if you really want to do an ABX test, go ahead and waste your time. It
won't tell you a thing extra.

As to the statistical complexity of evaluative testing, it is not all that
complex. If the pair shows a statistical difference on one or more
characteristics sighted, then that is the standard (and presumably it will,
if the test components are well chosen).


Unlike my proposed preference test, your approach presumes not only that
subjects will hear differences, but that they will agree on what those
differences are. This greatly complicates the task of finding components
to compare.

Then if the blind test shows
comparable statistical significances on one or more of these variables, it
shows that blinding per se does not invalidate the sighted test (whether on
some or all variables would be interesting in and of itself).


Not at all. It depends on the number of variables--the more you have, the
more likely it is that you will get a statistically significant result for
at least one of them by chance alone. That's why the statistics is
complex--if, that is, all of the variables are known to be independent. If
it is not known that all of the variables are independent, then the
statistics is well nigh impossible.
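
Bob's point about chance hits can be quantified; a minimal Python sketch,
with alpha and k chosen purely for illustration (the Bonferroni correction
shown is one standard remedy, not something proposed in this thread):

alpha, k = 0.05, 10
p_any_false_hit = 1 - (1 - alpha) ** k  # chance of at least one spurious "hit"
print(round(p_any_false_hit, 3))        # ~0.401 across ten independent criteria
print(alpha / k)                        # 0.005 per-criterion Bonferroni threshold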

Then, whether or not the comparative, iterative test (blind) supported the
evaluative test (blind) would answer the question of test technique.


2) You ask your subjects to do the impossible--namely, to conduct two
independent subjective evaluations of the same equipment. Can't be done.
There's no way the first can't affect the second.


Absolutely not, one evaluative sighted test and one evaluative blind test
per subject...that's why several dozen subjects are required.


Can't be done. Subjects will recall their sighted evaluations when they do
their blind ones, so instead of the latter being an independent evaluation,
all they'll be doing is trying to match their previous evaluations to the
two components they are listening to now.


What's wrong with that? It is a necessary part of the test. If, after a few
weeks' time and this time blind, subjects can still identify the components
under test by accurately recording the same subjective evaluation (as
measured by statistical significance), then the blinding has not nulled the
sighted evaluation.


But now you're comparing/identifying, rather than evaluating, according to
your own definitions. If that's what you want to do, fine, but just do it.
Do a sighted evaluation, let people fill out a scorecard, then let them
consult that scorecard in the blind evaluation and determine which amp
matches which set of characteristics.

A preference test, by the way, is just a single-variable version of this
latter approach. And there is no theoretical reason why you need more than
one variable.

That is *exactly* what this stage of the testing is designed to determine.
There is no prejudgement involved...simply the results (whatever they are)
of the first-stage sighted test as a benchmark.

The other advantage of my proposed preference test is that it leaves the
subject free to listen however he wants, just as your theory ought to
demand. Whereas you want to impose an artificial "scorecard evaluation,"
which may be nothing like that subject's actual practice.


Agreed in principle, although I think most audiophiles at least keep a
scorecard in their head ("bass more defined, dynamic", "broader
soundstage", etc.). I would make it explicit here simply to allow
statistical analysis.


As I point out above, there is no need for such complex statistical
analysis. Also, making it explicit requires you to impose an analytical
framework on the subjects, rather than letting them decide what to listen
for and what is important to them. If you want to conduct a blind test
that's as close to what audiophiles do every day as possible, my preference
test has your highly prescriptive and overly complex scorecard evaluation
beat hands down.

The subject can still spend most of his time in completely subjective
listening, and only do the "rating" at the end. Of course, lots of care
must be put into the rating factors to make sure that all here agree
nothing significant has been left out and that there is no undue
redundancy.


Actually, years of research will be required to determine that there is no
redundancy. Proving that two variables are independent is fairly
straightforward. Proving that ten are is a life's undertaking.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X
test.


As I said above, if your goal is to compare sighted to blind evaluative
approaches, this step is unnecessary.


Absolutely not. This is another main objective of the test...dividing the
"blind" effect from the "comparative test" effect.


In other words, despite your previous protestations, you do not accept the
necessity of blind testing. If that is the case, why should I take you
seriously?

As for the right way to test your theory (that long-term evaluative
listening is more sensitive to sonic differences than an ABX test), a
simple, double-blind preference test would serve. Wouldn't give you the
results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's the
purpose of this whole "control" test...to find out if it is valid and gives
the same results...with the effects of "blinding" separated from the change
in test technique.


I think you'll see that my longer proposal does exactly what you ask--it
compares sighted results to blind results using exactly the same listening
method, to see if they give the same results. And, unlike you, I have
defined statistically what "same" means.


I agree your proposal is similar, but also potentially misleading, since it
relies on lots of dissimilar comparisons of dissimilar equipment that may /
may not actually have differences (a null comparison of units that show no
difference sighted does not mean much).


So all you have to do is find two components that audiophiles are willing to
express a preference between. Given all the subjectivist stuff we read here
and elsewhere, that can't be too hard, can it?

Moreover, doing away with evaluative ratings is wrong IMO, because this is
what *led* audiophiles to their choices, and it is important to understand
which of these evaluative factors (if any) make the transition from
sighted to blind.


But now you've created a hypothesis that's too complex to test. It's one
thing to test whether perceptions change from sighted to blind, holding all
else equal (which my preference test does). But you're also testing a
hypothesis about how audiophiles evaluate components. You may well be right,
in some general way. But your test requires you to be right in a very
specific way--that you can list a set of attributes that covers what
audiophiles actually listen for. You have no real basis (other than anecdote
and conjecture) for constructing that list.

The only reason I can see for insisting on such an impossibly complex test
is that you want to ensure that the test will never be performed, so that
you can continue forever insisting against all evidence that we can't know
for sure that ABX works because we haven't done YOUR test. And that is what
I think you are doing.

bob


  #296   Panzzi

Walter Bushell wrote:

Once you begin to understand that your mind plays tricks on you, you
move to another level of self-understanding. If I ever have to face a
trial while falsely accused, I would hope for a jury that knows this,
particularly if the evidence against me is eyewitness testimony.

We know that magicians can appear to do impossible things; we don't
believe they actually can bend the laws of reality. Why do people
believe their ears when presented with the same paradox?

Anyway, can we take this discussion to apply in spades to power cords?
And whether we should expect a $10,000 power cord to improve a CD
player much more than a $1,000 one, for example.


You are listening to music. If anything, and I mean anything, can enhance
your listening pleasure, that's all that counts! It doesn't matter if your
mind is playing tricks on you, or something like that.

I mean, why is it so hard to understand? We don't need scientific evidence
to enjoy music; we need our own judgement, our own instincts! We believe
our ears and our brain, because that is what we are actually hearing!

Panzzi

  #297   Harry Lavo

"Bob Marcus" wrote in message
news:eC%gc.163161$gA5.1923725@attbi_s03...
Harry Lavo wrote:

"Bob Marcus" wrote in message
...
Harry Lavo wrote:

"Bob Marcus" wrote in message
news:KmVfc.4829$aM4.16670@attbi_s53...
Harry Lavo wrote:

Why don't you contribute constructively rather than

destructively.
Why
don't you point out exactly how "this is not one of them" and how

this
"does
not have much to do with good test design" and then propose
althernative
ways to test my theory. Since I posited the test we have heard

nothing
but
negatives from you.

I've criticized your "test" at length before, but here are the
highlights:

1) You have no coherent, testable hypothesis.


Sure I do. It is that blinding per se, when done on an relaxed,
longer-term, evaluative basis, is not likely to change the results of
sighted listening done under the same conditions. But that the switch

to
blind a-b testing, or a-b-x testing will tend the results toward null
because of ear-brain confusion. The control test is set up *exactly*

to
separate the two things.

Well, no, it's far too complex to do this job. If you want to compare two
tests, with only sightedness as the variable, then you certainly don't need
THREE tests. A bigger problem is that there is no way statistically to
compare the multiplicity of results you would get using the evaluation
approach you propose. That's the virtue of the preference test I
proposed--there are only two possible answers. Whereas, if you ask
audiophiles to "evaluate" components based on, say, ten either-or criteria
(a la Oohashi, who I believe is your model here), each subject has 1,024
possible answers. How do you tell whether his sighted answers match his
blind answers? There's no meaningful statistical standard, nor is there any
way of determining--without a huge amount of research--whether the criteria
are themselves independent, which would be another requirement.


Sorry, doesn't fly. If the comparative test itself is the problem, blind vs
not blind will show no difference. Since there are two variables, there
have to be two tests, controlling one variable in each matched pair.


If you get the same results in sighted and blind evaluative tests, then you
know that it is the comparative nature of ABX tests that causes them to be
insensitive. If you get different results, that tells you that the sighted
evaluations are flawed because of biases resulting from the subjects'
knowledge of what they are listening to.

But if you really want to do an ABX test, go ahead and waste your time. It
won't tell you a thing extra.


It will confirm your first point above. If we didn't do it, there would be
another three years of discussion and defense here from the objectivists,
just as you fear from the subjectivists. So for the test to be neutral, it
has to conclusively close both ends of the loop.

As to the statistical complexity of evaluative testing, it is not all that
complex. If the pair shows a statistical difference on one or more
characteristics sighted, then that is the standard (and presumably it will,
if the test components are well chosen).


Unlike my proposed preference test, your approach presumes not only that
subjects will hear differences, but that they will agree on what those
differences are. This greatly complicates the task of finding components
to compare.


I not only presume, but consider it essential, that the subjectivist audio
community *believe* that the units under test sound different, for the
test to be valid. It is also essential that the large majority of the
objectivist camp believe the units under test do not / can not / will not
sound different. But I still believe it is worthwhile doing. There does
seem to be some broad anecdotal consensus about the sound of certain items
within the subjective comments of the audiophile community, and I would
use those as a starting point. And then ask the objectivist community for
their opinions / comments on the comparison, to make sure they see the two
units as supposedly equal in sound / no different.

Then if the blind test shows comparable statistical significances on one
or more of these variables, it shows that blinding per se does not
invalidate the sighted test (whether on some or all variables would be
interesting in and of itself).


Not at all. It depends on the number of variables--the more you have, the
more likely it is that you will get a statistically significant result for
at least one of them by chance alone. That's why the statistics is
complex--if, that is, all of the variables are known to be independent. If
it is not known that all of the variables are independent, then the
statistics is well nigh impossible.


But it is not at all likely that you would get that same rating
significance, by chance, in the blind follow-up test.

Then, whether or not the comparative, iterative test (blind) supported the
evaluative test (blind) would answer the question of test technique.


2) You ask your subjects to do the impossible--namely, to conduct two
independent subjective evaluations of the same equipment. Can't be done.
There's no way the first can't affect the second.


Absolutely not, one evaluative sighted test and one evaluative blind test
per subject...that's why several dozen subjects are required.


Can't be done. Subjects will recall their sighted evaluations when they do
their blind ones, so instead of the latter being an independent evaluation,
all they'll be doing is trying to match their previous evaluations to the
two components they are listening to now.


What's wrong with that? It is a necessary part of the test. If, after a few
weeks' time and this time blind, subjects can still identify the components
under test by accurately recording the same subjective evaluation (as
measured by statistical significance), then the blinding has not nulled the
sighted evaluation.


But now you're comparing/identifying, rather than evaluating, according to
your own definitions. If that's what you want to do, fine, but just do it.
Do a sighted evaluation, let people fill out a scorecard, then let them
consult that scorecard in the blind evaluation and determine which amp
matches which set of characteristics.


No, let them evaluate the two units under test in depth, just as they did
sighted. Statistical analysis, not choice, will determine if the two
results are the same or different.

I still don't think you grasp that the difference in the techniques is not
having to decide to choose (the left-brain approach), but rather letting
the "choice" grow explicitly out of the evaluative experience (the
right-brain approach).

A preference test, by the way, is just a single-variable version of this
latter approach. And there is no theoretical reason why you need more than
one variable.


That is *exactly* what this stage of the testing is designed to determine.
There is no prejudgement involved...simply the results (whatever they are)
of the first-stage sighted test as a benchmark.


The other advantage of my proposed preference test is that it leaves the
subject free to listen however he wants, just as your theory ought to
demand. Whereas you want to impose an artificial "scorecard evaluation,"
which may be nothing like that subject's actual practice.


Agreed in principle, although I think most audiophiles at least keep a
scorecard in their head ("bass more defined, dynamic", "broader
soundstage", etc.). I would make it explicit here simply to allow
statistical analysis.

As I point out above, there is no need for such complex statistical
analysis. Also, making it explicit requires you to impose an analytical
framework on the subjects, rather than letting them decide what to listen
for and what is important to them. If you want to conduct a blind test
that's as close to what audiophiles do every day as possible, my preference
test has your highly prescriptive and overly complex scorecard evaluation
beat hands down.


Nope, they can listen and decide totally subjectively. All they then have
to do is translate their "impressions" into ratings on the scale. They are
simply recording the conclusions they came to, not making a forced choice.
They are evaluating the two units separately (monadically), just as they
did sighted, unless they specifically ask for a switch to check out a
specific characteristic.

The subject can still spend most of his time in completely subjective
listening, and only do the "rating" at the end. Of course, lots of care
must be put into the rating factors to make sure that all here agree
nothing significant has been left out and that there is no undue
redundancy.


Actually, years of research will be required to determine that there is no
redundancy. Proving that two variables are independent is fairly
straightforward. Proving that ten are is a life's undertaking.


Methinks you protest too much. It requires thought and iterative feedback from the group, but it is not rocket science. The first thing that must be done, however, is to decide on the units to test.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X test.

As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary.


Absolutely not. This is another main objective of the test...dividing the "blind" effect from the "comparative test" effect.


In other words, despite your previous protestations, you do not accept the
necessity of blind testing. If that is the case, why should I take you
seriously?


What on earth are you talking about? This is a non sequitur.
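
For concreteness, the pass/fail arithmetic behind such a 16-trial A-B-X run is plain binomial statistics. A minimal sketch in Python (the 16-trial count comes from the proposal above; the 0.05 criterion is an assumed convention, not something either side has stipulated):

from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    # One-sided binomial p-value: the chance of scoring at least
    # `correct` out of `trials` by pure guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# For a 16-trial run, find the smallest score that beats guessing.
for score in range(8, 17):
    print(f"{score}/16 correct: p = {abx_p_value(score, 16):.4f}")
# 12/16 is the first score with p < 0.05 (p ~ 0.038), so 12 or better
# is conventionally read as "heard a difference".

In other words, a subject must get 12 or more of the 16 trials right before his run counts as anything but guessing.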


As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means.


I agree your proposal is similar, but also potentially misleading, since it relies on lots of comparisons of dissimilar equipment that may or may not actually have differences (a null comparison of units that show no difference sighted does not mean much).


So all you have to do is find two components that audiophiles are willing to express a preference between. Given all the subjectivist stuff we read here and elsewhere, that can't be too hard, can it?


That's only half the equation. The other half is what the objectivists think of those same two components. I nominate a Redbook test between the Panasonic S55 or S85 or equivalent later model and the least expensive Sony DVD/SACD player. Both have MSRPs in the $100-200 range, and both feature hi-rez as well as DVD and Redbook reproduction. So they should be roughly equivalent, and if I read the objectivist sentiment here correctly, the Redbook technology is a ten-year-old "settled issue" and "most all players sound alike". Moreover, this is a very practical choice for many people, at least for a second system if not the first.

Moreover, doing away with evaluative ratings is wrong IMO, because this is what *leads* audiophiles to their choices, and it is important to understand which of these evaluative factors (if any) make the transition from sighted to blind.


But now you've created a hypothesis that's too complex to test. It's one thing to test whether perceptions change from sighted to blind, holding all else equal (which my preference test does). But you're also testing a hypothesis about how audiophiles evaluate components. You may well be right, in some general way. But your test requires you to be right in a very specific way--that you can list a set of attributes that covers what audiophiles actually listen for. You have no real basis (other than anecdote and conjecture) for constructing that list.


This group can provide the iterative feedback necessary to make the list a good one. This group is uniquely suited to undertake this test and move it along.

The only reason I can see for insisting on such an impossibly complex test is that you want to ensure that the test will never be performed, so that you can continue forever insisting against all evidence that we can't know for sure that ABX works because we haven't done YOUR test. And that is what I think you are doing.


Thanks for your vote of confidence in my motives. My purpose is to approach, as closely as possible within a tightly controlled scientific test, the conditions many audiophiles use at home in sighted testing on one end, and Tom's DBT approach on the other, controlling the variables in between. And, not incidentally, to learn all it is possible to learn along the way about what really happens in sighted versus blind testing as it affects perceptions. And the same for forced comparison versus evaluative comparisons.

  #298   Report Post  
Michael Scarpitti
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Stewart Pinkerton wrote in message news:tvWgc.173042$JO3.101084@attbi_s04...

That's not the case. You were not there. I repeat and insist that the
Sony TA-N88B amp is so much clearer than other amps that anyone who
spends more than 3 seconds listening will notice it.


Absolute nonsense! I have set up several 'bypass' tests to compare a
power amp with a straight wire link. In each case, the amplifier
contributed nothing to the sound, hence could be considered to be
sonically transparent. *If* the Sony sounds different, it is because
it *adds* something to the sound, not because it is superior.

BTW, that Sony is *known* to have some quite nasty HF artifacts -
perhaps that is what you are confusing with 'clarity'?


The nonsense is coming from you. Have you or have you not heard the
TA-N88B? If you had, we would not be having this conversation. The amp
is staggeringly clearer than any conventional amp. The problem with it
was stability.

The other amps you have listened to -- ALL of them -- have nothing in
common with this digital amp. It reveals levels of detail you never
could hear with a conventional amp.
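
For reference, Pinkerton's "bypass" test has a straightforward measurement analogue: capture the signal fed to the amp and the level-matched signal coming out, align them, subtract, and see how deep the null is. A minimal sketch, assuming both captures already exist as same-rate WAV files (the file names and the align-by-cross-correlation step are illustrative, not anyone's published procedure):

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_x, x = wavfile.read("input.wav")    # signal fed to the amp
rate_y, y = wavfile.read("output.wav")   # amp output, attenuated back down
assert rate_x == rate_y

x = x.astype(np.float64)
y = y.astype(np.float64)
n = min(len(x), len(y))
x, y = x[:n], y[:n]

# Align by cross-correlation (a circular shift is tolerable for a
# steady test signal), then scale for the best least-squares gain
# match, so pure delay and gain don't get counted as "coloration".
lag = int(np.argmax(correlate(y, x, mode="full"))) - (n - 1)
y = np.roll(y, -lag)
g = np.dot(x, y) / np.dot(y, y)
residual = x - g * y

null_db = 10 * np.log10(np.sum(residual**2) / np.sum(x**2))
print(f"residual: {null_db:.1f} dB relative to the input signal")

A sonically transparent amp leaves a residual far below audibility; anything the Sony "adds", in Pinkerton's sense, would show up here.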

  #299   Report Post  
Ban
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Harry Lavo wrote:

Nope, the "authority" comes from the fact that this is the normal
approach used by audiophiles, and thus is the most widespread. It is
the technique you are attempting to prove is invalid.


Harry,
there is a *big* difference between evaluating a loudspeaker or whatever component and just finding out if there are any differences between two or more setups. Our ears can find the differences much more easily, because we do not need to qualify a source as "better" or "worse". It is just like picking up the phone. You recognize the caller immediately from his voice; you do not need 5 minutes of conversation. Much more important is the instantaneous switching without gaps, so the short-term memory is not lost.

When audiophiles need a long evaluation time, it is probably because the brain has to eliminate disturbing sounds which are masking subtle sonic information. It is like living near a busy street: after a few days or even weeks, the noise of the street is no longer heard; the brain has learned to eliminate it.

But if there are differences between one sound sample and the next, they jump into the ear immediately, so to speak.

I think everyone should find this out for himself. Maybe there is a difference between persons, but I doubt it, because all the results of scientific work seem to support my personal experience.

--
ciao Ban
Bordighera, Italy
  #300   Report Post  
Bob Marcus
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Harry Lavo wrote:

"Bob Marcus" wrote in message
news:eC%gc.163161$gA5.1923725@attbi_s03...


snip

If you get the same results in sighted and blind evaluative tests, then you know that it is the comparative nature of ABX tests that causes them to be insensitive. If you get different results, that tells you that the sighted evaluations are flawed because of biases resulting from the subjects' knowledge of what they are listening to.

But if you really want to do an ABX test, go ahead and waste your time. It won't tell you a thing extra.


It will confirm your first point above. If we didn't do it, there would be
another three years of discussion and defense here from the objectivists,
just as you fear from the subjectivists.


Not if you compare components that we agree in advance are nominally
competent. But wouldn't "three years of discussion and defense here from the
objectivists" about your evidence be an improvement on the current
situation, which is that YOU HAVE NO EVIDENCE?

So for the test to be neutral, it
has to conclusively close both ends of the loop.

As to statistical complexity of evaluative testing, it is not all that complex. If the pair shows a statistical difference on one or more characteristics sighted, then that is the standard (and presumably it will if the test components are well chosen).


Unlike my proposed preference test, your approach presumes not only that subjects will hear differences, but that they will agree on what those differences are. This greatly complicates the task of finding components to compare.


I not only presume, but it is essential, that the subjectivist audio community *believe* that the units under test sound different for the test to be valid.


No, it's only essential that you find 20 people who hear a difference
sighted. That's all. I don't give two hoots about what the "subjective
audiophile community" thinks.

It is also essential that the large majority of the objectivist camp believe the units under test do not / cannot / will not sound different.


Equally easy to do. Though easier for amps than CD players.

But I still believe it is worthwhile doing. There does seem to be some broad anecdotal consensus about the sound of certain items within the subjective comments of the audiophile community,


Again, no. We can match up each individual's impressions sighted to that
same individual's impressions blind. We don't need the various subjects to
agree among themselves in order to do a test of whether impressions change
from sighted to blind.

and I would use those as a starting point. And then ask the objectivist community for their opinions / comments on the comparison, to make sure they see the two units as supposedly equal in sound / no different.


We don't have opinions on this matter. We have measurements.

Then if the blind test shows comparable statistical significances on one or more of these variables, it shows that blinding per se does not invalidate the sighted test (whether on some or all variables would be interesting in and of itself).


Not at all. It depends on the number of variables--the more you have, the more likely it is that you will get a statistically significant result for at least one of them by chance alone. That's why the statistics is complex--if, that is, all of the variables are known to be independent. If it is not known that all of the variables are independent, then the statistics is well nigh impossible.


But it is not at all likely that you would get that same rating significance blind in the follow-up test.


This is a non sequitur.
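
Bob's multiple-comparisons point is easy to put numbers on. A minimal sketch, assuming ten independent rating variables each tested at the conventional 0.05 level (both figures are illustrative assumptions):

# Chance of at least one spurious "significant" variable among m
# independent tests, each run at significance level alpha.
alpha, m = 0.05, 10
p_any = 1 - (1 - alpha) ** m
print(f"P(at least one false positive among {m} variables) = {p_any:.2f}")  # ~0.40

# A Bonferroni correction holds the family-wise error rate at alpha
# by testing each variable at alpha / m instead.
print(f"per-variable criterion after Bonferroni: {alpha / m:.3f}")

So with ten scorecard attributes, a "statistically significant" result on a single attribute means very little unless the criterion is tightened accordingly.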

Then, whether or not the comparative, iterative test (blind) supported the evaluative test (blind) would answer the question of test technique.


2) You ask your subjects to do the impossible--namely, to conduct two independent subjective evaluations of the same equipment. Can't be done. There's no way the first can't affect the second.


Absolutely not, one evaluative sighted test and one evaluative blind test per subject...that's why several dozen subjects are required.

Can't be done. Subjects will recall their sighted evaluations when they do their blind ones, so instead of the latter being an independent evaluation, all they'll be doing is trying to match their previous evaluations to the two components they are listening to now.


What's wrong with that? It is a necessary part of the test. If, after a few weeks' time and this time blind, subjects can still identify the components under test by accurately recording the same subjective evaluation (as measured by statistical significance), then the blinding has not nulled the sighted evaluation.


But now you're comparing/identifying, rather than evaluating, according to your own definitions. If that's what you want to do, fine, but just do it. Do a sighted evaluation, let people fill out a scorecard, then let them consult that scorecard in the blind evaluation and determine which amp matches which set of characteristics.


No, let them evaluate the two units under test in depth, just as they did sighted. Statistical analysis, not choice, will determine if the two results are the same or different.

I still don't think you grasp that the point of the difference in techniques is not to have to decide to choose (the left-brain approach), but rather to let the "choice" grow explicitly out of the evaluative experience (the right-brain approach).


It's not a fact, it's pseudoscientific speculation on your part. Besides,
the SECOND time they do the test (the blind trial), it will become a
comparative experience, because they'll have their initial evaluations in
mind as they listen, and will be comparing what they are hearing now to what
they thought then. (This is assuming your comparative-evaluative conjecture
isn't entirely fanciful.)

A preference test, by the way, is just a single-variable version of this latter approach. And there is no theoretical reason why you need more than one variable.

That is *exactly* what this stage of the testing is designed to determine. There is no prejudgment involved...simply the results (whatever they are) of the first-stage sighted test as a benchmark.

The other advantage of my proposed preference test is that it leaves the subject free to listen however he wants, just as your theory ought to demand. Whereas you want to impose an artificial "scorecard evaluation," which may be nothing like that subject's actual practice.


Agreed in principle, although I think most audiophiles at least keep a scorecard in their head ("bass more defined, dynamic", "broader soundstage", etc.). I would make it explicit here simply to allow statistical analysis.

As I point out above, there is no need for such complex statistical analysis. Also, making it explicit requires you to impose an analytical framework on the subjects, rather than letting them decide what to listen for and what is important to them. If you want to conduct a blind test that's as close to what audiophiles do every day as possible, my preference test has your highly prescriptive and overly complex scorecard evaluation beat hands down.


Nope, they can listen and decide totally subjectively. All they then have to do is translate their "impressions" into ratings on the scale. They are simply recording the conclusions they came to, not making a forced choice. They are evaluating the two units separately (monadically) just as they did sighted, unless they specifically ask for a switch to check out a specific characteristic.


I'm sorry. I had envisioned a rather simpler approach, in which you asked
them things like, which amp is brighter, which amp has clearer highs, that
sort of thing. What you seem to be saying here is that you will ask them to
rate each amp's brightness, say, on a scale of one to ten. In that case,
there would be billions of possible answers, and statistical comparisons
would be quite impossible.
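
Rating scales are not in themselves statistically intractable, though; the dispute is really over what counts as "the same". A minimal sketch of one way a subject's blind ratings could be checked against his sighted ones, assuming ten attributes each rated 1-10 (the attribute count, the scale, and the rank-correlation approach are illustrative assumptions, not part of either proposal):

from scipy.stats import spearmanr

# One subject's ratings of the same unit on ten attributes,
# first sighted, then blind a few weeks later.
sighted = [7, 8, 6, 9, 5, 7, 8, 6, 7, 9]
blind = [6, 8, 5, 9, 5, 6, 7, 6, 8, 9]

rho, p = spearmanr(sighted, blind)
print(f"rank correlation = {rho:.2f}, p = {p:.3f}")
# Aggregating such per-subject correlations across a panel then
# reduces to an ordinary one-sample test against zero correlation.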


The subject can still spend most of his time in completely subjective listening, and only do the "rating" at the end. Of course, lots of care must be put into the rating factors to make sure that all here agree nothing significant has been left out and that there is no undue redundancy.


Actually, years of research will be required to determine that there is no redundancy. Proving that two variables are independent is fairly straightforward. Proving that ten are is a life's undertaking.


Methinks you protest too much. It requires thought and iterative feedback from the group, but it is not rocket science. The first thing that must be done, however, is to decide on the units to test.


You are being naive. We can speculate all we want about what we think
audiophiles listen for, but if you want to do a serious test, then you need
some basis for KNOWING what audiophiles listen for. You haven't got one, and
it would be a lifetime's work for someone with a far better background in
the field than you to get one.

Again, audiophiles determine preferences every day. We don't need to know
the basis on which they do it to test the robustness of those preferences.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X test.

As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary.


Absolutely not. This is another main objective of the test...dividing the "blind" effect from the "comparative test" effect.


In other words, despite your previous protestations, you do not accept the necessity of blind testing. If that is the case, why should I take you seriously?


What on earth are you talking about? This is a non sequitur.


I thought we had agreed that there was no blind effect.


As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.


No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means.

I agree your proposal is similar, but also potentially misleading, since it relies on lots of comparisons of dissimilar equipment that may or may not actually have differences (a null comparison of units that show no difference sighted does not mean much).


So all you have to do is find two components that audiophiles are willing to express a preference between. Given all the subjectivist stuff we read here and elsewhere, that can't be too hard, can it?


That's only half the equation. The other half is what the objectivists think of those same two components.


I told you what the objectivists think of those same components. If they
measure as nominally competent, then we confidently predict that their
differences are inaudible.

I nominate a Redbook test between the Panasonic S55 or S85 or equivalent later model and the least expensive Sony DVD/SACD player. Both have MSRPs in the $100-200 range, and both feature hi-rez as well as DVD and Redbook reproduction.


By the time you get around to testing anything, Harry, these technologies
will be dead.

So they should be roughly equivalent, and if I read the objectivist sentiment here correctly, the Redbook technology is a ten-year-old "settled issue" and "most all players sound alike".


That's not the same as saying that any two specific players are both
nominally competent.

Moreover, this is a very practical choice for many people, at least for a second system if not the first.

Moreover, doing away with evaluative ratings is wrong IMO, because this is what *leads* audiophiles to their choices, and it is important to understand which of these evaluative factors (if any) make the transition from sighted to blind.


But now you've created a hypothesis that's too complex to test. It's one thing to test whether perceptions change from sighted to blind, holding all else equal (which my preference test does). But you're also testing a hypothesis about how audiophiles evaluate components. You may well be right, in some general way. But your test requires you to be right in a very specific way--that you can list a set of attributes that covers what audiophiles actually listen for. You have no real basis (other than anecdote and conjecture) for constructing that list.


This group can provide the iterative feedback necessary to make the list a good one. This group is uniquely suited to undertake this test and move it along.


Again, you are being naive, or you simply don't understand what you are
proposing. You don't design scientific experiments by asking people what
they think oughta work.

The only reason I can see for insisting on such an impossibly complex test is that you want to ensure that the test will never be performed, so that you can continue forever insisting against all evidence that we can't know for sure that ABX works because we haven't done YOUR test. And that is what I think you are doing.


Thanks for your vote of confidence in my motives. My purpose is to approach, as closely as possible within a tightly controlled scientific test, the conditions many audiophiles use at home in sighted testing on one end, and Tom's DBT approach on the other, controlling the variables in between. And, not incidentally, to learn all it is possible to learn along the way about what really happens in sighted versus blind testing as it affects perceptions. And the same for forced comparison versus evaluative comparisons.


Then do it, if you think it's worth doing.

bob



  #301   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On 20 Apr 2004 02:12:40 GMT, (Michael
Scarpitti) wrote:

(Nousaine) wrote in message ...
(Michael Scarpitti)


Well, anytime you tell someone you have a new amp, and play it a little louder, your friend will say "wow".

That's not the case. You were not there. I repeat and insist that the
Sony TA-N88B amp is so much clearer than other amps that anyone who
spends more than 3 seconds listening will notice it.


Harry Lavo says that it takes long-term listening to make this kind of
evaluation and any 3-second listening test is far too short.


Not with this amp. That is the point. It was a dramatically clearer
amp than anything I have ever heard.


Since there exist *many* amps which are sonically transparent, this is
clearly an unlikely claim.

It was a digital amp.


Actually, it's not a digital amp at all. It's a 'switch mode' PWM
analogue amp, otherwise known as Class D, very popular in pro-audio
because you can make a powerful amp that doesn't weigh much. However,
such amps frequently suffer from HF artifacts which can noticeably
colour the sound. Some listeners tend to confuse this with clarity.
It's a plain fact that the TAN88, a design from the '70s, simply did
not have access to the ultra-fast switching devices now used for such
amplifiers, and which have enormously improved their performance.
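
The "HF artifacts" mentioned here fall directly out of how a Class D stage works: the audio is carried as the duty cycle of a high-frequency square wave, and whatever the output filter fails to remove of that carrier survives as ultrasonic residue. A minimal simulation of the idea (the 100 kHz carrier, filter order, and cutoffs are illustrative numbers, not the TA-N88B's actual design):

import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000                                   # 1 MHz simulation rate
t = np.arange(0, 0.02, 1 / fs)
audio = 0.8 * np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone

# Naive PWM: slice the audio against a 100 kHz triangle carrier.
carrier = 2 * np.abs((t * 100_000) % 1.0 - 0.5) * 2 - 1
pwm = np.where(audio > carrier, 1.0, -1.0)

# Reconstruction filter: 4th-order Butterworth low-pass at 30 kHz.
b, a = butter(4, 30_000 / (fs / 2))
out = lfilter(b, a, pwm)

# Whatever carrier energy survives above the audio band is the
# "HF artifact" -- measure it with a 20 kHz high-pass.
bh, ah = butter(4, 20_000 / (fs / 2), btype="high")
residue = lfilter(bh, ah, out)
ratio_db = 10 * np.log10(np.mean(residue**2) / np.mean(out**2))
print(f"ultrasonic residue: {ratio_db:.1f} dB")

Faster switching devices permit a higher carrier frequency, which moves the residue further from the audio band and lets the filter do more, which is the improvement described above.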

It's interesting that you should be so vocal - and so intransigent -
about this early, and quite flawed, design, when the vastly superior
modern Class D pro-audio amps made by Mackie et al tend to be sneered
at by so-called 'high enders'.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #302   Report Post  
Nousaine
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

"Bob Marcus" wrote:


Harry Lavo wrote:

"Bob Marcus" wrote in message
...
Harry Lavo wrote:

"Bob Marcus" wrote in message
news:KmVfc.4829$aM4.16670@attbi_s53...
Harry Lavo wrote:

Why don't you contribute constructively rather than destructively? Why don't you point out exactly how "this is not one of them" and how this "does not have much to do with good test design", and then propose alternative ways to test my theory. Since I posited the test we have heard nothing but negatives from you.

I've criticized your "test" at length before, but here are the highlights:

1) You have no coherent, testable hypothesis.

Sure I do. It is that blinding per se, when done on a relaxed, longer-term, evaluative basis, is not likely to change the results of sighted listening done under the same conditions. But that the switch to blind a-b testing, or a-b-x testing, will tend the results toward null because of ear-brain confusion. The control test is set up *exactly* to separate the two things.

Well, no, it's far too complex to do this job. If you want to compare two tests, with only sightedness as the variable, then you certainly don't need THREE tests. A bigger problem is that there is no way statistically to compare the multiplicity of results you would get using the evaluation approach you propose. That's the virtue of the preference test I proposed--there are only two possible answers. Whereas, if you ask audiophiles to "evaluate" components based on, say, ten either-or criteria (a la Oohashi, who I believe is your model here), each subject has 1,024 possible answers. How do you tell whether his sighted answers match his blind answers? There's no meaningful statistical standard, nor is there any way of determining--without a huge amount of research--whether the criteria are themselves independent, which would be another requirement.

Sorry, doesn't fly. If the comparative test itself is the problem, blind vs not blind will show no difference. Since there are two variables, there have to be two tests, controlling one variable in each matched pair.


.....large snips.....


This seems to be where the legacy of objections has led over time. Double-blind ABX testing for audio was originally developed specifically to allow an amp-sound proponent to demonstrate that fully complementary operation was sonically superior.

That proponent decided that his open sonic observations were not validated, but the technique(s) and experiments continued into the mid-70s. However, the protestations also began when certain subjective opinions were not validated by such experiments.

I've personally conducted experiments over the years which specifically addressed a number of salient points, and also never once under any conditions verified the sound of nominally competent amps and wires:

1) Switching too slow (ABX)
2) Switching too fast (cable swaps)
3) Unknown reference system (your place)
4) Test not long term (5 weeks; 16 weeks)
5) No personal control of switching (single-subject sessions)
6) Sample sizes too small (published tests average 90 trials)
7) Wrong program material (bring your own)
8) Proctor bias (double blind)

plus I ordinarily pay subjects.
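
Point 6 is worth quantifying, since trial count does most of the statistical work. A minimal power sketch, assuming a listener who genuinely hears the difference 65% of the time and the usual 0.05 criterion (both figures are illustrative assumptions):

from math import comb

def min_significant(trials: int, alpha: float = 0.05) -> int:
    # Smallest correct-count whose one-sided binomial p-value beats alpha.
    for k in range(trials + 1):
        if sum(comb(trials, i) for i in range(k, trials + 1)) / 2**trials <= alpha:
            return k
    return trials + 1

def power(trials: int, p_true: float, alpha: float = 0.05) -> float:
    # Chance a listener with true hit rate p_true reaches significance.
    k = min_significant(trials, alpha)
    return sum(comb(trials, i) * p_true**i * (1 - p_true)**(trials - i)
               for i in range(k, trials + 1))

for n in (16, 90):
    print(f"{n} trials: power at a 65% true hit rate = {power(n, 0.65):.2f}")

A 90-trial test catches such a listener most of the time; a 16-trial test misses him more often than not.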

I find the Lavo experiment interesting in that, in all the hue and cry, he still hasn't validated open listening under his rules either. Indeed, that's why the hue and cry: open listening just hasn't stood the bias-controls validation test. The "control test" argument variation has much the same content as the old "you haven't tested every amp/cable in the world" argument, and completely ignores that open listening testing simply hasn't passed the reliability acid test.

Not that it couldn't. The performance category rating idea is good and we use it with Listening Technology, but I've never seen an amp/wire sound advocate actually use it in decision making. It's simply not a tool that audiophiles, as a class, use.

And the comparative/evaluative argument just doesn't hold water. When any product is being evaluated in any way, any differences between it and any other similar product will be magnified when they are side-by-side.

It is true that I personally can evaluate an amplifier singly (does it have the right power output rating, the right protection, the right size, dual banana outputs, RCA inputs and level controls, and can it pass the straight-wire test?), but the relative differences between it and its competition will be most highlighted when they are side-by-each. This is why people use and bring carpet samples home when picking product. What would you say to a salesman who wanted to send you home with samples one at a time over the space of several days?

  #304   Report Post  
Michael Scarpitti
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

"Bob Marcus" wrote in message ...

It will confirm your first point above. If we didn't do it, there would be
another three years of discussion and defense here from the objectivists,
just as you fear from the subjectivists.


Not if you compare components that we agree in advance are nominally
competent. But wouldn't "three years of discussion and defense here from the
objectivists" about your evidence be an improvement on the current
situation, which is that YOU HAVE NO EVIDENCE?



The mistake seems to be a confusion between product evaluations and
scientific testing. There is no way anyone would claim that sighted
auditioning of audio equipment qualifies as a scientific test in the
strict sense of the term. Given that there is some variability among people (hearing losses with age, differences between the sexes), it is impossible to
be sure that any two individuals can hear exactly the same thing. So,
by showing that subject 'A' cannot hear the differences between two
amplifiers, you have indeed demonstrated very little. Experience is
also involved.
  #305   Report Post  
Nousaine
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Harry Lavo wrote:

.....large snips .....

1) The main problem I have with double-blind is its tremendous
impracticality in actual use in the home.


I'd say they are more practical than bringing home amplifiers singly and taking
weeks of long term listening to evaluate their sound.

Remember, you'll have to do this with every piece of equipment and every equipment change cycle. And if you think that every amplifier might sound different, and sound different with every speaker, assembling a new system might be practically impossible. As for me, I'd much rather rely on the already-written body of evidence on the subject.

.....more long snips....

I said (quiote) "Tom's DBT a-b and a-b-x tests" (end quote) and
specifically
state that it is because they (quote) "force the ear-brain into a
short-term
comparative mode" (unquote).* Why are we arguing?


Here's my proposal for answering the practicalities of amp/cable sound. I offer a solution to the $$$$ of high-end equipment blues.

Simply purchase a $600 QSC ABX box, and every time you feel the urge to re-mortgage the house for that $35k power amplifier, just fire up the ABX machine and force yourself into that "short-term comparative mode" where all amplifiers sound the same.


  #306   Report Post  
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

The testing done thus far shows results at a level near guessing; that is a group result. Now the only question is to exclude the possibility that any one individual is an exception to the group, regardless of the reasons one might evaluate or test. Therefore it is not one person compared to another, but one person compared to the population, to check for exceptions. Previous testing has set a threshold benchmark for individual variability; now we are looking for possible exceptions. If one is found, the benchmark can be adjusted accordingly.
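
Screening individuals against the group benchmark is itself a multiple-looks problem: examine enough listeners at the 0.05 level and roughly one in twenty will look like an "exception" by luck alone. A minimal sketch of how the per-listener criterion might be set (the 100-listener panel and 16-trial run are illustrative assumptions):

from math import comb

def p_at_least(correct: int, trials: int) -> float:
    # One-sided binomial p-value against guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

listeners, trials, alpha = 100, 16, 0.05
per_listener = alpha / listeners    # Bonferroni-corrected criterion

# Smallest score that marks a listener as a genuine exception rather
# than the expected lucky tail of a 100-person panel.
threshold = next(k for k in range(trials + 1)
                 if p_at_least(k, trials) <= per_listener)
print(f"exception threshold: {threshold}/{trials} at p <= {per_listener}")

An apparent exception would then be asked to repeat the feat before the benchmark is adjusted.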

"The mistake seems to be a confusion between product evaluations and
scientific testing. There is no way anyone would claim that sighted
auditioning of audio equipment qualifies as a scientific test in the
strict sense of the term. Given that there is some variability among people (hearing losses with age, differences between the sexes), it is impossible to
be sure that any two individuals can hear exactly the same thing. So,
by showing that subject 'A' cannot hear the differences between two
amplifiers, you have indeed demonstrated very little. Experience is
also involved."
  #308   Report Post  
Michael Scarpitti
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Stewart Pinkerton wrote in message news:MSchc.166942$gA5.1955926@attbi_s03...
On 20 Apr 2004 02:12:40 GMT, (Michael
Scarpitti) wrote:



(snippity-doo-dah)

Not with this amp. That is the point. It was a dramatically clearer
amp than anything I have ever heard.


Since there exist *many* amps which are sonically transparent, this is
clearly an unlikely claim.


How so? Did you read the part: 'anything I have ever heard'? I have
not heard every major amp, though I have heard the big Levinson stuff
(though not in the last couple of years), the big Audio Research tube
amps, and even some Krell amps.

The TA-N88B was without doubt the most amazingly clear amp I have ever
heard before or since.


It was a digital amp.


Actually, it's not a digital amp at all. It's a 'switch mode' PWM
analogue amp, otherwise known as Class D, very popular in pro-audio
because you can make a powerful amp that doesn't weigh much. However,
such amps frequently suffer from HF artifacts which can noticeably
colour the sound. Some listeners tend to confuse this with clarity.
It's a plain fact that the TAN88, a design from the '70s, simply did
not have access to the ultra-fast switching devices now used for such
amplifiers, and which have enormously improved their performance.


I'd be interested in something like that. The amp I have now is a
Denon POA-1500-2, which has a lot of punch. It was inferior sonically
to the TA-N88B, but it had the overwhelming advantage of not blowing
up.

It's interesting that you should be so vocal - and so intransigent -
about this early, and quite flawed, design, when the vastly superior
modern Class D pro-audio amps made by Mackie et al tend to be sneered
at by so-called 'high enders'.


I cannot say anything about those, but the TA-N88B was the best amp I
have ever heard.
  #309   Report Post  
Bromo
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On 4/20/04 7:04 PM, in article r0ihc.7318$GR.869715@attbi_s01, "Nousaine"
wrote:

It seems like you are claiming that if you were to listen to any other product following an evaluation of one product, then you have to "re-learn" the sound of your own gear?


Not to throw gasoline on the fire -

I have found that after doing extensive listening to other setups, when you get back to your own setup you hear it with "new ears" to a large degree (for instance, listening to electrostatics and coming home to your cones and domes). It reminds me of when you go traveling and come back to your own house - the smells, sounds, and so on are new to your senses to some degree, and it usually takes a day or two to get "used to it" again so it becomes transparent to you again.

I suppose this is what happens in this case.
  #311   Report Post  
Norman Schwartz
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

"Bromo" wrote in message
news:Lplhc.37177$yD1.107791@attbi_s54...

Depends - I have found that tube amps sound different than solid state amps. A solid state PA amp sounds different than an amp such as NAD. A lot of negative feedback seems to change the sound somewhat as well - and much of it should be measurable, though I am sure we haven't learned all there is to know about audio measurement.


..... and then as for those tube amps, we must know which manufacturer and
vintage of tube you are talking about, right? since tubes don't simply sound
like tubes and then that's the end of the story? and after that, how many
hours were on those tubes, and then of course how long you warmed them up
before listening to music.

  #312   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On Tue, 20 Apr 2004 07:54:02 GMT, (Michael
Scarpitti) wrote:

Stewart Pinkerton wrote in message news:tvWgc.173042$JO3.101084@attbi_s04...

That's not the case. You were not there. I repeat and insist that the
Sony TA-N88B amp is so much clearer than other amps that anyone who
spends more than 3 seconds listening will notice it.


Absolute nonsense! I have set up several 'bypass' tests to compare a
power amp with a straight wire link. In each case, the amplifier
contributed nothing to the sound, hence could be considered to be
sonically transparent. *If* the Sony sounds different, it is because
it *adds* something to the sound, not because it is superior.

BTW, that Sony is *known* to have some quite nasty HF artifacts -
perhaps that is what you are confusing with 'clarity'?


The nonsense is coming from you.


Please be specific. So far, I have said nothing which cannot be easily
proven, whereas you have made many wild claims, compounded by factual
inaccuracy - such as in claiming that the TAN88 is a digital
amplifier.

Have you or have you not heard the TA-N88B?


Yes, on many occasions, as one of my oldest hi-fi friends was a great
Esprit series fan, and had a pair of those amps.

If you had, we would not be having this conversation.


More factual inaccuracy.................

The amp
is staggeringly clearer than any conventional amp. The problem with it
was stability.


The amp is no 'clearer' than dozens of other sonically transparent
amps, although it does have some rather nasty HF artifacts which might
well be audible.

The other amps you have listened to -- ALL of them -- have nothing in
common with this digital amp.


Once again, it is *not* a digital amplifier, it's just a simple class
D circuit, like many modern pro-audio amps. How many times do you have
to be told basic facts?

It reveals levels of detail you never could hear with a conventional amp.


Utter rubbish. I have heard many amps which sound *exactly* like their
input signals, so anything you are hearing with the TAN88B which you
can't hear with other good amps, is simply *distortion*.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #313   Report Post  
Stewart Pinkerton
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On 21 Apr 2004 03:07:51 GMT, (Michael
Scarpitti) wrote:

Stewart Pinkerton wrote in message news:MSchc.166942$gA5.1955926@attbi_s03...
On 20 Apr 2004 02:12:40 GMT,
(Michael
Scarpitti) wrote:



(snippity-doo-dah)

Not with this amp. That is the point. It was a dramatically clearer
amp than anything I have ever heard.


Since there exist *many* amps which are sonically transparent, this is
clearly an unlikely claim.


How so? Did you read the part: 'anything I have ever heard'? I have
not heard every major amp, though I have heard the big Levinson stuff
(though not in the last couple of years), the big Audio Research tube
amps, and even some Krell amps.

The TA-N88B was without doubt the most amazingly clear amp I have ever
heard before or since.


Then one might conclude that your listening has been confined to
extremely poor amps. This is unlikely (especially since I own a
Krell), so my original conclusion (based on personal experience of the
TAN88B) stands - the Sony is doing audibly *bad* things that are
initially impressive.

It was a digital amp.


Actually, it's not a digital amp at all. It's a 'switch mode' PWM
analogue amp, otherwise known as Class D, very popular in pro-audio
because you can make a powerful amp that doesn't weight much. However,
such amps frequently suffer from HF artifacts which can noticeably
colour the sound. Some listeners tend to confuse this with clarity.
It's a plain fact that the TAN88, a design from the '70s, simply did
not have access to the ultra-fast switching devices now used for such
amplifiers, and which have enormously improved their performance.


I'd be interested in something like that. The amp I have now is a
Denon POA-1500-2, which has a lot of punch. It was inferior sonically
to the TA-N88B, but it had the overwhelming advantage of not blowing
up.


These amps have an advantage for roadies, in that they win in the kilograms-per-kilowatt stakes. They have shown *zero* advantage in sonic transparency - indeed, how could they? Good amp design has been a done deal for close on two decades, despite the claims of the more imaginative 'high enders'.

It's interesting that you should be so vocal - and so intransigent -
about this early, and quite flawed, design, when the vastly superior
modern Class D pro-audio amps made by Mackie et al tend to be sneered
at by so-called 'high enders'.


I cannot say anything about those, but the TA-N88B was the best amp I
have ever heard.


So you keep saying - but have no evidence for it. I didn't find it at all
exceptional.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #314   Report Post  
Nousaine
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

Bromo wrote:

On 4/20/04 7:04 PM, in article r0ihc.7318$GR.869715@attbi_s01, "Nousaine"
wrote:

It seems like you are claiming that if you were to listen to any other product following an evaluation of one product, then you have to "re-learn" the sound of your own gear?


Not to throw gasoline on the fire -

I have found that after doing extensive listening to other setups, when you get back to your own setup you hear it with "new ears" to a large degree (for instance, listening to electrostatics and coming home to your cones and domes). It reminds me of when you go traveling and come back to your own house - the smells, sounds, and so on are new to your senses to some degree, and it usually takes a day or two to get "used to it" again so it becomes transparent to you again.

I suppose this is what happens in this case.


Sure, your description of human sensory adaptation exactly describes my point.
That phenomenon doesn't take but a few minutes. And you don't even need to
leave your house. Just don't listen to your system for 2 weeks and you'll
sometimes have a version of that same effect.

But you don't have to re-learn anything, as Harry's description would seem to indicate. You simply adapt to the sensory input of the environment as normal, and you accept it as not being of interest until something 'changes.'

One can demonstrate this to himself by turning on a fan. Flip the switch and
the fan seems unduly loud. 10 minutes later when you turn it off the room seems
unusually quiet for a few minutes.

This also highlights the point that differences in states will seem greatest at
the switch. It's like that with sound too. Differences in acoustical sound
quality will appear greatest at the point when they are switched.

The latter is why the ABX technique was developed.... to make evaluation and
comparison at a point where differences that exist will be at the highest level
of sensitivity.

  #316   Report Post  
Harry Lavo
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

"Nousaine" wrote in message
news:yExhc.40425$ru4.39405@attbi_s52...
Bromo wrote:

On 4/20/04 7:04 PM, in article r0ihc.7318$GR.869715@attbi_s01, "Nousaine"
wrote:

It seems like you are claiming that if you were to listen to any other product following an evaluation of one product, then you have to "re-learn" the sound of your own gear?


Not to throw gasoline on the fire -

I have found that after doing extensive listening to other setups, when you get back to your own setup you hear it with "new ears" to a large degree (for instance, listening to electrostatics and coming home to your cones and domes). It reminds me of when you go traveling and come back to your own house - the smells, sounds, and so on are new to your senses to some degree, and it usually takes a day or two to get "used to it" again so it becomes transparent to you again.

I suppose this is what happens in this case.


Sure, your description of human sensory adaptation exactly describes my point. That phenomenon doesn't take but a few minutes. And you don't even need to leave your house. Just don't listen to your system for 2 weeks and you'll sometimes have a version of that same effect.

But you don't have to re-learn anything, as Harry's description would seem to indicate. You simply adapt to the sensory input of the environment as normal, and you accept it as not being of interest until something 'changes.'


So your own system means nothing, huh?

One can demonstrate this to himself by turning on a fan. Flip the switch and the fan seems unduly loud. 10 minutes later when you turn it off the room seems unusually quiet for a few minutes.

This also highlights the point that differences in states will seem greatest at the switch. It's like that with sound too. Differences in acoustical sound quality will appear greatest at the point when they are switched.


Frankly, I read that as "differences in acoustical sound quantity", which is consistent with what we have learned so far about audible differences with music using this technique.

The latter is why the ABX technique was developed.... to make evaluation and comparison at a point where differences that exist will be at the highest level of sensitivity.


  #318   Report Post  
Nousaine
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

(Michael Scarpitti) wrote:

wrote in message ...
The testing done thus far shows results at a level near guessing; that is a group result. Now the only question is to exclude the possibility that any one individual is an exception to the group, regardless of the reasons one might evaluate or test. Therefore it is not one person compared to another, but one person compared to the population, to check for exceptions. Previous testing has set a threshold benchmark for individual variability; now we are looking for possible exceptions. If one is found, the benchmark can be adjusted accordingly.


You neglected to address: "Experience is also involved".

I maintain that my ability to hear differences has increased over the years.


While it is true that some people need to be taught to hear the phantom and to appreciate staging, image, envelopment, and other sound-reproduction variables, and it is true that listener training is a valuable adjunct, it is also true that "audiophiles" are inducted into the fraternity by learning to "hear" inaudible sounds.

IOW, one gets accepted by 'perceiving' things that have only a psychological basis and have never been shown to have an acoustical or physical cause. Indeed, one of the reasons this can happen is that by human nature we are predisposed to finding difference, and even the demonstrable stereo phantom seems like magic.

  #319   Report Post  
Bromo
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On 4/21/04 12:31 PM, in article Elxhc.10834$GR.1339861@attbi_s01, "Stewart
Pinkerton" wrote:

These amps have an advantage for roadies, in that they win in the kilograms-per-kilowatt stakes. They have shown *zero* advantage in sonic transparency - indeed, how could they? Good amp design has been a done deal for close on two decades, despite the claims of the more imaginative 'high enders'.


While it is easy to design a mediocre amplifier, it is difficult, even today, to design a truly excellent one. This holds whether it is an audio amp, an amplifier in a cellular base station, or a TV transmitter.

The development of digital (switch-mode) amplifiers has barely begun - and it is a matter of will, time, and money to make them as good and transparent as the Class A/AB circuits that define the best we can do currently.

  #320   Report Post  
Bromo
 
Posts: n/a
Default Comments regarding: Cables, Hearing, Stuff!!

On 4/21/04 1:07 AM, in article Kknhc.8765$GR.1105595@attbi_s01, "Norman
Schwartz" wrote:

"Bromo" wrote in message
news:Lplhc.37177$yD1.107791@attbi_s54...

Depends - I have found that tube amps sound different than solid state amps. A solid state PA amp sounds different than an amp such as NAD. A lot of negative feedback seems to change the sound somewhat as well - and much of it should be measurable, though I am sure we haven't learned all there is to know about audio measurement.


.... and then as for those tube amps, we must know which manufacturer and
vintage of tube you are talking about, right? since tubes don't simply sound
like tubes and then that's the end of the story? and after that, how many
hours were on those tubes, and then of course how long you warmed them up
before listening to music.


Absolutely - I think the human ear can be much more sensitive than our ability to measure in a lot of cases. Kinda nice, though, since it gives EEs like myself a whole career trying to devise means of closing the gap!