#81 - Posted to rec.audio.high-end
Steven Sullivan (1,268 posts)
Subject: Ultrabit Platium Disk Treatment

bob wrote:
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

I am sorry, but it *is* an impasse.


A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.


The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for pleasure
and enjoyment (where differences and long term judgements and perceptions
arise from the sub-conscience).


If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.


"Devil's advocacy" has its place. But in the end, you need
to show examples that the disputed methodology actually caused
a problem. And for audio, you somehow have to do that without relying on *sighted*
results as counter-evidence.

The example used in an experimental design book I'm reviewing for
use in a class was an experiment in which mouse body temperature was
a measured variable in assessing a treatment.
The elevated temperature *seemed* to support
the hypothesis that the treatment had an effect, but it turned out that
just handling the mice during data collection got them excited enough to
raise their temperature. So the mice had to be acclimated to human
handling before good data could be obtained.

--
-S
A wise man, therefore, proportions his belief to the evidence.
-- David Hume, "Of Miracles" (1748)
#82 - Posted to rec.audio.high-end
Arny Krueger (17,262 posts)
Subject: Ultrabit Platium Disk Treatment

"Harry Lavo" wrote in message


The reason is, almost all claims of "science" with regard
to audio components revolve around ABX testing, or at
least double-blind testing of some sort.


Not at all. There is considerable science with regard to audio components
relating to things like Ohm's and Kirchhoff's laws, Fourier analysis, etc.

And while these
tests are highly appropriate for audiometric testing,
they violate the cardinal principle of psychological test
design...they alter the variable under test.


That is a hypothesis, not a generally accepted fact.

...namely, listening for pleasure and enjoyment (where differences
and long term judgments and perceptions arise from the
sub-conscience).


That is another hypothesis, and again not a generally accepted fact.
Furthermore, even if proven, this hypothesis does not necessarily
establish the first hypothesis, above.

Good science, as opposed to handy
science, finds a way to design a test which measures the
effect indirectly, in such circumstances.


The usual situation is that the people who make hypotheses like those
above address their own hypotheses with their own applications of science.

However, certain aspects of high end audio seem to be technologies of
hypothesis without proof or even reliable evidence.

#83 - Posted to rec.audio.high-end
Codifus (228 posts)
Subject: Ultrabit Platium Disk Treatment

bob wrote:
On Jul 29, 6:23 pm, codifus wrote:

OK, here we go. I phrased that incorrectly. Let's say I get the
correct answer for everything, what would you all still have to be
skeptical about?


The cause of the effect. All we'd have is one experiment, the results
of which totally violate the laws of physics. Science wouldn't be
settled until we'd squared that circle. And the most likely
possibility would be that you'd done the experiment wrong, somehow.
Until you could explain *how* it works (and I do not mean hand-waving;
I mean demonstrating a measurable effect) there would still be plenty
to be skeptical about.

bob

True, but it would be a start, no? If I can demonstrate, definitively,
that something is happening, and if I can do it time and time again, we
certainly would have something.

That's all I really want, acknowledgement that something is happening.
We could then go on and figure out why.

CD
#84 - Posted to rec.audio.high-end
No Name (posts: n/a)
Subject: Ultrabit Platium Disk Treatment

"Harry Lavo" wrote in message
...
I am sorry, but it *is* an impasse.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and perceptions
arise from the sub-conscience). Good science, as opposed to handy
science,
finds a way to design a test which measures the effect indirectly, in such
circumstances. At the very least, it designs and executes at least one
such
test to validate that in fact the "shorthand" test does what it claims to
do
in measuring the same thing with equally valid results. In the case of
audio, no organization has a financial interest in such testing, so it has
not been done. Its complexity and scope are beyond the logistical and
financial means of individuals.


Let me see if I understand you: Although one could fail a standard DBT
between, say, 2 amplifiers, the Id can nevertheless sense a difference and
it will have an effect on one's long term enjoyment. Furthermore, it is
possible to design a suitable blind test that would confirm the difference,
only no commercial organization has any reason to run such a test and
publish the results, and no amateurs have the wherewithal to do so.

Is this a fair restatement of your argument?

Norm Strong

#85 - Posted to rec.audio.high-end
Sonnova (1,337 posts)
Subject: Ultrabit Platium Disk Treatment

On Wed, 30 Jul 2008 06:00:52 -0700, Harry Lavo wrote
(in article ):

"bob" wrote in message
...
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

I am sorry, but it *is* an impasse.


A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and perceptions
arise from the sub-conscience).


If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.

That is why no one in the scientific community takes your objections
seriously.

bob


I rest my case. I guess true "scientists" don't consider the social
sciences as "science".

Bob, you don't do a test and then have to prove that it gets in the way. In
psychological and social science design work, you design the test from the
beginning so it *can't* get in the way, or at least appears most logically
that it *shouldn't* get in the way. ABX fails miserably in this regard. To
start with, the nature of the task is different (conscious discrimination as
opposed to unconscious detection).


Please provide some documentation that shows that there actually is such a
phenomenon. Harley wrote an entire paper to the AES trying to push that
assertion. He was unable to prove that there was any difference between the
way people listen or hear in a long-term evaluation vs a double-blind test
either.

The listening conditions are different.
The musical context is usually different. The inability to train (because
you don't know in advance what you are listening for) is different. And I
could go on and on.

There is no substitute for numbers in this. The only way to design such
a test is to have the subject listen to music in as natural a setting
as possible, perhaps to monitor certain neurological stimuli while
doing so, certainly to have scalar monadic ratings after the fact, and
then to compare LARGE NUMBERS of respondents across carefully matched
samples. So there is no CONSCIOUS discrimination involved. This has
never been done to validate individual double-blind discriminatory
testing, and until it is done, the test is simply an unproven vehicle
for purposes of detecting musical differences.

#86 - Posted to rec.audio.high-end
Edmund[_2_] (80 posts)
Subject: Ultrabit Platium Disk Treatment

On Tue, 22 Jul 2008 23:22:12 +0000, Codifus wrote:

Steven Sullivan wrote:
wrote:
On Jul 18, 8:42 pm, wrote:



I googled "CD Optical Impedance Matching Fluid"
and found this Stereophile article;

http://stereophile.com/reference/590jitter/

They tried a product called CD spotlight by Audio Prism and were
surprised to find that it did make a difference.

They then backed it up with a technical explanation as to why the
difference was probably there. I am very much inclined to believe it.


This is really, really funny!
My $40 DVD rewriter copies a CD at 150 times CD speed,
and when I then do a file compare between the original disc
and the copy, they are identical!!
I wonder why a $10,000 CD player could not read nearly as well
at a speed which is 150 times slower!
OK, it is ONLY 50x speed or so, whatever :-)

Edmund
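[The bit-for-bit comparison Edmund describes is easy to reproduce: rip
the original disc and the copy to files and compare them. A minimal
sketch in Python; the file paths are hypothetical placeholders, not
anything from the thread:]

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so a full CD rip need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def bit_identical(path_a, path_b):
    """True if the two rips contain exactly the same bytes."""
    return sha256_of(path_a) == sha256_of(path_b)

# Hypothetical usage:
#   bit_identical("original_rip.wav", "copy_rip.wav")
```

[If the two rips hash identically, the drives recovered identical
data, which is the point being made about error-free CD reads.]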

#87 - Posted to rec.audio.high-end
bob (670 posts)
Subject: Ultrabit Platium Disk Treatment

On Jul 30, 9:00 am, "Harry Lavo" wrote:

Bob, you don't do a test and then have to prove that it gets in the way.


Of course not. You do the test, and then you wait for somebody else to
prove that there's something wrong with the test. That's how science
works in the real world, Harry, and that's what you still don't get.
You're the one making the assertion here. You're insisting that ABX
tests are flawed in some way. The burden of proof is on you, Harry,
not on the scientific community, which has no obligation to satisfy
the pseudo-scientific speculations of the ill-informed.

ABX fails miserably in this regard.


Prove it. You can't just assert it, Harry. You have to prove it.

And you can't.

bob
#88 - Posted to rec.audio.high-end
Arny Krueger (17,262 posts)
Subject: Ultrabit Platium Disk Treatment

wrote in message


The reason is, almost all claims of "science" with regard
to audio components revolve around ABX testing, or at
least double-blind testing of some sort.


This is not true.

There are other means to resolve many of the controversies at hand, without
resorting to DBTs.

For example, there are controversies over the allegedly audible
mystical properties of wire, where there are no known measurable
differences. The claims of science here are not based on subjective
tests at all.

For example, there are controversies over the presence of audible
differences where the claimed changes can be measured and are known to
be below audible thresholds. The claims of science are thus not based
on subjective tests.

#89 - Posted to rec.audio.high-end
Harry Lavo (1,243 posts)
Subject: Ultrabit Platium Disk Treatment

wrote in message
...
Harry Lavo wrote:
"bob" wrote in message
...
On Jul 29, 6:44 pm, "Harry Lavo" wrote:
I am sorry, but it *is* an impasse.
A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and
perceptions
arise from the sub-conscience).
If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.

That is why no one in the scientific community takes your objections
seriously.

bob


I rest my case. I guess true "scientists" don't consider the social
sciences as "science".


Yes, they do. That's why they design discrimination tests for
discrimination, and they don't use *preference/acceptability* testing to
determine whether a *difference* exists.


I'm not talking about preference testing. I'm talking about monadic
ratings evaluating the "music", and then application of statistical
techniques between monadic samples to determine discrimination. A
standard test technique...but one that masks the true object of the
test and requires no forced discrimination.


Bob, you don't do a test and then have to prove that it gets in the way.
In
psychological and social science design work, you design the test from
the
beginning so it *can't* get in the way, or at least appears most
logically
that it *shouldn't* get in the way. ABX fails miserably in this regard.


In your opinion. A micro-minority opinion based on available data.


No, standard scientific approach.


To
start with, the nature of the task is different (conscious discrimination
as
opposed to unconscious detection).


A false constraint *you* apply. Not one inherent in the test. Sit back
and enjoy "A" for as long as you like. Sit back and enjoy "B" for as
long as you like. Sit back and enjoy "X" for as long as you like. At
the end, rate each using whatever scale you want. Do a significant
number of trials, and "X" should be easily identifiable as either "A" or
"B". A statistically significant level of correlation with the
correct A/B is a positive. No correlation, or correlation with the
incorrect A/B indicates a negative result. Exactly as your "monadic"
testing would do.
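[The "statistically significant level of correlation" described above
is conventionally evaluated with a simple binomial test: under the
null hypothesis that the listener cannot tell A from B, each X
identification is a coin flip. A minimal sketch; the function name and
the 16-trial example are illustrative, not taken from the thread:]

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` right answers in `trials` ABX trials if the listener is
    purely guessing (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative: 12 correct out of 16 trials
# abx_p_value(12, 16) ≈ 0.038
```

[A p-value of about 0.038 for 12/16 is low enough to reject guessing
at the conventional 5% level; fewer correct answers, or fewer trials,
leave the null hypothesis standing.]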


You clearly don't (or don't want to) understand what I am saying. The
very act of making a conscious discrimination is different from the way
objectionable audio traits normally make themselves known.

The listening conditions are different.


A false constraint *you* apply. Not one inherent in the test.


But as is almost always practiced in these tests that claim "no difference".


The musical context is usually different.


A false constraint *you* apply. Not one inherent in the test.


But as is almost always practiced in these tests that claim "no difference".


The inability to train (because
you don't know in advance what you are listening for) is different.


A totally bogus argument. In virtually ALL cases where this particular
debate arises, the subjective "difference" has already been observed,
under non-controlled conditions. So, clearly, *you* (as in the claimant
of the difference) are fully aware of what you are listening for.
You've already identified its character when present, and its
character when it's lacking.


I'm talking about testing music-reproducing equipment to determine how
well it is liked, and why. That is how most audiophiles eventually
reach a conclusion...how the gear "sits" with them. The way to
experimentally duplicate that is simply to have people listen to music
and describe how well/what they like about it using scalars after the
fact. Statistics can then determine if there is a difference, either
overall or on specific musical attributes, between two systems
identical except for the pieces of equipment under test. This is a far
more solid scientific test.
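[The statistical step alluded to here, comparing scalar ratings
between two independently exposed groups, would typically be a
two-sample comparison of means. A sketch using Welch's t statistic
with a large-sample normal approximation for the p-value; every name
here is illustrative, one plausible analysis rather than a method
specified in the thread:]

```python
from statistics import mean, variance
from math import sqrt, erf

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent groups of ratings."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

def approx_two_sided_p(t):
    """Two-sided p-value via the normal approximation, adequate for the
    hundreds of listeners per cell described in this exchange."""
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
```

[A small p-value would indicate the two groups rated the systems
differently, without any listener ever being asked to consciously
discriminate between them.]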


And I could go on and on.


And you have, you have.


:-)


There is no substitute for numbers in this.


This is just nonsense. The "numbers" are in; the vast majority of
people do not hear significant differences in cables, cable elevators,
rocks, etc. These products are not "niche" products because they matter
to large numbers of people.


I didn't say test among the unwashed masses....I said numbers. As in
three hundred audiophiles get exposed to this variable, and three
hundred get exposed to that one. As in three hundred audiophiles who
prefer vinyl get exposed to this variable, and three hundred
audiophiles who prefer vinyl get exposed to that variable, etc. Of
course the subjects have to be screened to constitute the appropriate
audience, but there is no substitute for *numbers*. Now do you see why
a validation test is neither simple nor inexpensive?


Typically, here, we're starting with the *claim* by some individual or
group that some tweek has produced a subjectively identifiable physical
result. A result for which there is no known scientific basis, and
often just the opposite. One need only test this individual/group using
an appropriate methodology to prove/disprove (with a margin of error)
the accuracy of the claim. There is NO need to try and extrapolate this
to the general population, so test population size is irrelevant.


You are putting a match to a strawman here, I'm afraid, since that is not
what I suggested.

The only way to design such a
test is to have the subject listen to music in as natural a setting as
possible, perhaps to monitor certain neurological stimulus while doing
so, certainly to have scalar monadic rating after the fact, and then
to compare LARGE NUMBERS of respondents across carefully matched
samples. So there is no CONSCIOUS discrimination involved.


Well, ignoring that test methodologies used for *preference* and for
*acceptability* are ill suited in the extreme for the purpose at hand,
please name even ONE instance where monadic testing does not require
conscious discrimination. For your claim to be true, participants would
have to be unaware that they were participating in a test (else they are
constantly comparing current product/stimuli to past experiences of
different product/stimuli), they have to have no *interest* in
determining whether the current product/stimuli is *acceptable* in
performance (unlikely in the extreme), and of course, they *cannot be
questioned about their impressions after the test concludes*. Such
questioning forces the CONSCIOUS discrimination (either acceptability
against personal criteria, or comparison to previous experience) you
continually rail against.


Nobody is talking about a test methodology for preference. We are
talking about an evaluative methodology relating to the musical
reproduction experience. Any discrimination is provided by statistical
analysis, and since the measurement is indirect, it is not contaminated
by an intervening variable, as in a direct DBT or especially ABX.


This has never been done to validate
individual double-blind discriminatory testing, and until it is done, the
test is simply an unproven vehicle for purposes of detecting musical
differences.


OSAF. All scientific studies on subjective data, that I'm aware of, are
conducted blind unless brand recognition is part of the design. This
includes all drug and medical device testing, even when there are
numerous objective criteria that can be, and are measured concurrent
with the subjective evaluations. That blind tests don't confirm your
results in no way invalidates them.


How many scientists do you think predicted in advance that they would
discover that certain rhythmic patterns seem hardwired into the human
brain? But they have. You can't use accepted dogma to rule out
possibilities that only rigorous testing can validate. And this
includes test techniques and variations, particularly ones that on the
face of it present intervening variables, which are an absolute no-no
in psychological testing.

#90 - Posted to rec.audio.high-end
dpierce.cartchunk.org@gmail.com (334 posts)
Subject: Ultrabit Platium Disk Treatment

On Jul 29, 6:23 pm, codifus wrote:
On Jul 28, 9:08 am, "Arny Krueger" wrote:

"codifus" wrote in message




OK. I believe it so strongly and I'm prepared to prove it
with the tests you outlined.


You're going to try some bias-controlled listening test?


Im am totally willing. Dick Pierce outlined a perfect setup.


No, I didn't. I outlined merely ONE setup as a starting
point to help design an experiment. There's any number
of controls that need to be put in place to ensure that
as many variables are controlled or understood as
possible.

Does it meet with your approval?


Do you understand why I suggested what I did?

Now, let's say that I guess everything correctly, would
my observation then become fact?


The phrase "Scientific fact" is an oxymoron. All findings of science are
provisional, and only relevant until we find out something better.


OK, here we go. I phrased that incorrectly. Let's say I get the
correct answer for everything, what would you all still have to be
skeptical about?


Your entire explanation as to the cause is, to put
it frankly, ridiculous. You, on the one hand, state
that you're no expert, and on the other you then
hold forth and proffer "explanations" for physical
causes which are out beyond the left-field bleachers.

I would like to get this done in a definitive fashion
that everyone agrees on.


That's assuming "everyone" thinks the rather large
amount of effort is worth it.

To be honest, and meaning no disrespect: your intransigence, and your
refusal, as an admitted NON-expert in the field, to consider the
possibility that your conclusions and your explanations are faulty,
speak for themselves. You offer what are, to be honest, almost
laughably naive explanations of how electromagnetics works, without
being able or willing to see the obvious self-contradictions and
flaws in your science, and then run and hide behind
the apron of "we don't know everything."

We don't need to know everything to recognize when
something is horribly and fatally flawed.

And what you don't realize is MANY of us have
gone through this very same set of claims, MANY
times before, and they have been found wanting
EVERY time.

Why are you any different?

For the sake of clarity, the only magnetism we are concerned with is
within the audio path, the most of which is in the speaker, crossover,
voice coil etc.


Then tell us the following:

1. What parts of the speaker ARE magnetized?

2. How did they get that way?

3. If they ARE that way, how and why does it cause
the problem you're claiming?

4. How and why does this "magic" CD do its work?

5. If the CD ONLY works once, as you claim, what is
preventing the speaker from taking on the original
problem again?

I don't know how or why it happens, I just know that I have had
speakers that have lost their ability to image properly. Once
Demagicked, they recovered.


Maybe, well, it's "magic."

Here's my very non-technical, non-expert opinion on the
matter: it is understood that all electronic components
do not behave ideally. Far from it.


Wrong. Electronic components, including the resistors and capacitors
that tweaks obsess over, do in fact generally behave as ideally as is
necessary to provide sonically-transparent operation.


Actually, you are WRONG. How does something, in fact, GENERALLY behave
ideally? It either does or doesn't. So say it. Don't use this vague
statement to try to get by.


Look, you have already admitted that you are not
an expert. Why do you then insist on talking as if
you are?

What was stated, and is ABSOLUTELY correct, is
that "ideal behavior" is not a black-and-white issue.
A component, for example, could have a grossly
non-ideal property that, for the application it is used
in, is COMPLETELY irrelevant. Take a trivial example:
I have a resistor that exhibits some grossly non-linear
behavior at temperatures above 75C. If the temperature
never exceeds 50C, then its non-ideal behavior above
75C is COMPLETELY irrelevant.

Or consider a capacitor that has .025 uH of lead
inductance and the equivalent of about 5 MOhms
of leakage resistance. Non-ideal, right? But if that
capacitor is used in a circuit where it's bypassed
by a 5 kOhm resistor and never sees any frequencies
above 20 kHz, the non-ideal inductance and leakage
resistance have NO relevance to its behavior in the
circuit.
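[The numbers in the capacitor example above can be checked directly:
the impedance of 0.025 uH of lead inductance at the top of the audio
band is tiny next to a 5 kOhm bypass resistor. A quick worked
calculation, using only the values quoted in the example:]

```python
from math import pi

f = 20e3         # Hz, top of the audio band from the example
L = 0.025e-6     # H, the quoted lead inductance

# Inductive reactance X_L = 2 * pi * f * L
X_L = 2 * pi * f * L          # ≈ 0.0031 ohm at 20 kHz

# Compare against the 5 kOhm bypass resistor from the example
ratio = X_L / 5e3             # well under one part per million
```

[About 3 milliohms of reactance against 5,000 ohms of bypass
resistance is less than a millionth of the circuit impedance, which is
the sense in which the non-ideal behavior is irrelevant here.]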

And both you and the manufacturer of the "magic"
CD have NEVER ONCE offered ANY credible
explanation of:

1. The very existence of these "static magnetic"
fields in components,

2. That IF these "static magnetic" fields did, indeed,
exist, how they got there,

3. That if they exist, how and why they have a
deleterious effect as claimed on the audio,

4. How and why this "magic" CD corrects the
problems which you have never demonstrated
even exist to begin with.

Many materials just don't have any remanence,
and pure copper is one such
material.


And all speakers are made of just copper, right?


Look, if you're going to be silly bordering on the
point of insulting, then you can argue with yourself.

What's your point here?

Are you insisting that all components of a speaker
must have their static magnetic field removed?

Please explain how the speaker would work after
that?

Please explain that if it is the capacitors and
inductors and resistors and terminals that are
getting this "static magnetic field," and your "magic
CD" gets rid of it, why does this "static magnetic field"
not INSTANTLY return in the presence of the leakage
field resulting from the close proximity of the speaker
magnet?

I don't need proof, just the possibility. My observations show that
something is happening.


And, for myself, I am not disputing that observation.

Rather, I am disputing your totally bogus attempts
at invoking bad physics to explain it, that you are
utterly unwilling to entertain the possibility that your
explanations are wrong, and that you are unwilling
or unable to accept the very STRONG possibility that
you have failed to identify and eliminate in ANY
credible fashion whatsoever alternate explanations
for your observation, INCLUDING but not limited to
expectation bias, suggestibility, demonstrably poor
detailed auditory memory, exceptionally poor
experimental control, and much more.

Earlier, you said:

A copper wire, for example. The amount it may store
would be miniscule, but it may hold some.


You have a number of problems here:

1. You ASSUME "it may hold some." Your entire
premise seems to be based on that assumption.
But what if your assumption is WRONG?

2. You assume that if it DID "hold some," that is
MUST have some audible effect. But what if
your assumption is wrong?

3. You assume that if it DID hold some and if it
DID have some audible effect, that the effect
would lead to your observation. But what if
your assumption is wrong?

4. You assume that if it did hold some and if it
did have some audible effect and if that audible
effect led to your observation, the "magic CD"
would correct it. But what if your assumption was
wrong?

Don't doubt it. Did you also know that water can kill you if you drink
too much of it?


It appears you have failed to see the point of your own
analogy.

All I am saying is that when a speaker has been sufficiently thrown
out of its specification such that it fails the mono test I keep
mentioning, playing the Demagic CD sets it back. Music doesn't.


Speakers can be thrown "sufficiently out of their own
specifications" by ANY number of means. Changing
environmental factors such as temperature and
humidity will result in large changes. The temperature
of the magnet and voice coil makes a large difference.
Letting a speaker sit unplayed for a period of time
will result in changes in its performance. Playing a
speaker at an elevated level will change its performance.

I may be wrong on the technical explanation, but I hope
it's enough to convey the idea I'm trying to get across.


What's coming across is a basic principle of snake-oil audio - a
complete lack of understanding of the importance of quantification.


I don't understand it, but I definitely heard it.


Maybe you did, maybe you didn't. I'm not going to
dispute whether you did or didn't, if for no other reason
than that it gets into a useless argument with no possibility
of resolution. For one thing, your experimental design is
SO poor that no real conclusion could be drawn.

But you have chosen to hold forth on a TECHNICAL
explanation that is so TECHNICALLY bogus that
it throws your whole claim into serious credibility
meltdown.

Like I said, you play the Demagic CD as loud as you comfortably can.
The louder it is, the more effective it is.


And what have you done to eliminate ALL other
possible explanations?

No, it's not, and this is further evidence that you do
NOT understand it. It's substantially different in at
least one important factor: the field generated by a
degausser is MILLIONS of times stronger than the field
generated by the currents inside your audio system.
It HAS to be to overcome the coercivity of the magnetic
material in the tape. If the impressed field DOES NOT
exceed a critical threshold by a wide margin, no change
in the magnetization of the material occurs. And a few
microamps of signal passing through an audio system,
even a few amps passing through a voice coil, is FAR too
small to work.


Give me a freaking break! A rocket ship and a cruise ship are both
what? Ships. Yet the speeds with which they travel are different by
orders of magnitude. Once warp drive is invented, you know what
they'll call the vessel that carries people across the galaxies at
those speeds? A starship. These examples go to show that an analogy
can apply even though one factor may different by an extremely large
amount. It's the same basic...


In a word, kind sir, b*llsh*t.

You CLEARLY have not the slightest clue what you
are talking about. You have NO idea about the VERY
non-linear process of magnetization in materials.
You come up with preposterous "analogies" and
"extrapolations" which have NO physical analog and,
in fact, are quite definitely in complete contradiction
of demonstrated physical behavior, and you hold on
to them for dear life.

No, give yourself a break, here, and stop being silly.

Your analogies are nice, neat, comfortable, simple,
and do a good job of explaining what you believe.
They are also wrong.

It's as simple as that.


#91 - Posted to rec.audio.high-end
outsor@city-net.com (122 posts)
Subject: Ultrabit Platium Disk Treatment

[snip]

I think not. There is a body of test results with routinely similar
results showing reported subjective perception events which toggle on
and off as the test is made blind and sighted. That is the current
state of things, and no other results on an equal footing, nor at an
impasse, are there to be considered.


"But nobody has corroborated that the test itself is not the reason
the sighted differences do not hold. That is the crux of the
matter....until the test (which is interventionist in nature) can be
proven to provide identical results to much more expensive and
sophisticated testing that is not interventionist, then the test
itself has to be considered potentially suspect. THAT is just good
science."

But nobody has corroborated that the tests are the reason that the ESP
powers fail to appear. Otherwise known as special pleading.

All tests, from simply putting a cloth over the connections to complex ABX
tests, have similar results.

Here is the test that reveals all and has been done.

A group of people are given sighted access to two bits of audio gear and
are told a switch is made and asked to provide subjective results of
differences produced accordingly. As each switch is announced they
provide clearly different reports of perception events accordingly.

But no actual electrical switch was ever made, and yet the clearly different
reports were made.

No impasse remains, as there was in this case a test of the impact being
sighted has upon reported perception events. We are then free to make the
same special pleading that sighted testing need first be validated; that
is only good science.

Until the speculative cancer method is first tested, the two existing
methods which provide clear results are not corroborated, and an impasse
exists.

I think not.

  #92   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 1,243
Default Ultrabit Platium Disk Treatment

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


Bob, you don't do a test and then have to prove that it
gets in the way.


More to the point, it's not very wise to use a test that is known to be
subject to excessive false positives as your universal and final reference.


I assume you are talking about sighted testing. I am not.


In psychological and social science
design work, you design the test from the beginning so it
*can't* get in the way, or at least appears most
logically that it *shouldn't* get in the way.


ABX was developed for exactly that reason. We tried a few DBT protocols,
and
they got in the way. We then implemented ABX and found that it does a
great
job of getting out of the way. ABX gets out of the way better than sighted
evaluations.


In determining what?



ABX fails miserably in this regard.


By what standard?


By the standard of allowing subjective impressions to emerge rather than
conscious differences. It is these subjective impressions that ultimately
determine one's long term satisfaction with the equipment.


To start with, the nature of
the task is different (conscious discrimination as
opposed to unconscious detection).


Where is it written that listening pleasure as determined by *conscious*
perceptions is invalid or incomplete?


Read the literature.



Is there any absolute proof that there even is such a thing as the
unconscious mind?


You've got to be kidding, right?


Is there a means for determining the state of this purported unconscious
mind in a reliable manner?


A great deal of psychological testing is devoted to it.


The listening conditions are different.


ABX does not change the listening conditions.


I'm sorry, as practiced it does. I do not listen to symphonies sitting with
headphones on at a computer trying to determine whether a or b sounds like
x.
  #93   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default Ultrabit Platium Disk Treatment

On Jul 30, 7:29 pm, Codifus wrote:
bob wrote:
On Jul 29, 6:23 pm, codifus wrote:


OK, here' we go. I phrased that incorrectly. Let's say I get the
correct answer for everything, what would you all still have to be
skeptical about?


The cause of the effect. All we'd have is one experiment, the results
of which totally violate the laws of physics. Science wouldn't be
settled until we'd squared that circle. And the most likely
possibility would be that you'd done the experiment wrong, somehow.
Until you could explain *how* it works (and I do not mean hand-waving;
I mean demonstrating a measureable effect) there would still be plenty
to be skeptical about.


bob


True, but it would be a start, no? If I can demonstrate, definitively,
that something is happening, and if I can do it time and time again, we
certainly would have something.


No, my point is that your test cannot be definitive. All it can do is
raise questions. And the first question it will raise will be, how did
he cook the test?

There's a great quote from a scientist in Natalie Angier's book, The
Canon:

"Most of the time, when you get an amazing, counterintuitive result,
it means you screwed up the experiment."

bob



That's all I really want, acknowledgement that something is happening.
We could then go on and figure out why.

CD



  #94   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default Ultrabit Platium Disk Treatment

On Jul 30, 7:37 pm, "Harry Lavo" wrote:

I'm not talking about preference testing. I'm talking about monadic ratings
evaluating the "music". And then application of statistical techniques
between monadic samples to determine discrimination. A standard test
technique...


No, it's not--not for determining whether there's a perceptual
difference between two stimuli. The kind of test you describe here is
only used when the objects under test are already known to be
perceptibly different. So it's totally inappropriate for the purposes
of challenging ABX results.

but one that masks the true object of the test and requires no
forced discrimination.


A silly, made-up "requirement." Audiophiles discriminate all the time.
Every claim that two components sound different is an act of
discrimination. So why can't they do that blind?

snip

I'm talking about testing music reproducing equipment to determine how well
it is liked, and why. That is how most audiophiles eventually reach a
conclusion...how the gear "sits" with them. The way to experimentally
duplicate that is simply to have people listen to music, and describe how
well/what they like about it using scalars after the fact. Statistics can
then determine if there is a difference, either overall or on specific
musical attributes, between two systems, identical except for the pieces of
equipment under test. This is a far more solid scientific test.


OSAF. You can't claim something is "a far more solid scientific test"
when it's never even been tried.
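For what it's worth, the monadic design Harry describes can at least be stated concretely. The sketch below is illustrative only: the group sizes, the 1-10 "enjoyment" scalar, the simulated ratings, and the `welch_z` helper are all assumptions, not data from any real test. Each listener rates a single system, and a Welch-style two-sample statistic (normal approximation, reasonable at n = 300 per cell) decides whether the group means differ:

```python
import math
import random

random.seed(0)

# Hypothetical monadic test: each listener hears ONE system and rates
# "overall enjoyment" on a 1-10 scale; nobody is asked to discriminate.
group_a = [random.gauss(7.0, 1.5) for _ in range(300)]  # system A
group_b = [random.gauss(7.3, 1.5) for _ in range(300)]  # system B, slightly better

def welch_z(x, y):
    """Welch-style two-sample statistic; with n = 300 per cell the
    normal approximation to the t distribution is adequate."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (my - mx) / math.sqrt(vx / len(x) + vy / len(y))

z = welch_z(group_a, group_b)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")
```

Whether such a test could detect the small differences at issue is exactly the power question raised elsewhere in the thread: with rating noise of 1.5 points and a true gap of only 0.3, hundreds of listeners per cell are on the order of what it takes.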

snip

I didn't say test among the unwashed masses....I said numbers. As in three
hundred audiophiles get exposed to this variable, and three hundred get
exposed to that one. As in three hundred audiophiles who prefer vinyl get
exposed to this variable, and three hundred audiophiles who prefer vinyl get
exposed to that variable, etc. Of course the subjects have to be screened
to constitute the appropriate audience, but there is no substitute for
*numbers*. Now do you see why a validation test is not simple nor
inexpensive?


No, the reason it's not simple or inexpensive is that you have to make
sure it'll never be carried out.

snip

Nobody is talking about a test methodology for preference. We are talking
about an evaluative methodology relating to the musical reproduction
experience. Any discrimination is provided by statistical analysis, and
since the measurement is indirect, it is not contaminated by an intervening
variable, as in a direct DBT or especially ABX.


Again, you have no evidence that there *is* an intervening variable.
That would be the first step, Harry. Get busy.

bob
  #95   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 1,243
Default Ultrabit Platium Disk Treatment

wrote in message
...
"Harry Lavo" wrote in message
...
I am sorry, but it *is* an impasse.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and perceptions
arise from the subconscious). Good science, as opposed to handy
science,
finds a way to design a test which measures the effect indirectly, in
such
circumstances. At the very least, it designs and executes at least one
such
test to validate that in fact the "shorthand" test does what it claims to
do
in measuring the same thing with equally valid results. In the case of
audio, no organization has a financial interest in such testing, so it
has
not been done. Its complexity and scope are beyond the logistical and
financial means of individuals.


Let me see if I understand you: Although one could fail a standard DBT
between, say, 2 amplifiers, the Id can nevertheless sense a difference and
it will have an effect on one's long term enjoyment. Furthermore, it is
possible to design a suitable blind test that would confirm the
difference,
only no commercial organization has any reason to run such a test and
publish the results, and no amateurs have the wherewithal to do so.

Is this a fair restatement of your argument?


No, Norm, it does not. What I said has no relationship to the Id, which is
part of the Freudian construct of the Psyche, which relates to human
personality. The Id is a Freudian construct that resides in the unconscious
part of the brain, but does not constitute it. From Wikipedia: "It should
be stressed that, even though the model is "structural" and makes reference
to an "apparatus", the id, ego and super-ego are functions of the mind
rather than parts of the brain and do not correspond to actual somatic
structures of the kind dealt with by neuroscience".

The "kind dealt with by neuroscience" is what I am talking about.

If you replace "Id" in your statement above with "unconscious"
(Merriam-Webster: 1 a: not knowing or perceiving : not aware ; 2 b (1): not
marked by conscious thought, sensation, or feeling ), and if you preface
that statement with "in the long term", then it can serve as a reasonably
accurate summary. In other words, "Although one could fail a standard DBT
between, say, 2 amplifiers, in the long term the unconscious can sense a
difference and it will have an effect on one's long term enjoyment".

Part of what audiophiles often struggle with is sensing this "unconscious
uneasiness (or easiness...it works both ways)" and encouraging it into
consciousness so they can identify it and discuss it. But the important
part is: by definition the unconscious is inchoate and unavailable, and thus
cannot operate either at the conscious level or in a forced short term time
frame. Both are conditions of a structured DBT like ABX.

Interestingly enough, these "effects on the conscious/unconscious edge" also
confound the "null" mathematics of the standard ABX test. Psychologists
report such phenomena reveal themselves as percentages of perception. In
other words, even though the difference is real, as it is lowered towards
the edge of perception, the reporting of the phenomenon is expressed as
"percentage of times perceived". This is a different probability than the
standard null hypothesis calculation where the existence of a real
difference is not known.

That was the basis of a critique of the original Clark article, published in
the JAES a year after that article appeared by a professor of psychology, who
showed how the probability of perception of a real albeit marginally
subliminal difference interacted with standard null probability calculations
to throw those calculations off slightly within the small sample sizes used
for ABX testing. The practical effect was great enough to sometimes cause a
one-sample deviation in what triggered the 95% significance standard.
Unfortunately this article was turgid and heavily laden with
mathematics, and while it drew responses from Clark and Nousaine and
others, it was apparent that few of them really understood the mathematics
or their implications. It is also clear that the mathematics and general
turgidity of the article failed to catch the interest of the audio
profession, who essentially ignore it to this day. (To avoid criticism, I
have copies of the original Clark article, this article, and others....but I
have mislaid them in a consolidation of my office into new space and cannot
lay my hands on them to include the cites here).




  #96   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 1,243
Default Ultrabit Platium Disk Treatment

"Steven Sullivan" wrote in message
...
bob wrote:
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

I am sorry, but it *is* an impasse.


A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.


The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for
audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and
perceptions
arise from the subconscious).


If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.


"Devil's advocacy' has its place. But in the end, you need
to show examples that the disputed methodology actually caused
a problem. And for audio, you somehow have to do that without relying on
*sighted*
results as counter-evidence.


And yet some have ridiculed John Atkinson's anecdote about choosing an
amplifier after a dbt showed it to be no different from another that he
honored, and yet giving it up after two years because of continual
irritation with some aspects of its sound that arose due to long term
listening. This of course will be dismissed as the result of "sighted
listening bias" with no proof that it is....the equivalent of attributing
the "elevated temperature" of the mouse to the treatment in the example
cited below rather than looking for other rational possibilities...including
possible flaws in the original test.

The example used in an experimental design book I'm reviewing for
use in a class, was an experiment where mouse body temperature was
a measured variable in assessing a treatment.
The result of elevated temperature *seemed* to support
the hypothesis that the treatment had an effect, but it turned out that
just handling the mice during data collection got them excited enough to
raise their temperature. So the mice had to be acclimated to human
handling,
before good data could be obtained.


Which is why it is annoying that any evidence that disputes DBT (especially
ABX) results is dismissed by advocates without any further investigation,
and is simply written off as "sighted bias".


  #97   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 1,243
Default Ultrabit Platium Disk Treatment

"bob" wrote in message
...
On Jul 30, 9:00 am, "Harry Lavo" wrote:

Bob, you don't do a test and then have to prove that it gets in the way.


Of course not. You do the test, and then you wait for somebody else to
prove that there's something wrong with the test. That's how science
works in the real world, Harry, and that's what you still don't get.
You're the one making the assertion here. You're insisting that ABX
tests are flawed in some way. The burden of proof is on you, Harry,
not on the scientific community, which has no obligation to satisfy
the pseudo-scientific speculations of the ill-informed.


No, Bob....I am arguing that they are an inappropriate test for dealing with
long term satisfaction with audio components in the reproduction of music.
The tests are fine for listening to different distortion levels, volume
levels, frequency response characteristics, etc. (audiometric measurements).
Those are not music.

ABX fails miserably in this regard.



Note: you've taken this totally apart from its context...the "fails
miserably" refers to not presenting an intervening variable.

I've tried to show on a logical basis how it may do this and why the results
do not seem to square with a lot of reported experience. Moreover, I have
spent considerable time on this forum (more in the past than present) laying
out test approaches that would prove or disprove what I am
hypothesizing....but unfortunately neither I nor you have the resources to
undertake that kind of testing. So it remains a hypothesis...I don't deny
that.

Prove it. You can't just assert it, Harry. You have to prove it.

And you can't.


Give me a two hundred thousand dollar grant and two years to organize it,
and I'll either prove it or disprove it. All I have argued is that a giant
question remains.

Sadly this arena reflects the same kind of sickness that pervades our
political arena today....people take way over simplified "one-line"
positions and rely on name-calling ("satisfy the pseudo-scientific
speculations of the ill-informed") to substitute for meaningful discourse.


  #98   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 1,243
Default Ultrabit Platium Disk Treatment

wrote in message
...
"snip


I think not. There is a body of test results with routinely similar
results showing that reported subjective perception events toggle on
and off as the test is made blind or sighted. That is the current state
of things, and no other results on an equal footing, nor at an impasse,
are there to be considered.


"But nobody has corroborated that the test itself is not the reason the
sighted differences do not hold. That is the crux of the matter....until
the test (which is interventionist in nature) can be proven to provide
identical results to much more expensive and sophisticated testing that is
not interventionist, then the test itself has to be considered potentially
suspect. THAT is just good science."

But nobody has corroborated that the tests are the reason that the ESP
powers fail to appear. Otherwise known as special pleading.

All tests, from simply putting a cloth over the connections to complex ABX
tests, have similar results.

Here is the test that reveals all and has been done.

A group of people are given sighted access to two bits of audio gear and
are told a switch is made and asked to provide subjective results of
differences produced accordingly. As each switch is announced they
provide clearly different reports of perception events accordingly.

But no actual electrical switch was ever made, and yet the clearly different
reports were made.

No impasse remains, as there was in this case a test of the impact being
sighted has upon reported perception events. We are then free to make the
same special pleading that sighted testing need first be validated; that
is only good science.

Until the speculative cancer method is first tested, the two existing
methods which provide clear results are not corroborated, and an impasse
exists.

I think not.


This is the oldest bugaboo in the discussion. Obviously people can be
fooled. Obviously there is such a thing as sighted bias. Obviously double
blind testing can get rid of sighted bias. So can blinded monadic testing.
That is not what is being argued here.

What is being argued is whether or not ABX same/difference testing in
particular (which has become the industry standard) forces a consciousness
that potentially obliterates musical nuances arising in part from the
subconscious that are important long term, and which create or destroy
audiophile listening satisfaction. That is what an "intervening variable"
is in psychological parlance....a test technique that potentially changes
the "what" of what is being tested.


  #99   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default Ultrabit Platium Disk Treatment

On Jul 31, 9:03 am, "Harry Lavo" wrote:

No, Bob....I am arguing that they are an inappropriate test for dealing with
long term satisfaction with audio components in the reproduction of music.
The tests are fine for listening to different distortion levels, volume
levels, frequency response characteristics, etc. (audiometric measurements).
Those are not music.


Baseless opinion.

ABX fails miserably in this regard.


Note: you've taken this totally apart from its context...the "fails
miserably" refers to not presenting an intervening variable.

I've tried to show on a logical basis how it may do this and why the results
do not seem to square with a lot of reported experience.


Logic doesn't do you any good if it derives from mistaken facts--or,
in your case, non-facts. Everything you assert is a baseless opinion,
generally in opposition to known science. That kind of logic is
useless.

Moreover, I have
spent considerable time on this forum (more in the past than present) laying
out test approaches that would prove or disprove what I am
hypothesizing....but unfortunately neither I nor you have the resources to
undertake that kind of testing. So it remains a hypothesis...I don't deny
that.


No, you've spent considerable time insisting that the only thing that
will satisfy you is a pointless, unproven, impossible-to-accomplish
test you can't even specify properly, which is very convenient if your
agenda is to continue to avoid the real science that's been done
already. Fortunately, real scientists have more effective ways of
answering questions.

bob

  #100   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default Ultrabit Platium Disk Treatment

Harry Lavo wrote:
wrote in message
...
"Harry Lavo" wrote in message
...
I am sorry, but it *is* an impasse.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and perceptions
arise from the subconscious). Good science, as opposed to handy
science,
finds a way to design a test which measures the effect indirectly, in
such
circumstances. At the very least, it designs and executes at least one
such
test to validate that in fact the "shorthand" test does what it claims to
do
in measuring the same thing with equally valid results. In the case of
audio, no organization has a financial interest in such testing, so it
has
not been done. Its complexity and scope are beyond the logistical and
financial means of individuals.


Let me see if I understand you: Although one could fail a standard DBT
between, say, 2 amplifiers, the Id can nevertheless sense a difference and
it will have an effect on one's long term enjoyment. Furthermore, it is
possible to design a suitable blind test that would confirm the
difference,
only no commercial organization has any reason to run such a test and
publish the results, and no amateurs have the wherewithal to do so.

Is this a fair restatement of your argument?


No, Norm, it does not. What I said has no relationship to the Id, which is
part of the Freudian construct of the Psyche, which relates to human
personality. The Id is a Freudian construct that resides in the unconscious
part of the brain, but does not constitute it. From Wikepedia: "It should
be stressed that, even though the model is "structural" and makes reference
to an "apparatus", the id, ego and super-ego are functions of the mind
rather than parts of the brain and do not correspond to actual somatic
structures of the kind dealt with by neuroscience".


The "kind dealt with by neuroscience" is what I am talking about.


If you replace "Id" in your statement above with "unconscious"
(Merriam-Webster: 1 a: not knowing or perceiving : not aware ; 2 b (1): not
marked by conscious thought, sensation, or feeling ), and if you preface
that statement with "in the long term", then it can serve as a reasonably
accurate summary. In other words, "Although one could fail a standard DBT
between, say, 2 amplifiers, in the long term the unconscious can sense a
difference and it will have an effect on one's long term enjoyment".


Part of what audiophiles often struggle with is sensing this "unconscious
uneasiness (or easiness...it works both ways)" and encouraging it into
consciousness so they can identify it and discuss it. But the important
part is: by definition the unconscious is inchoate and unavailable, and thus
cannot operate either at the conscious level or in a forced short term time
frame. Both are conditions of a structured DBT like ABX.


Interestingly enough, these "effects on the conscious/unconscious edge" also
confound the "null" mathematics of the standard ABX test. Psychologists
report such phenomena reveal themselves as percentages of perception. In
other words, even though the difference is real, as it is lowered towards
the edge of perception, the reporting of the phenomenon is expressed as
"percentage of times perceived". This is a different probability than the
standard null hypothesis calculation where the existence of a real
difference is not known.


That was the basis of a critique of the original Clark article, published in
the JAES a year after that article appeared by a professor of psychology, who
showed how the probability of perception of a real albeit marginally
subliminal difference interacted with standard null probability calculations
to throw those calculations off slightly within the small sample sizes used
for ABX testing. The practical effect was great enough to sometimes cause a
one-sample deviation in what triggered the 95% significance standard.
Unfortunately this article was turgid and heavily laden with
mathematics, and while it drew responses from Clark and Nousaine and
others, it was apparent that few of them really understood the mathematics
or their implications. It is also clear that the mathematics and general
turgidity of the article failed to catch the interest of the audio
profession, who essentially ignore it to this day. (To avoid criticism, I
have copies of the original Clark article, this article, and others....but I
have mislaid them in a consolidation of my office into new space and cannot
lay my hands on them to include the cites here).



That article was about Type I and II errors, essentially. The author was Les Leventhal.

First, 'marginal effects' often seem to be readily discernible to 'golden ears';
again, in that instance, ALL that is required to test THEIR claim (not the
blanket question of 'can ANYONE hear a difference between these?') is to retest them under
blind conditions.

Second, one consequence of a 'marginal' effect is that to prove its existence, one would want
to raise the Type I margin to something like a *99%* significance standard.
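The arithmetic behind both points can be sketched directly from the binomial distribution. The numbers below (a 16-trial session, a marginal difference perceived on half the trials) are assumptions chosen for illustration, not figures from Leventhal's article:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n = 16  # trials in a short ABX session (assumed)

# Smallest number correct that keeps the false-positive (Type I) rate
# under 5% and under 1% when the listener is purely guessing (p = 0.5).
crit95 = min(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= 0.05)
crit99 = min(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= 0.01)

# Leventhal's point, roughly: suppose a real but marginal difference is
# perceived on only half the trials, with pure guessing on the rest, so
# p_correct = 0.5 + 0.5 * 0.5 = 0.75.  Power = P(reaching the criterion).
p_correct = 0.75
print("criterion @95%:", crit95, " power:", round(binom_tail(n, crit95, p_correct), 3))
print("criterion @99%:", crit99, " power:", round(binom_tail(n, crit99, p_correct), 3))
```

With 16 trials, moving from a 5% to a 1% Type I criterion raises the pass mark from 12 to 14 correct, and the power against the marginal (p = 0.75) listener drops accordingly: the Type I / Type II trade-off at small N.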


--
-S
A wise man, therefore, proportions his belief to the evidence. -- David Hume, "On Miracles"
(1748)



  #101   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default Ultrabit Platium Disk Treatment

Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
bob wrote:
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

I am sorry, but it *is* an impasse.


A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.


The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for
audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and
perceptions
arise from the subconscious).


If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.


"Devil's advocacy' has its place. But in the end, you need
to show examples that the disputed methodology actually caused
a problem. And for audio, you somehow have to do that without relying on
*sighted*
results as counter-evidence.


And yet some have ridiculed John Atkinson's anecdote about choosing an
amplifier after a dbt showed it to be no different from another that he
honored, and yet giving it up after two years because of continual
irritation with some aspects of its sound that arose due to long term
listening.


Yes, that would be me (one who 'ridiculed' that tale)

This of course will be dismissed as the result of "sighted
listening bias" with no proof that it is....


Sighted bias indubitably exists, and if Atkinson's trial were
to be submitted to a scientific review, it would fail because there was
no control for sighted bias. So why should *I* have to
provide proof that sighted bias was not a factor? You've
got it exactly backwards, Harry -- it's Atkinson who would be
required to provide proof that it *wasn't*.

the equivalent of attributing
the "elevated temperature" of the mouse to the treatment in the example
cited below rather than looking for other rational possibilities...including
possible flaws in the original test.


Sighted bias is certainly a rational possibility, and one that must
be eliminated, just as handling effect had to be eliminated in the
mouse study. The point being that both Atkinson's and the mouse
experiment were inadequately controlled.


The example used in an experimental design book I'm reviewing for
use in a class, was an experiment where mouse body temperature was
a measured variable in assessing a treatment.
The result of elevated temperature *seemed* to support
the hypothesis that the treatment had an effect, but it turned out that
just handling the mice during data collection got them excited enough to
raise their temperature. So the mice had to be acclimated to human
handling,
before good data could be obtained.


Which is why it is annoying that any evidence that disputes DBT (especially
ABX) results is dismissed by advocates without any further investigation,
and is simply written off as "sighted bias".


Controls exist because confounding factors exist. That's annoying, but
science deals with it every day. Maybe audiophiles should learn to.


--
-S
A wise man, therefore, proportions his belief to the evidence. -- David Hume, "On Miracles"
(1748)

  #102   Report Post  
Posted to rec.audio.high-end
khughes@nospam.net is offline
external usenet poster
 
Posts: 38
Default Ultrabit Platium Disk Treatment

Harry Lavo wrote:
wrote in message
...
Harry Lavo wrote:
"bob" wrote in message
...
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

snip
bob
I rest my case. I guess true "scientists" don't consider the social
sciences as "science".

Yes, they do. That's why they design discrimination tests for
discrimination, and they don't use *preference/acceptability* testing to
determine whether a *difference* exists.


I'm not talking about preference testing.


Of course you are! Preference/acceptability, same thing. You are
continually comparing what you hear to your internal acceptability
criteria, and your past listening experiences. You're trying to claim a
complete isolation of experience that just doesn't exist.

I'm talking about monadic ratings
evaluating the "music".


And drifting ever farther away from the topic in an effort to support
your personal dislike of blind testing. The whole topic is about
evaluating the difference that a particular tweak is said to make.
Similar to the DeMagic CD thread, where a *Difference* is claimed.

And then application of statistical techniques
between monadic samples to determine discrimination. A standard test
technique...


Albeit never one employed for determination of *whether* a difference
exists. Name one example where it is.

but one that masks the true object of the test and requires no
forced discrimination.

Bob, you don't do a test and then have to prove that it gets in the way.
In
psychological and social science design work, you design the test from
the
beginning so it *can't* get in the way, or at least appears most
logically
that it *shouldn't* get in the way. ABX fails miserably in this regard.

In your opinion. A micro-minority opinion based on available data.


No, standard scientific approach.


I agree. ABX *is* a variation on a standard scientific approach. Note
why my statement followed your "ABX fails miserably in this regard"
unfounded claim.

To
start with, the nature of the task is different (conscious discrimination
as
opposed to unconscious detection).

A false constraint *you* apply. Not one inherent in the test. Sit back
and enjoy "A" for as long as you like. Sit back and enjoy "B" for as
long as you like. Sit back and enjoy "X" for as long as you like. At
the end, rate each using whatever scale you want. Do a significant
number of trials, and "X" should be easily identifiable as either "A" or
"B". A statistically significant level of correlation with the
correct A/B is a positive. No correlation, or correlation with the
incorrect A/B indicates a negative result. Exactly as your "monadic"
testing would do.
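(The rate-and-correlate protocol Keith describes above reduces to a standard binomial significance check on the trial outcomes. A minimal sketch, added here for illustration and not taken from the thread; the trial counts are invented:)

```python
# Sketch of scoring an ABX-style listening test: count the trials where
# "X" was matched to the correct A/B, then ask how likely that many (or
# more) correct matches would be if the listener were purely guessing
# (p = 0.5 per trial).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value for `correct` hits in `trials`."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct identifications out of 16 trials.
p = abx_p_value(12, 16)  # ~0.038
```

By the usual convention, a p-value below 0.05 counts as evidence that the listener is matching X to the correct presentation at better than chance; chance-level scores leave the null standing.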


You clearly don't (or don't want to) understand what I am saying. The very
act of making a conscious discrimination is different than the way
objectionable audio traits normally make themselves known.


Well, despite the fact that you have no evidence to support your
assertions of how these 'objectionable' traits are 'normally' observed:
You clearly refuse to admit that NONE of your objections *must* apply
to an ABX or other double blind test. Exactly as I stated above.
Listen to each presentation under *ANY* conditions, for *ANY*
timeframe, with *ANY* source material you like - with the sole exception
of foreknowledge of the equipment/tweak being used. Only when you have
satisfied yourself that you have fully characterized the sound, music,
however you want to characterize it, do you move to the next
presentation. This is EXACTLY how you describe the process you use for
finding these 'objectionable' audio traits, lacking ONLY foreknowledge.

Now you'll claim it takes too long, right?


The listening conditions are different.

A false constraint *you* apply. Not one inherent in the test.


But as is almost always practiced in these tests that claim "no difference".


And that relates to the *method* in what way?

The musical context is usually different.

A false constraint *you* apply. Not one inherent in the test.


But as is almost always practiced in these tests that claim "no difference".


And, again, that relates to the *method* in what way? You, as always,
are free to perform tests that do not incorporate these objectionable
practices, while maintaining the controls, no?

The inability to train (because
you don't know in advance what you are listening for) is different.

A totally bogus argument. In virtually ALL cases where this particular
debate arises, the subjective "difference" has already been observed,
under non-controlled conditions. So, clearly, *you* (as in the claimant
of the difference) are fully aware of what you are listening for.
You've already identified its character when present, and its character
when it's lacking.


I'm talking about testing music reproducing equipment to determine how well
it is liked, and why. That is how most audiophiles eventually reach a
conclusion...how the gear "sits" with them.


And you're trying to claim that "how the gear "sits" with them" is not
determining a *preference*? Please. You don't get to have it both ways,
you can't, at the end of the test, say that you preferred "A" over "B" -
irrespective of the methodology used - without acknowledging that you
have, indeed, conducted a "preference" test.

The way to experimentally
duplicate that is simply to have people listen to music, and describe how
well/what they like about it using scalars after the fact. Statistics can
then determine if there is a difference, either overall or on specific
musical attributes, between two systems, identical except for the pieces of
equipment under test. This is a far more solid scientific test.


No, it isn't. Not for detecting difference, which is what this whole
thread is about.

snip

Typically, here, we're starting with the *claim* by some individual or
group that some tweak has produced a subjectively identifiable physical
result. A result for which there is no known scientific basis, and
often just the opposite. One need only test this individual/group using
an appropriate methodology to prove/disprove (with a margin of error)
the accuracy of the claim. There is NO need to try and extrapolate this
to the general population, so test population size is irrelevant.


You are putting a match to a strawman here, I'm afraid, since that is not
what I suggested.


The strawman is of your own construction Harry, since a "difference"
claim is what the thread is about.


snip

Nobody is talking about a test methodology for preference. We are talking
about an evaluative methodology relating to the musical reproduction
experience. Any discrimination is provided by statistical analysis, and
since the measurement is indirect, it is not contaminated by an intervening
variable, as in a direct DBT or especially ABX.


No, you are talking about a preference test, as previously noted.


This has never been done to validate
individual double-blind discriminatory testing, and until it is done, the
test is simply an unproven vehicle for purposes of detecting musical
differences.

OSAF. All scientific studies on subjective data, that I'm aware of, are
conducted blind unless brand recognition is part of the design. This
includes all drug and medical device testing, even when there are
numerous objective criteria that can be, and are measured concurrent
with the subjective evaluations. That blind tests don't confirm your
results in no way invalidates them.


How many scientists do you think predicted in advance that they would
discover that certain rhythmic patterns seem hardwired into the human brain?
But they have.


And they did that by long term casual listening without controls and
'forced' discriminations? If not, then what's the relevance?

You can't use accepted dogma to rule out possibilities that
only rigorous testing can validate. And this includes test techniques and
variations, particularly ones that on the face of it present intervening
variables, which are an absolute no-no in psychological testing.


No, not dogma, standard investigational rigor. You're proposing here
that all theories are equal until disproved by validated methods. Simply
not the case. In the titular case, for example, where physics and
engineering do not support any possible mode of efficacy, and where all
existing evidence has been gathered using, not only non-validated, but
totally uncontrolled testing, the hypothesis can be rejected out of hand
unless and until reliable data is collected. That is how science
actually works.

Keith Hughes

  #103 - Posted to rec.audio.high-end - outsor@city-net.com

"Give me a two hundred thousand dollar grant and two years to organize it,
and I'll either prove it or disprove it. All I have argued is that a
giant question remains."

After the 12-month cancer trials showed a previous impasse between two
treatments was broken by clear test results, another person said the
impasse remains. I speculate an unknown-to-us factor "x" that will show
superior results. Give me $200k and two years to show this new impasse
does not exist. We must wait to see if I can do it or not, but until it is
done the impasse remains and all cancer treatments are on an equal
footing.

"Sadly this arena reflects the same kind of sickness that pervades our
political arena today....people take way over simplified "one-line"
positions and rely on name-calling ("satisfy the pseudo-scientific
speculations of the ill-informed") to substitute for meaningful discourse."

Strawman and red herring assertions aside, I think this crack is beneath
you. You have proposed a speculative test and discussed it, and it was
found wanting on many grounds of logic and scientific procedure. If there
is a discussion ender, it is not because you have not had your say but
because your notion was found wanting and there is no place to turn.

  #104 - Posted to rec.audio.high-end - Harry Lavo

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


The reason is, almost all claims of "science" with regard
to audio components revolve around ABX testing, or at
least double-blind testing of some sort.


Not at all. There is considerable science with regard to audio components
relating to things like Ohms and Kirchoff's law, Fourier analysis, etc.


Those are electrical and physical measurements, Arny. Not tests relating to
the perception of music. You seem to confuse the two.


And while these
tests are highly appropriate for audiometric testing,
they violate the cardinal principle of psychological test
design...they alter the variable under test.


That is a hypothesis, not a generally accepted fact.


The fact that they alter the variable under test is not disputed.....few
people, even advocates, argue that DBT testing measures the same thing as
listening to music under relaxed conditions at home. Whether or not this is
a problem is what is disputed, with advocates of the test techniques arguing
that it doesn't.

.namely, listening for pleasure and enjoyment (where differences
and long term judgments and perceptions arise from the
sub-conscience).


That is another hypothesis, and again not a generally accepted fact.


Not accepted by test devotees, perhaps, but widely accepted among
audiophiles in general as reflecting how audio issues come to be noted.

Furthermore, if proven this hypothesis does not necessarily prove the
first
hypothesis, above.


Nor did I say it "proved" anything....I said it was prima facie evidence of
a violation of good test design according to social science and
psychological testing standards.


Good science, as opposed to handy
science, finds a way to design a test which measures the
effect indirectly, in such circumstances.


The usual situation is where the people who make hypotheses like those
above address their own hypotheses with their own applications of
science.


And the factual basis for this assertion is.......?


However, certain aspects of high end audio seem to be technologies of
hypothesis without proof or even reliable evidence.


Which is your riff against "snake oil". This is relevant to the current
discussion how.....??

  #105 - Posted to rec.audio.high-end - Harry Lavo

"Arny Krueger" wrote in message
...
wrote in message


The reason is, almost all claims of "science" with regard
to audio components revolve around ABX testing, or at
least double-blind testing of some sort.


This is not true.

There are other means to resolve many of the controversies at hand,
without
resorting to DBTs.

For example, there are controversies over the audibility of mystical
properties
of wire wherein there are no known measurable differences. The claims of
science are not based on subjective tests at all.

For example, there are controversies over the presence of audible
differences where the nature of the differences is known to be less than
audible thresholds for the changes that are claimed and can be measured.
The
claims of science are thus not based on subjective tests.


"Almost all" certainly allows for wire exceptions, Arny. Not so much for
active components.



  #106 - Posted to rec.audio.high-end - Arny Krueger

wrote in message


I think not. There is a body of test results with
routinely similar results showing that reported
subjective perception events toggle on and off as
the test is blind and sighted. That is the current state
of things and no other results on an equal footing nor
at an impasse are there to be considered.


"But nobody has corroborated that the test itself is not
the reason the sighted differences do not hold.


Sure they have.

The sighted events are often contrary to the known laws of physics which
seem to hold in *all* other circumstances.

That is
the crux of the matter....until the test (which is
interventionist in nature) can be proven to provide
identical results to much more expensive and
sophisticated testing that is not interventionist, then
the test itself has to be considered potentially suspect.
THAT is just good science."


ABX fits the description of the more expensive and sophisticated test. For
example John Atkinson is on the public record as saying that a major reason
why Stereophile doesn't do DBTs is that they are too expensive for his
magazine to afford.

But nobody has corroborated that the tests are the reason
that the esp powers fail to appear. Otherwise known as
special pleading.


It is well known that there is plenty of reliable corroboration for the
failure of many sighted tests. Sighted tests are well-known to provide
obviously flawed evidence that something happened when in fact nothing
happened.

All tests, from a simple cloth over the connections to
complex ABX tests, have similar results.


Agreed.

Here is the test that reveals all and has been done.


A group of people are given sighted access to two bits of
audio gear and are told a switch is made and asked to
provide subjective results of differences produced
accordingly. As each switch is announced they provide
clearly different reports of perception events
accordingly.


But no actual electrical switch was made and the clearly
different reports were made.


Exactly.

No impasse remains as there was in this case a test of
the impact being sighted has upon reported perception
events.


We are then free to make the same special
pleading that sighted testing need be first validated,
that is only good science.


Most sighted tests fail to be tests on the grounds that they do not involve
comparison to a realizable standard. So the phrase "sighted testing" is
often an oxymoron. This is the main reason I use the phrase "sighted
evaluation".

  #107 - Posted to rec.audio.high-end - Harry Lavo

"bob" wrote in message
...
On Jul 30, 7:37 pm, "Harry Lavo" wrote:

I'm not talking about preference testing. I'm talking about monadic
ratings
evaluating the "music". And then application of statistical techniques
between monadic samples to determine discrimination. A standard test
technique...


No, it's not--not for determining whether there's a perceptual
difference between two stimuli. The kind of test you describe here is
only used when the objects under test are already known to be
perceptibly different. So it's totally inappropriate for the purposes
of challenging ABX results.


I'm sorry. If there is no difference, the statistics will show no
difference. If there is a difference, the statistics will show a
difference. It's that simple. And to be valid, then, the ABX test must
show the same under both circumstances...but I'm willing to concede it will
show no difference where there is no difference, for reasons of economy.

The point is... the validating test must get as close to the way people
ordinarily listen to music as it is possible to get....then the statistics
can differentiate. Thus the "numbers".

but one that masks the true object of the test and requires no
forced discrimination.


A silly, made-up "requirement."


Again, you seem only to want to "will away" the potential problem.

Audiophiles discriminate all the time.
Every claim that two components sound different is an act of
discrimination. So why can't they do that blind?


Most audiophiles don't claim that. They claim they "like" one better than
others, particularly after longer-term listening. There may be a difference
implied, but it is not necessarily conscious, and it is not necessarily (or
in my opinion predominantly) forced. Once a difference arises from the
subconscious in long term listening, it may be identifiable again by the
respondent under similar conditions....that doesn't mean it can be
identified under forced, direct comparison.

snip

I'm talking about testing music reproducing equipment to determine how
well
it is liked, and why. That is how most audiophiles eventually reach a
conclusion...how the gear "sits" with them. The way to experimentally
duplicate that is simply to have people listen to music, and describe how
well/what they like about it using scalars after the fact. Statistics can
then determine if there is a difference, either overall or on specific
musical attributes, between two systems, identical except for the pieces
of
equipment under test. This is a far more solid scientific test.


OSAF. You can't claim something is "a far more solid scientific test"
when it's never even been tried.


In the first place, this kind of testing is used all the time for food and other
sensory items. It is not some wild-eyed new scheme. Secondly, it meets the
standards of a quality psychological/social science test in that it measures
directly what is important....musical satisfaction and its components. Not
on a forced choice...instead it relies on statistical difference analysis and
large numbers to illuminate differences, both overall and in important
subcomponents of the musical experience. Thus it's possible two components
could be enjoyed at an identical level (and thus neither preferred) but
differ and be preferred on different attributes, e.g. one may be rated higher
for rhythm, and another for the sweetness of the strings. All this will
show up statistically. If both components are rated statistically
undifferentiated on all attributes, then clearly they are not different. So
let's look at how such a test would provide "validation" results:

* the abx test shows no difference, and the validation test shows no
difference on any attribute....abx at least partially validated (it catches
a true null)
* the abx test shows a difference, and the validation test shows significant
differences on at least some attributes...abx is pretty well validated (it
catches true differences even under forced discrimination)
* the abx test shows no difference, and the validation test shows
significant differences on at least some attributes...abx test obviously
lacks sensitivity to some musical nuances when forced discrimination is used
* the abx test shows a difference, and the validation test shows no
significant differences on any attribute...abx test may be more sensitive
due to forced discrimination
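(Reduced to its statistical core, the monadic scheme Harry describes is a between-groups comparison of mean ratings. A minimal sketch, added here for illustration and not taken from the thread; the group sizes, rating scale, and data are invented, and a normal approximation stands in for the t-distribution, which is reasonable at roughly 300 listeners per cell:)

```python
# Sketch of a "monadic" comparison: two independent listener groups each
# rate one system; a two-sample z-test asks whether the mean ratings
# differ by more than sampling noise would predict.
from math import erf, sqrt
from statistics import mean, stdev

def two_sample_z(a: list, b: list) -> tuple:
    """z statistic and two-sided p-value for a difference in group means."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Example: 300 ratings per group on a 1-9 scale (fabricated pattern data).
group_a = [1, 2, 3, 4, 5] * 60   # 300 ratings of system A (mean 3)
group_b = [2, 3, 4, 5, 6] * 60   # 300 ratings of system B (mean 4)
z, p = two_sample_z(group_a, group_b)
# here |z| is large and p is tiny: the ratings differ beyond chance
```

If ratings on any attribute separate beyond chance, a difference has been "discriminated" statistically, which is the indirect detection the monadic argument turns on.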

snip

I didn't say test among the unwashed masses....I said numbers. As in
three
hundred audiophiles get exposed to this variable, and three hundred get
exposed to that one. As in three hundred audiophiles who prefer vinyl get
exposed to this variable, and three hundred audiophiles who prefer vinyl
get
exposed to that variable, etc. Of course the subjects have to be screened
to constitute the appropriate audience, but there is no substitute for
*numbers*. Now do you see why a validation test is not simple nor
inexpensive?


No, the reason it's not simple or inexpensive is that you have to make
sure it'll never be carried out.


So you would rather attack my motives (moderators, where are you?) than
simply acknowledge that such a test might be of value but is big and
expensive? And this despite the fact that such big and expensive tests are
standard fare in some consumer fields.


snip

Nobody is talking about a test methodology for preference. We are talking
about an evaluative methodology relating to the musical reproduction
experience. Any discrimination is provided by statistical analysis, and
since the measurement is indirect, it is not contaminated by an
intervening
variable, as in a direct DBT or especially ABX.


Again, you have no evidence that there *is* an intervening variable.
That would be the first step, Harry. Get busy.


The intervening variable is inherent in the test design itself. People
don't listen to music through a contrived ABX test making same/different
choices. They listen to music. If they are evaluating equipment, they
might make notes afterward.

  #108 - Posted to rec.audio.high-end - Arny Krueger

"Harry Lavo" wrote in message

"bob" wrote in message
...
On Jul 30, 9:00 am, "Harry Lavo"
wrote:

Bob, you don't do a test and then have to prove that it
gets in the way.


Of course not. You do the test, and then you wait for
somebody else to prove that there's something wrong with
the test. That's how science works in the real world,
Harry, and that's what you still don't get. You're the
one making the assertion here. You're insisting that ABX
tests are flawed in some way. The burden of proof is on
you, Harry, not on the scientific community, which has
no obligation to satisfy the pseudo-scientific
speculations of the ill-informed.


No, Bob....I am arguing that they are an inappropriate
test for dealing with long term satisfaction with audio
components in the reproduction of music. The tests are
fine for listening to different distortion levels, volume
levels, frequency response characteristics, etc.
(audiometric measurements). Those are not music.


In fact different distortion levels, volume levels, frequency response
characteristics aren't necessarily just audiometric measurements. They are
the things that audio equipment does when it fails to be sonically
transparent. They are the means by which the ear distinguishes bad equipment
from good equipment.

Music with different distortion levels, volume levels, frequency response
characteristics, etc., is still music.

I've tried to show on a logical basis how it may do this
and why the results do not seem to square with a lot of
reported experience.


So have many people. The preponderance of the reliable evidence (thousands
of tests) rests with the people who say that knowing which piece
of equipment you are listening to during a listening test is at best a
distraction. It is merely the knowledge that one piece of equipment is
prettier or more expensive, or recommended by some reviewer that often
explains the perception that it sounds better.

Moreover, I have spent considerable
time on this forum (more in the past than present) laying
out test approaches that would prove or disprove what I
am hypothesizing....but unfortunately neither I nor you
have the resources to undertake that kind of testing.


One has to marvel at someone who characterizes listening while attached to
something like an EKG machine, or inside an NMR machine, as being the more
natural listening experience. Don't some of those tests mention entering
someone's veins with a probe?

  #109 - Posted to rec.audio.high-end - Arny Krueger

"Harry Lavo" wrote in message


This is the oldest bugaboo in the discussion. Obviously
people can be fooled. Obviously there is such a thing as
sighted bias. Obviously double blind testing can get rid
of sighted bias. So can blinded monadic testing.


Interesting factoid.

The phrase "blinded monadic" appears on the entire internet just once,
according to google. No definition accompanies it.

Examining Usenet, we find that virtually every instance of this usage traces
back to Harry Lavo. So he must be something like the sole authority on what
it means.

The best explanation I can find of it is as follows:

"Entirely monadic testing simply places the sample, and a control is placed
among a similar sample.
Typically 200-300 users for each sample. The products are packaged in plain
white boxes and prepared at home by a tester, and then rated. "

Note that there are two different samples, a test sample and a control. So
400-600 pieces of equipment and testers would seem to be required.

To accomplish this with audio gear we would need to obtain 400-600 identical
pieces of gear, obliterate any product or brand identity from it, and ship
them to 400-600 people along with a rating questionnaire. In practice there
would be a number of non-responders, so maybe 1,000 pieces of gear may have
to be sent out to get 400-600 responses.

This kind of testing has thus far apparently been applied to products like
shampoo, that can be packaged and sent out for maybe a dollar a sample. A
similar test of an audio cable might involve $30,000 or $300,000 worth of
products.

A rating questionnaire is not a pure go/no-go entity, so there would be some
personal judgment involved with scoring the questionnaires that were returned.
IOW, once we go through with this elaborate and expensive procedure, we
wouldn't be sure what we had.

Note that the far less complex and expensive ABX test is said by John
Atkinson to be too costly for his magazine to implement.

I'm familiar with the phrase "damning with faint praise". This appears to be
damning of ABX by demanding that it be validated at a tremendous expense by
a methodology that is if anything *more* susceptible to quibbling.

  #110 - Posted to rec.audio.high-end - No Name

"Harry Lavo" wrote in message
...
wrote in message
...
"Harry Lavo" wrote in message
...
I am sorry, but it *is* an impasse.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing
of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure
and enjoyment (where differences and long term judgements and
perceptions
arise from the sub-conscience). Good science, as opposed to handy
science,
finds a way to design a test which measures the effect indirectly, in
such
circumstances. At the very least, it designs and executes at least one
such
test to validate that in fact the "shorthand" test does what it claims
to
do
in measuring the same thing with equally valid results. In the case of
audio, no organization has a financial interest in such testing, so it
has
not been done. Its complexity and scope are beyond the logistical and
financial means of individuals.


Let me see if I understand you: Although one could fail a standard DBT
between, say, 2 amplifiers, the Id can nevertheless sense a difference
and
it will have an effect on one's long term enjoyment. Furthermore, it is
possible to design a suitable blind test that would confirm the
difference,
only no commercial organization has any reason to run such a test and
publish the results, and no amateurs have the wherewithal to do so.

Is this a fair restatement of your argument?


No, Norm, it does not.


Fair enough. Then let's change the first sentence to read, "Although one
could fail a standard DBT between, say, 2 amplifiers, in the long term the
unconscious mind can sense a difference, and it will have an effect on one's
enjoyment."

How's that? Of course the issue is moot, since I'm going to concentrate on
the second sentence. Since you claim that manufacturers will not run a
valid test, and amateurs can't, you must have SOME idea of a test
methodology that you would find valid, irrespective of the difficulty and
cost. Please describe such a test.

Cheers,

Norm



  #112 - Posted to rec.audio.high-end - Sonnova

On Thu, 31 Jul 2008 06:03:33 -0700, Harry Lavo wrote
(in article ):

wrote in message
...
"snip


I think not. There is a body of test results with routinely similar
results showing that reported subjective perception events toggle on
and off as the test is blind and sighted. That is the current state of
things and no other results on an equal footing nor at an impasse are
there to be considered.


"But nobody has corraborated that the test itself is not the reason the
sighted differences do not hold. That is the crux of the matter....until
the test (which is interventionist in nature) can be proven to provide
identical results to much more expensive and sophisticated testing that is
not interventionist, then the test itself has to be considered potentially
suspect. THAT is just good science."

But nobody has corroborated that the tests are the reason that the esp
powers fail to appear. Otherwise known as special pleading.

All tests, from a simple cloth over the connections to complex abx
tests, have similar results.

Here is the test that reveals all and has been done.

A group of people are given sighted access to two bits of audio gear and
are told a switch is made and asked to provide subjective results of
differences produced accordingly. As each switch is announced they
provide clearly different reports of perception events accordingly.

But no actual electrical switch was made and the clearly different reports
were made.

No impasse remains as there was in this case a test of the impact being
sighted has upon reported perception events. We are then free to make the
same special pleading that sighted testing need be first validated, that
is only good science.

Until the speculative cancer method is first tested, the two existing
methods which provide clear results are not corroborated and an impasse
exists.

I think not.


This is the oldest bugaboo in the discussion. Obviously people can be
fooled. Obviously there is such a thing as sighted bias. Obviously double
blind testing can get rid of sighted bias. So can blinded monadic testing.
That is not what is being argued here.

What is being argued is whether or not ABX same/difference testing in
particular (which has become the industry standard) forces a consciousness
that potentially obliterates musical nuances arising in part from the
subconscious that are important long term, and which create or destroy
audiophile listening satisfaction.


Frankly, I don't see how it could. After all, an ABX test allows the tester
to listen to each sample at will and at length. The only difference between
so-called long-term listening and ABX is that in the latter, the listener
doesn't, ostensibly, know which of the variables he's listening to. All else
should be the same. Listen for hours if you want, or days or weeks, and then
switch to the other samples and listen to them for minutes, hours, days or
weeks. Since there is no time limit, the test really isn't all that
different from traditional long-term listening tests. The only difference is
that the reviewer doesn't know which of the units under test he is
auditioning, and that's the point.
  #113 - Posted to rec.audio.high-end - outsor@city-net.com

"What is being argued is whether or not ABX same/difference testing in
particular (which has become the industry standard) forces a consciousness
that potentially obliterates musical nuances arising in part from the
subconscious that are important long term, and which create or destroy
audiophile listening satisfaction. That is what an "intervening variable"
is in psychological parlance....a test technique that potentially changes
the "what" of what is being tested."

So insist failed ESP claimants, again and again: the tests to show ESP
powers distort normal ESP abilities, and intervening variables not even
imagined are likely being overlooked that cause the failures.

You want intervening variables? I have my warm rendered chicken fat
satisfaction factor also. Until and unless all sighted testing is done
with the left hand immersed up to the wrist in warm chicken fat, we will
never know if this makes a vital difference that will reveal the true
nature of the sonic event. Did I mention the blue ball cap factor? One
will never know if failing to wear a blue ball cap makes a sonic
difference that will at last reveal the true depth of the sighted testing
results.

The audio dealer who lived daily for long term listening in his store and
in his home, and who said he could spot the unique sonic factors of the
Nelson Pass amp, found he could not in a simple cloth-over-connections
test. I suggest the long-term-developed, unknown-to-us subconscious
listening pleasure factors were all too well given full potential in this
case, and it still failed. He did not use warm chicken fat nor a blue ball
cap, so his failure is mostly explained nonetheless.

Or could it simply be that the reported perception event was due to
factors not existing in the signal as it entered his ears? In science,
an explanation of such parsimony is chosen first.

Blinded testing is done in many parts of science involving human senses,
conscious and nonconscious. Your special pleading that hearing is an
exception, supported by nothing but assertion, falls on deaf ears, so to
speak.

There is no impasse and sighted and blind testing do not stand on an equal
footing until the above can be shown to be elsewise.
  #114   Report Post  
Posted to rec.audio.high-end
Harry Lavo

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"bob" wrote in message
...
On Jul 30, 9:00 am, "Harry Lavo"
wrote:

Bob, you don't do a test and then have to prove that it
gets in the way.

Of course not. You do the test, and then you wait for
somebody else to prove that there's something wrong with
the test. That's how science works in the real world,
Harry, and that's what you still don't get. You're the
one making the assertion here. You're insisting that ABX
tests are flawed in some way. The burden of proof is on
you, Harry, not on the scientific community, which has
no obligation to satisfy the pseudo-scientific
speculations of the ill-informed.


No, Bob....I am arguing that they are an inappropriate
test for dealing with long term satisfaction with audio
components in the reproduction of music. The tests are
fine for listening to different distortion levels, volume
levels, frequency response characteristics, etc.
(audiometric measurements). Those are not music.


In fact different distortion levels, volume levels, frequency response
characteristics aren't necessarily just audiometric measurements. They are
the things that audio equipment does when it fails to be sonically
transparent. They are the means by which the ear distinguishes bad
equipment
from good equipment.

Music with different distortion levels, volume levels, frequency response
characteristics, etc., is still music.


Yes it is part of music, but not necessarily how people hear things when
listening to music.
And if they can identify it as distortion, then they can also do so in a
monadic test.


I've tried to show on a logical basis how it may do this
and why the results do not seem to square with a lot of
reported experience.


So have many people. The preponderance of the reliable evidence (thousands
of tests) rests with the people who say that knowing which piece of
equipment you are listening to during a listening test is at best a
distraction. It is merely the knowledge that one piece of equipment is
prettier or more expensive, or recommended by some reviewer, that often
explains the perception that it sounds better.


Please pay closer attention. I am not arguing about blind testing at all.

Moreover, I have spent considerable
time on this forum (more in the past than present) laying
out test approaches that would prove or disprove what I
am hypothesizing....but unfortunately neither I nor you
have the resources to undertake that kind of testing.


One has to marvel at someone who characterizes listening while attached to
something like an EKG machine, or inside an NMR machine, as being the more
natural listening experience. Don't some of those tests mention entering
someone's veins with a probe?


Nor am I talking about hooking people to machines, although I can see the
value of that for some research purposes. I'm talking about a test where
people listen to music and then fill out a rating questionnaire about their
musical listening experience. They know they are evaluating a system; but
they simply don't know what part of the system is being measured, or
alternatively, they know what part of the system is being measured but have
no idea what the component is. That's a blind monadic. And at all times
the focus is on the music, not the system.

  #115   Report Post  
Posted to rec.audio.high-end
Harry Lavo

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
wrote in message
...
"Harry Lavo" wrote in message
...
I am sorry, but it *is* an impasse.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure and enjoyment (where differences and long term judgements and
perceptions arise from the sub-conscious). Good science, as opposed to
handy science, finds a way to design a test which measures the effect
indirectly, in such circumstances. At the very least, it designs and
executes at least one such test to validate that in fact the "shorthand"
test does what it claims to do in measuring the same thing with equally
valid results. In the case of audio, no organization has a financial
interest in such testing, so it has not been done. Its complexity and
scope are beyond the logistical and financial means of individuals.

Let me see if I understand you: Although one could fail a standard DBT
between, say, 2 amplifiers, the Id can nevertheless sense a difference and
it will have an effect on one's long term enjoyment. Furthermore, it is
possible to design a suitable blind test that would confirm the difference,
only no commercial organization has any reason to run such a test and
publish the results, and no amateurs have the wherewithal to do so.

Is this a fair restatement of your argument?


No, Norm, it does not. What I said has no relationship to the Id, which is
part of the Freudian construct of the Psyche, which relates to human
personality. The Id is a Freudian construct that resides in the
unconscious part of the brain, but does not constitute it. From Wikipedia:
"It should be stressed that, even though the model is "structural" and
makes reference to an "apparatus", the id, ego and super-ego are functions
of the mind rather than parts of the brain and do not correspond to actual
somatic structures of the kind dealt with by neuroscience".


The "kind dealt with by neuroscience" is what I am talking about.


If you replace "Id" in your statement above with "unconscious"
(Merriam-Webster: 1 a: not knowing or perceiving : not aware ; 2 b (1):
not marked by conscious thought, sensation, or feeling), and if you
preface that statement with "in the long term", then it can serve as a
reasonably accurate summary. In other words, "Although one could fail a
standard DBT between, say, 2 amplifiers, in the long term the unconscious
can sense a difference and it will have an effect on one's long term
enjoyment".


Part of what audiophiles often struggle with is sensing this "unconscious
uneasiness (or easiness...it works both ways)" and encouraging it into
consciousness so they can identify it and discuss it. But the important
part is: by definition the unconscious is inchoate and unavailable, and
thus cannot operate either at the conscious level or in a forced short
term time frame. Both are conditions of a structured DBT like ABX.


Interestingly enough, these "effects on the conscious/unconscious edge"
also confound the "null" mathematics of the standard ABX test.
Psychologists report such phenomena reveal themselves as percentages of
perception. In other words, even though the difference is real, as it is
lowered towards the edge of perception, the reporting of the phenomenon is
expressed as "percentage of times perceived". This is a different
probability than the standard null hypothesis calculation, where the
existence of a real difference is not known.


That was the basis of a critique of the original Clark article, published
in the JAES a year after that article appeared by a professor of
psychology, which showed how the probability of perception of a real
albeit marginally subliminal difference interacted with standard null
probability calculations to throw those calculations off slightly within
the small sample sizes used for ABX testing. The practical effect was
great enough to sometimes cause a one-sample deviation in what triggered
the 95% significance standard. Unfortunately this article was turgid and
heavily laden with mathematics, and while it drew responses from Clark and
Nousaine and others, it was apparent that few of them really understood
the mathematics or its implications. It is also clear that the mathematics
and general turgidity of the article failed to catch the interest of the
audio profession, which essentially ignores it to this day. (To avoid
criticism: I have copies of the original Clark article, this article, and
others...but I have mislaid them in a consolidation of my office into new
space and cannot lay my hands on them to include the cites here.)



That article was about Type I and II errors, essentially. The author was
Les Leventhal.

First, often 'marginal effects' seem to be readily discernible to 'golden
ears'; again, in that instance, ALL that is required to test THEIR claim
(not the blanket question of 'can ANYONE hear a difference between
these?') is to retest them under blind conditions.

Second, one consequence of a 'marginal' effect is that to prove its
existence, one would want to raise the Type I margin to something like a
*99%* significance standard.


It certainly dealt with Type I and Type II errors in its first half, but in
its second half it showed how the probabilities associated with a "real"
but slight difference, one that manifested itself by showing up 60% of the
time, interacted with the null hypothesis statistics of a 17-person test to
change the probability of exceeding the 95% significance level by a
non-trivial amount, with the interaction not disappearing until
approximately 100 trials were reached. Tough sledding to get through it,
but that was the final outcome of the article.
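The interaction described above can be sketched numerically. The following is a minimal illustration, not Leventhal's actual calculation: it assumes a listener who answers correctly with probability 0.6 on each trial (one reading of "showing up 60% of the time") and computes how often such a listener would clear the conventional 95% significance hurdle at various trial counts.

```python
from math import comb

def upper_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_value(n, alpha=0.05):
    """Smallest number of correct trials whose probability under
    pure guessing (p = 0.5) is at most alpha."""
    k = n
    while k > 0 and upper_tail(n, k - 1, 0.5) <= alpha:
        k -= 1
    return k

def power(n, p_real, alpha=0.05):
    """Probability that a listener who is right with probability
    p_real actually reaches the alpha threshold in n trials."""
    return upper_tail(n, critical_value(n, alpha), p_real)

for n in (16, 50, 100):
    print(n, critical_value(n), round(power(n, 0.6), 3))
```

Under this assumed model, a listener with a genuine 60% hit rate passes a 16-trial test at the 95% level well under half the time, and the shortfall only fades as trial counts approach 100, which is broadly consistent with the conclusion attributed to the article.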



  #116   Report Post  
Posted to rec.audio.high-end
Harry Lavo

"Steven Sullivan" wrote in message
...
Harry Lavo wrote:
"Steven Sullivan" wrote in message
...
bob wrote:
On Jul 29, 6:44 pm, "Harry Lavo" wrote:

I am sorry, but it *is* an impasse.

A "debate" in which one side has data, and the other side has no data,
is not properly termed an "impasse." It is properly termed a settled
question.

The reason is, almost all claims of "science" with regard to audio
components revolve around ABX testing, or at least double-blind testing of
some sort. And while these tests are highly appropriate for audiometric
testing, they violate the cardinal principle of psychological test
design...they alter the variable under test...namely, listening for
pleasure and enjoyment (where differences and long term judgements and
perceptions arise from the sub-conscious).

If this were true, it would be a relevant consideration. But there is
absolutely no evidence that it is true. You cannot simply declare that
a test whose results you don't like has a flaw, Harry. You have to
demonstrate that flaw somehow. You never have.

"Devil's advocacy" has its place. But in the end, you need to show
examples that the disputed methodology actually caused a problem. And for
audio, you somehow have to do that without relying on *sighted* results as
counter-evidence.


And yet some have ridiculed John Atkinson's anecdote about choosing an
amplifier after a DBT showed it to be no different from another that he
honored, and yet giving it up after two years because of continual
irritation with some aspects of its sound that arose due to long term
listening.


Yes, that would be me (one who 'rediculed' that tale)


I seem to recall you being in that group.

This of course will be dismissed as the result of "sighted
listening bias" with no proof that it is....


Sighted bias indubitably exists, and if Atkinson's trial were to be
submitted to a scientific review, it would fail because there was no
control for sighted bias. So why should *I* have to provide proof that
sighted bias was not a factor? You've got it exactly backwards, Harry --
it's Atkinson who would be required to provide proof that it *wasn't*.


I think you miss the point. I didn't say John's "revelation" proved
anything. What I said is that there are enough anecdotes like that that if
you are designing a test to control variables, eliminating sighted bias is
fine, but how about also eliminating the intervention of an alien mode of
listening and even evaluating? That at least should cause pause and
consideration...but the ABX folks seem to have run right by it rather than
investigating the possibility that the test itself is not the best
approach because of that. That would get a D if not an F on a PhD thesis.


the equivalence of attributing
the "elevated temperature" of the mouse to the treatment in the example
cited below rather than looking for other rational
possibilities...including
possible flaws in the original test.


Sighted bias is certainly a rational possibility, and one that must
be eliminated, just as handling effect had to be eliminated in the
mouse study. The point being that both Atkinson's and the mouse
experiment were inadequately controlled.


Absolutely. No problem with eliminating sighted bias. So long as the test
to do that does not create other equally troublesome issues. And in this
case, that has not been seriously addressed by the advocates. And for
those of us who do try to address it conceptually, essentially we face
diatribe and name-calling rather than any kind of constructive
consideration of how the issue could be addressed.

The example used in an experimental design book I'm reviewing for
use in a class, was an experiment where mouse body temperature was
a measured variable in assessing a treatment.
The result of elevated temperature *seemed* to support
the hypothesis that the treatment had an effect, but it turned out that
just handling the mice during data collection got them excited enough
to
raise their temperature. So the mice had to be acclimated to human
handling,
before good data could be obtained.


Which is why it is annoying that any evidence that disputes DBT
(especially
ABX) results is dismissed by advocates without any further investigation,
and is simply written off as "sighted bias".


Controls exist because confounding factors exist. That's annoying, but
science deals with it every day. Maybe audiophiles should learn to.


Which is why I have continually been annoying to ABX advocates by pointing
out that there is a potentially serious flaw in the test that hasn't been
adequately worked through and either proven or disproven.


  #117   Report Post  
Posted to rec.audio.high-end
Harry Lavo

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


This is the oldest bugaboo in the discussion. Obviously
people can be fooled. Obviously there is such a thing as
sighted bias. Obviously double blind testing can get rid
of sighted bias. So can blinded monadic testing.


Interesting factoid.

The phrase "blinded monadic" appears on the entire internet just once,
according to google. No definition accompanies it.

Examining Usenet, we find that virtually every instance of this usage
traces
back to Harry Lavo. So he must be something like the sole authority on
what
it means.


If there is only one mention, then "every instance" means I am the one who
said it. Right, Arny?

Yep, I did...in an extended several weeks of discussion about two years ago,
I believe.

Monadic testing is one of the common forms of research used among consumer
goods manufacturers. I also discussed proto-monadic testing, order bias,
central location testing, and many other aspects of the sophisticated
research practiced by these companies. Kind of a PhD factory for applied
sociological and psychological research, in which I was privileged to work
for many years.

The best explanation I can find of it is as follows:

"Entirely monadic testing simply places the sample, and a control is
placed
among a similar sample.
Typically 200-300 users for each sample. The products are packaged in
plain
white boxes and prepared at home by a tester, and then rated. "


I was describing how one type of monadic testing was done by a food
company...an approach called monadic home use testing. It is of course
blind because of the "white box" approach so even if the control is a
commercial product its identity is masked.

Note that there are two different samples, a test sample and a control. So
400-600 pieces of equipment and testers would seem to be required.

To accomplish this with audio gear we would need to obtain 400-600
identical
pieces of gear, obliterate any product or brand identity from it, and ship
them to 400-600 people along with a rating questionnaire. In practice
there
would be a number of non-responders, so maybe 1,000 pieces of gear may
have
to sent out to get 400-600 responses.


That's why I never proposed such a test for audio gear. But I did propose a
different kind of monadic testing, and in the ensuing discussion some of us
talked about various other possibilities.

This kind of testing has thus far apparently been applied to products like
shampoo, that can be packaged and sent out for maybe a dollar a sample. A
similar test of an audio cable might involve $30,000 or $300,000 worth of
products.


For food products and other consumer goods items, the cost of samples is
one of the smaller costs. The expertise to design the tests and
questionnaires, placing the products (screening subjects, delivering
product, collecting responses), evaluating replies statistically, and all
the other aspects of professional market research are the biggest cost.
And if another form of testing using central locations is used, there also
are rental costs, staffing costs, and other major costs.

A rating questionnaire is not a pure go/no-go entity, so there would be
some personal judgment involved with scoring the questionnaires that were
returned. IOW, once we go through with this elaborate and expensive
procedure, we wouldn't be sure what we had.


This simply is wrong. The art of designing and pre-testing questionnaires
is a highly developed skill, and the resultant ratings are hard-coded and
numerical and subject to rigorous statistical analysis.
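As a sketch of what such statistical analysis of monadic ratings might look like (the cell sizes, the 7-point scale, and the normal approximation are illustrative assumptions, not anything specified in the thread), here is a comparison of mean ratings between two independent blind cells:

```python
from statistics import mean, stdev
from math import sqrt, erf

def welch_z(a, b):
    """Two-sample comparison of mean ratings. With monadic cells of
    200-300 respondents, the t statistic is close to normal, so a
    z approximation is adequate."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical 7-point ratings from two blind monadic cells of 250 each.
cell_test = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5] * 25
cell_ctrl = [5, 5, 4, 6, 5, 5, 4, 4, 6, 5] * 25
z = welch_z(cell_test, cell_ctrl)
print(round(z, 2), round(two_sided_p(z), 4))
```

The point of the monadic design is that each respondent rates only the one product in front of them; the comparison happens in the analysis, not in the listening room, which is the distinction being drawn here between monadic rating and ABX comparison.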

Note that the far less complex and expensive ABX test is said by John
Atkinson to be too costly for his magazine to implement.


I'm familiar with the phrase "damning with faint praise". This appears to
be damning of ABX by demanding that it be validated at tremendous expense
by a methodology that is, if anything, *more* susceptible to quibbling.


No, I'm not damning ABX with praise, either loud or faint. I'm pointing
out that a major potential flaw exists that, if validated, would be an
alternative explanation for why some apparent differences do not surface
in ABX testing, other than just sighted bias. And that potential flaw has
not been wrestled to the ground one way or another, probably for a variety
of reasons. I've also pointed out that the type of testing necessary to do
so is expensive, which is one of the reasons it probably won't be done
unless it is taken on as a major intellectual exercise by a university
somewhere.


  #118   Report Post  
Posted to rec.audio.high-end
Arny Krueger

"Harry Lavo" wrote in message

"bob" wrote in message
...
On Jul 30, 7:37 pm, "Harry Lavo"
wrote:


I'm sorry. If there is no difference, the statistics
will show no difference.


Not necessarily. It is very easy to contrive a test that is inherently
insensitive. If a test is insensitive, then differences that are present are
not detected.

I find it terribly amusing that one who is so critical of the sensitivity of
other tests simply presumes that the test they favor is inherently the most
sensitive. Where is the proof of that, Harry? What we have here is
unsupported speculation that the Monadic test will be more sensitive to
differences than ABX with zero relevant supporting evidence. It's just a
proof by assertion.

A monadic test is not even actually a test, because it lacks comparison to
a readily-available absolute standard. Check your dictionary, Harry: tests
involve comparisons with a standard. A monadic test, which inherently has
only one sample available and no readily accessible standard, fails the
basic definition of a test. It is just a popularity contest.


If there is a difference, the statistics will show a difference.


Exactly the same thing can be said about ABX. However, with ABX and other
DBT methodologies, we have at least 30 years of real-world experience, and
thousands of completed tests. One thing that we know for sure is that when
differences are known to be audible, ABX is as good if not better than other
testing methodologies for detecting that difference.

It's that simple.


It is that wrong.

And to be valid, then, the ABX test must show the same
under both circumstances...but I'm willing to concede it
will show no difference where there is no difference, for
reasons of economy.


We know for sure that if there is a difference that is large enough to be
audible by any other reliable means, that the ABX test when properly done,
will detect the difference. One of the advantages is that it provides two
known samples of the alternatives being compared, both clearly and properly
labeled. Yet, it has all of the advantages of being Double Blind.

The point is... the validating test must get as close to
the way people ordinarily listen to music as it is
possible to get....then the statistics can differentiate.


There's no evidence let alone proof that ABX necessarily fails to be close
enough to how people listen to music.

The logic that has been applied all along is that since ABX fails to confirm
wild, anti-scientific assertions and incredibly flawed sighted evaluations,
there must be something wrong with it.
...


  #119   Report Post  
Posted to rec.audio.high-end
Harry Lavo

wrote in message
...
"Harry Lavo" wrote in message
...


snip



Fair enough. Then let's change the first sentence to read, "Although one
could fail a standard DBT between, say, 2 amplifiers, in the long term the
unconscious mind can sense a difference, and it will have an effect on
one's enjoyment."

How's that? Of course the issue is moot, since I'm going to concentrate
on
the second sentence. Since you claim that manufacturers will not run a
valid test, and amateurs can't, you must have SOME idea of a test
methodology that you would find valid, irrespective of the difficulty and
cost. Please describe such a test.

Cheers,

Norm


Actually, Norm, I'm hosting a photography shoot on another forum and I
don't have time to repeat the three weeks of posts I put up here about two
years ago detailing some possible validation tests. If you search on
"monadic" or "blind monadic" in that time frame, you will see several
threads where I spent considerable time and energy suggesting such tests
in detail...I believe Bob and Steve Sullivan were the other main
discussants.


  #120   Report Post  
Posted to rec.audio.high-end
Harry Lavo

"Arny Krueger" wrote in message
...
wrote in message


I think not. There is a body of test results with
routinely similar results showing reported subjective
perception events which toggle on and off as the test is
made blind and sighted. That is the current state of
things, and no other results on an equal footing, nor at
an impasse, are there to be considered.


"But nobody has corroborated that the test itself is not
the reason the sighted differences do not hold.


Sure they have.


Please provide evidence of this, other than to show that it wipes out
sighted bias and reveals distortions when respondents are trained what to
listen for. Show the test that shows it can catch subtle differences
arising from the unconscious or subliminal conscious state, the way such
differences usually present themselves under long term listening
conditions.

Your "sure they have" is wrong.


The sighted events are often contrary to the known laws of physics which
seem to hold in *all* other circumstances.


A strawman.

That is
the crux of the matter....until the test (which is
interventionist in nature) can be proven to provide
identical results to much more expensive and
sophisticated testing that is not interventionist, the
test itself has to be considered potentially suspect.
THAT is just good science."


ABX fits the description of the more expensive and sophisticated test. For
example John Atkinson is on the public record as saying that a major
reason
why Stereophile doesn't do DBTs is that they are too expensive for his
magazine to afford.


ABX is not an expensive test, but done well it is a demanding and
time-consuming test of unsubstantiated value for the open-ended testing of
audio components. For an audio magazine that *may* be considered
expensive; I don't know. However, the real expense comes from the need for
elaborate facilities, large numbers of people, and true expertise in
designing, running, and evaluating the test.


But nobody has corroborated that the tests are the reason
that the ESP powers fail to appear. Otherwise known as
special pleading.


It is well known that there is plenty of reliable corroboration for the
failure of many sighted tests. Sighted tests are well-known to provide
obviously flawed evidence that something happened when in fact nothing
happened.

All tests, from simply putting a cloth over the connections to
complex ABX tests, have similar results.


Agreed.


So long as the test doesn't distort or destroy the variable under test via
intervention.

snip, redundant





Copyright ©2004-2024 AudioBanter.com.
The comments are property of their posters.
 
