#81 - ludovic mirabel - science vs. pseudo-science

wrote in message ...
On Wed, 24 Sep 2003 17:08:01 GMT, (ludovic mirabel) wrote:


This is the 4th request to Mr. Jjnunes for references


I don't take you seriously, especially since these have been posted before
and you argued against them with your usual absurd rhetorical games.
I'm not interested in continuing further. You apparently can't even
come to terms with audio components being reproducers of sound and
play rhetorical games about them being 'producers of music' to thus
provide yourself with an avenue to argue from the same pretext as to
how musical instruments are compared. That was why I made the mistake
of pointing you at the books I mentioned --- to provide a foundation
to look further into the subject.

You aren't interested in rational debate about this, but rather
rhetoric and rhetoric only. I should have learned this lesson sooner.

One of the signs of a healthy mind is the ability to cease when sated.
Can you do so?

Good bye.

You are breaking my heart. Are you sure? After mentioning
me twice apropos of nothing in your replies to your other fans, coyly
drawing my attention, you leave me high and dry, just with Pierce and
Audio Guy as the other relay racers. No more literary criticism? No
more hilarious quips?
But wait! You leave behind 18 titles culled a little
carelessly from the annual indices of JAES. You must have been in a
rush because you have one abstract repeated twice... and one bare
title - no abstract. Still, at least we have 16 abstracts. Better than
Pierce, who gives titles alone.
So let me try and get "a healthy and sated mind" with a
little help from my friends. You see I don't take offence in a good
cause.
Mercy be! Not a single account of research documenting that
ABX is the proper instrument for us audio consumers to use when
COMPARING COMPONENTS. Remember COMPONENT COMPARISON? Remember us audio
fans? I've been putting it in capitals just to keep minds focused. I
know that some say it isn't good manners. Is that why you ignored it?
What is one to do to impress the topic on you- no underlining, no
bold print or italics in Google postings?
Instead you got: coding systems, testing of MPEG, audio
level monitoring for the blind, noise reduction, transformation of
binomials, verbal and non-verbal elicitation of audio impressions, and
other such burning audiophile issues. There is Mr. Toole, himself
testing "Subjective measurements of loudspeaker quality", but no
mention of ABX!
You must be a believer in the great dictum: never
underestimate the idiocy of your readers.
There is Prof. Lipshitz (he of the "SACD is a
catastrophe") explaining why he believes in "blind or preferably double
blind" testing, but not giving any results in support.
There is Mr. Clark describing the ABX switch. You put
him in twice.
But you must have been in a rush because you included
as your Nr.8 exhibit a critique by Leventhal of the BIAS in ABX
testing AGAINST recognition of differences. Read it carefully- you
might get it this time. It was hotly debated by ABXers at the time.

Pierce is right. I will not read this stuff. If I
wanted to be thought an expert in psychoacoustics I would have studied
it- not just quoted a hodge-podge counting on no one calling my bluff.
The subject and your references are not of the slightest interest to
me and without any bearing on the subject of suitability of ABX for
untrained, disparate, uninterested audio consumers.

Now if you quoted one, single published audiophile
PANEL ABX test with a positive outcome there would be something to
talk about.
A "test" that has negative results only is not a
"test" for this application: comparing audio components by audio
consumers. Look up any introductory chapter on the methodology of
scientific experiment.

Ludovic Mirabel

The Great Debate: Subjective Evaluation 1170191 bytes (CD aes4)
Author(s): Lipshitz, Stanley P.; Vanderkooy, John
Publication: Volume 29 Number 7/8 pp. 482-491; July 1981
Abstract: A polarization of people has occurred regarding subjective
evaluation, separating those who believe that audible differences are
related to measurable differences in controlled tests, from those who
believe that such differences have no direct relationship to
measurements. Tests are necessary to resolve such differences of
opinion, and to further the state of audio and open new areas of
understanding. We argue that highly controlled tests are necessary to
transform subjective evaluation to an objective plane so that
preferences and bias can be eliminated, in the quest for determining
the accuracy of an audio component. In order for subjective tests to
be meaningful to others, the following should be observed. (1) There
must be technical competence to prevent obvious and/or subtle effects
from affecting the test. (2) Linear differences must be thoroughly
excised before conclusions about nonlinear errors can be reached. (3)
The subjective judgment required in the test must be simple, such as
the ability to discriminate between two components, using an absolute
reference wherever possible. (4) The test must be blind or preferably
double-blind. To implement such tests we advocate the use of A/B
switchboxes. The box itself can be tested for audibly intrusive
effects, and several embellishments are described which allow
double-blind procedures to be used in listening tests. We believe
that the burden of proof must lie with those who make new hypotheses
regarding subjective tests. This alone would wipe out most criticisms
of the controlled tests reported in the literature. Speculation is
changed to fact only by careful experimentation. Recent references
are given which support our point of view. The significance of
differences in audio components is discussed, and in conclusion we
detail some of our tests, hypotheses and speculations.
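The switchbox protocol the authors advocate reduces to a very small loop, and a sketch may make it concrete. This illustrates the general double-blind ABX scheme only, not the authors' hardware; the trial count and the guessing listener are mine:

import random

def run_abx(n_trials, identify):
    """identify(x) -> 'A' or 'B'. In a real test x is the sound heard, not a label."""
    correct = 0
    for _ in range(n_trials):
        x = random.choice('AB')  # hidden assignment: neither listener nor proctor chooses it
        correct += identify(x) == x
    return correct

# A listener guessing blindly, for demonstration; chance performance is 50%.
print(run_abx(16, lambda x: random.choice('AB')), "of 16 correct")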
Approximation Formulas for Error Risk and Sample Size in ABX Testing
442116 bytes (CD aes4)
Author(s): Burstein, Herman
Publication: Volume 36 Number 11 pp. 879-883; November 1988
Abstract: When sampling from a dichotomous population with an assumed
proportion p of events having a defined characteristic, the binomial
distribution is the appropriate statistical model for accurately
determining: type 1 error risk (α); type 2 error risk (β); sample size
n based on specified α and β and assumptions about p; and critical c
(minimum number of events to satisfy a specified α). Table 3 in [1]
presents such data for a limited number of sample sizes and p values.
To extend the scope of Table 3 to most n and p, we present
approximation formulas of substantial accuracy, based on the normal
distribution as an approximation of the binomial.
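For the curious, the exact binomial arithmetic this abstract refers to fits in a few lines. A minimal sketch, assuming the usual ABX situation where chance performance is p = 0.5; the 16-trial run and the 70%-accurate listener are illustrative figures of mine, not Burstein's:

from math import comb

def tail_prob(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p): the chance of c or more correct answers."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

n = 16                 # trials in the listening run
alpha_target = 0.05    # acceptable type 1 error risk

# critical c: fewest correct answers that keeps the type 1 risk at or below target
c = next(k for k in range(n + 1) if tail_prob(n, k, 0.5) <= alpha_target)
alpha = tail_prob(n, c, 0.5)   # actual type 1 risk at this cutoff

# type 2 risk: chance that a listener who is genuinely right 70% of the
# time still fails to reach the cutoff
beta = 1 - tail_prob(n, c, 0.70)

print(f"n={n}: need {c}+ correct; alpha={alpha:.3f}, beta={beta:.3f}")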
High Resolution Subjective Testing Using a Double Blind Comparator
1281885 bytes (CD aes10)
Author(s): Clark, David
Publication: Preprint 1771; Convention 69; May 1981
Abstract: A system for practical implementation of double-blind
audibility tests is described. The controller is a self contained
unit, designed to provide setup and operational convenience while
giving the user maximum sensitivity to detect differences. Standards
for response matching and other controls are suggested as well as
statistical methods of evaluating data. Test results to date are
summarized.
Noise Reduction in Audio Employing Auditory Masking Approach 2543054
bytes (CD aes15)
Author(s): Czyzewski, Andrzej; Krolikowski, Rafal
Publication: Preprint 4930; Convention 106; May 1999
Abstract: A new method of noise reduction which exploits some
features of the auditory system is proposed. The noise suppression is
obtained twofold: by raising masking thresholds or by keeping noisy
components beneath these thresholds. The foundations of the method
and some engineered algorithms are described. The way of introduction
of the noise reduction features into an MPEG encoder is demonstrated.
Transformed Binomial Confidence Limits for Listening Tests 468821
bytes (CD aes5)
Author(s): Burstein, Herman
Publication: Volume 37 Number 5 pp. 363-367; May 1989
Abstract: A simple transformation of classical binomial confidence
limits provides exact confidence limits for the results of a
listening test, such as the popular ABX test. These limits are for
the proportion of known correct responses, as distinguished from
guessed correct responses. Similarly, a point estimate is obtained
for the proportion of known correct responses. The transformed
binomial limits differ, often markedly, from those obtained by the
Bayesian method.
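A sketch of the transformation as I read the abstract, under the assumption (mine, not necessarily the paper's exact model) that a listener "knows" the answer on a proportion q of trials and guesses 50/50 on the rest, so the observed proportion correct is p = (1 + q)/2 and hence q = 2p - 1; the 12-of-15 example is illustrative:

from math import comb

def binom_ci(successes, n, conf=0.95):
    """Exact binomial confidence limits for a proportion, found by grid search."""
    def tail_ge(p):  # P(X >= successes | p)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes, n + 1))
    def tail_le(p):  # P(X <= successes | p)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes + 1))
    a = (1 - conf) / 2
    grid = [i / 10000 for i in range(1, 10000)]
    lo = min((p for p in grid if tail_ge(p) >= a), default=0.0)
    hi = max((p for p in grid if tail_le(p) >= a), default=1.0)
    return lo, hi

lo, hi = binom_ci(12, 15)              # e.g. 12 correct out of 15 ABX trials
known = lambda p: max(0.0, 2 * p - 1)  # transform: share of answers actually "known"
print(f"correct: ({lo:.2f}, {hi:.2f}) -> known: ({known(lo):.2f}, {known(hi):.2f})")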
Comments on "Type 1 and Type 2 Errors in the Statistical Analysis of
Listening Tests" and Author's Replies 674942 bytes (CD aes4)
Author(s): Shanefield, Daniel; Clark, David; Nousaine, Tom;
Leventhal, Les
Publication: Volume 35 Number 7/8 pp. 567-572; July 1987
Abstract: Not available.
High-Resolution Subjective Testing Using a Double-Blind Comparator
955218 bytes (CD aes4)
Author(s): Clark, David
Publication: Volume 30 Number 5 pp. 330-338; May 1982
Abstract: A system for the practical implementation of double-blind
audibility tests is described. The controller is a self-contained
unit, designed to provide setup and operational convenience while
giving the user maximum sensitivity to detect differences. Standards
for response matching and other controls are suggested as well as
statistical methods of evaluating data. Test results to date are
summarized.
Type 1 and Type 2 Errors in the Statistical Analysis of Listening
Tests 1828932 bytes (CD aes4)
Author(s): Leventhal, Les
Publication: Volume 34 Number 6 pp. 437-453; June 1986
Abstract: When the conventional 0.05 significance level is used to
analyze listening test data, employing a small number of trials or
listeners can produce an unexpectedly high risk of concluding that
audible differences are inaudible (type 2 error). The risk can be
both large absolutely and large relative to the risk of concluding
that inaudible differences are audible (type 1 error). This
constitutes systematic bias against those who believe that
differences are audible between well-designed electronic components
that are spectrally equated and not overdriven. A statistical table
is introduced that enables readers to look up type 1 and type 2 error
risks without calculation. Ways to manipulate the risks are
discussed, a quantitative measure of a listening test's fairness is
introduced, and implications for reviewers of the listening test
literature are discussed.
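Leventhal's point is easy to reproduce numerically. A rough sketch with illustrative figures of mine (the conventional 0.05 criterion and a listener who genuinely hears the difference 70% of the time): at small trial counts, most genuine hearers fail the test.

from math import comb

def power(n, p_true, alpha=0.05):
    """Chance that a genuine p_true-level discriminator passes an n-trial test."""
    def tail(c, p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))
    c = next(k for k in range(n + 1) if tail(k, 0.5) <= alpha)  # significance cutoff
    return tail(c, p_true)

for n in (10, 16, 25, 50, 100):
    print(f"n={n:3d}  power={power(n, 0.70):.2f}  type 2 risk={1 - power(n, 0.70):.2f}")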
On the Audibility of Midrange Phase Distortion in Audio Systems
1936662 bytes (CD aes4)
Author(s): Lipshitz, Stanley P.; Pocock, Mark; Vanderkooy, John
Publication: Volume 30 Number 9 pp. 580-595; September 1982
Abstract: The current state of our knowledge regarding the audible
consequences of phase nonlinearities in the audio chain is surveyed,
a series of experiments is described which the authors have conducted
using a flexible system of all-pass networks carefully constructed
for this purpose, and some conclusions are drawn regarding the
audible effects of midrange phase distortions. It is known that the
inner ear possesses nonlinearity (akin to an acoustic half-wave
rectifier) in its mechanical-to-electrical transduction, and this
would be expected to modify the signal on the acoustic nerve in a
manner which depends upon the acoustic signal waveform, and so upon
the relative phase relationships of the frequency components of this
signal. Some of these effects have been known for over 30 years, and
are quite audible on even very simple signals. Simple experiments are
outlined to enable the readers to demonstrate these effects for
themselves. Having satisfied ourselves that phase distortions can be
audible, the types of phase distortions contributed by the various
links in the audio chain are surveyed, and it is concluded that only
the loudspeaker contributes significant midrange phase
nonlinearities. Confining the investigation to the audibility of such
phase nonlinearities in the midrange, circuitry is described which
enables such effects to be assessed objectively for their audible
consequences. The experiments conducted so far lead to a number of
conclusions. 1) Even quite small midrange phase nonlinearities can be
audible on suitably chosen signals. 2) Audibility is far greater on
headphones than on loudspeakers. 3) Simple acoustic signals generated
anechoically display clear phase audibility on headphones. 4) On
normal music or speech signals phase distortion appears not to be
generally audible, although it was heard with 99% confidence on some
recorded vocal material. It is clear that more work needs to be done
to ascertain acceptable limits for the phase linearity of audio
components - limits which might become more stringent as improved
recording/reproduction systems become available. It is stressed that
none of these experiments thus far has indicated a present
requirement for phase linearity in loudspeakers for the reproduction
of music and speech.

Subjective Evaluation of High-Quality Audio Coding Systems: Methods
and Results in the Two-Channel Case 2152388 bytes (CD aes13)
Author(s): Grusec, Theodore; Thibault, Louis; Soulodre, Gilbert
Publication: Preprint 4065; Convention 99; October 1995
Abstract: Experiments completed at the Communications Research Centre
in subjective assessment of 2-channel coding systems are described
along with the methodologies used in their execution. The discussion
centers on acoustic conditions, presentation technologies, choosing
audio materials, selecting and training listeners, grading
procedures, blind rating, data analysis, and decision-making from
experimental outcomes. Key ITU-R test results are presented to
characterize the quality of low bit-rate coding systems operating in
various configurations.
Sensitive Methodologies for the Subjective Evaluation of High Quality
Audio Coding Systems 1881344 bytes (CD aes17)
Author(s): Grusec, Ted; Thibault, Louis; Beaton, Richard J.
Publication: Paper DSP-07; Conference: AES UK Conference: DSP;
September 1992
Abstract: Not available.
Formal Subjective Testing of the MPEG-2 NBC Multichannel Coding
Algorithm 1119369 bytes (CD aes14)
Author(s): Kirby, D.; Watanabe, K.
Publication: Preprint 4418; Convention 102; March 1997
Abstract: As part of its standardization process, the MPEG NBC
(non-backwards compatible) multichannel audio coding algorithm was
submitted for formal subjective testing in 1996 September. The tests
were carried out jointly at two test sites: the BBC and NHK. The
report was submitted to the Moving Picture Experts Group (MPEG) in
1996 November. This paper describes the design of these tests, the
preparations required, and the results obtained for each of the
codecs tested.
Verbal and Nonverbal Elicitation Techniques in the Subjective
Assessment of Spatial Sound Reproduction 2376898 bytes (CD aes18)
Author(s): Mason, Russell; Ford, Natanya; Rumsey, Francis; de Bruyn, Bart
Publication: Volume 49 Number 5 pp. 366-384; May 2001
Abstract: Current research into spatial audio has shown an increasing
interest in the way subjective attributes of reproduced sound are
elicited from listeners. The emphasis at present is on verbal
semantics, however, studies suggest that nonverbal methods of
elicitation could be beneficial. Research into the relative merits of
these methods has found that nonverbal responses may result in
different elicited attributes compared to verbal techniques.
Nonverbal responses may be closer to the perception of the stimuli
than the verbal interpretation of this perception. There is evidence
that drawing is not as accurate as other nonverbal methods of
elicitation when it comes to reporting the localization of auditory
images. However, the advantage of drawing is its ability to describe
the whole auditory space rather than a single dimension.
Subjective Measurements of Loudspeaker Sound Quality and Listener
Performance 3114170 bytes (CD aes4)
Author(s): Toole, Floyd E.
Publication: Volume 33 Number 1/2 pp. 2-32; January 1985
Abstract: With adequate attention to the details of experiment design
and the selection of participants, listening tests on loudspeakers
yielded sound-quality ratings that were both reliable and repeatable.
Certain listeners differed in the consistency of their ratings and in
the ratings themselves. These differences correlated with both
hearing threshold levels and age. Listeners with near normal hearing
thresholds showed the smallest individual variations and the closest
agreement with each other. Sound-quality ratings changed as a
function of the hearing threshold level and age of the listener. The
amount and direction of the change depended upon the specific
products; some products were rated similarly by all listeners,
whereas others had properties that caused them to be rated
differently. Stereophonic and monophonic tests yielded similar
sound-quality ratings for highly rated products, but in stereo,
listeners tended to be less consistent and less critical of products
with distinctive characteristics. Assessments of stereophonic spatial
and image qualities were closely related to sound-quality ratings.
The relationship between these results and objective performance data
is being pursued.
A Disk-Based System for the Subjective Assessment of High-Quality
Audio 1278922 bytes (CD aes12)
Author(s): Beaton, Richard J.; Wong, Peter
Publication: Preprint 3497; Convention 94; March 1993
Abstract: This paper describes the design of a digital system which
integrates automated tandem recording with a playback system
implementing an enhanced ABC triple stimulus with hidden reference
listening test methodology. This methodology was developed
specifically for CCIR evaluations of nearly transparent low bit-rate
audio coding algorithms. The system was used extensively in recent
CCIR TG 10/2 testing of low bit-rate audio coding algorithms for
digital audio broadcast. The use of a disk-based system was
instrumental in producing reliable assessments for the high-quality
systems under test. This paper outlines the technical challenges to
implementing the assessment methodology and discusses some of the
important new issues arising in evaluating the quality of nearly
transparent audio processes.
Audio Level Monitoring for Blind Sound Engineers/Recordists 760962
bytes (CD aes12)
Author(s): Angus, James A. S.; Malyon, Nicholas J.
Publication: Preprint 3219; Convention 91; October 1991
Abstract: Audio level monitoring relies heavily on visual displays
which are inappropriate for blind users. This paper will describe a
technique which allows a blind sound recordist to set her/his own
levels via an audio cue. It will describe the design and
implementation of a unit which handles stereo recording in the studio
and on location and it will discuss extensions of the technique to
multitrack recording.
Comments on "Subjective Appraisal of Loudspeaker Directivity for
Multichannel Reproduction" and
New Developments in MPEG-2 Audio: Extension to Multi-Channel Sound
and Improved Coding at Very Low Bit Rates 1023245 bytes (CD aes17)
Author(s): Stoll, Gerhard
Publication: Paper DAB-06; Conference: AES UK Conference: DAB, The
Future of Radio; May 1995
Abstract: The first objective of MPEG-2 Audio was the extension from
two to five channels, based on recommendations from ITU-R, SMPTE and
EBU. This was achieved in November 1994 with the approval of ISO/IEC
13818-3, known as MPEG-2 Audio. This standard provides high quality
coding of 5+1 audio channels together with backwards compatibility to
MPEG-1, the key to ensuring that existing 2-channel decoders will
still be able to decode the compatible stereo information from
multi-channel signals. For audio reproduction of surround sound the
loudspeaker positions left, center, right, left and right surround
are used, according to the 3/2-standard. The envisaged applications
are, besides digital television systems such as dTTb, HDTVT, HD-SAT
and ADDT, digital storage media and the EU147 Digital Audio
Broadcasting system. The second objective was the extension of
MPEG-1 Audio to
lower sampling rates to improve the audio quality at bit rates less
than 64 kbit/s per channel, in particular for speech applications.
This is of particular interest for the EU147 DAB system to provide
high quality news channels at the lowest bit rate.
Subjective Assessments on Low Bit-Rate Audio Codecs 1159838 bytes
(CD aes16)
Author(s): Grewin, Christer; Rydén, Thomas
Publication: Paper 10-013; Conference: The AES 10th International
Conference: Images of Audio; September 1991
Abstract: The Swedish Broadcasting Corporation (SR) has performed
subjective assessments on low bit-rate audio codecs for
ISO/MPEG/Audio. As it is likely that the same codec can be used for
DAB the evaluation is of great importance for broadcasters. This
paper presents the methodology, results and conclusions from the two
listening tests performed in July 1990 and April/May 1991.


#82 - ludovic mirabel - science vs. pseudo-science

(Audio Guy) wrote in message news:MpFcb.576680$YN5.411073@sccrnsc01...
In article ,
(ludovic mirabel) writes:
(Audio Guy) wrote in message news:zalcb.565817$Ho3.102946@sccrnsc03...
(ludovic mirabel) writes:

If you are happy with a "test" that gives as many different results
as there are people doing it, who am I to stop you? Use it. You'll get
yours.
Audio Guy:


Where is your evidence that audio DBTs "gives as many different
results as there are people doing it"? So far it is only your mistaken
interpretation of the test statistics. How about some real evidence?


Below find the results of Greenhill's ABX cable test (The Stereophile, 1983).
A "hit" is 12 correct answers out of 15.
Note different performers performing differently. (Surprise, surprise!)
Note Nr. 6: 1.75 dB level difference, but music is the signal. Compare
with tests 1 and 4.
I will not rediscuss the "statistics". This was thrashed out ad
nauseam here.
If it tells you something different from what it tells me, well and
good.

SUBJECTS:                                     A   B   C   D   E   F   G   H   I   J   K
1. Monster vs. 24 ga. wire, pink noise,
   1.75 dB level difference                  15  14  15  15  15  15  15  15  15  15  15
2. Same, but levels matched                   9  13   7  10  na   8   9   6  14  12  12
3. Monster vs. 16 ga. zipcord, pink noise    13   7  10   7  11  12   9   9  11  12   7
4. 16 ga. vs. 24 ga., pink noise             15  15  na  14  15  na  15  14  15  15  15
5. Monster vs. 16 ga., choral music           4   6  11   8   9   5   5   7   6  10  10
6. Monster vs. 24 ga., choral music,
   1.75 dB level difference                  14   7  15  10   8  10   6  10  11  12  10
______________________________________________
% of "hits" in the 6 tests, 90 tries         67  50  40  33  40  40  33  33  50  83  50

L.M.:


It tells me that people can easily tell level differences with pink
noise, not so easily with music. Where does it give "as many
different results as there are people doing it"?

You're right. I sort of dozed off looking at it all and got carried away.
It is sometimes 10/11 with the same results, sometimes 3/11, and a few
in between.
It all adds up beautifully. I wish you and Mr Nunes who "has just done
that" many happy hours with the ABX and pink noise.
Ludovic Mirabel

I will not stop you and I will not continue this pointless scholastic
argument.

How is it a "scholastic argument"? Are you using this label so you
can side-step the issue? Ironically many would consider all of your
arguments purely "scholastic arguments".

Yes you're correct: scholastic arguments, including mine, are about
something unproven. When you or someone like Mr. JJnunes comes up with
experimental evidence that ABX is the right tool for COMPARING
COMPONENTS and that for instance it does not interfere with perception
of their musical characteristics we'll be talking about realities.


I believe Mr. Nunes has just done that.

excessive quoting snipped


#83 - L Mirabel - science vs. pseudo-science

In truth I have little to substantiate to you. Every few
weeks you post a personal attack (quotes below) with no other content,
or you truncate and distort my words. Every time when shown up you go
silent for a month or two. I find it distasteful to go over all this
stuff again, but since you force me you'll find requotes below. There's
more if you want it. Just ask.
You seem to have some kind of immunity for this kind of thing here. I
won't claim the same.
Now you thought up a new wrinkle: insinuation that I'm lying about my
professional record. This of course has nothing to do with an argument
about the proper way to compare components, but anything will do.
One thing I know: however inadequate my DBT exposure could be, it
exceeds yours by miles.
I apologise to the readers for what follows and invite you to skip it.
It is not MY choice of the way to discuss opinions.
-----------
------- After 5 years of postgraduate training in internal medicine
I became a full-time, and the only, resident researcher in the
schistosomiasis (bilharzia) unit, a division of the Tropical Disease
Research of the Med. Research Council of U.K. in Hertford in 1951/2.
Head: Dr Newsome. My lab technician: Mr England (yes!). We had also an
Egyptian on a fellowship from his Govt. I forgot his name but remember
him for memorising a 900-page textbook of Neurology in one week.
I already had some drug research DBT experience. I was a
Senior House Officer in Brook Hospital, London. A Dr. G. Loxton,
rheumatologist, was trying out a "wonder drug" for rheumatoid
arthritis (desoxycorticosterone with vitamin C). Initial enthusiasm
cooled after a DBT.
M.R.C. was the organisation, and that was the time when
and where the principles of randomised DB drug testing were being
developed, principally by the statistician Bradford Hill. My unit was
researching the proposed bilharzia drugs' effects on infected animals
(baboons and "desert rats") and planning human trials, but none of the
drugs we tested warranted it as yet.
I resigned after one year, after passing my specialty exam
in int. medicine (M.R.C.P. Ed). I decided that I preferred clinical
medicine to research, and as there were no openings for me in U.K. I
emigrated to Canada. Hundreds of other newly qualified specialists
in U.K. had to (or U.S. or Australia) because under the system in U.K.
a specialist-consultant doesn't just hang out his shingle. You have
to wait for someone to retire or die and then be selected by a
hospital in preference to others, most at least equally bright.
Afterwards, as the consultant cardiologist (solo for
a few years till others joined me) in a large suburban hospital I HAD
TO keep up with DBTs. The randomised DBT drug trial has been a staple
in medicine for decades. No new treatment without DBT. It was the air
we breathed. At that, there are constant arguments about the adequate
selection of controls, significance of results etc. Proper DBT at
that, with objective body changes to assess at the end, symmetrical
placebo control group etc. - not a question-and-answer ad hoc
"listening test" or a home ABX switch kit. ------ End of personal stuff
The only reason I first brought up my research
experience here was because people like you, with qualifications in
e.g. electronic engineering, kept questioning my right as an audio consumer
to express my views on the DBT tunes sung in RAHE. Perhaps you felt I
was trespassing on your territory as the all- round audio oracle. I
happen to react to the local authorities laying down the law about
things they know no more about than anyone else. Someone like me was
long overdue in RAHE. If only to infuriate the pompous importances.
That you'd imagine that disagreeing with you is important
enough for anyone to falsify his credentials tells more about you than
I care to know.
As for your "references" - whom are you kidding, Pierce?
You know perfectly well that the argument is about ABX as THE test for
ordinary audio consumers for COMPARING COMPONENTS- NOT ABOUT
PSYCHOACOUSTIC RESEARCH. Of course I do not spell it in full every
time- I'm assuming the minimum of decent discussion manners. After all
I said it at least twenty times already. Of course you would quote a
dozen references of which only one (Toole) may have some bearing on
the subject. Who is actually comparing *what* components in your
"references"? With what results? You hope no one will read this stuff
carefully, right? Why don't you quote the index of JAES for one year?
You don't want to compete with Jjnunes' 18 titles?
For your information, this is what a proper reference
looks like: S. J. Wilson et al., "Comparing the quality of oral
anticoagulant management by... A randomised controlled clinical trial,"
C.M.A.J., vol. 169, No. 4, p. 293, '03.
This IS a reference. You want to know about management of anticoags,
this is what you look up.
You want to know about the usefulness of ABX to untrained, unselected
audio fans for comparing components, you don't look in these irrelevant
collections.
In the meantime I'll repeat what I said to Jjnunes:
"Just to spur you on I will now state emphatically that
you talk about "trajectories" for lack of anything better. NOTHING in
Fletcher, NOTHING in Yost, NOTHING in good old Moore. And you know
what else? NOTHING ANYWHERE ELSE. The reputable, published basic
research for the consumer use of DBTs in comparing audio components


does NOT EXIST. "Starting points" and "trajectories" will not replace
it. You were asked for nothing complicated. Just a very simple thing
called: quotable evidence. Remember "evidence"? Remember "quote"?"
And I'll add this: you, Mr. Pierce, do not have the
foggiest about how to transfer Component Comparison DBT from the lab
to the street. If you did you would not be quoting your pseudo
"references". Read my soon-to-appear (I hope) posting about F.
Toole's loudspeaker comparison in his laboratory to begin learning.
I cannot stop wondering what accounts for the
hostility in the samples quoted below. Gourmets argue about
food, wine drinkers about wines, piano players about pianos. No Gallo
drinkers claim that they have a "test" that will show up those damn
Burgundy and Bordeaux lovers. Audio seems to breed a particularly
embittered and combative swarm of discussants.
Ludovic Mirabel

Samples of Mr. Pierce's debating methods.
June 25 "Why Dbts in audio do not deliver?"
"Well, the answer is VERY simple: DBT does not deliver what people
like Ludovic want. It does not support THEIR agenda, it does not
validate THEIR preferences, indeed, it does not elevate their
preferences to the level of universal fact. In that sense, indeed, ANY
testing of ANY kind will NEVER deliver what they want, except that
testing that gives the results they expect. It basically reduces to
the fact that if you don't get the answer you expect, blame the
question, but NEVER entertain the possibility that not so much the
expectation itself is wrong, but the very fact that you HAVE an
expectation is the issue. Science certainly works hard to give you
answers, it just doesn't give a sh*t whether you like the answer or
not. THAT'S why DBT doesn't work: because it does."
No quoted argument of mine in the whole posting. Just this.

July 8 same thread:
"I'd posit, instead, that Ludovic simply engages in a continuous
2. stream of misrepresentation. Why?

1. It's inadvertant. He doesn't no better. Poor Ludovic. Poor us
for having to slog through his irrelevant misrepresentations.

2. It's deliberate. He has no sound foundation for whatever the
hell it is he's arguing about and simply to keep his side of
the conversation going, he just makes stuff up because he has
absolutely nothing to contrinute of any relevance or substance.

The evidence, especially in the form of the quoted text above,


would seem to have one lean in the direction of deliberate and
malicious misrepresentation."
It continues in the same vein. And there are plenty more like this. It
goes back two years. Just ask, Mr. Pierce, and I'll oblige.



"Dick Pierce" wrote in message
...
wrote in message

...
On Wed, 24 Sep 2003 17:08:01 GMT,
(ludovic
mirabel) wrote:


This is the 4th request to Mr. Jjnunes for references


I don't take you seriously, especially since these have been posted before
and you argued against them with your usual absurd rhetorical games.
I'm not interested in continuing further. You apparently can't even
come to terms with audio components being reproducers of sound and
play rhetorical games about them being 'producers of music' to thus
provide yourself with an avenue to argue from the same pretext as to
how musical instruments are compared. That was why I made the mistake
of pointing you at the books I mentioned --- to provide a foundation
to look further into the subject.

You aren't interested in rational debate about this, but rather
rhetoric and rhetoric only. I should have learned this lesson sooner.


Indeed, but hope springs eternal even, it seems, for our persistent
Mr. Ludovic.

In addition to Mr. Nunes' excellent references, every one of which I
would wager Mr. Ludovic has never read and will ignore, I would merely
AGAIN, as I did some months ago, point out the following references:

T. Poulsen, "Application of psychoacoustic methods,"
H. Staffeldt, "Evaluation and scaling of timbre in listening tests
on loudspeakers,"
F. Toole, "Planning of listening tests - technical and environmental
variables,"
A. Gabrielsson, "Planning of listening tests - listener and
experimental variables,"
S. Bach, "Planning of listening tests - choice of rating scale and
test procedure,"
N. Kousgaard, "The application of binary paired comparisons to
listening tests,"
M. Williams, "Choice of programme material for critical listening
to loudspeakers,"
S. Pramanik, "Inadvertent bias in listening tests,"
F. Toole, "Correlation between the results of objective and
subjective tests,"

All present at and found in the Proceedings of the Symposium
on Perception of Reproduced Sound, Gammel Avernaes, Denmark, 1987.

Through all this, our dear Mr. Ludovic has pounded his fist and the
occasional shoe on the table DEMANDING references; when provided with
same, he has simply pounded louder.

And he has ALSO made claims about the unsuitability of blind testing,
claiming his experience in the medical field and testing. He has done
so, it would seem, with NO substantiation of those claims. I think it's
time we called his bluff:

Mr. Ludovic, you have provided NO substantiation that you have ANY
experience in the field of blind testing or medical research. Your
claims, indeed, could well be interpreted as belonging to someone
who has, at best, very limited, casual and peripheral experience in
the realm.

We, thus, kindly ask YOU to substantiate YOUR claims of experience in
those fields in which you claim some experience. Where are YOUR published
papers? With whom were YOU affiliated? What research projects have YOU
been a principal or support investigator on?

Please, we have provided DOZENS of references for YOU; how about
providing us the same?

After all, aren't YOU subject to the very same criteria that you subject
others to?


#84 - ludovic mirabel - science vs. pseudo-science

(Stewart Pinkerton) wrote in message ...
On Wed, 24 Sep 2003 23:18:09 GMT,
(ludovic
mirabel) wrote:

(Stewart Pinkerton) wrote in message ...
On Tue, 23 Sep 2003 17:31:45 GMT,
(ludovic
mirabel) wrote:

"All Ears" wrote in message news:97Fbb.406782$cF.126279@rwcrnsc53...

Mirabel wrote:
In '89 a rather elaborate listening test for audibility of distortion
was performed (Masters and Clark, St. Review, Jan. '89). Various types
of distortion with different signals were tested. There were 15 TRAINED
listeners - gender? At a 2 dB distortion level, playing "natural
music", the "average" level of correct hits was 61% (barely above the
minimum statistically significant level of 60%). The individual scores
varied from a perfect 5/5 to 1/5. Similar discrepancies were observed
in phase shift recognition.
Authors' conclusion: "Distortion has to be very gross and the signal
very simple for it to be noticed" ... by the "average"
Will it do for the time being?


All Ears commented:
I think this is good "food for thought" because it gives an idea of how
large a margin there is to really detect a difference.

I answered:
Note that the performance varies from one listener to another - in
spite of training and retraining. A few have 5 out of 5 correct
responses, a few 1 out of 5, and most fall in the average middle. As
you would expect.

Pinkerton:
Indeed, as you would expect for an effect which is on the threshold of
audibility.

Note that the "objectivist", objectively unbiased, proctors
showed no interest in the few who heard the DIFFERENCES accurately.
They just lumped them together with the most who DID NOT and got an
average for an average, fictitious Mr. Average Listener who hears no
differences-ever. This of course was in acordance with the "Stereo
Review" guiding principle- "the high end does not exist, our big
account advertisers sound just as good."

You are once again making the classic mistake (or is it yet another
deliberate distortion?) of ignoring the basis of statistics. *You*
invariably 'cherry pick' the results that suit your preconceptions,
the researchers above very properly included *all* the responses. They
most certainly did *not* 'ignore' the 5/5 response, they *included* it
in the results. Incidentally, you once again alter the facts to suit
yourself. There is no indication in the above report that there were
'a few' listeners who scored 1/5 or 5/5, there may have been only one
of each - as a standard distribution curve would suggest.

Now, if the researchers had *repeated* the experiment, do you presume
that the same listeners would score the same results, i.e. that the
5/5 scorer(s) really do have 'Golden Ears'? That would suit *your*
preconceptions, but the results of the recently posted TAG McLaren
tests did not show this. Once again, you attempt to ignore the very
basis of statistical probability, in an attempt to shore up your
prejudices.
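A quick simulation makes the point concrete. Assumptions mine: 11 listeners (as in the Greenhill table above), 15 trials each, everyone guessing blindly; the question is how often the best of them clears the 12-of-15 bar by luck alone.

import random

random.seed(1)
trials, listeners, runs = 15, 11, 10000
lucky = 0
for _ in range(runs):
    best = max(sum(random.random() < 0.5 for _ in range(trials))
               for _ in range(listeners))
    lucky += best >= 12
print(f"P(best of {listeners} guessers scores >= 12/15) ~ {lucky / runs:.2f}")
# Roughly 1 - (1 - 0.0176)**11, about 0.18: nearly one panel in five throws
# up an apparent "golden ear" even when nobody hears anything.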


I love "probability". But when someone is correct 5 times out of 5
and someone else 1 out of 5 ,or 74 times out of 90 (like Greenhill's
"golden ear") and not 30 out of 90 like his other testees I would be
curious how they would do on a repeat. In other words I'd experiment.
Of course things are diferent when one is a pure, scientific
statistician/mathematician like Mr. Pinkerton. Cherry-picking is
taboo, experiments-hell, we have probabilities and OUR probabilities
are certainties.
And paper is patient.

Now for the "golden ear". For the last time (What a hope!)
Greenhill said: "The final significant conclusion is that at least
one genuine "golden ear" exists". The only time I used the term was
when relating his results.
Greenhill was the cable test proctor and the writer ("The Stereo
Review", Aug. 1983, p. 51).
He is also a former collaborator of Mr. Krueger, who I believe
invented ABX.
He is also alive and well and writing for "The Stereophile". WHY don't
you tell him what you think of his statistics and his "claim". You
have a TAG test (whatever that is) on your side.
One more little thing: I said all this stuff to you before. You
still twist the facts to suit your polemic. Go home and write the
Pinkerton comment you would write if I did such a thing.

For economy I'll refer to my today's answer to Mr. Strong.


Which is completely refuted by the TAG test, which found that the
better performers in one test were average or worse performers in the
other test.

I don't know the test, but I guess that you're saying that it
contains the ultimate truth that makes any further experiment
unnecessary.

I'll add only that when Mr. Pinkerton posts his results of his tests
in this group that is not "cherry picking".


Indeed it's not, since I posted both positive *and* negative results.
You no doubt would have claimed that the negative results were in some
mysterious way flawed, and/or that you were simply 'bad at DBTs', and
that some unnamed other person would of course have obtained no
negative results.

1) Why would anyone in his senses say that ALL the negative results
are flawed? I would not. Is this what some discussants here wittily
call a "strawman"?
2) And Greenhill posted only the positive ones?? My poor
nonmathematical head is spinning.
I see - he had himself and one or two of his friends in his amplifier
"test". If he added 10 "audiophiles" he would add up all their
results - and let the dice fall as they may - even if Krell turned out
not distinguishable from Panasonic integrated - right?


Right.

I wonder if he ever sat an exam?


Far too many! :-)

I wonder if he'd like the collective
results averaged or would he want himself to be cherry-picked.


Different situation, as I have a personal interest in my own results.
Audiophile friends with an interest in their own results would no
doubt conduct further tests for themselves. The analogous situation is
where I perform lots of tests on myself, to verify my own abilities,
but only limited tests on others, to verify that I am just one of many
with similar perceptual abilities. You have shown absolutely *no*
evidence of the existence of 'Golden Ears', indeed all the available
evidence suggests nothing more than standard statistical distributions
according to random chance.

Sorry. Your mathematics are beyond me. I have no idea what you're
saying.

This does not seem to prevent you from *claiming* that such people
somehow must exist - somewhat like Bigfoot.

This is witty. I went to the Okanagan (in B.C.) and I saw Bigfoot
talking to the "Golden Ear". In person.

Absurdity in the service of winning a debate on paper could not go any
further.


I entirely agree.............

This is witty too.

The insinuating "Or is it yet ANOTHER DELIBERATE DISTORTION" is par
for the gentleman. It tells more about the way he thinks than he'd
like to be known and is another one of his contributions to the
gentler, kinder RAHE debating manners.


It tells people *exactly* what I'd like to be known, which is that you
simply *refuse* to engage in honest debate, instead using every
possible trick to avoid the inevitable conclusion, that you simply do
not have a case to argue.


Name one participant in RAHE discussions who disagrees with you on
DBT matters and who does "engage in honest debate" with you. If you
recall, it is not Mkuller or Harry Lavo. Name one "honest" man who
disagrees with you, honestly by your lights, about testing cables.
It is probably (statistically, of course) true that you really
believe that people normally lie to argue a point. That is a kind of
insight.
Ludovic Mirabel

#85 - ludovic mirabel - science vs. pseudo-science

(Audio Guy) wrote in message news:XWldb.603104$o%2.282900@sccrnsc02...
In article O36db.436378$Oz4.243723@rwcrnsc54,
(ludovic mirabel) writes:
(Audio Guy) wrote in message news:MpFcb.576680$YN5.411073@sccrnsc01...
In article ,
(ludovic mirabel) writes:

excessive quoting snipped

L.M.:

It tells me that people can easily tell level differences with pink
noise, not so easily with music. Where does it give "as many
different results as there are people doing it"?

You're right. I sort of dozed off looking at it all and got carried away.
It is sometimes 10/11 with the same results, sometimes 3/11, and a few
in between.
It all adds up beautifully. I wish you and Mr Nunes who "has just done
that" many happy hours with the ABX and pink noise.


This just shows again that you have an incomplete understanding of
statistics. You state that a hit is 12 out of 15. If it is not a hit,
then it is considered to be within the realm of random chance and so
is not counted any differently whether it is 1 out of 15 or 11 out of
15. That does not by any stretch of the imagination give "as many
different results as there are people doing it".


I said I will not discuss statistics but I'm always eager to
learn. You are right.
It is all the same. No differences. Or maybe there are SOME
differences but they are not statistical. They are just an illusion, a
puff in the wind, a knock and we are through the mirror into
Wonderland.

Ludovic Mirabel



#89 - Harry Lavo - science vs. pseudo-science

Stewart -

Since my name has been dragged in here, and you are apparently calling me
"dishonest" and a "liar," would you please cite where and when I have been
"dishonest" or "lied" in a discussion of DBT testing. Of cables, no less,
which I have studiously avoided.

If you can't find any (which will be the case) I would appreciate an
apology.

Harry

"Stewart Pinkerton" wrote in message
...
On Sun, 28 Sep 2003 17:37:55 GMT, (ludovic
mirabel) wrote:

(Stewart Pinkerton) wrote in message

...

It tells people *exactly* what I'd like to be known, which is that you
simply *refuse* to engage in honest debate, instead using every
possible trick to avoid the inevitable conclusion, that you simply do
not have a case to argue.


Name one participant in RAHE discussions who disagrees with you on
DBT matters and who does "engage in honest debate" with you. If you
recall it is not Mkuller or Harry Lavo. Name one "honest" man who
disagrees with you ,honestly by your lights, about testing cables.


That is of course entirely my point. It is *very* obvious that those
in this forum who support DBTs rely on logic and on the results of
actual tests, whereas those who disagree rely on polemic, distortion
and cherry-picking.

It is probably ( statistically of course) true that you really
believe that people normally lie to argue a point.


Not perhaps in general, but it certainly seems to form a pattern in
*this* instance.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

#90 - ludovic mirabel - science vs. pseudo-science

(Audio Guy) wrote in message news:4dJdb.455493$Oz4.260164@rwcrnsc54...
In article a0Fdb.611941$Ho3.119231@sccrnsc03,
(ludovic mirabel) writes:
(Audio Guy) wrote in message news:XWldb.603104$o%2.282900@sccrnsc02...

This just shows again that you have an incomplete understanding of
statistics. You state that a hit is 12 out of 15. If it is not a hit,
then it is considered to be within the realm of random chance and so
is not counted any differently whether it is 1 out of 15 or 11 out of
15. That does not by any stretch of the imagination give "as many
different results as there are people doing it".


I said I will not discuss statistics but I'm always eager to
learn. You are right.
It is all the same. No differences. Or maybe there are SOME
differences but they are not statistical. They are just an illusion, a
puff in the wind, a knock and we are through the mirror into
Wonderland.


Did you not employ statistical analysis in any of the medical DBTs
you were involved in? Because it's key in analyzing the results of
the test or tests. Without the statistical analysis you cannot tell
which results are just random chance and which are really
significant.

I will agree with your point that when results get very close to
showing significance the individual in question should be retested.
But without the retesting you cannot state that they did hear
something, only that they may have heard something. But I have yet to
see you make that qualification.
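The arithmetic behind the retest point, with illustrative numbers: a pure guesser clears 12 of 15 about 1.8% of the time, so clearing it twice in a row by luck alone is about 3 in 10,000.

p_single = 0.0176                 # P(>= 12/15 by chance), from the binomial tail
print(f"one pass : {p_single:.4f}")
print(f"two passes: {p_single**2:.6f}")  # ~0.0003: strong evidence of real hearing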


Dear man, I qualified thusly at least ten times in the
last two years, asking why the proctors didn't pay attention to the
only interesting results, namely those of the exceptional performers,
and recheck them SOS. But I can't expect you to memorise my collected
writings. So for your convenience: yes, they should have done it. Even
though Greenhill ran not just one but six tests, and only two of his
subjects scored consistently well in five of them.
You don't say how many times they should have repeated it to
satisfy you and Pinkerton. Twice? Three times? Ten times, like Norman
Strong once suggested?
So let's collaborate on an ideal design.
Your statistical prowess encourages me. Let us get an ABX
project, you and I together, based on Sean Olive's results (of course
he did not use ABX, but we respect his results, right?).
To do justice to the differences in performance between trained
(the best), semitrained (in the middle - 3 times worse) and the great
unwashed (us audio consumers - just like the audio students - 27 times
worse) we'll get three groups going.
First the random collection of audiophiles. Get them ABXed on
anything reasonably comparable other than the grossly unlike
loudspeakers. The result almost guaranteed: "No difference, no
preference".
All is for the best in this best of all possible worlds. Just
what you all would have wanted. The result is accepted without a
murmur, just like your predecessors in the "Stereo Review" days
accepted Greenhill, Clark, Masters and so on. As long as their
ABX-manipulated results on cable, preamp, amp, cdplayer, dac were
"They are all the same". Electronics is wonderful.
Now group 3, the trained. Some get 80% correct. Panic in the
ranks. This couldn't be! Repeat please. And keep repeating till they
say "uncle", i.e. till they are half-deaf and ready to confess that
there is "No difference" - and can I go home, please?
The intermediate group, the salesmen, doesn't count. They
convince easily.
Isn't statistics wonderful too?
Ludovic Mirabel.



#92 - Audio Guy - science vs. pseudo-science

In article ,
(ludovic mirabel) writes:
excessive quoting snipped


You make extreme assumptions about me and DBTs. I have NEVER said that
everything sounds the same. NEVER. I myself have found that I hear
differences in components, both sighted and via DBT. And I never said
that audio consumers MUST use a DBT of some sort to determine which
audio products they should buy.

But I do advocate that one must use a DBT of some sort if one wants to
be sure of differences in audio components, because non-blind
comparisons are prone to error - the classic "Clever Hans" story.
And an ABX device is a simple way to perform a DBT on audio
components no matter how often you decry "ABX manipulated results".