#81 - Posted to rec.audio.pro
William Sommerwerck (Posts: 4,718)
Subject: How was it known that mics were good before the advent of hi-fi playback?

A lens is a special case of the general class of items
called prisms. Fresnel lenses are even made out of prisms.
Prisms and lenses both work on the principle of refraction.


If you're talking about prisms as devices that divide light into a spectrum,
the operating principle is dispersion, not refraction. (Granted, dispersion
is a subset of refraction.)

I know of no use of prisms as image-forming devices. They are not lenses.
(The ridges of a Fresnel lens are not prisms, but "onion ring" cores of a
lens surface.) Prisms are commonly used -- particularly in binoculars and
SLRs -- to direct the light in a different direction /without "processing"
it/ in any way.


#82 - Posted to rec.audio.pro
dbd (Posts: 9)

On Dec 24, 2:08 am, "William Sommerwerck" wrote:

...
I still say you don't understand what the Fourier transform is all about.
However, I can see how one might analyze a /time-limited/ noise signal.


I understand that the continuous/infinite Fourier transform is
different from the finite/discrete operations we are limited to in the
real world. By understanding the differences we can restrain our
expectations from infinite/continuous theory to achievable finite/
sampled theory and practice. We can use the knowledge of the
differences to work around some of the limitations.

For finite noise analysis, the FFT estimate of power spectral density
has a high variance, so averaging of multiple estimates is used to
reduce the variance. If you expect the zero variance of the infinite/
continuous Fourier transform on infinite/continuous data from
calculations on finite sample sets, it is your expectations that are
faulty. The variance does not come from a failure to perform a proper
Fourier analysis but from performing the Fourier analysis on a finite
set of samples. Unfortunately, we have not had time to collect any
infinite noise records for analysis.

Dale B. Dalrymple
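
The variance-and-averaging point above can be sketched numerically (a
hedged illustration in NumPy; the block size and number of averaged
blocks are arbitrary choices, not anything specified in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256    # samples per block
k = 100    # number of independent blocks to average

# One long record of unit-variance white noise, split into k blocks.
x = rng.standard_normal(k * n).reshape(k, n)

# Per-block periodogram: |DFT|^2 / n is a PSD estimate whose mean is
# correct but whose standard deviation is about as large as the mean.
psd = np.abs(np.fft.rfft(x, axis=1)) ** 2 / n

# Spread of a single block's estimate across the interior bins...
single_spread = psd[0, 1:-1].std()

# ...versus the spread after averaging k independent block estimates,
# which shrinks roughly as 1/sqrt(k).
avg_spread = psd[:, 1:-1].mean(axis=0).std()

print(single_spread, avg_spread)
```

Averaging does not make the finite-record estimate exact; it trades
per-record frequency resolution for lower variance, which is the
work-around described above.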
#83 - Posted to rec.audio.pro
William Sommerwerck (Posts: 4,718)

I still say you don't understand what the Fourier transform is all about.
However, I can see how one might analyze a /time-limited/ noise signal.


I understand that the continuous/infinite Fourier transform is
different from the finite/discrete operations we are limited to in the
real world. By understanding the differences we can restrain our
expectations from infinite/continuous theory to achievable finite/
sampled theory and practice. We can use the knowledge of the
differences to work around some of the limitations.

For finite noise analysis, the FFT estimate of power spectral density
has a high variance, so averaging of multiple estimates is used to
reduce the variance. If you expect the zero variance of the infinite/
continuous Fourier transform on infinite/continuous data from
calculations on finite sample sets, it is your expectations that are
faulty. The variance does not come from a failure to perform a proper
Fourier analysis but from performing the Fourier analysis on a finite
set of samples. Unfortunately, we have not had time to collect any
infinite noise records for analysis.


If the noise is a stochastic process (am I using the term correctly?), would
you need an "infinite" sample? And if not, what would be the minimum sample
length? (I assume it would be inversely proportional to the lowest frequency
you wanted to measure.)
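
The inverse-proportionality guess above matches the DFT's resolution: a
record of length T seconds gives bins spaced 1/T Hz apart, so resolving
structure down to some lowest frequency needs a record of at least about
its reciprocal. A trivial sketch:

```python
# Minimum record length so the DFT bin spacing reaches a given lowest
# frequency: a T-second record yields bins spaced 1/T Hz apart.
def min_record_seconds(lowest_hz: float) -> float:
    return 1.0 / lowest_hz

print(min_record_seconds(20.0))   # resolving down to 20 Hz needs >= 0.05 s
```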


#84 - Posted to rec.audio.pro
dbd (Posts: 9)

On Dec 24, 9:50 am, "William Sommerwerck" wrote:
...



If the noise is a stochastic process (am I using the term correctly?), would
you need an "infinite" sample? And if not, what would be the minimum sample
length? (I assume it would be inversely proportional to the lowest frequency
you wanted to measure.)


You cannot achieve perfect reconstruction of the stochastic noise
process with a finite set of samples. If you simply take the DFT of n
real samples you get n/2 independent Fourier coefficients to calculate
n/2 power spectral density (PSD) estimates. If you simply increase n,
you get more estimates at closer frequency spacing with the same high
variance. To reduce the variance, multiple independent sets of (small)
n real samples are used to generate PSD estimates that are averaged
across multiple blocks to reduce variance. The conventional DFT will
produce one of the estimates at DC. This was in the reference I gave
on noise processing.

For sums of sets of stationary tones you can achieve perfect
reconstruction only if all the tones belong to a set of n/2
frequencies evenly spaced on 0 to Fsample/2. With the conventional
DFT, one of the tones will be DC. If these conditions are met, you can
achieve perfect reconstruction anywhere on the region over which the
tones are stationary. These conditions are seldom met in real
instrumentation and with real-world signals.

Fortunately it is not necessary to be able to achieve perfect
reconstruction to make useful applications of Fourier analysis of
finite/discrete data sets. Is it equivalent to infinite/continuous
Fourier analysis of infinite/continuous signals? No, it doesn't need
to be.

Dale B. Dalrymple
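
The bin-spacing condition described above is easy to check numerically
(a sketch; n, the sample rate, and the test frequencies are arbitrary
illustrative choices):

```python
import numpy as np

n, fs = 64, 64.0                 # n real samples; DFT bins are fs/n = 1 Hz apart
t = np.arange(n) / fs

def occupied_bins(freq, thresh=1e-9):
    """Count DFT bins with non-negligible energy for a unit cosine at freq."""
    spec = np.abs(np.fft.rfft(np.cos(2 * np.pi * freq * t))) / n
    return int(np.sum(spec > thresh))

on_bin = occupied_bins(5.0)      # exactly on a bin: a single coefficient,
                                 # so reconstruction from the DFT is exact
off_bin = occupied_bins(5.3)     # between bins: energy leaks across the spectrum

print(on_bin, off_bin)
```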
#85 - Posted to rec.audio.pro
William Sommerwerck (Posts: 4,718)

If the noise is a stochastic process (am I using the term correctly?),
would you need an "infinite" sample? And if not, what would be the minimum
sample length? (I assume it would be inversely proportional to the lowest
frequency you wanted to measure.)


You cannot achieve perfect reconstruction of the stochastic noise
process with a finite set of samples. If you simply take the DFT of n
real samples you get n/2 independent Fourier coefficients to calculate
n/2 power spectral density (PSD) estimates. If you simply increase n,
you get more estimates at closer frequency spacing with the same high
variance. To reduce the variance, multiple independent sets of (small)
n real samples are used to generate PSD estimates that are averaged
across multiple blocks to reduce variance. The conventional DFT will
produce one of the estimates at DC. This was in the reference I gave
on noise processing.

For sums of sets of stationary tones you can achieve perfect
reconstruction only if all the tones belong to a set of n/2
frequencies evenly spaced on 0 to Fsample/2. With the conventional
DFT, one of the tones will be DC. If these conditions are met, you can
achieve perfect reconstruction anywhere on the region over which the
tones are stationary. These conditions are seldom met in real
instrumentation and with real-world signals.

Fortunately it is not necessary to be able to achieve perfect
reconstruction to make useful applications of Fourier analysis of
finite/discrete data sets. Is it equivalent to infinite/continuous
Fourier analysis of infinite/continuous signals? No, it doesn't need
to be.


That pretty much makes sense.




#86 - Posted to rec.audio.pro
Peter Larsen[_3_] (Posts: 2,295)

muzician21 wrote:

I've read that certain vintage mics say from the 40's are considered
desirable for their sound quality. So how did engineers of the day
gauge the performance of mics if there wasn't a truly high quality
playback system available?


I keep wondering about the premise of this question.

Or is that not correct?


Let us assume, just for a brief moment, listening via, say, a 10" wideband
loudspeaker sans a whizzer cone. Such a unit would not have been improbable
back then. Considering how many audio production differences and compression
artifacts are audible on various car audio and workplace loudenboomer
systems, I think it perfectly possible to hear the difference between more or
less clear impulse responses on such a loudspeaker.

I think you need to allow for what we would consider acceptable-quality
playback systems further back in time than your question implicitly asserts.
Also, BTW, it may make sense to just consider "transducer evolution", since
loudspeaker and microphone technology goes hand in hand.

Kind regards

Peter Larsen



#87 - Posted to rec.audio.pro
Arny Krueger (Posts: 17,262)

"William Sommerwerck" wrote in
message

A lens is a special case of the general class of items
called prisms. Fresnel lenses are even made out of
prisms. Prisms and lenses both work on the principle of
refraction.


If you're talking about prisms as devices that divide
light into a spectrum, the operating principle is
dispersion, not refraction. (Granted, dispersion is a
subset of refraction.)


I think that the obvious claim that refraction is not involved with prisms
speaks for itself. Suffice it to say it must have been way too long since
you took high school physics, if you were conscious at the time.

I know of no use of prisms as image-forming devices.


Two words: Periscopes and binoculars.

They are not lenses.


But they unambiguously function based on refraction.

http://library.thinkquest.org/22915/refraction.html (of literally thousands
of similar references).

(The ridges of a Fresnel lens are not
prisms, but "onion ring" cores of a lens surface.)


I've actually seen working Fresnel lenses formed of prisms. You don't even
have to curve the segments if you make them up of small enough pieces. There
are rectangular Fresnels that are used for lighting that are formed of
straight prismatic shapes.

It is quite clear that you'd proudly deny that you were born of a woman in
order to score points in a debate, William. This only reinforces
speculation about your canine origins! ;-)

They are commonly used -- particularly in binoculars and SLRs
-- to direct the light in a different direction /without
"processing" it/ in any way.


Let us know when you come to your senses, William.


#88 - Posted to rec.audio.pro
Arny Krueger (Posts: 17,262)

"Don Pearce" wrote in message

On Fri, 24 Dec 2010 10:11:07 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Thu, 23 Dec 2010 22:41:07 -0500, "Arny Krueger"
wrote:


FFT technology can be used to implement filters with
more-or-less arbitrary bandpass characteristics.
FFT-based filters are commonly used in audio
production. For example Adobe Audition has two
FFT-based filters, one that implements the user's
arbitrarily drawn frequency response curve and another
that implements the user's arbitrarily drawn phase
response curve.


The question here is what gets FFT'd.


Windowed sets of data.

I suspect that in
the Audition filters, the drawn curve is FFT'd into the
time domain, then convolution is used against the actual
signal.


Very little seems to be known about how Audition does
much of anything at that level of detail.

Mathematically and time-wise that would make much
more sense than chopping the signal into chunks, FFTing,
multiplying by the filter function and IFFTing back to
time domain many, many times.


Given that windowing and FFT size are known to be part
of the processing, the second method seems to be the
more likely.


Windowing is no good with audio when you have to turn it
back into time domain. You end up with amplitude
modulation of the finished waveform that way.


Only if you do it incorrectly.

I suspect
that the filter response is IFFT'd then convolved with
the audio on the fly. That would use least processing,
and minimize latency in real-time filtering.


Prove it!

Of course the Audition FFT filter comes with the problem
that amplitude and phase responses are not related, so
you can't get back to where you started later using
minimum phase networks.


Since Audition also has minimum phase filters readily available, it's all
about giving the user choices.


#89 - Posted to rec.audio.pro
Don Pearce[_3_] (Posts: 2,417)

On Sun, 26 Dec 2010 07:50:14 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Fri, 24 Dec 2010 10:11:07 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Thu, 23 Dec 2010 22:41:07 -0500, "Arny Krueger"
wrote:

FFT technology can be used to implement filters with
more-or-less arbitrary bandpass characteristics.
FFT-based filters are commonly used in audio
production. For example Adobe Audition has two
FFT-based filters, one that implements the user's
arbitrarily drawn frequency response curve and another
that implements the user's arbitrarily drawn phase
response curve.

The question here is what gets FFT'd.

Windowed sets of data.

I suspect that in
the Audition filters, the drawn curve is FFT'd into the
time domain, then convolution is used against the actual
signal.

Very little seems to be known about how Audition does
much of anything at that level of detail.

Mathematically and time-wise that would make much
more sense than chopping the signal into chunks, FFTing,
multiplying by the filter function and IFFTing back to
time domain many, many times.

Given that windowing and FFT size are known to be part
of the processing, the second method seems to be the
more likely.


Windowing is no good with audio when you have to turn it
back into time domain. You end up with amplitude
modulation of the finished waveform that way.


Only if you do it incorrectly.

No, that is what windowing does. It reduces the amplitude of the
samples to zero in a controlled manner at the two ends. There is no
"correct" way to do it that doesn't modulate the amplitude.

I suspect
that the filter response is IFFT'd then convolved with
the audio on the fly. That would use least processing,
and minimize latency in real-time filtering.


Prove it!

Prove what? That convolution on the fly is quicker, and has less
latency than taking groups of data, performing an FFT, multiplying,
performing an IFFT then moving on... do I really need to prove that?

Of course the Audition FFT filter comes with the problem
that amplitude and phase responses are not related, so
you can't get back to where you started later using
minimum phase networks.


Since Audition also has minimum phase filters readily available, it's all
about giving the user choices.

Sure. I was just making a point about an important aspect of the FFT filter
that some may not have grasped.

d
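
For what it's worth, the windowing disagreement above hinges on overlap:
a single windowed block is indeed tapered, but windows chosen to satisfy
the constant-overlap-add (COLA) condition, e.g. a periodic Hann at 50%
overlap, sum to a constant, so the reassembled output carries no amplitude
modulation. A sketch (block length and count are arbitrary choices):

```python
import numpy as np

n = 512                                    # block length (arbitrary)
hop = n // 2                               # 50% overlap
m = np.arange(n)
w = 0.5 - 0.5 * np.cos(2 * np.pi * m / n)  # *periodic* Hann window

# Overlap-add the bare windows: if their sum is constant, windowed block
# processing reintroduces no amplitude modulation on reassembly.
total = np.zeros(hop * 10 + n)
for start in range(0, hop * 10, hop):
    total[start:start + n] += w

middle = total[n:-n]                       # ignore the tapered signal ends
print(middle.min(), middle.max())          # constant: the windows sum to 1
```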
#90 - Posted to rec.audio.pro
William Sommerwerck (Posts: 4,718)

A lens is a special case of the general class of items
called prisms. Fresnel lenses are even made out of
prisms. Prisms and lenses both work on the principle
of refraction.


If you're talking about prisms as devices that divide
light into a spectrum, the operating principle is
dispersion, not refraction. (Granted, dispersion is a
subset of refraction.)


I think that the obvious claim that refraction is not involved with
prisms speaks for itself. Suffice it to say it must have been way
too long since you took high school physics, if you were conscious
at the time.


I know of no use of prisms as image-forming devices.


Two words: Periscopes and binoculars.


Sorry about that, but the prisms in periscopes and binoculars are not used
to form images. They simply redirect the light.


They are not lenses.


But they unambiguously function based on refraction.


But that wasn't the point.


http://library.thinkquest.org/22915/refraction.html (of literally thousands
of similar references).


(The ridges of a Fresnel lens are not
prisms, but "onion ring" cores of a lens surface.)


I've actually seen working Fresnel lenses formed of prisms. You don't even
have to curve the segments if you make them up of small enough pieces. There
are rectangular Fresnels that are used for lighting that are formed of
straight prismatic shapes.


It is quite clear that you'd proudly deny that you were born of a woman in
order to score points in a debate, William. This only reinforces
speculation about your canine origins! ;-)


Bitch. (Couldn't resist that.)


They are commonly used -- particularly in binoculars and SLRs
-- to direct the light in a different direction /without
"processing" it/ in any way.


Let us know when you come to your senses, William.


Let us know when you start understanding what you're talking about, rather
than repeating your misunderstanding of what you read.




#91 - Posted to rec.audio.pro
Arny Krueger (Posts: 17,262)

"Don Pearce" wrote in message

On Sun, 26 Dec 2010 07:50:14 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Fri, 24 Dec 2010 10:11:07 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Thu, 23 Dec 2010 22:41:07 -0500, "Arny Krueger"
wrote:

FFT technology can be used to implement filters with
more-or-less arbitrary bandpass characteristics.
FFT-based filters are commonly used in audio
production. For example Adobe Audition has two
FFT-based filters, one that implements the user's
arbitrarily drawn frequency response curve and
another that implements the user's arbitrarily drawn
phase response curve.

The question here is what gets FFT'd.

Windowed sets of data.

I suspect that in
the Audition filters, the drawn curve is FFT'd into
the time domain, then convolution is used against the
actual signal.


Thus we establish what the context of the discussion is about - exactly what
processing scheme does Audition use when it does FFT filtering. Remember
this folks, as my correspondent seems to want to ditch it at his first
convenience.

Very little seems to be known about how Audition does
much of anything at that level of detail.


Mathematically and time-wise that would make much
more sense than chopping the signal into chunks,
FFTing, multiplying by the filter function and
IFFTing back to time domain many, many times.


Given that windowing and FFT size are known to be part
of the processing, the second method seems to be the
more likely.


Windowing is no good with audio when you have to turn it
back into time domain. You end up with amplitude
modulation of the finished waveform that way.


Only if you do it incorrectly.



No, that is what windowing does. It reduces the amplitude
of the samples to zero in a controlled manner at the two
ends. There is no "correct" way to do it that doesn't
modulate the amplitude.


Please step back and see the big picture. Eventually these filters produce a
continuous audio output signal. Is the output of the filter amplitude
modulated as a byproduct of the windowing or not?

I suspect
that the filter response is IFFT'd then convolved with
the audio on the fly. That would use least processing,
and minimize latency in real-time filtering.


Prove it!


Prove what? That convolution on the fly is quicker, and
has less latency than taking groups of data, performing
an FFT, multiplying, performing an IFFT then moving on...
do I really need to prove that?


Again, you are answering a question that you made up, not the one I asked.
The topic is how a particular piece of software does FFT filtering. You
either know or you are speculating.

Of course the Audition FFT filter comes with the problem
that amplitude and phase responses are not related, so
you can't get back to where you started later using
minimum phase networks.


Since Audition also has minimum phase filters readily
available, it's all about giving the user choices.


Sure. I was just making a point about an important aspect of the
FFT filter that some may not have grasped.


Only in your dreams.


#92 - Posted to rec.audio.pro
Don Pearce[_3_] (Posts: 2,417)

On Mon, 27 Dec 2010 07:23:52 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Sun, 26 Dec 2010 07:50:14 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Fri, 24 Dec 2010 10:11:07 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Thu, 23 Dec 2010 22:41:07 -0500, "Arny Krueger"
wrote:

FFT technology can be used to implement filters with
more-or-less arbitrary bandpass characteristics.
FFT-based filters are commonly used in audio
production. For example Adobe Audition has two
FFT-based filters, one that implements the user's
arbitrarily drawn frequency response curve and
another that implements the user's arbitrarily drawn
phase response curve.

The question here is what gets FFT'd.

Windowed sets of data.

I suspect that in
the Audition filters, the drawn curve is FFT'd into
the time domain, then convolution is used against the
actual signal.


Thus we establish what the context of the discussion is about - exactly what
processing scheme does Audition use when it does FFT filtering. Remember
this folks, as my correspondent seems to want to ditch it at his first
convenience.

Very little seems to be known about how Audition does
much of anything at that level of detail.


Mathematically and time-wise that would make much
more sense than chopping the signal into chunks,
FFTing, multiplying by the filter function and
IFFTing back to time domain many, many times.


Given that windowing and FFT size are known to be part
of the processing, the second method seems to be the
more likely.

Windowing is no good with audio when you have to turn it
back into time domain. You end up with amplitude
modulation of the finished waveform that way.


Only if you do it incorrectly.



No, that is what windowing does. It reduces the amplitude
of the samples to zero in a controlled manner at the two
ends. There is no "correct" way to do it that doesn't
modulate the amplitude.


Please step back and see the big picture. Eventually these filters produce a
continuous audio output signal. Is the output of the filter amplitude
modulated as a byproduct of the windowing or not?


Clearly not, therefore there is no windowing. Windowing ALWAYS
modulates the amplitude - that is its function.


I suspect
that the filter response is IFFT'd then convolved with
the audio on the fly. That would use least processing,
and minimize latency in real-time filtering.


Prove it!


Prove what? That convolution on the fly is quicker, and
has less latency than taking groups of data, performing
an FFT, multiplying, performing an IFFT then moving on...
do I really need to prove that?


Again, you are answering a question that you made up, not the one I asked.
The topic is how a particular piece of software does FFT filtering. You
either know or you are speculating.


My statement, for which you demanded proof, was that convolution against a
time-based response was quicker than an FFT method which demanded that the
audio be cut into chunks. That was what you demanded I prove. I decline. It
is obvious.

Of course the Audition FFT filter comes with the problem
that amplitude and phase responses are not related, so
you can't get back to where you started later using
minimum phase networks.


Since Audition also has minimum phase filters readily
available, its all about giving the user choices.


Sure. I was just making a point about an important aspect of the
FFT filter that some may not have grasped.


Only in your dreams.

You think that everybody in the world understands that FFT-based
filters do not exhibit a minimum phase relationship to their
time-based response? Did you over-indulge this Christmas?

d
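
Speed aside, it is easy to check that the two schemes being argued over
compute the same result: block FFT filtering (overlap-add) agrees with
direct time-domain convolution to floating-point precision. A sketch in
NumPy (the filter, block size, and signal are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal(33)        # some FIR impulse response
x = rng.standard_normal(1000)      # the "audio" to be filtered

# Direct time-domain convolution.
direct = np.convolve(x, h)

# Overlap-add: chop x into chunks, FFT, multiply by the filter's
# frequency response, IFFT, and add the overlapping tails back together.
block = 128
nfft = block + len(h) - 1          # room for each chunk's convolution tail
H = np.fft.rfft(h, nfft)

ola = np.zeros(len(x) + nfft)
for start in range(0, len(x), block):
    chunk = x[start:start + block]
    ola[start:start + nfft] += np.fft.irfft(np.fft.rfft(chunk, nfft) * H, nfft)

err = np.max(np.abs(ola[:len(direct)] - direct))
print(err)                         # the two methods agree to float precision
```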
#93 - Posted to rec.audio.pro
Arny Krueger (Posts: 17,262)

"Don Pearce" wrote in message

On Mon, 27 Dec 2010 07:23:52 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Sun, 26 Dec 2010 07:50:14 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Fri, 24 Dec 2010 10:11:07 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Thu, 23 Dec 2010 22:41:07 -0500, "Arny Krueger"
wrote:

FFT technology can be used to implement filters
with more-or-less arbitrary bandpass
characteristics. FFT-based filters are commonly
used in audio production. For example Adobe
Audition has two FFT-based filters, one that
implements the user's arbitrarily drawn frequency
response curve and another that implements the
user's arbitrarily drawn phase response curve.

The question here is what gets FFT'd.

Windowed sets of data.

I suspect that in
the Audition filters, the drawn curve is FFT'd into
the time domain, then convolution is used against
the actual signal.


Thus we establish what the context of the discussion is
about - exactly what processing scheme does Audition use
when it does FFT filtering. Remember this folks, as my
correspondent seems to want to ditch it at his first
convenience.

Very little seems to be known about how Audition does
much of anything at that level of detail.

Mathematically and time-wise that would make much
more sense than chopping the signal into chunks,
FFTing, multiplying by the filter function and
IFFTing back to time domain many, many times.


Given that windowing and FFT size are known to be
part of the processing, the second method seems to
be the more likely.

Windowing is no good with audio when you have to turn
it back into time domain. You end up with amplitude
modulation of the finished waveform that way.


Only if you do it incorrectly.



No, that is what windowing does. It reduces the
amplitude of the samples to zero in a controlled manner
at the two ends. There is no "correct" way to do it
that doesn't modulate the amplitude.


Please step back and see the big picture. Eventually
these filters produce a continuous audio output signal.
Is the output of the filter amplitude modulated as a
byproduct of the windowing or not?


Clearly not, therefore there is no windowing. Windowing
ALWAYS modulates the amplitude - that is its function.


I suspect
that the filter response is IFFT'd then convolved with
the audio on the fly. That would use least processing,
and minimize latency in real-time filtering.


Prove it!


Prove what? That convolution on the fly is quicker, and
has less latency than taking groups of data, performing
an FFT, multiplying, performing an IFFT then moving
on... do I really need to prove that?


Again, you are answering a question that you made up,
not the one I asked. The topic is how a particular piece
of software does FFT filtering. You either know or you
are speculating.


My statement, for which you demanded proof, was that
convolution against a time-based response was quicker than
an FFT method which demanded that the audio be cut into
chunks. That was what you demanded I prove. I decline. It
is obvious.

Of course the Audition FFT filter comes with the
problem that amplitude and phase responses are not
related, so you can't get back to where you started
later using minimum phase networks.


Since Audition also has minimum phase filters readily
available, it's all about giving the user choices.


Sure. I was just making a point about an important aspect of the
FFT filter that some may not have grasped.


Only in your dreams.

You think that everybody in the world understands that
FFT-based filters do not exhibit a minimum phase
relationship to their time-based response? Did you
over-indulge this Christmas?


I see zero relationship between my comments and the recent responses.

For the record, I'm almost a complete teetotaller, but one who can drink many
people under the table at will. Weird body chemistry, I guess.

I did drink a long neck bottle of Colorado microbrew around 4 pm on
Christmas day. That's it.

