David Satz
 

Mark and Mary Ann, when you write, "I think that 16 bits might not be
quite enough to convey very soft passages without that 'bumpiness' that
happens when there are not enough steps on the waveform to convey it
accurately. Low sample rates, such as 44.1, also may be inadequate to
convey subtle timing differences between sound arrivals, thus reducing
the sense of soundstage depth and breadth," you are stating an
often-expressed viewpoint, one that accords very well with many people's
mental model of how digital recording works and with their common-sense
expectations.

Unfortunately, the explanations that most people seem to accept about
digital audio are greatly simplified and do not fully describe what
happens in any actual digital recording. Thus any predictions made
from that mental model shouldn't be given much weight unless they can
be shown to occur in reality. Such is the case with the two concerns
you mention: they are not, in fact, limitations of 16-bit, 44.1 kHz
linear PCM as such.

The real, demonstrable (as well as mathematically predictable)
limitations of properly dithered 16-bit, 44.1 kHz linear PCM are its
dynamic range (about 94 dB) and its bandwidth (a little under 22.05
kHz, i.e. half the sampling rate).
Sound quality may of course suffer in other ways. But if that were the
fault of this bit depth and sampling frequency, then logically,
_absolutely_ every CD ever made would sound quite bad, and there could
never be a CD--not one, ever--that sounded any good at all.
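
For anyone curious where those two figures come from, here is a quick
back-of-the-envelope check in Python. It is only a sketch of mine,
assuming the usual 6.02-dB-per-bit rule for a full-scale sine and flat
triangular (TPDF) dither, which costs about 4.77 dB of noise floor:

bits = 16
fs = 44100                                   # Hz, the CD sampling rate

# full-scale sine vs. quantization noise: ~6.02 dB per bit plus 1.76 dB
snr_undithered = 6.02 * bits + 1.76          # about 98.1 dB

# flat TPDF dither adds roughly 4.77 dB of noise on top of that
dynamic_range = snr_undithered - 4.77        # about 93.3 dB

nyquist = fs / 2                             # 22050 Hz upper band limit

print(f"dithered 16-bit dynamic range: about {dynamic_range:.1f} dB")
print(f"bandwidth limit: {nyquist:.0f} Hz")

The exact figure shifts a little depending on the dither used (and
noise shaping can buy some of it back in the most audible part of the
band), but it lands right about where stated above.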

Increasing the number of "steps on the waveform" doesn't convey an
analog signal any more accurately unless the noise floor of the analog
signal is below that of the A/D converter. You can compare the output
to the input and measure (and listen to) the difference--that's the
gold standard, no? Adding more "steps" (bits) lowers the noise floor,
but it has no other effect on accuracy, i.e., on the difference between
input and output. That's why 20-bit and even 24-bit converters are used
in professional recording (the extra bits buy noise margin and headroom
for conservative level settings while tracking and mixing, before the
final levels are set), while 16-bit compact discs have, if anything,
almost too much dynamic range for most consumer listening situations.
Optimal bit depth is a straightforward engineering decision: you choose
it to fit your dynamic range requirements.
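
If you would rather see that than take it on faith, the following
sketch (mine, assuming a full-scale range of +/-1.0 and plain +/-1 LSB
TPDF dither) quantizes the same signal at three bit depths and measures
the output-minus-input difference, which is exactly the gold-standard
comparison I mentioned:

import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 997 * t)       # a -6 dBFS sine at 997 Hz

def quantize_with_dither(x, bits):
    """Round to 'bits' of resolution after adding +/-1 LSB triangular dither."""
    lsb = 2.0 / (2 ** bits)                      # step size for a +/-1.0 full scale
    tpdf = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + tpdf) / lsb) * lsb

for bits in (16, 20, 24):
    err = quantize_with_dither(signal, bits) - signal   # output minus input
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit error floor: {rms_db:6.1f} dBFS")

The 16-bit error sits roughly 93 dB below a full-scale sine, and every
four additional bits simply push that floor down by another 24 dB or
so. The difference between input and output is noise, nothing else;
there is no "bumpiness" riding on the waveform.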

The more subtle point is that the timing accuracy of analog signals in
(for example) a 44.1 kHz sampled system isn't limited by the 44.1 kHz
rate as you seem to imagine. Rather, it is limited by the accuracy with
which the 44.1 kHz sampling process occurs (i.e. keeping the jitter low
enough). If a transient begins between two sample instants, it can very
well be reconstructed in playback as having begun between them, too.
The output of a CD player is a continuous analog signal--not a series
of stairstep sample values that are deaf to everything occurring
between them. That misleading stairstep image comes from the
oversimplified mental model.
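
Here is a small numerical illustration of that point (again just a
sketch of mine; the only assumption is a band-limited signal, which the
anti-aliasing filter guarantees in practice). Sample an impulse whose
peak lies three-tenths of the way between two sample instants,
reconstruct the waveform the way an ideal D/A converter conceptually
does, and the peak comes back between the samples, right where it
started:

import numpy as np

true_peak = 100.3                 # impulse centered 0.3 samples past sample 100
n = np.arange(200)
samples = np.sinc(n - true_peak)  # what the sampler captures: a band-limited impulse

# Whittaker-Shannon (sinc) reconstruction on a time grid 100x finer than the samples
t_fine = np.arange(0.0, 200.0, 0.01)
reconstructed = np.array([np.sum(samples * np.sinc(t - n)) for t in t_fine])

print(f"reconstructed peak at sample time {t_fine[np.argmax(reconstructed)]:.2f}")
# prints about 100.30 -- the sub-sample timing survives sampling and playback

Nothing special was done to "remember" the 0.3 of a sample; the
ordinary sample values around the peak already encode it.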

Consider this: Cedar, the nice British DSP company, has an "azimuth
correction" unit that can take a stereo 44.1 kHz digital input and
accurately realign the interchannel timing by small fractions of a
sampling interval. If your assumptions were correct, that unit couldn't
possibly do what it does. Yet turn the knob, and it does.
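
To make that concrete, here is one simple way to shift a channel by a
fraction of a sample entirely in the 44.1 kHz domain. This is a sketch
of mine, assuming a signal that fits neatly into the analysis block; it
is certainly not Cedar's actual algorithm, which I don't know:

import numpy as np

def fractional_delay(x, delay):
    """Delay a block of samples by 'delay' samples (fractional values allowed)
    by rotating the phase of each frequency component."""
    freqs = np.fft.rfftfreq(len(x))                      # 0 .. 0.5 cycles/sample
    shifted = np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(shifted, len(x))

fs = 44100
t = np.arange(4410) / fs                                 # a 100 ms block
left = np.sin(2 * np.pi * 1000 * t)                      # 1 kHz test tone
right = fractional_delay(left, 0.37)                     # lag of ~8.4 microseconds

ideal = np.sin(2 * np.pi * 1000 * (t - 0.37 / fs))       # the analytically delayed tone
print("worst-case deviation:", np.max(np.abs(right - ideal)))   # rounding error only

A real-time unit would do the equivalent job with a short interpolation
filter rather than block FFTs, and it would let you sweep the delay
continuously, but the underlying point is the same: timing adjustments
far finer than one sampling interval are routine operations on 44.1 kHz
data.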

I hope it is possible to say in a friendly fashion that this accords
with decades of actual observation, with practice as well as theory,
and is not a matter of mere personal viewpoint or opinion. What you
were saying, on the other hand, is conjecture based on your
visualization of a process that doesn't actually work in quite the way
that you (and a lot of other people, unfortunately) seem to think it
does. As a result, the conclusions which you've reached simply aren't
correct.

--best regards