Justin Ulysses Morse
 
16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

Garthrr wrote:


Here is a blurry moment for me. Is it that there is a _signal_ which is
2.6 microvolts or... is it that there is an error of 2.6 microvolts in the
reproduction of a signal which is the above 3.26534263219541623 volts? To me
this seems like a qualitative difference (no matter how insignificant the
quantity in question may be).


It's exactly the same thing either way you think about it. The 16-bit
signal is what you'd end up with if you started with the 24-bit signal
and added (or subtracted) the error signal. Since we're "rounding"
(kind of), sometimes the 16-bit value will be a little bigger and
sometimes a little smaller than the 24-bit value. So when you look at
the error "signal" over the course of time, it's constantly bouncing up
and down, not quite randomly but sort of arbitrarily, because the music
doesn't care where those 16-bit quantization steps fall.

Of course I'm ignoring the topic of dither for the sake of theory, but
as Jay points out, dither changes everything. Dither is another "error"
signal added to the 24-bit signal just before you do the "rounding."
The dither signal, which averages just under -96dB, plus those 8 lost
bits, which also average under -96dB, add up to a "music plus noise"
signal that averages just OVER -96dB, so it is able to show up in the
16-bit signal.
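
Same kind of sketch for the dither side of it (again, my own toy
numbers, not anybody's real dither algorithm): a tone quieter than half
a 16-bit step just rounds away to nothing, but add TPDF dither before
the rounding and the tone survives, riding on top of the noise, in the
16-bit output:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    step_16 = 1.0 / 32768
    tone = 1e-5 * np.sin(2 * np.pi * 440 * t)  # quieter than half a 16-bit step

    rng = np.random.default_rng(0)
    # TPDF dither: two uniform random values summed, +/- one step peak.
    dither = (rng.random(fs) - rng.random(fs)) * step_16

    plain = np.round(tone / step_16) * step_16
    dithered = np.round((tone + dither) / step_16) * step_16

    for name, out in (("undithered", plain), ("dithered", dithered)):
        # Undithered: all zeros, the tone is gone.  Dithered: noisy, but
        # still correlated with the tone, so the music is still in there.
        print(name, "RMS:", np.sqrt(np.mean(out**2)),
              " correlation with tone:", float(np.dot(out, tone)))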


ulysses