Arny Krueger
 
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

"Garthrr" wrote in message

In article , Justin
Ulysses Morse writes:

I thought it was about the 4th or 5th time I said it in these threads
over the past 2 days, and I thought I was repeating myself. But I'm
glad to hear it's starting to gel.


You may well have said it and I could have either missed the post or
missed the point. Either way it is starting to make sense to me, even
though it's still a little blurry.


Suppose you have an input signal whose voltage at some arbitrary
point in time is 3.26534263219541623 volts. Now, off the top of my
head I estimate that the best approximation of this voltage you can
represent with 24 bits is maybe 3.2653426 volts. And 16 bits would
round it off to around 3.26534. So what's going on in the 24-bit
audio that's missing from the 16-bit audio? A signal in the
neighborhood of 2.6 microvolts. Which is pretty dang low-level if
you ask me.
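The arithmetic in that estimate can be sketched in a few lines of Python. The decimal rounding here just mirrors the off-the-top-of-the-head precision figures quoted above (about 7 decimal places for 24 bits, about 5 for 16); a real converter quantizes to binary steps of full scale divided by 2^bits, not to decimal places:

```python
# Reproduce the post's rounding estimate: what 24-bit keeps
# that 16-bit drops, for the example input voltage.

v_in = 3.26534263219541623  # input voltage from the post, volts

v24 = round(v_in, 7)  # ~24-bit approximation: 3.2653426 V
v16 = round(v_in, 5)  # ~16-bit approximation: 3.26534 V

diff = v24 - v16      # the part present in 24-bit, missing from 16-bit
print(f"difference: {diff * 1e6:.1f} microvolts")  # about 2.6
```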


Here is a bblluurrrryy moment for me. Is it that there is a _signal_
which is 2.6 microvolts, or is it that there is an error of 2.6
microvolts in the reproduction of a signal which is the above
3.26534263219541623 volts?


Both. You can think of 3.26534 volts as 3.26534263219541623 volts with an
approximately 2.6 microvolt error in it, or you can think of it as the
original 3.26534263219541623 volt signal with an approximately 2.6
microvolt error voltage added.
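The two views are the same decomposition written two ways, which a quick sketch makes concrete (the 16-bit value here is the rounded figure from the example above):

```python
signal = 3.26534263219541623  # original voltage, volts
quantized = 3.26534           # 16-bit rounded value from the example

# View 1: the stored value is the signal with a small error in it.
error = quantized - signal    # roughly -2.6 microvolts

# View 2: the stored value is the signal plus a small error voltage.
reconstructed = signal + error
assert abs(reconstructed - quantized) < 1e-15

print(f"error voltage: {error * 1e6:.1f} microvolts")
```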

To me this seems like a qualitative
difference (no matter how insignificant the quantity in question may
be).


It is a small qualitative difference. It's an error that is about 120 dB
down from a signal which is close to full scale (0 dB), which means don't
worry about it. If the significant signal were itself down around -100 dB,
then it would be worth worrying about.
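As a sanity check on the "about 120 dB down" figure, here is the level of a 2.6 microvolt error relative to the near-full-scale example voltage from earlier in the thread:

```python
import math

signal = 3.26534263219541623  # volts, close to full scale
error = 2.6e-6                # volts, the quantization residue

# Level of the error relative to the signal, in decibels.
db_down = 20 * math.log10(error / signal)
print(f"{db_down:.0f} dB")  # about -122 dB
```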