Jay - atldigi
 
16 bit vs 24 bit, 44.1 kHz vs 48 kHz <-- please explain

In article , "Arny Krueger"
wrote:

"Tommi" wrote in message

"Arny Krueger" wrote in message


The idea that adding bits does not increase resolution is yet another
popular urban myth about digital. It's similar to the urban myth
that analog has resolution below the noise floor.


So, if you're recording, say, someone's vocals at both 16 and 24
bits, and the peaks are at -6 dB to 0 dB FS, does the 24 bit recording
represent the signal in that region more accurately than the 16-bit
version?


The 24 bit recording has the capability to represent the signal much more
accurately in *any* range from zero to max than the 16 bit recording.


I think you're suffering from the myth, Arny. Let me quote from another
thread where Scott Dorsey is trying to explain the same thing that I am,
and then I'll try to explain it yet another way:


In article , (Scott Dorsey) wrote:

A 16 bit number is significantly
smaller and therefore less precise than a 24 bit number.


Right.

So, in a nutshell: moving from 16 bit to 24 bit, we have 8 extra bits
per sample to represent the analog wave, which is a massive gain.


Not really. It gives you more dynamic range, which is often wasted
anyway. 96 dB is an awful lot.
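
For the record, Scott's 96 dB figure is just the usual rule of thumb of
about 6 dB of range per bit for a linear PCM word. A quick, purely
illustrative check (any language would do, this happens to be Python):

import math

# Theoretical dynamic range of an N-bit linear word:
# 20*log10(2**N), i.e. roughly 6.02 dB per bit.
for bits in (16, 24):
    print(f"{bits} bit: {20 * math.log10(2 ** bits):.1f} dB")

# 16 bit:  96.3 dB
# 24 bit: 144.5 dB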




The 24 bit number is more precise than the 16 bit one. True enough. What
that means in audio, however, is that the 24 bit word can describe
smaller values than the 16 bit word, that is, signals that are lower in
level. The 16 bit number already describes 96 dB of dynamic range just
fine. If you want to carry the precision further and capture signals
that are lower, say down to -144 dB, then 24 bits is your ticket.
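
To put some illustrative numbers on Tommi's -6 dBFS example: quantize
the same tone to 16 and 24 bits and look at where the error ends up.
A rough numpy sketch, assuming an ideal midtread quantizer and a 997 Hz
test tone (both just choices for the demo, nothing magic about them):

import numpy as np

fs, f0 = 48000, 997.0                      # one second of a 997 Hz test tone
t = np.arange(fs) / fs
x = 10 ** (-6 / 20) * np.sin(2 * np.pi * f0 * t)   # peaks at -6 dBFS

def quantize(signal, bits):
    # Round to the nearest step of an ideal midtread quantizer;
    # full scale spans -1.0 .. +1.0.
    step = 2.0 / (2 ** bits)
    return np.round(signal / step) * step

for bits in (16, 24):
    err = quantize(x, bits) - x
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits} bit: error around {rms_db:.0f} dBFS")

# Prints roughly -101 dBFS for 16 bit and -149 dBFS for 24 bit. Both
# word lengths describe the loud -6 dBFS signal just fine; the extra
# 8 bits simply push the residual error about 48 dB lower.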

The myth is the dynamic-range equivalent of the argument that taking 4
samples per cycle of a 20 kHz sine wave will render it more accurately
than 2, and 8 samples even more so. That's not true either.
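
The sampling side of the analogy can be sanity-checked the same way:
once a 20 kHz tone is inside the band, the 44.1 kHz samples already
determine it completely, and taking twice as many samples adds nothing.
A small numpy sketch, assuming an ideally band-limited tone and ideal
FFT-based reconstruction (the one second duration is arbitrary):

import numpy as np

fs_low, fs_high, f0 = 44100, 88200, 20000.0    # Hz
n_low, n_high = fs_low, fs_high                # one second at each rate

x_low = np.sin(2 * np.pi * f0 * np.arange(n_low) / fs_low)     # ~2.2 samples/cycle
x_high = np.sin(2 * np.pi * f0 * np.arange(n_high) / fs_high)  # ~4.4 samples/cycle

# Ideal band-limited upsampling: zero-pad the spectrum of the 44.1 kHz
# capture out to the 88.2 kHz grid and inverse-transform. The tone sits
# below the lower Nyquist frequency, so the low-rate record already
# determines every sample of the high-rate one.
X = np.fft.rfft(x_low)
X_padded = np.zeros(n_high // 2 + 1, dtype=complex)
X_padded[: X.size] = X
x_rebuilt = np.fft.irfft(X_padded, n_high) * (n_high / n_low)

print("max difference:", np.max(np.abs(x_rebuilt - x_high)))
# Something tiny, around 1e-10 or less: the extra samples added nothing.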

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com