#41   Jon Davis
16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain


"Ric Oliva" wrote in message
...
Ok, so I understand that 44.1k is 44,100 samples per second and 48k is
48,000 samples per second. Obviously 48,000 is better.


Correct. The more samples of a waveform you can gather, the easier it is to
reproduce it.

I'm not exactly
sure what bit rate is though? CDs are 16 bit, DVDs are 24. What exactly
does that mean though?


That's not bit rate, but rather bit depth.

MP3 files and other audio file formats are often saved at a fixed "bit rate",
which pins down the number of bits per chunk of time. Hence an HTTP download of
an MP3 file can be streamed at the highest quality possible, because the
required transfer rate is known in advance.

Bit depth, however, is the vertical sampling resolution of an audio sample.
As you know, 44kHz sample rate is the number of samples per second. On a
horizontal waveform drawing, this is the horizontal resolution (think screen
resolution on your monitor, 640x480 vs. 1024x768). Bit depth is the number
of bits of information per sample, or the vertical resolution.

For example, at an 8-bit sample depth, a waveform's amplitude in a particular
sample can take any of 256 possible values (2^8 = 256). Obviously, that's a
very small number of possibilities. At a 16-bit sample depth the resolution is
much higher: 65,536 (2^16).

Bit rate is determined by the two resolutions combined (times the number of
channels), or, for compressed formats like MP3, set directly by the encoder.
Either way, an MP3 file saved at a 64kbps bit rate must be downloaded from the
Internet at a sustained rate of at least 64 kilobits per second in order for it
to be streamed through the MP3 player without hiccupping.
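The two resolutions, and the uncompressed bit rate they combine into, can be checked with a few lines of Python (an illustrative sketch, not anything from the original posts):

```python
import math

def quantization_levels(bits):
    """Number of distinct amplitude values at a given bit depth."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

def pcm_bit_rate(sample_rate, bits, channels):
    """Uncompressed bit rate = samples/sec * bits/sample * channels."""
    return sample_rate * bits * channels

print(quantization_levels(8))          # 256
print(quantization_levels(16))         # 65536
print(round(dynamic_range_db(16), 1))  # 96.3
print(pcm_bit_rate(44100, 16, 2))      # 1411200 bits/sec for stereo CD audio
```

The familiar 6 dB-per-bit rule of thumb falls out of the 20*log10(2^bits) line: 16 bits gives about 96 dB, 24 bits about 144 dB.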

Another question - if I'm recording a project to audio CD, is it better to
just record at 16/44 since that's what the CD will be anyway, and I can save
system resources? Or should I do 24/48 and then dither it down, essentially
changing what I originally heard?


In general, with audio it is better to sample at high depth and resolution
and then downgrade afterwards than it is to stay true to the final output.


Jon


#43   S O'Neill

Arny Krueger wrote:

A DSD data stream is composed of pulses that are basically integrated to
produce an analog signal. Pulses have a value of either +1 or -1. Alternate
pulses with opposite polarities sum out to zero. If the pulses are
predominately +1, then the integrated signal goes positive. The more
predominately the pulses are +1, the faster the integrated signal goes
positive. If the pulses are predominately -1 then the integrated signal goes
minus, and so on.


Didn't that used to be called 1-bit DPCM?
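Arny's description above - ±1 pulses whose local average tracks the analog signal - is essentially what a first-order delta-sigma modulator does. A minimal sketch (real DSD uses much higher-order noise shaping; this shows only the integrate-and-compare idea):

```python
import math

def delta_sigma_1bit(signal):
    """Emit +1/-1 pulses whose running average tracks the input.

    First-order modulator: integrate the error between the input and
    the previous output pulse, and output the sign of the integrator.
    Input values are assumed to lie in [-1, 1]."""
    integrator, prev, out = 0.0, 0.0, []
    for x in signal:
        integrator += x - prev
        prev = 1.0 if integrator >= 0.0 else -1.0
        out.append(prev)
    return out

# Encode one cycle of a slow sine at 256x oversampling...
N, OSR = 64, 256
sig = [0.5 * math.sin(2 * math.pi * n / (N * OSR)) for n in range(N * OSR)]
pulses = delta_sigma_1bit(sig)

# ...then "integrate" the pulse stream back: average each block of OSR pulses.
decoded = [sum(pulses[i:i + OSR]) / OSR for i in range(0, N * OSR, OSR)]
err = max(abs(d - sig[i + OSR // 2])
          for d, i in zip(decoded, range(0, N * OSR, OSR)))
# err is small: the 1-bit stream carries the waveform in its pulse density
```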

#44   Garthrr

In article , "Arny Krueger"
writes:

24 bits also adds resolution in any region between -144 dB and full scale.


For me, with my limited understanding (or misunderstanding, perhaps) of digital
theory, the above sentence cuts to the heart of the matter. If I understand
what Scott Dorsey and others have said, then the change from 16 to 24 bits only
adds downward dynamic range and does not increase resolution of signals in the
relatively high ranges close to full scale. Maybe I misunderstand, but that's
what they seem to be saying. On an intuitive level that seems wrong to me, and
it seems as though resolution even at -10dB should increase (due to less
quantization error??). I infer that that's what Arny is saying in the quote
above. Do I have that right? If so, is this, then, the crux of the discussion?
I hope you all will bear with my lack of math skills and technical knowledge
but I would like to understand as much about this as I can on an intuitive
level.

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney
#45   Chris Hornbeck

On Mon, 17 Nov 2003 22:44:52 GMT, Jay - atldigi
wrote:

Something to ponder: Why does DSD (SACD) work?


Because it's a different data stream and it's not PCM.


But how do they make 1 bit work if it can only be off or full 6dB?
The point is that digital doesn't actually work that way.


But it *is* PCM. Sampling rate and word depth can be traded
pretty liberally, I think is your point. Greater word depth
reduces the ambiguity of quantization, and ambiguity is
"like" noise.

I still have two problems with the discussion so far. First,
Bob Cain's caveat remains unaddressed:
"Within the Nyquist criterion a signal can be produced with
any arbitrary phase or delay until you consider the
quantization of the samples. Then the achievable delays
become quantized as well "

Second, quantization degrades the theoretical perfection
of Nyquist criterion signals to a greater extent for
smaller signals, IOW, the conversion is not monotonic.

Thanks for your excellent comments,

Chris Hornbeck
new email address

"That is my theory, and what it is too."
Anne Elk


#47   Garthrr

In article , Chris Hornbeck
writes:

Greater word depth reduces the ambiguity inherent in quantization.


Thanks Chris,
So is this point a matter of contention, or is it agreed upon by all? If it is
agreed upon, then is the argument on the "24 bit sounds no better than 16 bit"
side that the effects of the ambiguity are inherently negligible, or perhaps
that interpolation or something else "repairs" the ambiguity adequately? I hope
I have framed the question well enough to be understood.

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney
#49   Chris Hornbeck

On Sun, 16 Nov 2003 22:57:17 GMT, Jay - atldigi
wrote:

The myth is the dynamic equivalent to the argument that 4 samples on a
20kHz sine wave will render it more accurately than 2, and 8 samples
even more so. That's not true either.


But 4 samples will render it more accurately in *time* than two.
Alternatively, a smaller quantization step will also. Quantization
could be said to "jitter" the conversion in either time or
amplitude.

Not such a much, except that the effect's greater for small
signals.

Thanks,

Chris Hornbeck
new email address

"That is my theory, and what it is too."
Anne Elk
#50   Hendrik Merx

Carey Carlan wrote in message .205...
"Tommi" wrote in
:

So, if you're recording, say, someone's vocals at both 16 and 24 bits,
and the peaks are at -6dB to 0dB FS, does the 24 bit recording
represent more accurately the signal in that region than the 16-bit
version?


The extra 8 bits give you 48 db more dynamic range between EVERY sample.
Between sample value = 0 and sample value = 1 they give you an extra 48 db
on the bottom end.

On the loud end, 16 bit max value is 32767 (0x7FFF), second value is 32766
(0x7FFE). That equates to 24 bit values 8388352 (0x7FFF00) and 8388096
(0x7FFE00), a difference of 256 values, the equivalent of 48 dB dynamic
range.


I don't think so.

The decibel measures the ratio between two levels; following the definition
U (in decibels) = 20 * log (U_1 / U_0) for voltages, the relative difference
here is only about 0.0003 decibels.

So, the 8 bits give you 48 dB more difference between signal and
noise, which is a measure for the accuracy, at any level.

Which, incidentally, makes 24 bits with 144 dB dynamic range sufficient to
accurately reproduce any sound between 0 dB sound pressure level, namely the
hearing threshold, and the sound of a jet at takeoff, which is said to be
about 140 dB SPL. Assuming, of course, that the hearing threshold is also the
threshold for detecting differences in amplitude between two tones at other
levels.
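Hendrik's arithmetic is easy to verify (a sketch using the level values from Carey's example):

```python
import math

def db(ratio):
    return 20 * math.log10(ratio)

# Step between adjacent top-of-scale 16-bit levels (Carey's 32767 vs 32766):
step_16 = db(32767 / 32766)

# What 8 extra bits really buy: 48 dB more between signal and noise floor.
snr_gain = db(2 ** 24 / 2 ** 16)

print(round(step_16, 5))   # 0.00027 dB -- a vanishingly small relative step
print(round(snr_gain, 2))  # 48.16 dB
```

The step between adjacent top-of-scale levels is a fraction of a thousandth of a dB; the 48 dB shows up as distance between signal and quantization noise, not as spacing between sample values.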

Hendrik


#51   Jay - atldigi

In article , Chris Hornbeck
wrote:

On Sun, 16 Nov 2003 22:57:17 GMT, Jay - atldigi
wrote:

The myth is the dynamic equivalent to the argument that 4 samples on a
20kHz sine wave will render it more accurately than 2, and 8 samples
even more so. That's not true either.


But 4 samples will render it more accurately in *time* than two.
Alternatively, a smaller quantization step will also. Quantization
could be said to "jitter" the conversion in either time or
amplitude.


This is a contention of some, but when the system is viewed as a whole,
including proper dither, time resolution essentially becomes infinite.
People like Bob Stuart and Tom Holman have suggested that the time issue is
significant for imaging if you are dealing with two or more channels.
Other digital audio heavy hitters and PhDs point out the seldom
understood fact that the right dither has an effect on the time
resolution as well as the preventing of truncation distortion we know
and love it for. So it is not agreed upon that 4 samples render it more
accurately in time. In fact, the science seems to be against it. It's
too bad JJ formerly from AT&T isn't around anymore to offer up all the
empirical data for us. I wouldn't mind Dick Pierce or Dave Collins
making an appearance either.
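The dither effect Jay mentions can be seen directly: averaged over time, TPDF-dithered quantization preserves a level smaller than one LSB, while plain rounding erases it entirely. A sketch (illustrative, not from the thread):

```python
import random

def quantize(x):
    """Plain rounding to the nearest LSB -- no dither."""
    return round(x)

def quantize_tpdf(x, rng):
    """Rounding after adding TPDF dither (sum of two uniform +/-0.5 LSB)."""
    return round(x + rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5))

rng = random.Random(0)
level = 0.3  # a signal level of 0.3 LSB -- below one quantization step

plain = [quantize(level) for _ in range(100_000)]
dithered = [quantize_tpdf(level, rng) for _ in range(100_000)]

print(sum(plain) / len(plain))                  # 0.0 -- the level vanishes
print(sum(dithered) / len(dithered))            # ~0.3 -- preserved as noise
```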

None of this means that higher sample rates and more bits don't sound
better. It's just that the explanations often given are flawed, and the
requirements demanded are often excessive. 64kHz 20 bit sampling is
probably the minimum necessary. 96/24 offers a margin of safety. More
than that may actually cause more problems than it solves. As stated
before - upsampling for non linear processes, oversampling, and signal
processing with double precision are not the same subject and can be
beneficial.

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
#52   Jay - atldigi

In article , Chris Hornbeck
wrote:

On Mon, 17 Nov 2003 22:44:52 GMT, Jay - atldigi
wrote:

Something to ponder: Why does DSD (SACD) work?

Because it's a different data stream and it's not PCM.


But how do they make 1 bit work if it can only be off or full 6dB?
The point is that digital doesn't actually work that way.


But it *is* PCM. Sampling rate and word depth can be traded
pretty liberally, I think is your point. Greater word depth
reduces the ambiguity of quantization, and ambiguity is
"like" noise.



DSD can essentially be thought of as PCM that is 1 bit with a very high
sample rate, severely noise shaped, and greatly reduced filter worries
(I'd say no filters, but many manufacturers suggest using some filtering
to prevent the high level ultrasonics due to the noise shaping from
damaging equipment). The noise shaping is one reason the high sample
rate helps so much; you can shape all that noise far away from your
desired passband (20-20k) and get the equivalent of around a 120 dB
signal to noise ratio within that limited bandwidth. However, the noise
in the ultrasonic range is ridiculous.

It's like recording right off the oversampling ADC without taking a trip
through decimation down to PCM. It's 1 bit. That's all there is. But it seems to
work, doesn't it? It doesn't just output 6dB square waves. It plays
music. This couldn't be possible under the criteria that some are trying
to impose. But it works. It works due to the trade off Chris alludes to.
But there's no free lunch. All that noise has to go somewhere, but after
it does, you can hear music in the 20-20k range, even though there's
only 1 bit. And it's not because of the "rising or falling" illustration
used with 1 bit recording or you'd never be able to start a song in the
middle.

I know, it's hard to wrap your head around, but viewing a digital audio
system as a whole instead of in pieces, and taking into account proper
design and implementation, digital audio works, even though some of the
concepts seem pretty weird in practice. Some things when illustrated on
paper are good for learning and visualization, but you eventually have
to move away from the illustrations and into the weird world where there
are no stairsteps and you only gain more dynamic range with more bits.

Add filtering, use crappy filters, forget to dither, or use the wrong
dither, and it can all fall apart; all of these isolated evils that
people worry about can actually come into being. Do everything right and
this stuff isn't a problem within the limits of the system, which
admittedly do exist. The filter issues (ripple, group delay, ringing,
noise or poor performance from the analog stages) certainly affect the
audible band in the CD standard. Poor filters may not entirely prevent
aliases or images (that would be **** poor design with no excuse these
days), or may amplify the normal filter issues. The noise floor can also
be heard in the CD standard and can prevent the lowest level details
from being captured (assuming the source didn't have a bunch more noise
to begin with). Add poor practice in preparing the masters, whether it
be dither problems or poor processing due to bad algorithms or
insufficient resolution (we won't even mention clocking and jitter),
and you have plenty of land mines to screw you up.

24/96 solves or lessens some of those problems, gives us some margin of
safety, and offers possibilities for currently unused, simpler filter
techniques that could really help. I'm not saying 24/96 isn't better
than 16/44.1. I'm just saying that many reasons we see given aren't
always correct, and the extent of the problems are sometimes overstated.
What do the details mean to the guy just recording some music? Not much
usually. But from the standpoint of technical learning, the fine
distinctions are worth making.

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
#53   Justin Ulysses Morse

Jay - atldigi wrote:

White Sawn's statement seems to indicate that the extra bits are within
the same dynamic range, thereby giving you greater detail within that
range. You can't fall into the trap of viewing digital audio like it's
digital imagery. Unfortunately, 24 bits leaves the top 96dB range of 16
bit alone, but lowers the noise floor and allows the recording of audio
events that are even smaller, at a lower level, i.e. below -96dB.


Arny Krueger wrote:

24 bits puts 16 extra levels between each pair of levels that exist with 16
bits. Thus, the resolution is increased at any level, not just the smallest
one.


In article , (Scott
Dorsey) wrote:

Not really. It gives you more dynamic range, which is often wasted
anyway. 96 dB is an awful lot.



Scott's not arguing what you're arguing there, Jay. He's just
curmudgeoning about the fact nobody's going to use the available
dynamic range anyway. Which is a different discussion altogether.

Arny's right on this one.

While it's true that the additional bits tack your extended resolution
onto "the bottom" of the dynamic range, it clearly increases the
resolution at all levels. You can have a -100dB component to a -1dB
signal, and you still want to hear it.

The simplest way to think about this is to imagine a digital audio
recording that contained two simultaneous sounds: One a -1dBFS and the
other at -111dBFS. It should be clear that in a 16-bit recording, the
-111dBFS sound will be buried in the noise floor and will not be heard,
while in a 24-bit recording it will be above the theoretical noise
floor.

Now, suppose I told you that the -1dBFS signal was my guitar; and the
-111dBFS signal was some subtle overtone of that guitar sound. Maybe
it's some fret buzz, maybe it's some room reflection. Now it should be
obvious that whether or not you hear the -111dBFS signal will affect
the level of detail you hear in the -1dBFS signal.
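The thought experiment above can be run numerically. A sketch (undithered rounding, the worst case; the -111 dBFS figure is from the example):

```python
import math

def quantized_sine(amp_dbfs, bits, n=1000):
    """One cycle of a sine at amp_dbfs, rounded to 'bits'-deep samples."""
    full_scale = 2 ** (bits - 1)        # e.g. 32768 for 16-bit
    amp = 10 ** (amp_dbfs / 20.0)
    return [round(amp * full_scale * math.sin(2 * math.pi * k / n))
            for k in range(n)]

quiet_16 = quantized_sine(-111, 16)
quiet_24 = quantized_sine(-111, 24)

print(any(quiet_16))  # False: at 16 bits the -111 dBFS tone rounds to silence
print(max(quiet_24))  # 24: at 24 bits the same tone spans dozens of steps
```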

Once again I present my favorite digital audio analogy, cash
denominations: Having some pennies in your pocket allows you to pay a
more precise amount even if you're spending thousands of dollars.

ulysses
#54   Justin Ulysses Morse

Thanks for this explanation, Arny. I dunno if it helped Tommi but it
helped me. Every time I finally grasp another "big concept" in digital
audio I'm always amazed at how incredibly clever it is. Those old
French mathmen must have been giddy as hell when they figured this
stuff out.

ulysses


In article , Arny Krueger
wrote:

"Tommi" wrote in message


I may well be suffering the myth, but my understanding is that it
matters whether you sample a sine wave 2 or 8 times. Tests have been
made where subjects had to determine which sound came first from
their headphones. The same signal was fed to both L and R channels,
only the other one was delayed by 5-15 _micro_seconds.



Some of the people were able to "localize" the sound source even when
it was delayed only by 5 microseconds. This implies that a sampling
rate of 192kHz (which results in 5.2-microsecond sample intervals),
for example, is not only pushing the Nyquist rate to the ultrasonic
range, but also presents better channel separation on multichannel
systems.


For the purpose of discussion, I'll stipulate that your facts are correct to
this point. I really don't know that, but it would help me make an important
point if we don't argue over that part of your comments.

So, it doesn't necessarily matter if you sample a sine wave
2 or 8 times on a mono system, but on a multichannel system higher
sample rates result in better localization.


The myth here is that signals in a digital system can have interchannel
timing differences that are only integer numbers of sample periods. IOW,
this myth as applied to 44,100 Hz sampling is that interchannel timing
differences can only be multiples of about 22.676 microseconds. I agree
that this seems to be intuitively clear. But it is also quite wrong.

The myth comes from the idea that two signals in different channels that are
displaced in time can only be expressed as the same set of sample values,
but time-shifted. This is not the case. Two signals in different channels
that are displaced in time can be expressed as different sample values.

For example, if two slowly-increasing (ramp) signals are displaced in time,
one signal might have a set of sample values that starts out 0, 10, 20,
30... This is a ramp that starts at t = 0. The time-delayed version of this
signal in another channel could have a set of values that is 0 at t = 0, but
is 5, 15, 25... for successive samples. If you looked at these two signals
over time, you'd say that the second signal is time-shifted by an amount of
time equal to half a sample period. And, that is how it would sound.

The correct time resolution of sampled signals is the sample period divided
by the number of distinct amplitude levels. In the case of 16/44 this would
be (1/44,100 s) / 65,536, or roughly 3.46e-10 seconds - about 346
picoseconds. This is a tiny, tiny number. In reality, it is lost in the
noise.
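The ramp example above translates directly to code - the two sets of sample values below encode the same ramp with and without a half-sample delay (the slope of 10 per sample is from the example; the code is illustrative):

```python
def sample_ramp(delay, slope=10.0, n=4):
    """Sample a ramp that starts rising at t = delay (t in sample periods)."""
    return [max(0.0, slope * (t - delay)) for t in range(n)]

on_grid = sample_ramp(0.0)   # ramp starts exactly on a sample instant
half_off = sample_ramp(0.5)  # same ramp, delayed by half a sample period

print(on_grid)   # [0.0, 10.0, 20.0, 30.0]
print(half_off)  # [0.0, 5.0, 15.0, 25.0]
```

The delayed ramp is represented by different sample values, not by time-shifted copies of the same values - which is exactly why sub-sample delays survive sampling.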


#55   Justin Ulysses Morse

Garthrr wrote:

So is this point a matter of contention, or is it agreed upon by
all? If it is agreed upon, then is the argument on the "24 bit sounds
no better than 16 bit" side that the effects of the ambiguity are
inherently negligible, or perhaps that interpolation or something else
"repairs" the ambiguity adequately? I hope I have framed the
question well enough to be understood.



The fact that it isn't agreed upon by all doesn't mean it's really a
matter of contention. This is a math question, and it has a correct
answer. The disagreement only comes from those who don't know the
correct answer. That sounds snotty, but I'm saying it aside from any
declaration of who's right and who's wrong. I'm saying that after
hashing it out, it's not going to be a matter of opinion.

Furthermore, I don't think by any means that Jay is claiming 24 bits
sounds no better than 16 bits. If you read his discussion on his
website, he very clearly considers more than 16 bits necessary for
transparent audio. He's simply saying that the benefit comes only in
the lowest audio levels.

In a way he's right and in a way he's wrong. It sounds like he's
saying that increased bit depth can't add any resolution to loud
sources, that it only adds the ability to reproduce quiet sounds. But
I think he knows that a loud source can have quiet elements in it.
Music is complex and it can be thought of as a bunch of simultaneous
sounds at multiple amplitudes and frequencies. If Jay is suggesting
that the benefit of "8 more bits" only exists when there are no signals
above -96dBFS present, then he is wrong. If he is saying that the
"increased resolution" on a full-scale signal is nothing more than the
added ability to resolve the quietest overtones, then he's right and is
actually in full agreement with Arny. This is one of those areas where
describing audio in words gets kind of tricky.


ulysses


#56   Jay - atldigi

In article , Justin
Ulysses Morse wrote:

saying that increased bit depth can't add any resolution to loud
sources, that it only adds the ability to reproduce quiet sounds. But
I think he knows that a loud source can have quiet elements in it.
Music is complex and it can be thought of as a bunch of simultaneous
sounds at multiple amplitudes and frequencies.


I could swear I've already said that in this thread... maybe that was
another thread.

that the benefit of "8 more bits" only exists when there are no signals
above -96dBFS present


Of course not...

"increased resolution" on a full-scale signal is nothing more than the
added ability to resolve the quietest overtones, then he's right and is
actually in full agreement with Arny.


At least somebody understands me, but I thought I had already said this
somewhere in the thread. It's those quieter components that you are
getting from the extra bits. The louder components aren't represented
any better. In the end, it can be a more precise and better sounding
recording (provided the source is of a quality to benefit), but it's
because of the little things you can now record, not that the big ones
are better.

So late... falling asleep... good night...

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
#57   Jay - atldigi

In article , Justin
Ulysses Morse wrote:

Arny Krueger wrote:

24 bits puts 16 extra levels between each pair of levels that exist
with 16 bits. Thus, the resolution is increased at any level, not just the
smallest one.


In article , (Scott
Dorsey) wrote:

Not really. It gives you more dynamic range, which is often wasted
anyway. 96 dB is an awful lot.



Scott's not arguing what you're arguing there, Jay. He's just
curmudgeoning about the fact nobody's going to use the available
dynamic range anyway. Which is a different discussion altogether.



I think he's saying both, but I'd have to let him speak for himself.


Arny's right on this one.



Unless I misunderstand him (certainly possible), I don't think so.


While it's true that the additional bits tack your extended resolution
onto "the bottom" of the dynamic range, it clearly increases the
resolution at all levels. You can have a -100dB component to a -1dB
signal, and you still want to hear it.


But that's exactly my point: only the -100 component is what you've
gained. The -1 component is not rendered any better than it was before.

The simplest way to think about this is to imagine a digital audio
recording that contained two simultaneous sounds: One a -1dBFS and the
other at -111dBFS. It should be clear that in a 16-bit recording, the
-111dBFS sound will be buried in the noise floor and will not be heard,
while in a 24-bit recording it will be above the theoretical noise
floor.


Right. That's what I said, isn't it? The noise floor goes down and you
can record smaller events; they don't necessarily have to be a
fundamental that is very low - it can be low level overtones making a
violin sound more real, or a little incidental sound, or the sound of
the hall and the natural reverb tail, but it's still the low level stuff
that you are gaining at the higher bit depth. It doesn't mean that the
whole recording has to stay below -96 dB. Have I not made this clear? I
guess not; or it was buried too deeply in the 16 bit noise floor...

Once again I present my favorite digital audio analogy, cash
denominations: Having some pennies in your pocket allows you to pay a
more precise amount even if you're spending thousands of dollars.


But the value of the dollars doesn't change when you add a few pennies.
The total value changes, but each dollar is still worth a dollar.

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
#58   Justin Ulysses Morse

Jay - atldigi wrote:

At least somebody understands me, but I thought I had already said this
somewhere in the thread. It's those quieter components that you are
getting from the extra bits. The louder components aren't represented
any better. In the end, it can be a more precise and better sounding
recording (provided the source is of a quality to benefit), but it's
because of the little things you can now record, not that the big ones
are better.


See, you are in full agreement with Arny. It just depends on whether
you're thinking of the music as a collection of sounds or one big
sound. As a collection of sounds, your extra bits are only revealing
the quiet ones; the loud components were already represented by 16
bits. But when you step back and listen to the whole thing, what that
MEANS is greater detail in the music, even where it's loud.


ulysses
#59   Garthrr

In article , Justin Ulysses
Morse writes:

Furthermore, I don't think by any means that Jay is claiming 24 bits

sounds not better than 16 bits. If you read his discussion on his
website, he very clearly considers more than 16 bits necessary for
transparent audio. He's simply saying that the benefit comes only in
the lowest audio levels.


Actually I was under the impression that Jay was on the other side of the
fence--that he was saying that 24 bit was really no better than 16 bit for any
sort of real world audio. Perhaps I misunderstood his stance. I thought it was
Arny who was contending that there is resolution to be gained by 24 bit,
resolution which exists even in the not-so-low level signal.

Disclaimer: Please forgive me if I accidentally put words in anyone's mouth
while trying to paraphrase something. If I do, it's just my ignorance of the
subject matter and not any desire to spin.

In a way he's right and in a way he's wrong. It sounds like he's
saying that increased bit depth can't add any resolution to loud
sources, that it only adds the ability to reproduce quiet sounds. But
I think he knows that a loud source can have quiet elements in it.
Music is complex and it can be thought of as a bunch of simultaneous
sounds at multiple amplitudes and frequencies. If Jay is suggesting
that the benefit of "8 more bits" only exists when there are no signals
above -96dBFS present, then he is wrong. If he is saying that the
"increased resolution" on a full-scale signal is nothing more than the
added ability to resolve the quietest overtones, then he's right and is
actually in full agreement with Arny. This is one of those areas where
describing audio in words gets kind of tricky.


ulysses


Thanks Ulysses. There is still a nagging question I have about all this but I
want to try to think of a way to phrase it properly.

Garth~



"I think the fact that music can come up a wire is a miracle."
Ed Cherney
#60   Garthrr

In article , Justin Ulysses
Morse writes:

Once again I present my favorite digital audio analogy, cash
denominations: Having some pennies in your pocket allows you to pay a
more precise amount even if you're spending thousands of dollars.


Following this analogy -- and I just know I'm gonna be wrong here but this is
just how it seems to me -- if we say, for example, that 16 bit audio is like
having a pocket full of 10 dimes, then isn't 24 bit audio a pocket full of 100
pennies? Finer divisions of the same whole--the ability to describe finer
voltage differences?

I understand that the dynamic range increases with higher bit depth and I guess
in this money analogy we could think of that as having a dollar fifty or
something instead of the original dollar but it still seems like you get finer
resolution even in the first dollar.

Is this question not analogous to the number of pixels in a digital photograph?
The more pixels, the higher resolution the picture (all else being equal). That
being analogous to bit depth in audio then the rate of frames per second in a
moving picture would be analogous to sample rate. Is that a reasonable
comparison?

Sooner or later I'll phrase this question in enough different ways as to
clearly communicate what I want to ask!

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney


#61   Arny Krueger

"S O'Neill" wrote in message

Arny Krueger wrote:

A DSD data stream is composed of pulses that are basically
integrated to produce an analog signal. Pulses have a value of
either +1 or -1. Alternate pulses with opposite polarities sum out
to zero. If the pulses are predominately +1, then the integrated
signal goes positive. The more predominately the pulses are +1, the
faster the integrated signal goes positive. If the pulses are
predominately -1 then the integrated signal goes minus, and so on.


Didn't that used to be called 1-bit DPCM?


Here are block diagrams of a DPCM coder and decoder

http://ce.sharif.edu/~m_amiri/Projects/MWIPC/dpcm1.htm

On page 7 of

http://www.hit.bme.hu/people/papay/edu/Acrobat/DSD.pdf

there is a block diagram of a DSD decoder.

Don't look the same to me.


#62   Justin Ulysses Morse

Garthrr wrote:

Following this analogy -- and I just know I'm gonna be wrong here but this is
just how it seems to me -- if we say, for example, that 16 bit audio is like
having a pocket full of 10 dimes, then isn't 24 bit audio a pocket full of 100
pennies? Finer divisions of the same whole--the ability to describe finer
voltage differences?


Yes. For the sake of discussion, let's say you have a thousand dollars
in either dimes or pennies.
Now, say you're going to make a single purchase of something that costs
$1.87. At 16 bits, you're forced to tell the clerk to "keep the
change" because all you have are dimes. No big deal, you're out $0.03.
You'll never miss it. Most of us wouldn't bother stooping to pick up
three pennies. But suppose you're going around town buying a whole
bunch of different things, and every time you do, you have to say,
"keep the change." Eventually, it starts to add up and you wish you
had some pennies.

Suppose you record live to 2-track at 16 bits and you just make a
single "transaction" where you maybe run an EQ, a gain boost, and a
little peak limiting all in one pass. You're using 24-bit DSP but you
have to stuff the result back into a 16-bit package. Not a real big
deal, your "clerk" rings you up and says that'll be $45.58. You only
have to say "keep the change" once.

But now what if you've got a bunch of different processes to run,
incrementally, that you evaluate before you move on to the next
process? Maybe you're multi-tracking and you're processing each track
differently. There's all kinds of "keep the change" adding up. In
fact, it's not only "adding up" but it's also "multiplying up." The
error in your first process will get multiplied in your next step. I
guess that would be something like if you bought 1000 of something that
should cost $0.13 apiece but since you're paying in dimes you're paying
$0.20 apiece. Suddenly you're out $70. Your accountant is gonna be
****ed.

This is why more processing means you should start with more bits. But
you know, decimal places on a calculator is probably a better analogy.
Should I start again?

ulysses
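The "keep the change" arithmetic above can be put in numeric terms (a hypothetical sketch with made-up gain values): run the same five gain stages, once requantizing to 16 bits after every stage, once keeping full precision and rounding only at the end.

```python
import numpy as np

def q16(x):
    """Round to the nearest 16-bit step (full scale = +/-1.0)."""
    return np.round(x * 32767) / 32767

rng = np.random.default_rng(1)
x = rng.uniform(-0.5, 0.5, 100000)
gains = [1.3, 0.7, 1.1, 0.9, 1.05]          # five hypothetical processing passes

y_each = q16(x)
for g in gains:
    y_each = q16(y_each * g)                # "keep the change" at every stage

y_once = x.copy()
for g in gains:
    y_once = y_once * g                     # stay high precision throughout
y_once = q16(y_once)                        # round once, at delivery

ref = x * np.prod(gains)
e_each = np.abs(y_each - ref).mean()        # accumulated rounding error
e_once = np.abs(y_once - ref).mean()        # single rounding error
print(e_each > e_once)                      # prints True
```

The per-stage version loses a fraction of an LSB at every pass, exactly like tipping the clerk at every store.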


I understand that the dynamic range increases with higher bit depth, and I
guess in this money analogy we could think of that as having a dollar fifty or
something instead of the original dollar, but it still seems like you get finer
resolution even in the first dollar.

Is this question not analogous to the number of pixels in a digital photograph?
The more pixels, the higher the resolution of the picture (all else being
equal). With that being analogous to bit depth in audio, the rate of frames per
second in a moving picture would be analogous to sample rate. Is that a
reasonable comparison?

Sooner or later I'll phrase this question in enough different ways as to
clearly communicate what I want to ask!

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney

  #63   Report Post  
Justin Ulysses Morse
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

Garthrr wrote:

Actually I was under the impression that Jay was on the other side of the
fence--that he was saying that 24 bit was really no better than 16 bit for any
sort of real world audio. Perhaps I misunderstood his stance. I thought it was
Arny who was contending that there is resolution to be gained by 24 bit,
resolution which exists even in the not-so-low level signal.


You've understood Arny correctly but have gotten the wrong idea about
what Jay is trying to say. On his website he advocates going to at
least 20 bits for a release format, or better yet 24 bits. The
perceived disagreement (which turned out not really to be a
disagreement at all, if you ask me) revolved around the *reason* for
needing more bits. Jay seems to think 24 bits is better than 16 bits
because of the extra low-level resolution.

I actually disagree about the need for more bits in the delivery
medium. I think 16 is enough to deliver the full fidelity of any
real-world finished production, even though 24 bits are needed during
tracking, mixdown, and mastering.

ulysses
  #64   Report Post  
Garthrr
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

Ok, considering the post below, then the question is "Where is the disagreement
between the two camps-- the one camp who says 16 bit is as good as 24 bit for
anything but very, very low level audio and the other camp that says 24 bit is
better even at higher recording levels? Where is the point at which the two
camps begin to disagree?

Garth~


In article , Justin Ulysses
Morse writes:

Yes. For the sake of discussion, let's say you have a thousand dollars
in either dimes or pennies.
Now, say you're going to make a single purchase of something that costs
$1.87. At 16 bits, you're forced to tell the clerk to "keep the
change" because all you have are dimes. No big deal, you're out $0.03.
You'll never miss it. Most of us wouldn't bother stooping to pick up
three pennies. But suppose you're going around town buying a whole
bunch of different things, and every time you do, you have to say,
"keep the change." Eventually, it starts to add up and you wish you
had some pennies.

Suppose you record live to 2-track at 16 bits and you just make a
single "transaction" where you maybe run an EQ, a gain boost, and a
little peak limiting all in one pass. You're using 24-bit DSP but you
have to stuff the result back into a 16-bit package. Not a real big
deal, your "clerk" rings you up and says that'll be $45.58. You only
have to say "keep the change" once.

But now what if you've got a bunch of different processes to run,
incrementally, that you evaluate before you move on to the next
process? Maybe you're multi-tracking and you're processing each track
differently. There's all kinds of "keep the change" adding up. In
fact, it's not only "adding up" but it's also "multiplying up." The
error in your first process will get multiplied in your next step. I
guess that would be something like if you bought 1000 of something that
should cost $0.13 apiece but since you're paying in dimes you're paying
$0.20 apiece. Suddenly you're out $70. Your accountant is gonna be
****ed.

This is why more processing means you should start with more bits. But
you know, decimal places on a calculator is probably a better analogy.
Should I start again?

ulysses






"I think the fact that music can come up a wire is a miracle."
Ed Cherney
  #65   Report Post  
Garthrr
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article , Justin Ulysses
Morse writes:

I actually disagree about the need for more bits in the delivery
medium. I think 16 is enough to deliver the full fidelity of any
real-world finished production, even though 24 bits are needed during
tracking, mixdown, and mastering.


Yeah this is a good point IMO. The reason is that in the real world where
things are sometimes done in a hurry and levels are not always set to optimum,
having that overkill is a good thing. Not to mention the fact that tracks get
EQed and compressed all to hell which of course brings up the noise floor and
exposes low level signals more than they would be otherwise. I can think of a
session I did a week ago where the drummer used brushes on one song very
quietly and I didn't feel like resetting the levels of all ten tracks so I just
left them, knowing that I would be fine with my 24 bit system.

In the delivery medium you can be pretty sure (especially these days with the
"level wars") that the full potential of the medium is going to be exploited.

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney


  #66   Report Post  
Mike Rivers
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain


In article writes:

24 bits also adds resolution in any region between -144 dB and full scale.


For me, with my limited understanding (or misunderstanding perhaps) of digital
theory, the above sentence cuts to the heart of the matter. If I understand
what Scott Dorsey and others have said then the change from 16 to 24 bits only
adds downward dynamic range and does not increase resolution of signals in the
relatively high ranges close to full scale.


You really have to know how to interpret this. It's easy to hear low
resolution at low levels - the limiting case is that the level is so
low that the lowest order bit never gets turned on, yet you know
there's something there. That's insufficient resolution. Go up just a
tad higher in level so that the lowest bit toggles once each cycle
(assuming a constant amplitude sine wave going in) and your sine wave
gets turned into a square wave. However, in a practical system, you'd
have to amplify by 90 dB or so in order to hear it at normal listening
volume, and we don't do that other than to show why 20 bits is better
than 16.

What having more bits allows you to do is trade off some resolution
at the top end that you won't ever hear anyway in favor of additional
working headroom. You can better deal with music with a wide dynamic
range if you have 20 dB between your nominal recording level and
maximum level than if you have only 10 or 6 dB. This is the practical
advantage to working with more bits.
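The limiting case described above can be sketched directly: a sine riding about one 16-bit step above silence (roughly -88 dBFS) only ever toggles the lowest bit, so an undithered quantizer reduces it to a three-level square-ish wave.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
lsb = 1.0 / 32768                            # one 16-bit step at +/-1.0 full scale

tiny = 1.2 * lsb * np.sin(2 * np.pi * 100 * t)
quantized = np.round(tiny / lsb) * lsb       # plain rounding, no dither

# The quantizer output only ever lands on three codes: -1, 0, +1 LSB.
levels = sorted(set(np.round(quantized / lsb).astype(int).tolist()))
print(levels)                                # [-1, 0, 1] -- the sine shape is gone
```

Add proper dither before the rounding step and the sine instead survives as a clean tone sitting under a low noise floor, which is the standard cure for this distortion.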



--
I'm really Mike Rivers - )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
  #67   Report Post  
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

Justin Ulysses Morse wrote:
|
|While it's true that the additional bits tack your extended resolution
|onto "the bottom" of the dynamic range, it clearly increases the
|resolution at all levels. You can have a -100dB component to a -1dB
|signal, and you still want to hear it.

Is the ear even capable of hearing the -100 component against the much
louder -1? I thought masking prevented this.

Phil
  #68   Report Post  
Tommi
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain


"Justin Ulysses Morse" wrote in message
m...
Thanks for this explanation, Arny. I dunno if it helped Tommi but it
helped me. Every time I finally grasp another "big concept" in digital
audio I'm always amazed at how incredibly clever it is. Those old
French mathmen must have been giddy as hell when they figured this
stuff out.

ulysses



Yes, it was indeed an informative reply from Arny!


  #70   Report Post  
Tommi
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain


wrote in message
...
Justin Ulysses Morse wrote:
|
|While it's true that the additional bits tack your extended resolution
|onto "the bottom" of the dynamic range, it clearly increases the
|resolution at all levels. You can have a -100dB component to a -1dB
|signal, and you still want to hear it.

Is the ear even capable of hearing the -100 component against the much
louder -1? I thought masking prevented this.

Phil


Masking is frequency-dependent. However, this leads to thinking about
the fact that the human ear actually compresses dynamics at higher sound
pressures. My understanding is that we have roughly 80 dB's worth of dynamic
range at a time, which we then shift according to the sound pressure levels
of the sound sources.
For example, if you'd been listening to something at 110 dB SPL for 5 minutes,
afterward you couldn't hear the same sound at 2 dB SPL for a while. It
works the other way round too: if you've been listening to something at 5 dB
SPL for a while, and then suddenly the same sound source produces a 120 dB SPL
sound, your ear will compress it lower (by stretching the eardrum, moving the
hammer away from it, etc.) in order to protect your hearing mechanism. This,
however, isn't true of very short peaks, because the protection mechanism
takes some time to wake up.




  #72   Report Post  
Jay - atldigi
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article , Justin
Ulysses Morse wrote:

Jay - atldigi wrote:

At least somebody understands me, but I thought I had already said this
somewhere in the thread. It's those quieter components that you are
getting from the extra bits. The louder components aren't represented
any better. In the end, it can be a more precise and better sounding
recording (provided the source is of a quality to benefit), but it's
because of the little things you can now record, not that the big ones
are better.


See, you are in full agreement with Arny. It just depends on whether
you're thinking of the music as a collection of sounds or one big
sound. As a collection of sounds, your extra bits are only revealing
the quiet ones; the loud components were already represented by 16
bits. But when you step back and listen to the whole thing, what that
MEANS is greater detail in the music, even where it's loud.


ulysses


Perhaps all the extra discussion gets in the way of the simple truths.
Here's the most simple way I think my point can be stated:


16 bits is perfectly capable of reproducing 96 dB of dynamic range. With
dither, the system is linear.


You can't get better than that. It's linear within 96 dB (a little less if
you count the dither's added noise floor, a little more if you count
what the ear can hear within the noise floor due to averaging of noise
in our brain). You can't get better than "what you put in is what you
get out".

However, that's not all there is to audio, and we can hear about 120 dB
of dynamic range, so 16 bits can be a limitation and 20 or 24 can
certainly sound better. It doesn't have to be in an area where there's
nothing above -96, either. However, it in no way makes 16 bits' limited
96 dB range any less accurate. That's the point people seem to miss.
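Where the 96 dB figure comes from: each bit doubles the number of quantization levels, adding 20*log10(2), about 6.02 dB, of range.

```python
import math

# Dynamic range of an N-bit linear quantizer, ignoring dither:
# 20 * log10(2 ** N) = N * 6.02 dB
for bits in (16, 20, 24):
    print(bits, round(20 * math.log10(2 ** bits), 1))
# 16 96.3
# 20 120.4
# 24 144.5
```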

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
  #74   Report Post  
Jay - atldigi
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article , Justin
Ulysses Morse wrote:

Suppose you record live to 2-track at 16 bits and you just make a
single "transaction" where you maybe run an EQ, a gain boost, and a
little peak limiting all in one pass. You're using 24-bit DSP but you
have to stuff the result back into a 16-bit package. Not a real big
deal, your "clerk" rings you up and says that'll be $45.58. You only
have to say "keep the change" once.

But now what if you've got a bunch of different processes to run,
incrementally, that you evaluate before you move on to the next
process? Maybe you're multi-tracking and you're processing each track
differently. There's all kinds of "keep the change" adding up.


You're really making the case for higher intermediate wordlengths. If
you have a 16 bit file, preferably you'll process even higher than 24
bits. Let's take 48 bit for purposes of discussion. Between processes,
however, if you keep "stuffing" it back to 16 as you say, then you
indeed are going to have trouble, especially cumulatively. In the best
case scenario, a DAW would hand 48 bits from one process to the next and
you'd never come back down until delivery. Unfortunately, most DAWs,
even ones that process at 48, hand 24 bit words between processors (some
do allow 32 float to be saved as an intermediate). Also, external
processors, even those that work at greater than 24 bits of precision,
can only receive and transmit 24 bit words as AES and SPDIF etc. only
support up to 24 bit words. So, you dither from 48 to 24 before handing
it off, but don't drop below 24 until delivery, and that clerk won't be
keeping your change.

This is not to say that nothing above 16 bit capture or delivery is ever
beneficial - just that this is more the argument for processing with
longer wordlengths. Simply starting with a 16 bit file to process doesn't
mean you have to go back to 16 after every process you apply.
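That final wordlength drop can be sketched as follows (an illustrative TPDF dither, not any particular DAW's implementation): add triangular dither before rounding so the error becomes a constant low-level noise floor instead of signal-correlated distortion.

```python
import numpy as np

rng = np.random.default_rng(2)

def reduce_wordlength(x, bits, dither=True):
    """Drop a +/-1.0 full-scale signal to `bits` with optional TPDF dither."""
    lsb = 2.0 ** (1 - bits)                  # quantization step size
    if dither:
        # TPDF dither: sum of two uniforms, 2 LSB peak-to-peak
        x = x + (rng.uniform(-0.5, 0.5, x.size)
                 + rng.uniform(-0.5, 0.5, x.size)) * lsb
    return np.clip(np.round(x / lsb) * lsb, -1.0, 1.0)

t = np.arange(48000) / 48000
x = 0.25 * np.sin(2 * np.pi * 1000 * t)      # stand-in for a high-precision mix

y = reduce_wordlength(x, 16)
err = y - x
db = 20 * np.log10(err.std())                # error floor of the dithered file
print(round(db))                             # about -96 dB
```

The dithered error sits at the theoretical 16-bit floor regardless of the signal, which is why you can keep dithering down at delivery without the "keep the change" distortion building up in the signal itself.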

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
  #76   Report Post  
Arny Krueger
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

"Tommi" wrote in message

"Justin Ulysses Morse" wrote in message
m...
Thanks for this explanation, Arny. I dunno if it helped Tommi but it
helped me. Every time I finally grasp another "big concept" in
digital audio I'm always amazed at how incredibly clever it is.
Those old French mathmen must have been giddy as hell when they
figured this stuff out.

ulysses



Yes, it was indeed an informative reply from Arny!


blush


  #77   Report Post  
Jay - atldigi
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article ,
(Garthrr) wrote:

Ok, considering the post below, then the question is "Where is the
disagreement between the two camps-- the one camp who says 16 bit is
as good as 24 bit for anything but very, very low level audio and the
other camp that says 24 bit is better even at higher recording levels?
Where is the point at which the two camps begin to disagree?


It's not the "if", it's the "why". The subject is so complicated that
it's hard to give simple answers. For instance, 24 bits for delivery or
initial A to D is unnecessary because analog electronics (mics, pres,
consoles, the inputs of the ADC itself, outputs in the DAC, and
reproduction equipment) can't produce that kind of dynamic range, and no
practical recording or listening environment will ever be that quiet. So
there's the "marketing bits" problem right off the bat. This is not to be
confused with digital processing, where I want even MORE than 24 bits.

However, that point is unnecessary for a basic answer to your question;
but remember, there are lots of little things like that where you have
to say "it depends", or could offer a caveat or a distinction.

I think you're making the mistake of assuming that the "low level stuff"
means that if the average recording level is above 16 bits' -96dB limit
then the extra bits are somehow not helpful. That's not necessarily so.
Unless you have steady state noise, the signal is constantly rising and
falling. Also, there are quiet components to a signal (or even a mix),
and there are louder components. If not, you'd never hear the voice
above the string pad. In some cases, depending on what you are
recording, and where, and with what, and how you play it back, little
details that are quiet unto themselves can still have a positive impact
on the performance you capture.

Just because they are quiet components doesn't mean they are useless
when there's anything else going on. Sure, sometimes they are masked and
are indeed useless, but sometimes they are not. Remember what I said
above? It depends... There are indeed some things that do not benefit
from more than 16 bits, but some things really can use more, and for
that reason, I definitely support the idea of A/D conversion, mixing,
and processing at higher than 16 bits. It's not, however, because the
loud signals are captured more accurately. It's because the more subtle
details (which are lower level signals) are also recorded.

--
Jay Frigoletto
Mastersuite
Los Angeles
promastering.com
  #78   Report Post  
Arny Krueger
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

wrote in message


Justin Ulysses Morse wrote:

While it's true that the additional bits tack your extended
resolution onto "the bottom" of the dynamic range, it clearly
increases the resolution at all levels. You can have a -100dB
component to a -1dB signal, and you still want to hear it.


Is the ear even capable of hearing the -100 component against the much
louder -1? I thought masking prevented this.


The threshold of reliable perception of spurious signals and noise is on the
order of from -60 to -70 dB when the music has reasonably sustained peaks at
0 dB.

This is one reason why it's fair to say that the practical benefits of
going past 16 bits are non-existent at high levels.




  #79   Report Post  
Garthrr
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article , Jay -
atldigi writes:

Absolutely not. I know there's a lot of posting going on, and I've
written a lot in this thread, but I know I've stated several times that
the above is not what I'm saying. Instead of adding even more
confusion, please try to go back and read my posts again.



Sorry Jay. I was hoping to avoid that.

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney
  #80   Report Post  
Garthrr
 
Posts: n/a
Default 16 bit vs 24 bit, 44.1khz vs 48 khz <-- please explain

In article , Jay -
atldigi writes:

I think you're making the mistake of assuming that the "low level stuff"
means that if the average recording level is above 16 bits' -96dB limit
then the extra bits are somehow not helpful.


No, I understand why you would think that but I am aware of what you mean in
that there are low level components to audio with a high average level and I
understand that they would benefit from the added dynamic range. That scenario,
however, is not the crux of my question.

Garth~


"I think the fact that music can come up a wire is a miracle."
Ed Cherney