How was 44.1/16 format decided on for CD?
muzician21
June 16th 11, 04:10 PM
I've been reading about dithering in the Bob Katz Mastering book. From
what I gather it's a scheme to "fix" sound files that have been taken
from a higher bit/sample rate and convert them to the standard CD
format.
What I wonder is wouldn't it make the whole thing a moot point if
digital audio was simply produced/sold at the bit/sample rate at which
it's recorded and processed? How was the 44.1/16 format arrived at in
the first place? Were the original digital recordings/masters always
recorded at higher sample & bitrates than the 44.1/16 final format?
If I'm displaying an incomplete understanding of this by all means
enlighten me.
hank alrich
June 16th 11, 04:33 PM
muzician21 > wrote:
> I've been reading about dithering in the Bob Katz Mastering book. From
> what I gather it's a scheme to "fix" sound files that have been taken
> from a higher bit/sample rate and convert them to the standard CD
> format.
Sensibly, dithering is applied anytime processing results in a reduction
of word length.
> What I wonder is wouldn't it make the whole thing a moot point if
> digital audio was simply produced/sold at the bit/sample rate at which
> it's recorded and processed? How was the 44.1/16 format arrived at in
> the first place? Were the original digital recordings/masters always
> recorded at higher sample & bitrates than the 44.1/16 final format?
>
> If I'm displaying an incomplete understanding of this by all means
> enlighten me.
http://en.wikipedia.org/wiki/Compact_Disc
--
shut up and play your guitar * http://hankalrich.com/
http://www.youtube.com/watch?v=NpqXcV9DYAc
http://www.sonicbids.com/HankandShaidri
William Sommerwerck
June 16th 11, 04:42 PM
"muzician21" > wrote in message
...
> I've been reading about dithering in the Bob Katz Mastering book.
> From what I gather it's a scheme to "fix" sound files that have
> been taken from a higher bit/sample rate and convert them to the
> standard CD format.
Optimal dithering is required when reducing the bit depth. (As far as I
know, dithering is not a consideration when changing the sample rate.)
Dithering is required with /any/ form of digital transmission or recording.
Properly done, it hides quantization errors by converting them from
correlated distortion to uncorrelated noise. In other words, it makes a
digital signal look like an analog signal that's been "corrupted" by random
noise, rather than distorted.
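To make that concrete, here's a rough numpy sketch of dithered word-length
reduction (my own illustration, not anything from the Katz book; the function
name and scale factor assume 24-bit integer samples going down to 16 bits):
    import numpy as np
    def reduce_to_16_bit(samples_24bit, dither=True):
        """Requantize 24-bit integer samples to 16-bit resolution."""
        step = 1 << 8  # one 16-bit LSB = 256 counts at 24-bit scale
        x = samples_24bit.astype(np.float64)
        if dither:
            # TPDF dither: two uniform randoms summed, +/- one 16-bit LSB peak
            x += (np.random.uniform(-step / 2, step / 2, x.shape) +
                  np.random.uniform(-step / 2, step / 2, x.shape))
        return np.round(x / step).astype(np.int16)
    # A quiet 24-bit tone: truncated, the error is locked to the signal
    # (distortion); dithered, the error behaves like steady background noise.
    t = np.arange(48000)
    tone = (200 * np.sin(2 * np.pi * 1000 * t / 48000)).astype(np.int32)
    plain = reduce_to_16_bit(tone, dither=False)
    noisy = reduce_to_16_bit(tone, dither=True)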
> What I wonder is wouldn't it make the whole thing a moot point if
> digital audio was simply produced/sold at the bit/sample rate at which
> it's recorded and processed? How was the 44.1/16 format arrived at in
> the first place? Were the original digital recordings/masters always
> recorded at higher sample & bitrates than the 44.1/16 final format?
Sampling at a higher rate and bit depth than those of the target recording
offers several advantages, one of which is greater flexibility in noise
shaping, which can reduce in-band quantization errors at the expense of
out-of-band errors.
Whether the subjective sound quality is improved is debatable. I won't get
into that.
The Wikipedia article explains why 44.1/16 was chosen. As far as I know,
it's correct, though there might be other considerations not given in the
article.
Scott Dorsey
June 16th 11, 04:48 PM
William Sommerwerck > wrote:
>
>Sampling at a higher rate and bit depth than those of the target recording
>offers several advantages, one of which is greater flexibility in noise
>shaping, which can reduce in-band quantization errors at the expense of
>out-of-band errors.
>
>Whether the subjective sound quality is improved is debatable. I won't get
>into that.
The higher bit depth allows you to get away with being a lot more sloppy
about setting levels. Recording at 20 bit and dubbing to 16 bit release
means you have 24 dB worth of headroom for error.
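For anyone checking the arithmetic, each bit is worth about 6 dB of range:
    import math
    db_per_bit = 20 * math.log10(2)   # ~6.02 dB per bit
    print(round(db_per_bit, 2), round(4 * db_per_bit, 1))
    # -> 6.02 24.1  (four spare bits between 20-bit tracking and 16-bit release)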
If you knew exactly what the peak value was going to be in a performance
and you could set levels precisely and never need to do any processing, there
would be no need for extended word length.
In these days of oversampling, I don't think higher sampling rates buy you
a damn thing, though.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
William Sommerwerck
June 16th 11, 05:13 PM
"Scott Dorsey" > wrote in message
...
> In these days of oversampling, I don't think higher sampling
> rates buy you a damn thing, though.
Other than offering a wider variety of sampling rates for releases.
This, I think, is one reason some labels adopted DSD.
Mike Rivers
June 16th 11, 06:09 PM
On 6/16/2011 11:10 AM, muzician21 wrote:
> What I wonder is wouldn't it make the whole thing a moot point if
> digital audio was simply produced/sold at the bit/sample rate at which
> it's recorded and processed?
Well, yeah, if you went straight from the recorded file to
the CD, but practically nobody does that. If you adjust a
level, apply EQ, compression, limiting, or change panning -
anything but a simple edit that doesn't involve a crossfade
- you will increase the word length. You need to truncate
the word to get it back to 16 bits to go on the CD.
Dithering makes that truncation more graceful.
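A small fixed-point illustration of that word growth (the Q15 gain format
here is just my example of a common convention, not something from Mike's
post):
    sample_16bit = 12345               # one 16-bit PCM sample value
    gain_q15 = 23197                   # about 0.708, roughly -3 dB, as a Q15 integer
    product = sample_16bit * gain_q15  # full-precision result no longer fits in 16 bits
    print(product.bit_length())        # 29 bits here; a 16x16 multiply can need up to 31
    # Getting back onto a 16-bit CD means discarding the low bits; dithering
    # before this step keeps the discarded bits from turning into distortion.
    back_to_16 = (product + (1 << 14)) >> 15   # plain rounding shown, dither omitted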
> How was the 44.1/16 format arrived at in
> the first place? Were the original digital recordings/masters always
> recorded at higher sample& bitrates than the 44.1/16 final format?
Believe it or not (this actually seems to be true), Sony
decided that a CD had to be long enough to fit all of
Beethoven's 9th symphony on one disk, and they could do it
using that sample rate and word length, allowing room for
the housekeeping and error detection and correction bits.
The best converters they could get in the early days were
16-bit. They didn't even try to make 24-bit converters since
they couldn't really squeeze more than about 14 bit accuracy
out of a 16 bit converter, and that was on good days. There
were a few sample rates used, but back in the early days,
they didn't worry about connecting an analog output to an
analog input and "converting" the sample rate that way.
44.1 kHz became the standard because the first digital
editor used a modified videotape editor. The recording
medium was videotape (from a PCM converter) and using a
sample rate of 44.1 kHz allowed an even number of digital
audio words to fit on a video line, using the PAL system.
NTSC (the US system) actually ran at 44.056 kHz.
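For reference, the back-of-envelope arithmetic usually quoted for that
video-line derivation (the line counts are the commonly cited figures, not a
spec reference):
    samples_per_line = 3                         # per audio channel, on each usable line
    pal = 50 * 294 * samples_per_line            # 50 fields/s * 294 usable lines -> 44100
    ntsc = 60 * 245 * samples_per_line           # 60 fields/s * 245 usable lines -> 44100
    ntsc_real = 59.94 * 245 * samples_per_line   # actual NTSC field rate -> ~44056
    print(pal, ntsc, round(ntsc_real))           # 44100 44100 44056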
The simple answer is that 44.1 kHz, 16 bits is good enough
for who it's for.
--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson
http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff
William Sommerwerck
June 16th 11, 06:54 PM
"Mike Rivers" > wrote in message
...
> Believe it or not (this actually seems to be true), Sony
> decided that a CD had to be long enough to fit all of
> Beethoven's 9th symphony on one disk, and they could do it
> using that sample rate and word length, allowing room for
> the housekeeping and error detection and correction bits.
According to the Wikipedia article, the initial standard for disk diameter
did not permit getting the 74-minute Furtwangler performance on the disk.
Sony supposedly set up a plant manufacturing larger disks, and forced the
larger disk on Philips, so the latter would not have an initial commercial
advantage from using the manufacturing facility it had already set up for
the smaller disk.
> The best converters they could get in the early days were
> 16-bit. They didn't even try to make 24-bit converters since
> they couldn't really squeeze more than about 14 bit accuracy
> out of a 16 bit converter, and that was on good days. There
> were a few sample rates used, but back in the early days,
> they didn't worry about connecting an analog output to an
> analog input and "converting" the sample rate that way.
Philips wanted a 14-bit system, Sony wanted 16-bit. Philips lost out here,
too, its earliest players using a 14-bit DAC with oversampling.
Arny Krueger
June 16th 11, 08:16 PM
"muzician21" > wrote in message
...
> I've been reading about dithering in the Bob Katz Mastering book. From
> what I gather it's a scheme to "fix" sound files that have been taken
> from a higher bit/sample rate and convert them to the standard CD
> format.
Dither should be applied whenever digital data is quantized, such as being
converted from analog, or when converting, say, 24 bit data to 16 bits. Its
purpose is to randomize quantization error to avoid having more audible
artifacts.
> What I wonder is wouldn't it make the whole thing a moot point if
> digital audio was simply produced/sold at the bit/sample rate at which
> it's recorded and processed?
For openers, the ADC needs to be dithered.
> How was the 44.1/16 format arrived at in the first place?
http://en.wikipedia.org/wiki/44,100_Hz
"The sampling rates that satisfy these requirements - at least 40 kHz (so
can encode 20 kHz sounds), no more than 46.875 kHz (so require no more than
3 samples per line in PAL), and a multiple of 900 Hz (so can be encoded in
NTSC and PAL) are thus 40.5, 41.4, 42.3, 43.2, 44.1, 45, 45.9, and 46.8 kHz.
The lower ones are eliminated due to low-pass filters requiring a transition
band, while the higher ones are eliminated due to some lines being required
for vertical blanking interval; 44.1 kHz was the higher usable rate, and was
eventually chosen"
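That candidate list is easy to re-derive, if you want to check it; the
multiples of 900 Hz inside the stated window are:
    candidates = [r for r in range(0, 50000, 900) if 40000 <= r <= 46875]
    print([r / 1000 for r in candidates])
    # -> [40.5, 41.4, 42.3, 43.2, 44.1, 45.0, 45.9, 46.8]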
16 bits was chosen because that word length provides enough dynamic range to
comfortably exceed the requirements of sonically accurate audio recording, and
it had already been adopted in the computer world, so there was a lot of
hardware already in use that worked well with it.
> Were the original digital recordings/masters always
> recorded at higher sample & bitrates than the 44.1/16 final format?
No, other sample rates (from 32 kHz to 50 kHz) and word lengths were used by
various early practitioners. 48 kHz is a sample rate that is commonly used
with video to this day.
geoff
June 16th 11, 09:26 PM
"muzician21" > wrote in message
...
> I've been reading about dithering in the Bob Katz Mastering book. From
> what I gather it's a scheme to "fix" sound files that have been taken
> from a higher bit/sample rate and convert them to the standard CD
> format.
>
> What I wonder is wouldn't it make the whole thing a moot point if
> digital audio was simply produced/sold at the bit/sample rate at which
> it's recorded and processed? How was the 44.1/16 format arrived at in
> the first place? Were the original digital recordings/masters always
> recorded at higher sample & bitrates than the 44.1/16 final format?
>
> If I'm displaying
Yep, that would be great. But at the time 16/44k1 was a spec decided upon by
the then-current (and immediately foreseeable) technology level combined with
the play time achievable on the SOTA media to be developed at the time.
It was also found to be pretty much as good as most hi-fi buffs could
discern, especially given that they were all getting stiffys over 14-bit
digital recordings on vinyl LPs at the time !
geoff
Trevor
June 17th 11, 08:03 AM
"hank alrich" > wrote in message
...
> muzician21 > wrote:
>> I've been reading about dithering in the Bob Katz Mastering book. From
>> what I gather it's a scheme to "fix" sound files that have been taken
>> from a higher bit/sample rate and convert them to the standard CD
>> format.
>
> Sensibly, dithering is applied anytime processing results in a reduction
> of word length.
Dithering is *always* applied in any *proper* A-D conversion with a finite
bit depth. (This is done automatically by any decent system these days, of
course, though you may also have a choice of dither type in some systems.)
Surely you mean ADDITIONAL dithering is required if reducing bit depth?
Trevor.
Trevor
June 17th 11, 08:14 AM
"William Sommerwerck" > wrote in message
...
> "Scott Dorsey" > wrote in message
> ...
>
>> In these days of oversampling, I don't think higher sampling
>> rates buy you a damn thing, though.
>
> Other than offering a wider variety of sampling rates for releases.
There's a large amount of material recorded for both DVD and CD release, and
hardly anybody worries about simply resampling to 48k or 44.1k as required,
whether the original is 44.1, 48, 88.2 or more likely 96k these days.
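For what it's worth, that conversion is just a rational-ratio resampling job
in any modern toolchain; a quick sketch, assuming scipy is available:
    from fractions import Fraction
    import numpy as np
    from scipy.signal import resample_poly
    ratio = Fraction(44100, 96000)     # reduces to 147/320
    x96 = np.random.randn(96000)       # one second of stand-in audio at 96 kHz
    x44 = resample_poly(x96, ratio.numerator, ratio.denominator)
    print(ratio, len(x44))             # 147/320 and 44100 samples out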
Trevor.
Trevor
June 17th 11, 08:33 AM
"Mike Rivers" > wrote in message
...
>> How was the 44.1/16 format arrived at in
>> the first place? Were the original digital recordings/masters always
>> recorded at higher sample& bitrates than the 44.1/16 final format?
>
> Believe it or not (this actually seems to be true), Sony decided that a CD
> had to be long enough to fit all of Beethoven's 9th symphony on one disk,
> and they could do it using that sample rate and word length, allowing room
> for the housekeeping and error detection and correction bits.
MY understanding is they deliberately CHOSE 44.1k as the minimum for the
desirable flat frequency response to 20kHz with filtering, 16 bits as the
minimum desirable for a sufficiently improved DNR compared to vinyl (plus
an allowance for technology improvements, with the goal of approx 100dB, no
more ever being correctly considered necessary for consumer purposes), only
THEN did they select the 5.25" disc size as the smallest required for the
best available laser/encoding technology at the time to fit Beethoven's 9th on
a single disc. You seem to miss the critical fact they had disk size, pit
length, encoding system, track spacing etc. to play with, whereas 16/44 was
an *early* choice, 44.1 because of the use of video recorders initially.
> 44.1 kHz became the standard because the first digital editor used a
> modified videotape editor. The recording medium was videotape (from a PCM
> converter) and using a sample rate of 44.1 kHz allowed an even number of
> digital audio words to fit on a video line, using the PAL system. NTSC
> (the US system) actually ran at 44.056 kHz.
>
> The simple answer is that 44.1 kHz, 16 bits is good enough for who it's
> for.
Right, and that was DECIDED long before the disc size was finalised to suit
Beethoven's 9th. They DID want to keep it smaller than the then-current 7"
singles, but it could just as easily have ended up at 6" if that was found to
be necessary. They would NOT have cut the bit depth or sample rate simply to
keep the disc size at 5.25".
Trevor.
William Sommerwerck
June 17th 11, 11:30 AM
"Marc Wielage" > wrote in message
.com...
> I'd like to see a mass-market book written on the full history
> of the Compact Disc, interviewing all the key people behind
> the design (as well as record-label execs), but as far as I know,
> it hasn't been done yet.
A book on the history of all consumer optical media would be even more
interesting.
Of course, such a book will be badly edited (ie, not edited at all) and
loaded with errors.
Mike Rivers
June 17th 11, 12:06 PM
On 6/17/2011 6:26 AM, Marc Wielage wrote:
> I'd like to see a mass-market book written on the full history of the Compact
> Disc, interviewing all the key people behind the design (as well as record
> label execs), but as far as I know, it hasn't been done yet.
The mass market doesn't care. They have Wikipedia if they're
curious.
--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson
http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff
Scott Dorsey
June 17th 11, 01:30 PM
Trevor > wrote:
>MY understanding is they deliberately CHOSE 44.1k as the minimum for the
>desirable flat frequency response to 20kHz with filtering, 16 bits as the
>minimum desirable for a sufficiently improved DNR compared to vinyl (plus
>an allowance for technology improvements, with the goal of approx 100dB, no
>more ever being correctly considered necessary for consumer purposes), only
>THEN did they select the 5.25" disc size as the smallest required for the
>best available laser/encoding technology at the time to fit Beethoven's 9th on
>a single disc. You seem to miss the critical fact they had disk size, pit
>length, encoding system, track spacing etc. to play with, whereas 16/44 was
>an *early* choice, 44.1 because of the use of video recorders initially.
44.1 existed well before the CD.
In the early days when a lot of systems were using video recorders to store
digital audio, 44.1 (or 44.056 in the US) allowed an integral multiple of
samples per line. So it was quite common.
Now, the other thing here is that in the early days, anti-aliasing and
reconstruction filters were just awful. So a lot of digital recorders used
50 ksamp/sec or 54 ksamp/sec in order to get the filter issues out of the
audible band somewhat.
How we got 48 ksamp/sec still remains a mystery.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
William Sommerwerck
June 17th 11, 03:43 PM
"Scott Dorsey" > wrote in message
...
> Now, the other thing here is that in the early days, anti-aliasing
> and reconstruction [sic] filters were just awful. So a lot of digital
> recorders used 50 ksamp/sec or 54 ksamp/sec in order to
> get the filter issues out of the audible band somewhat.
In exchange for an article, "Stereophile" installed Apogee filters in my
Nakamichi DMP-100. They were technically superior to Sony's but I was not
aware of any obvious improvement. (I still have the unit, by the way.)
> How we got 48 ksamp/sec still remains a mystery.
Probably because it made the filter design a bit less critical.
muzician21
June 17th 11, 04:10 PM
On Jun 16, 1:09 pm, Mike Rivers > wrote:
> Well, yeah, if you went straight from the recorded file to
> the CD, but practically nobody does that. If you adjust a
> level, apply EQ, compression, limiting, or change panning -
> anything but a simple edit that doesn't involve a crossfade
> - you will increase the word length. You need to truncate
> the word to get it back to 16 bits to go on the CD.
> Dithering makes that truncation more graceful
Okay. I'm still fuzzy on the nuts and bolts, what word length is and
how it relates to bit depth.
Neil Gould
June 17th 11, 04:33 PM
muzician21 wrote:
> On Jun 16, 1:09 pm, Mike Rivers > wrote:
>
>> Well, yeah, if you went straight from the recorded file to
>> the CD, but practically nobody does that. If you adjust a
>> level, apply EQ, compression, limiting, or change panning -
>> anything but a simple edit that doesn't involve a crossfade
>> - you will increase the word length. You need to truncate
>> the word to get it back to 16 bits to go on the CD.
>> Dithering makes that truncation more graceful
>
>
> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
> how it relates to bit depth.
>
Although there is a context in which word length and bit depth are different
aspects of an audio file, it might be acceptable for learning purposes to
think of them as the same thing. The CD format requires 16 bits, but for
many reasons, that is not an optimal format for editing files.
As has been mentioned, the math for any operations on the file other than a
simple cut will generate much greater bit depths. A very simplified example
might be to consider dividing two sets of numbers, 24 / 6 and 24 / 7 and
restricting the "resolution" of the result to two digits. The first can
still be accurately defined within two digits, the second can not, so some
level of error must be accepted. On the other hand, if the "resolution" can
be extended to 5 digits, the result can be represented much more accurately.
Even though this simple example reduces to pretty much the same result when
expressed in two digit format (e.g. "dithered" back to the lower
resolution), I think you can see how this principle can make a difference
with larger numbers.
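Neil's digits example, in code form (purely illustrative):
    for places in (1, 4):              # roughly 2 vs 5 significant digits for these values
        print(round(24 / 6, places), round(24 / 7, places))
    # -> 4.0 3.4      (24/7 forced into two digits: some error accepted)
    # -> 4.0 3.4286   (more "resolution": much closer to the true value)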
--
best regards,
Neil
William Sommerwerck
June 17th 11, 04:49 PM
"muzician21" > wrote in message
...
> Okay. I'm still fuzzy on the nuts and bolts, what word length
> is and how it relates to bit depth.
They're pretty much the same thing -- unless you insist that a "word" is
always 16 bits (which, traditionally, it is).
If you don't understand "bit depth", you need to learn a lot more about
digital recording.
Mike Rivers
June 17th 11, 05:37 PM
On 6/17/2011 11:10 AM, muzician21 wrote:
> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
> how it relates to bit depth.
Maybe I am, too, since I never use "bit depth." I suppose
that it means how many bits used to represent the voltage
are actually useful (with a couple extra thrown in for
marketing). "Word length" is a computer term and when
talking about digital audio hardware, it's how many bits
come out of the A/D converter or can be accepted by the D/A
converter without regard to how many bits are actually used,
or usable.
In the early days of 24-bit converter ICs, manufacturers
could use them and advertise "24-bit" even though most
everything below the 15th or 16th bit was mostly noise.
They're better today, but still none actually have accuracy
to the lowest order bit.
I prefer "word length" because it doesn't sound like a term
made up by the marketing department to hide the actual
usable converter resolution.
--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson
http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff
Scott Dorsey
June 17th 11, 06:45 PM
On 6/17/2011 11:10 AM, muzician21 wrote:
>
> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
> how it relates to bit depth.
Most of the time, word length is bit depth is dynamic range.
Sometimes you will see a short word padded out to a longer one to deal with
hardware or software that only handles certain multiples. For example, if
you have a 24 bit wav file, they actually store 32 bits per sample but only
24 are actually significant. (There is a "packed" format that stores 24
bit values one after the other but it's a bit more cumbersome on computers
that expect to deal with 16 bit or 32 bit values).
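A small sketch of those two layouts using Python's struct module (my own
example; the little-endian byte order shown is the usual WAV convention):
    import struct
    sample = -1234567                  # a value that fits in 24 signed bits
    padded = struct.pack("<i", sample) # 32-bit container: 4 bytes, top byte is sign padding
    packed = padded[:3]                # packed 24-bit layout: just the low 3 bytes
    print(len(padded), len(packed))    # 4 3
    restored = int.from_bytes(packed, "little", signed=True)
    print(restored == sample)          # True, but the reader must know it's 24-bit data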
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Les Cargill[_4_]
June 17th 11, 06:45 PM
muzician21 wrote:
> On Jun 16, 1:09 pm, Mike > wrote:
>
>> Well, yeah, if you went straight from the recorded file to
>> the CD, but practically nobody does that. If you adjust a
>> level, apply EQ, compression, limiting, or change panning -
>> anything but a simple edit that doesn't involve a crossfade
>> - you will increase the word length. You need to truncate
>> the word to get it back to 16 bits to go on the CD.
>> Dithering makes that truncation more graceful
>
>
> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
> how it relates to bit depth.
For all practical purposes, they are the same in this discussion.
--
Les Cargill
Neil Gould
June 17th 11, 08:13 PM
Mike Rivers wrote:
> On 6/17/2011 11:10 AM, muzician21 wrote:
>
>> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
>> how it relates to bit depth.
>
> Maybe I am, too, since I never use "bit depth." I suppose
> that it means how many bits used to represent the voltage
> are actually useful (with a couple extra thrown in for
> marketing). "Word length" is a computer term and when
> talking about digital audio hardware, it's how many bits
> come out of the A/D converter or can be accepted by the D/A
> converter without regard to how many bits are actually used,
> or usable.
>
> In the early days of 24-bit converter ICs, manufacturers
> could use them and advertise "24-bit" even though most
> everything below the 15th or 16th bit was mostly noise.
> They're better today, but still none actually have accuracy
> to the lowest order bit.
>
> I prefer "word length" because it doesn't sound like a term
> made up by the marketing department to hide the actual
> usable converter resolution.
>
Word length is really independent of the bit depth of an audio file, so at
some point, it could become confusing to conflate the two. Also, even though
converters were not capable of generating files with a true 24-bit depth, it
has long been common for editing apps to use much greater bit depths. For
example, CEP (I think all versions) used 32 bit floating point math with a
56 bit depth for editing when working with 24 or 32-bit files.
Word length is usually determined by computer system parameters, often the
processor in use. So, though you are right that for personal computers a
"word" was usually two bytes (16 bits) long, it can be any value that the
processor can handle, often but not always, in a single "read".
So, neither word length nor bit depth originated in the marketing
departments!
--
best regards,
Neil
Mike Rivers
June 17th 11, 11:27 PM
On 6/17/2011 3:13 PM, Neil Gould wrote:
> Word length is really independent of the bit depth of an audio file, so at
> some point, it could become confusing to conflate the two.
Right. There are "24-bit" files that have only 16
significant bits of data in a word. There are "16-bit" files
that are transmitted and/or stored as a 24-bit word. But
most of the time when you find yourself explaining details
like that, the person to whom you're explaining them won't
understand anyway, or realizes that for his purposes, it
makes no difference.
> Also, even though
> converters were not capable of generating files with a true 24-bit depth, it
> has long been common for editing apps to use much greater bit depths. For
> example, CEP (I think all versions) used 32 bit floating point math with a
> 56 bit depth for editing when working with 24 or 32-bit files.
That's just good sense since those applications are built to
modify the audio files. And practically any operation
results in a longer word length. One of the things that made
early DAWs sound bad was that they didn't have room for
expanding word length.
> Word length is usually determined by computer system parameters, often the
> processor in use.
The same is true for just about any piece of digital audio
hardware. The word length coming out of an A/D converter is
determined by the hardware parameters. The word length of
the data on a DAT recorder is determined by the recorder's
parameters (and the industry standards).
> So, neither word length nor bit depth originated in the marketing
> departments!
I never heard "bit depth" used except in spec sheets and
manuals, and they come from marketing departments.
--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson
http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff
geoff
June 17th 11, 11:45 PM
"muzician21" > wrote in message
...
On Jun 16, 1:09 pm, Mike Rivers > wrote:
> Well, yeah, if you went straight from the recorded file to
> the CD, but practically nobody does that. If you adjust a
> level, apply EQ, compression, limiting, or change panning -
> anything but a simple edit that doesn't involve a crossfade
> - you will increase the word length. You need to truncate
> the word to get it back to 16 bits to go on the CD.
> Dithering makes that truncation more graceful
> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
> how it relates to bit depth.
Word length is the serial equivalent of bit depth: 16 bits sit side-by-side
on a parallel computer bus, but come one after the other when represented
serially, as when written out as 1s and 0s.
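Taken literally, something like this (just an illustration):
    sample = 12345
    print(format(sample & 0xFFFF, "016b"))   # the same 16-bit word, one bit after another
    # -> 0011000000111001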
geoff
geoff
June 17th 11, 11:54 PM
"William Sommerwerck" > wrote in message
...
> "muzician21" > wrote in message
> ...
>
>> Okay. I'm still fuzzy on the nuts and bolts, what word length
>> is and how it relates to bit depth.
>
> They're pretty much the same thing -- unless you insist that a "word" is
> always 16 bits (which, traditionally, it is).
No it's not. A 'word' is traditionally 8, 16, 24, 32, and now 64 bits,
depending on your system's architecture and the context. If you want to be
really traditional and pin it down to one value, you'd have to say "8", but
that would be incorrect.
>
> If you don't understand "bit depth", you need to learn a lot more about
> digital recording.
Yep.
geoff
geoff
June 17th 11, 11:55 PM
"Mike Rivers" > wrote in message
...
> On 6/17/2011 11:10 AM, muzician21 wrote:
>
>> Okay. I'm still fuzzy on the nuts and bolts, what word length is and
>> how it relates to bit depth.
>
> Maybe I am, too, since I never use "bit depth." I suppose that it means
> how many bits used to represent the voltage are actually useful (with a
> couple extra thrown in for marketing). "Word length" is a computer term
> and when talking about digital audio hardware, it's how many bits come out
> of the A/D converter or can be accepted by the D/A converter without
> regard to how many bits are actually used, or usable.
>
> In the early days of 24-bit converter ICs, manufacturers could use them
> and advertise "24-bit" even though most everything below the 15th or 16th
> bit was mostly noise. They're better today, but still none actually have
> accuracy to the lowest order bit.
The content is irrelevant - the term relates to the size of the data frame.
geoff
William Sommerwerck
June 17th 11, 11:59 PM
"geoff" > wrote in message
...
>
> "William Sommerwerck" > wrote in message
> ...
> > "muzician21" > wrote in message
> > ...
> >
> >> Okay. I'm still fuzzy on the nuts and bolts, what word length
> >> is and how it relates to bit depth.
>> They're pretty much the same thing -- unless you insist that a "word" is
>> always 16 bits (which, traditionally, it is).
> No it's not. A "word" is traditionally 8, 16, 24, 32, and now 64 bits,
> depending on your system's architecture and the context. If you want to be
> really traditional and pin it down to one value, you'd have to say "8", but
> that would be incorrect.
When I was growing up, a word was 16 bits. Changes in computer architecture
have blurred the meaning of "word", at least at the hardware level. (Many
languages still define a "word" as 16 bits.) Which is no big deal, if you
know what you're referring to.
Eight bits has never been a word.
geoff
June 18th 11, 12:01 AM
"Mike Rivers" > wrote in message
...
> On 6/17/2011 3:13 PM, Neil Gould wrote:
>
>> Word length is really independent of the bit depth of an audio file, so
>> at
>> some point, it could become confusing to conflate the two.
>
> Right. There are "24-bit" files that have only 16 significant bits of data
> in a word.
A 2 gallon bucket is still a two gallon bucket, even if only half full.
16 and 24 bit are fully accurate descriptions of the data. The 'content' is
irrelevant, except that the quality of the content is limited by the maximum
size of the bucket, oops, word.
Yes, it is somewhere between imprecise and misleading to make definitive
quality judgements based purely on bit-depth/word-length.
Also, to make things even more complicated, a chunk of data representing a
24-bit depth can be represented as three 8-bit words, and in fact is,
especially when transmitted by most serial methods.
geoff
Doug McDonald[_6_]
June 18th 11, 12:54 AM
On 6/17/2011 5:59 PM, William Sommerwerck wrote:
> When I was growing up, a word was 16 bits. Changes in computer architecture
> have blurred the meaning of "word", at least at the hardware level. (Many
> languages still define a "word" as 16 bits.) Which is no big deal, if you
> know what you're referring to.
>
> Eight bits has never been a word.
>
>
When I started out with computers, a "word" was anywhere from
2 to 19,999 decimal digits, or 22 to 11*19999 bits (there being 4 + 1 "flag"
bits in a digit). Soon it went to 36 bits (IBM 7094), then to 32 (IBM 360),
then 12 (PDP-8). Now it seems to have settled, for general purpose computers,
on some power of 2.
Doug McDonald
Trevor
June 18th 11, 02:33 AM
"Scott Dorsey" > wrote in message
...
>You seem to miss the critical fact they had disk size, pit
>>length, encoding system, track spacing etc. to play with, whereas 16/44
>>was
>>an *early* choice, 44.1 because of the use of video recorders initially.
>
> 44.1 existed well before the CD.
>
> In the early days when a lot of systems were using video recorders to
> store
> digital audio, 44.1 (or 44.056 in the US) allowed an integral multiple of
> samples per line. So it was quite common.
Right, which is why it was kept for CD and other parameters changed to suit
the infamous Beethoven's 9th story. Funny, though, that some performances of
the 9th run for more than 74 minutes, thus making the point rather dubious.
IF they had settled on 80 minutes in the first place, which CDs are capable
of and regularly reach these days, then it would all be more believable.
Philips obviously had originally chosen 60 minutes as more than an LP size,
and a nice round figure, as tapes had nearly always been.
Trevor.
Trevor
June 18th 11, 02:44 AM
"William Sommerwerck" > wrote in message
...
> "muzician21" > wrote in message
> ...
>
>> Okay. I'm still fuzzy on the nuts and bolts, what word length
>> is and how it relates to bit depth.
>
> They're pretty much the same thing -- unless you insist that a "word" is
> always 16 bits (which, traditionally, it is).
Seems to me bit depth is fairly obvious, and "word length" is really the
number of bytes required for that bit depth, i.e. 2 bytes for 16 bit, and 3
(or 4) bytes for 24 bits. So a 16 bit sample will fit in a traditional 2-byte
computer "word", whereas a 24 bit sample will not.
Trevor.
Neil Gould
June 18th 11, 12:12 PM
Mike Rivers wrote:
> On 6/17/2011 3:13 PM, Neil Gould wrote:
>
>> Word length is really independent of the bit depth of an audio file,
>> so at some point, it could become confusing to conflate the two.
>
> Right. There are "24-bit" files that have only 16
> significant bits of data in a word. There are "16-bit" files
> that are transmitted and/or stored as a 24-bit word. But
> most of the time when you find yourself explaining details
> like that, the person to whom you're explaining them won't
> understand anyway, or realizes that for his purposes, it
> makes no difference.
>
We agree that the average DAW user will not have the background to
understand the difference. However, not understanding the difference does
not extinguish the difference.
>> Also, even though
>> converters were not capable of generating files with a true 24-bit
>> depth, it has long been common for editing apps to use much greater
>> bit depths. For example, CEP (I think all versions) used 32 bit
>> floating point math with a 56 bit depth for editing when working
>> with 24 or 32-bit files.
>
> That's just good sense since those applications are built to
> modify the audio files. And practically any operation
> results in a longer word length. One of the things that made
> early DAWs sound bad was that they didn't have room for
> expanding word length.
>
The thing is, the programs aren't expanding the word length... that is set
by parameters outside the DAW program such as the compiler used to make the
program. One way to look at it is that word length is one reason that 16-bit
programs might not run in a 32-bit OS, 32-bit programs might not run in a
64-bit OS, etc. Another example is that the difference between an 8088
processor and an 8086 processor is that the 8088 required two bus cycles
to read a two-byte word, while the 8086 could read it in a single bus
cycle. However, the bit depth of the file is unchanged. Conversely, DAWs and
other programs can expand the bit depth of the file for purposes of editing
without changing the word length; they really are two different things.
>> Word length is usually determined by computer system parameters,
>> often the processor in use.
>
> The same is true for just about any piece of digital audio
> hardware. The word length coming out of an A/D converter is
> determined by the hardware parameters. The word length of
> the data on a DAT recorder is determined by the recorder's
> parameters (and the industry standards).
>
What you've described is the bit depth of the files, and there isn't an easy
way to know the files' word length.
>> So, neither word length nor bit depth originated in the marketing
>> departments!
>
> I never heard "bit depth" used except in spec sheets and
> manuals, and they come from marketing departments.
>
If you heard "word length" used outside of a compiler or hardware design
reference, it was most likely being misused.
--
best regards,
Neil
Neil Gould
June 18th 11, 12:24 PM
William Sommerwerck wrote:
>
> When I was growing up, a word was 16 bits. Changes in computer
> architecture have blurred the meaning of "word", at least at the
> hardware level. (Many languages still define a "word" as 16 bits.)
> Which is no big deal, if you know what you're referring to.
>
> Eight bits has never been a word.
>
A word could always be *any* length, and a one-byte word is not at all
unusual. One of the basic functions of a compiler is to tell the OS what the
word length is so that the OS can interpret and manipulate the file in
the required number of chunks. The meaning of "word" has not changed, and is
not blurred, but those outside the industry have misapplied the term to
other aspects of a file (such as bit depth), which in forums such as this
one can be confusing.
--
best regards,
Neil
Don Pearce[_3_]
June 18th 11, 01:08 PM
On Sat, 18 Jun 2011 07:24:12 -0400, "Neil Gould"
> wrote:
>William Sommerwerck wrote:
>>
>> When I was growing up, a word was 16 bits. Changes in computer
>> architecture have blurred the meaning of "word", at least at the
>> hardware level. (Many languages still define a "word" as 16 bits.)
>> Which is no big deal, if you know what you're referring to.
>>
>> Eight bits has never been a word.
>>
>A word could always be *any* length, and a one-byte word is not at all
>unusual. One of the basic functions of a compiler is to tell the OS what the
>word length is so that the OS can interpret and manipulate the file in
>the required number of chunks. The meaning of "word" has not changed, and is
>not blurred, but those outside the industry have misapplied the term to
>other aspects of a file (such as bit depth), which in forums such as this
>one can be confusing.
A byte is the data equivalent of a letter. A word is a group of bytes,
and can be any length although there is a small legacy of usage that
makes it two bytes. That is by now essentially dead, and word length
is an important specification in any data system.
d
Neil Gould
June 18th 11, 02:02 PM
Don Pearce wrote:
>
> A byte is the data equivalent of a letter. A word is a group of bytes,
> and can be any length although there is a small legacy of usage that
> makes it two bytes. That is by now essentially dead, and word length
> is an important specification in any data system.
>
A bit is the equivalent of a letter, and a word is a group of bits. So, word
length can be anything from 2 bits on. A byte is defined as 8 bits and that
constant hasn't changed, AFAIK.
--
best regards,
Neil
Don Pearce[_3_]
June 18th 11, 02:20 PM
On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
> wrote:
>Don Pearce wrote:
>>
>> A byte is the data equivalent of a letter. A word is a group of bytes,
>> and can be any length although there is a small legacy of usage that
>> makes it two bytes. That is by now essentially dead, and word length
>> is an important specification in any data system.
>>
>A bit is the equivalent of a letter, and a word is a group of bits. So, word
>length can be anything from 2 bits on. A byte is defined as 8 bits and that
>constant hasn't changed, AFAIK.
A byte is a letter in a very real sense. Each letter on this page
comprises one byte. I think that alone makes the terminology right. A
bit already has a name - a bit; no need to call it a letter too.
d
John Williamson
June 18th 11, 02:24 PM
Neil Gould wrote:
> Don Pearce wrote:
>> A byte is the data equivalent of a letter. A word is a group of bytes,
>> and can be any length although there is a small legacy of usage that
>> makes it two bytes. That is by now essentially dead, and word length
>> is an important specification in any data system.
>>
> A bit is the equivalent of a letter, and a word is a group of bits. So, word
> length can be anything from 2 bits on. A byte is defined as 8 bits and that
> constant hasn't changed, AFAIK.
>
Doesn't a lot of usenet still use the old ASCII 7-bit byte, with one bit
added for parity?
--
Tciao for Now!
John.
Neil Gould
June 18th 11, 03:30 PM
Don Pearce wrote:
> On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
> > wrote:
>
>> Don Pearce wrote:
>>>
>>> A byte is the data equivalent of a letter. A word is a group of
>>> bytes, and can be any length although there is a small legacy of
>>> usage that makes it two bytes. That is by now essentially dead, and
>>> word length is an important specification in any data system.
>>>
>> A bit is the equivalent of a letter, and a word is a group of bits.
>> So, word length can be anything from 2 bits on. A byte is defined as
>> 8 bits and that constant hasn't changed, AFAIK.
>
> A byte is a letter in a very real sense. Each letter on this page
> comprises one byte. I think that alone makes the terminology right. A
> bit already has a name - a bit; no need to call it a letter too.
>
The term you used and I concur with is "_equivalent_ of a letter", and in
the sense that a letter is the smallest unit of meaning in an alphabet, a
bit is equivalent to a letter. Regarding your other contention, you may wish
to look into "extended character sets".
--
best regards,
Neil
Don Pearce[_3_]
June 18th 11, 03:38 PM
On Sat, 18 Jun 2011 10:30:11 -0400, "Neil Gould"
> wrote:
>Don Pearce wrote:
>> On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
>> > wrote:
>>
>>> Don Pearce wrote:
>>>>
>>>> A byte is the data equivalent of a letter. A word is a group of
>>>> bytes, and can be any length although there is a small legacy of
>>>> usage that makes it two bytes. That is by now essentially dead, and
>>>> word length is an important specification in any data system.
>>>>
>>> A bit is the equivalent of a letter, and a word is a group of bits.
>>> So, word length can be anything from 2 bits on. A byte is defined as
>>> 8 bits and that constant hasn't changed, AFAIK.
>>
>> A byte is a letter in a very real sense. Each letter on this page
>> comprises one byte. I think that alone makes the terminology right. A
>> bit already has a name - a bit; no need to call it a letter too.
>>
>The term you used and I concur with is "_equivalent_ of a letter", and in
>the sense that a letter is the smallest unit of meaning in an alphabet, a
>bit is equivalent to a letter. Regarding your other contention, you may wish
>to look into "extended character sets".
I see where you are coming from, but I feel that "bit" already covers
the need for the description of a binary digit (of which it is a
contraction). Terms like letter and word are more linguistically based
and I find that letter is a good description of a single byte - I have
already noted its significance.
Think of a digital letter being composed of noughts and ones the same
way a linguistic letter is composed of strokes and dots.
d
Neil Gould
June 18th 11, 03:49 PM
Don Pearce wrote:
> On Sat, 18 Jun 2011 10:30:11 -0400, "Neil Gould"
> > wrote:
>
>> Don Pearce wrote:
>>> On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
>>> > wrote:
>>>
>>>> Don Pearce wrote:
>>>>>
>>>>> A byte is the data equivalent of a letter. A word is a group of
>>>>> bytes, and can be any length although there is a small legacy of
>>>>> usage that makes it two bytes. That is by now essentially dead,
>>>>> and word length is an important specification in any data system.
>>>>>
>>>> A bit is the equivalent of a letter, and a word is a group of bits.
>>>> So, word length can be anything from 2 bits on. A byte is defined
>>>> as 8 bits and that constant hasn't changed, AFAIK.
>>>
>>> A byte is a letter in a very real sense. Each letter on this page
>>> comprises one byte. I think that alone makes the terminology right.
>>> A bit already has a name - a bit; no need to call it a letter too.
>>>
>> The term you used and I concur with is "_equivalent_ of a letter",
>> and in the sense that a letter is the smallest unit of meaning in an
>> alphabet, a bit is equivalent to a letter. Regarding your other
>> contention, you may wish to look into "extended character sets".
>
> I see where you are coming from, but I feel that "bit" already covers
> the need for the description of a binary digit (of which it is a
> contraction). Terms like letter and word are more linguistically based
> and I find that letter is a good description of a single byte - I have
> already noted its significance.
>
Then, you'll need to reconcile your notion with some facts, one being that
the length of a word can be any number of bits greater than one. So, your
contention that a word is longer than a byte (see your above statement) is
incorrect. Since a word can be *shorter* than a byte, a byte can't be
equivalent to "the smallest unit of meaning", ergo can't be equivalent to an
alphabetic letter.
--
best regards,
Neil
Don Pearce[_3_]
June 18th 11, 04:03 PM
On Sat, 18 Jun 2011 10:49:19 -0400, "Neil Gould"
> wrote:
>Don Pearce wrote:
>> On Sat, 18 Jun 2011 10:30:11 -0400, "Neil Gould"
>> > wrote:
>>
>>> Don Pearce wrote:
>>>> On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
>>>> > wrote:
>>>>
>>>>> Don Pearce wrote:
>>>>>>
>>>>>> A byte is the data equivalent of a letter. A word is a group of
>>>>>> bytes, and can be any length although there is a small legacy of
>>>>>> usage that makes it two bytes. That is by now essentially dead,
>>>>>> and word length is an important specification in any data system.
>>>>>>
>>>>> A bit is the equivalent of a letter, and a word is a group of bits.
>>>>> So, word length can be anything from 2 bits on. A byte is defined
>>>>> as 8 bits and that constant hasn't changed, AFAIK.
>>>>
>>>> A byte is a letter in a very real sense. Each letter on this page
>>>> comprises one byte. I think that alone makes the terminology right.
>>>> A bit already has a name - a bit; no need to call it a letter too.
>>>>
>>> The term you used and I concur with is "_equivalent_ of a letter",
>>> and in the sense that a letter is the smallest unit of meaning in an
>>> alphabet, a bit is equivalent to a letter. Regarding your other
>>> contention, you may wish to look into "extended character sets".
>>
>> I see where you are coming from, but I feel that "bit" already covers
>> the need for the description of a binary digit (of which it is a
>> contraction). Terms like letter and word are more linguistically based
>> and I find that letter is a good description of a single byte - I have
>> already noted its significance.
>>
>Then, you'll need to reconcile your notion with some facts, one being that
>the length of a word can be any number of bits greater than one. So, your
>contention that a word is longer than a byte (see your above statement) is
>incorrect. Since a word can be *shorter* than a byte, a byte can't be
>equivalent to "the smallest unit of meaning", ergo can't be equivalent to an
>alphabetic letter.
I've never come across a word shorter than a byte - a nibble ( 4 bits)
is the next lower size I can recall. Anyway, I think you are trying to
impose too much literalism here. The letter equivalency has its
meaning at the cultural level. Explain it to people and they get it
immediately. The word follows on quite naturally.
d
Scott Dorsey
June 18th 11, 04:20 PM
Don Pearce > wrote:
>A byte is the data equivalent of a letter. A word is a group of bytes,
>and can be any length although there is a small legacy of usage that
>makes it two bytes. That is by now essentially dead, and word length
>is an important specification in any data system.
This is why I like the European "Octet" much better than the byte. Although
I haven't seen a machine with anything other than an 8-bit byte since the
Cyber 170s went away.
A word is most often multiple bytes, depending on the machine. There used
to be some machines (Burroughs B-series, intel iAPX432) with variable word
lengths, too.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Neil Gould
June 18th 11, 04:27 PM
Don Pearce wrote:
> On Sat, 18 Jun 2011 10:49:19 -0400, "Neil Gould"
> > wrote:
>
>> Don Pearce wrote:
>>> On Sat, 18 Jun 2011 10:30:11 -0400, "Neil Gould"
>>> > wrote:
>>>
>>>> Don Pearce wrote:
>>>>> On Sat, 18 Jun 2011 09:02:49 -0400, "Neil Gould"
>>>>> > wrote:
>>>>>
>>>>>> Don Pearce wrote:
>>>>>>>
>>>>>>> A byte is the data equivalent of a letter. A word is a group of
>>>>>>> bytes, and can be any length although there is a small legacy of
>>>>>>> usage that makes it two bytes. That is by now essentially dead,
>>>>>>> and word length is an important specification in any data
>>>>>>> system.
>>>>>>>
>>>>>> A bit is the equivalent of a letter, and a word is a group of
>>>>>> bits. So, word length can be anything from 2 bits on. A byte is
>>>>>> defined as 8 bits and that constant hasn't changed, AFAIK.
>>>>>
>>>>> A byte is a letter in a very real sense. Each letter on this page
>>>>> comprises one byte. I think that alone makes the terminology
>>>>> right. A bit already has a name - a bit; no need to call it a
>>>>> letter too.
>>>>>
>>>> The term you used and I concur with is "_equivalent_ of a letter",
>>>> and in the sense that a letter is the smallest unit of meaning in
>>>> an alphabet, a bit is equivalent to a letter. Regarding your other
>>>> contention, you may wish to look into "extended character sets".
>>>
>>> I see where you are coming from, but I feel that "bit" already
>>> covers the need for the description of a binary digit (of which it
>>> is a contraction). Terms like letter and word are more
>>> linguistically based and I find that letter is a good description
>>> of a single byte - I have already noted its significance.
>>>
>> Then, you'll need to reconcile your notion with some facts, one
>> being that the length of a word can be any number of bits greater
>> than one. So, your contention that a word is longer than a byte (see
>> your above statement) is incorrect. Since a word can be *shorter*
>> than a byte, a byte can't be equivalent to "the smallest unit of
>> meaning", ergo can't be equivalent to an alphabetic letter.
>
> I've never come across a word shorter than a byte - a nibble ( 4 bits)
> is the next lower size I can recall. Anyway, I think you are trying to
> impose too much literalism here. The letter equivalency has its
> meaning at the cultural level. Explain it to people and they get it
> immediately. The word follows on quite naturally.
>
I don't know how useful to DAW users any of our discussion is at this point,
seeing as it has devolved into such minute details. However, if anything,
I'm imposing logical consistency that takes facts into regard rather than
literalism. That you haven't run into a word shorter than a byte doesn't
alter the fact that the definition of a digital word includes the
possibility (searchable for your convenience).
--
best regards,
Neil
geoff
June 19th 11, 01:41 AM
"Neil Gould" > wrote in message
...
> William Sommerwerck wrote:
>>
>> When I was growing up, a word was 16 bits. Changes in computer
>> architecture have blurred the meaning of "word", at least at the
>> hardware level. (Many languages still define a "word" as 16 bits.)
>> Which is no big deal, if you know what you're referring to.
>>
>> Eight bits has never been a word.
>>
> A word could always be *any* length, and a one-byte word is not at all
> unusual.
And it sure as hell was when I started on 8-bit computers !!
Geoff
Peter Larsen[_3_]
June 19th 11, 09:45 PM
John Williamson wrote:
> Neil Gould wrote:
>> Don Pearce wrote:
>>> A byte is the data equivalent of a letter. A word is a group of
>>> bytes, and can be any length although there is a small legacy of
>>> usage that makes it two bytes. That is by now essentially dead, and
>>> word length is an important specification in any data system.
>> A bit is the equivalent of a letter, and a word is a group of bits.
>> So, word length can be anything from 2 bits on. A byte is defined as
>> 8 bits and that constant hasn't changed, AFAIK.
> Doesn't a lot of usenet still use the old ASCII 7-bit byte, with one
> bit added for parity?
Oh yes, and programmers still blindly strip the high bit and change a
letter in Codepage 865 into a control character, causing funny things to
happen.
Kind regards
Peter Larsen
Arny Krueger
June 20th 11, 07:36 PM
"William Sommerwerck" > wrote in message
...
> When I was growing up, a word was 16 bits.
AFAIK a computer's word started out being whatever length its accumulator
was.
This sorta fell apart when computers became capable of working with varying
length chunks of data.
Word lengths ran from 12 to 60 bits in those days.
> Changes in computer architecture
> have blurred a the meaning of "word", at least at the hardware level.
> (Many
> languages still define a "word" as 16 bits.) Which is no big deal, if you
> know what you're referring to.
> Eight bits has never been a word.
It has long been a "byte".
geoff
June 20th 11, 09:49 PM
"Arny Krueger" > wrote in message
>> Eight bits has never been a word.
>
> It has long been a "byte".
8 bits is always a byte. A word is however many bits you define it as, in
the system in question, and is usually a multiple of bytes. Even just one
byte.
geoff
Scott Dorsey
June 21st 11, 01:25 AM
geoff > wrote:
>
>8 bits is always a byte. A word is however many bits you define it as, in
>the system in question, and is usually a multiple of bytes. Even just one
>byte.
Unless you're on a CDC Cyber with a 6-bit byte or...
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Arny Krueger
June 23rd 11, 12:30 PM
"Scott Dorsey" > wrote in message
...
> geoff > wrote:
>>
>>8 bits is always a byte. A word is however many bits you define it as, in
>>the system in question, and is usually a multiple of bytes. Even just one
>>byte.
> Unless you're on a CDC Cyber with a 6-bit byte or...
In the day we called them "characters".
With only 64 possible permutations, it was a pretty limited sub-ASCII
character set at that.
IBM's pre-360 machines also sported 6 bit characters. DEC's older machines
were based on 6 bit characters as well.
6 bits barely supports a large enough character set to handle FORTRAN
programs...
geoff
June 24th 11, 09:11 AM
"Arny Krueger" > wrote in message
...
>
> "Scott Dorsey" > wrote in message
> ...
>> geoff > wrote:
>>>
>>>8 bits is always a byte. A word is however many bits you define it as,
>>>in
>>>the system in question, and is usually a multiple of bytes. Even just
>>>one
>>>byte.
>
>> Unless you're on a CDC Cyber with a 6-bit byte or...
>
> In the day we called them "characters".
>
> With only 64 possible permutations, it was a pretty limited sub-ASCII
> character set at that.
I was brought up on ITA 2.
geoff
Arny Krueger
June 24th 11, 01:22 PM
"geoff" > wrote in message
...
>
> "Arny Krueger" > wrote in message
> ...
>>
>> "Scott Dorsey" > wrote in message
>> ...
>>> geoff > wrote:
>>>>
>>>>8 bits is always a byte. A word is however many bits you define it as,
>>>>in
>>>>the system in question, and is usually a multiple of bytes. Even just
>>>>one
>>>>byte.
>>
>>> Unless you're on a CDC Cyber with a 6-bit byte or...
>>
>> In the day we called them "characters".
>>
>> With only 64 possible permutations, it was a pretty limited sub-ASCII
>> character set at that.
>
> I was brought up on ITA 2 .
Frankly, I never heard of it. That's because it is apparently a Continental
thing. I was somewhat aware of the USA version of it called USTTY. They
are basically 5 bit coding schemes that handle 64 characters by assigning
2 of those characters to be shift characters.
If characters were being lost in transmission, things could apparently get
really strange, because shifts would be lost and figures would
be treated as letters and vice-versa.
TTYs were used as computer terminals and consoles in the early days, both in
their original electromechanical version with dedicated character bars and
type balls, and also later as nearly purely-electronic devices using dot
matrix print heads, either impact or thermal.
IBM hung onto Selectric electromechanical type-ball printers to the bitter
end, until they were finally replaced by CRTs. I don't recall them ever doing a dot
matrix paper-oriented computer console, even though they used matrix
printers in keypunches.
Frank
June 24th 11, 10:27 PM
On Fri, 24 Jun 2011 08:22:39 -0400, in 'rec.audio.pro',
in article <Re: How was 44.1/16 format decided on for CD?>,
"Arny Krueger" > wrote:
>IBM hung onto selectric electromechanical type ball printers to the bitter
>end which was replacement by CRTs. I don't recall them ever doing a dot
>matrix paper-oriented computer console,
I seem to recall using the 1052 as a master console on 360/65 systems.
And, of course, the 2741 in remote terminal applications.
--
Frank, Independent Consultant, New York, NY
[Please remove 'nojunkmail.' from address to reply via e-mail.]
Read Frank's thoughts on HDV at http://www.humanvalues.net/hdv/
[also covers AVCHD (including AVCCAM & NXCAM) and XDCAM EX].
Ben Bradley[_2_]
June 24th 11, 11:51 PM
On Fri, 17 Jun 2011 08:10:19 -0700 (PDT), muzician21
> wrote:
>On Jun 16, 1:09*pm, Mike Rivers > wrote:
>
>> Well, yeah, if you went straight from the recorded file to
>> the CD, but practically nobody does that. If you adjust a
>> level, apply EQ, compression, limiting, or change panning -
>> anything but a simple edit that doesn't involve a crossfade
>> - you will increase the word length. You need to truncate
>> the word to get it back to 16 bits to go on the CD.
>> Dithering makes that truncation more graceful
>
>
>Okay. I'm still fuzzy on the nuts and bolts, what word length is and
>how it relates to bit depth.
The bit depth is what comes out of the A/D converter, how many
bits are stored in a computer file, or what goes into the D/A converter.
It's what you "see" when you set how many bits a digital recorder is
going to use.
Word length is the same as bit depth, UNTIL you do some
"processing" inside the computer. The most fundamental processing is
changing the volume. Suppose you have a recording made to a depth of
16 bits. If you lower the volume by 12 dB (more precisely, -12.0412 dB),
you've effectively multiplied the signal by 1/4, which in binary means
shifting every sample right by two bits. The exact result now has an
18-bit word length: the two most significant bits of the 16-bit container
are now "zero" (okay, not quite in real systems, but I'm simplifying) and
aren't part of the sound anymore, while the two least significant bits of
the original signal have fallen below the 16-bit LSB. Unless they are kept
internally in a larger-than-16-bit destination, those two bottom bits are
simply lost, and the 16-bit signal path now holds only the upper 14 bits
of the original signal, placed in the lower 14 bits of the path.
This truncation causes distortion, even though it's down around the
least-significant-bit level of the bit depth (around -96 dB on a 16-bit
CD). That matters for low-level material down around -60 dB, and it can be
audible at higher levels too. Dithering is a semi-magical process that
turns the truncation error into benign, uncorrelated noise, and in doing so
lets much of the information in the two least-significant "lost" bits of
the 18-bit word survive inside the 16-bit word used as the bit depth.
Look at the spectra shown on page 5 of this app note; they demonstrate
how dithering "fixes" a truncated signal (an A/D converter effectively
truncates an analog signal down to its bit depth):
http://www.national.com/an/AN/AN-804.pdf
24-bit A/D converters generally don't have dither noise added to their
signals, as the natural noise in their analog sections (not to mention
whatever noise is in the signal chain they're recording) is well above
the signal level of the 24th bit (-144dB), and so dithering happens
naturally.
Rather than "process and dither" for each process, most DAW
software stores temporary files at much greater than 16 or 24 bits -
when they're rendered to be played back, all the processing stuff
(effects, track volume, mixing) that causes longer word length is done
at once, and only for the final output is the dither noise added
(which does that magic) and then the most 24 or 16 significant bits
are are sent to the A/D converter or the "final mix" file.
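To put some numbers on that "semi-magical" dither step, here's a rough sketch (Python/NumPy, not anybody's actual product code): requantize a -12 dB, 1 kHz tone to 16 bits, once by plain truncation and once with triangular (TPDF) dither added first. The truncation error comes out as tones locked to the signal (distortion); the dithered error comes out as featureless noise.

# Rough sketch: 16-bit requantization, plain truncation vs. TPDF dither.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.25 * np.sin(2 * np.pi * 1000.0 * t)        # -12 dB, 1 kHz tone at "long" word length

scale = 2.0 ** 15                                # 16-bit full scale

truncated = np.floor(x * scale) / scale          # throw away everything below the 16-bit LSB

tpdf = np.random.rand(fs) - np.random.rand(fs)   # triangular, +/- 1 LSB dither
dithered = np.floor(x * scale + tpdf) / scale    # add the dither *before* truncating

def tonality(y):
    # Peak of the error spectrum relative to its average level: a large number
    # means the error is correlated distortion, a small one means plain noise.
    err_spectrum = np.abs(np.fft.rfft(y - x))
    return 20 * np.log10(err_spectrum.max() / np.sqrt(np.mean(err_spectrum ** 2)))

print("truncated error peak: %5.1f dB above its average level" % tonality(truncated))
print("dithered error peak:  %5.1f dB above its average level" % tonality(dithered))

Run it a few times: the truncated figure stays large because the error sits in harmonics of the tone, while the dithered figure hovers around what plain white noise gives.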
geoff
June 25th 11, 01:30 AM
"Arny Krueger" > wrote in message
...
>
> "geoff" > wrote in message
> ...
>>
>> "Arny Krueger" > wrote in message
>> ...
>>>
>>> "Scott Dorsey" > wrote in message
>>> ...
>>>> geoff > wrote:
>>>>>
>>>>>8 bits is always a byte. A word is however many bits you define it as,
>>>>>in
>>>>>the system in question, and is usually a multiple of bytes. Even just
>>>>>one
>>>>>byte.
>>>
>>>> Unless you're on a CDC Cyber with a 6-bit byte or...
>>>
>>> In the day we called them "characters".
>>>
>>> With only 64 possible permutations, it was a pretty limited sub-ASCII
>>> character set at that.
>>
>> I was brought up on ITA 2 .
>
> Frankly, I never heard of it.
Otherwise known as Baudot code. It's what all teleprinters/telegraph used.
geoff
Arny Krueger
June 25th 11, 04:38 AM
"Frank" > wrote in message
...
> On Fri, 24 Jun 2011 08:22:39 -0400, in 'rec.audio.pro',
> in article <Re: How was 44.1/16 format decided on for CD?>,
> "Arny Krueger" > wrote:
>
>>IBM hung onto selectric electromechanical type ball printers to the bitter
>>end which was replacement by CRTs. I don't recall them ever doing a dot
>>matrix paper-oriented computer console,
>
> I seem to recall using the 1052 as a master console on 360/65 systems.
>
> And, of course, the 2741 in remote terminal applications.
Yes, they were both type ball printers - basically a Selectric on many, many
steroids.
hank alrich
July 4th 11, 05:10 PM
Trevor > wrote:
> "siguy" > wrote...
> >Another good reason for rec'ing at a higher sample/bit rate (as if there
> >weren't enough already) is that you can then apply fx and mastering tools
> >to the higher rate data, while this may not seem important, it does allow
> >for a much higher precision of calculation of effects such as reverb and
> >distortion, this way you don't get nasty rounding errors creeping into the
> >noticeable left bits of the samples,
>
> This is not a reason to RECORD at higher sample/bit rates. It is a reason to
> EDIT at higher sample/bit/data sizes. There is a significant difference,
> although maybe not as critical in practice now as it once was.
>
> Trevor.
I like to start with the rate I intend to work with until editing and
mixing have been completed. Conversion to other formats happens in
mastering.
--
shut up and play your guitar * http://hankalrich.com/
http://www.youtube.com/watch?v=NpqXcV9DYAc
http://www.sonicbids.com/HankandShaidri
Trevor
July 5th 11, 02:45 AM
"hank alrich" > wrote in message
...
> I like to start with the rate I intend to work with until editing and
> mixing have been completed. Conversion to other formats happens in
> mastering.
I'll bet your software has its own ideas about what data format it actually
works in, though. Most use 32- or 64-bit floating point internally these days,
fortunately.
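A toy round trip shows why that matters (Python, purely illustrative, nothing to do with any particular DAW): knock a 16-bit signal down by roughly 24 dB and bring it back up. Done in 16-bit integers the bottom four bits are gone for good; done in 32-bit float the round trip is exact, and nothing is actually lost until the final conversion back to 16 bits, which is where the dither goes.

# Toy example: -24 dB then +24 dB, 16-bit-integer path vs. 32-bit-float path.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(-2**15, 2**15, size=8, dtype=np.int64)    # some 16-bit samples

down_int = x >> 4                        # divide by 16 (~ -24 dB) in a 16-bit-style path:
back_int = down_int << 4                 # the low 4 bits are already gone

down_flt = x.astype(np.float32) / 16.0   # same gain change in 32-bit float:
back_flt = np.round(down_flt * 16.0).astype(np.int64)      # exact round trip

print("integer path error:", back_int - x)    # generally non-zero
print("float path error:  ", back_flt - x)    # all zeros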
But I note you don't say you like to start with the final format, so
obviously you do make a choice that suits you. As I said, these days that
can be pretty much whatever you like. That was not always the case, for many
reasons, a decade or two ago.
Trevor.
Soulmatic
July 6th 11, 08:55 AM
On 16.06.2011 22:26, geoff wrote:
> > wrote in message
> ...
>> I've been reading about dithering in the Bob Katz Mastering book. From
>> what I gather it's a scheme to "fix" sound files that have been taken
>> from a higher bit/sample rate and convert them to the standard CD
>> format.
>>
>> What I wonder is wouldn't it make the whole thing a moot point if
>> digital audio was simply produced/sold at the bit/sample rate at which
>> it's recorded and processed? How was the 44.1/16 format arrived at in
>> the first place? Were the original digital recordings/masters always
>> recorded at higher sample& bitrates than the 44.1/16 final format?
>>
>> If I'm displaying
> Yep, that would be great. But at the time 16/44k1 was a spec decided upon by
> the current (and immediately foreseeable) technology level, combined with
> the play time achievable on the SOTA media to be developed at the time.
>
> It was also found to be pretty much as good as most hi-fi buffs could
> discern, especially given that they were all getting stiffys over 14-bit
> digital recordings on vinyl LPs at the time!
>
>
> geoff
>
>
In the beginning (having just developed the two-sided toasted bread, so to
speak), Mr. Karajan, who had a great feel for technical gimmicks and direct
contact with the CEO of Sony at that time, decided to record the whole
Beethoven cycle for CD. So the audiophiles were steered this way...
Cheers
Werner
www.Soulmatic.com <http://www.soulmatic.com>
Dave O'Heare[_2_]
August 18th 11, 02:15 AM
muzician21 > wrote in news:397bf7a2-40aa-4388-ac63-
:
> How was the 44.1/16 format arrived at in the first place?
I'm chiming in late, as usual -- but this is curious:
44,100 = 2^2 * 3^2 * 5^2 * 7^2 (= 4 * 9 * 25 * 49)
I doubt this had anything to do with the choice deliberately, though.
Dave O'H
Don Pearce[_3_]
August 18th 11, 06:30 AM
On Thu, 18 Aug 2011 01:15:15 +0000, "Dave O'Heare"
<dave.oheareATgmail.com> wrote:
>muzician21 > wrote in news:397bf7a2-40aa-4388-ac63-
:
>
>> How was the 44.1/16 format arrived at in the first place?
>
>I'm chiming in late, as usual -- but this is curious:
>
>
> 44,100=2^2*3^2*5^2*7^2
>
>I doubt that this has anything to do with it deliberately.
>
>Dave O'H
It was the rate that maintained low-interference compatibility between
both PAL and NTSC television systems.
d
William Sommerwerck
August 20th 11, 10:03 PM
> It was the rate that maintained low-interference compatibility
> between both PAL and NTSC television systems.
"Interference" has nothing to do with it.
PAL and NTSC systems transmit about the same number of lines per second (30
x 525 is about equal to 25 x 625). 44.1k sample pairs per second can easily
be formatted as three sample-pairs per line in both systems, and will fit
within the nominal luminance bandwidths of both systems.
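For what it's worth, the arithmetic usually quoted for that choice (as I understand it, from the PCM adaptors that stored digital audio on video recorders) counts the usable lines per field rather than the full line count, with three stereo samples packed per line. A quick check in Python, with the commonly cited line counts taken on trust:

# Back-of-envelope check of the usually quoted PCM-adaptor numbers.
ntsc = 60 * 245 * 3     # fields/s * usable lines/field * samples/line
pal  = 50 * 294 * 3
print(ntsc, pal)        # both print 44100

The tidy factorisation 44,100 = 2^2 * 3^2 * 5^2 * 7^2 mentioned earlier in the thread is exactly what lets both sets of field/line counts divide into it so neatly.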
Don Pearce[_3_]
August 20th 11, 10:14 PM
On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
> wrote:
>> It was the rate that maintained low-interference compatibility
>> between both PAL and NTSC television systems.
>
>"Interference" has nothing to do with it.
>
>PAL and NTSC systems transmit about the same number of lines per second (30
>x 525 is about equal to 25 x 625). 44.1k sample pairs per second can easily
>be formatted as three sample-pairs per line in both systems, and will fit
>within the nominal luminance bandwidths of both systems.
>
Sure it is an interference issue. If you don't synchronize the data to
the line the sync pulses drift through and wipe out bits.
d
William Sommerwerck
August 21st 11, 01:01 AM
"Don Pearce" > wrote in message
...
> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
> > wrote:
>>> It was the rate that maintained low-interference compatibility
>>> between both PAL and NTSC television systems.
>> "Interference" has nothing to do with it.
>> PAL and NTSC systems transmit about the same number of lines per second (30
>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second can easily
>> be formatted as three sample-pairs per line in both systems, and will fit
>> within the nominal luminance bandwidths of both systems.
> Sure it is an interference issue. If you don't synchronize the data to
> the line the sync pulses drift through and wipe out bits.
That isn't what the original statement (above) says.
Scott Dorsey
August 21st 11, 02:37 AM
William Sommerwerck > wrote:
>That isn't what the original statement (above) says.
The whole thing is discussed very well in the FAQ for this group.
--scott
(still in Reno)
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Don Pearce[_3_]
August 21st 11, 09:07 AM
On Sat, 20 Aug 2011 17:01:42 -0700, "William Sommerwerck"
> wrote:
>"Don Pearce" > wrote in message
...
>> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
>> > wrote:
>
>>>> It was the rate that maintained low-interference compatibility
>>>> between both PAL and NTSC television systems.
>
>>> "Interference" has nothing to do with it.
>
>>> PAL and NTSC systems transmit about the same number of lines per second (30
>>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second can easily
>>> be formatted as three sample-pairs per line in both systems, and will fit
>>> within the nominal luminance bandwidths of both systems.
>
>> Sure it is an interference issue. If you don't synchronize the data to
>> the line the sync pulses drift through and wipe out bits.
>
>That isn't what the original statement (above) says.
>
That's exactly what it says. The synchronisation is what prevents the
interference. In the analogue domain it was seen in frequency terms
rather than time - the synchronisation of the colour subcarrier
(4.43361875MHz in PAL) dropped the chroma sidebands neatly between the
luma sidebands and the system thus worked.
d
William Sommerwerck
August 21st 11, 12:43 PM
"Don Pearce" > wrote in message
...
> On Sat, 20 Aug 2011 17:01:42 -0700, "William Sommerwerck"
> > wrote:
>
> >"Don Pearce" > wrote in message
> ...
> >> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
> >> > wrote:
> >
> >>>> It was the rate that maintained low-interference compatibility
> >>>> between both PAL and NTSC television systems.
> >
> >>> "Interference" has nothing to do with it.
> >
> >>> PAL and NTSC systems transmit about the same number of lines per second (30
> >>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second can easily
> >>> be formatted as three sample-pairs per line in both systems, and will fit
> >>> within the nominal luminance bandwidths of both systems.
> >
> >> Sure it is an interference issue. If you don't synchronize the data to
> >> the line the sync pulses drift through and wipe out bits.
> >
> >That isn't what the original statement (above) says.
> That's exactly what it says. The synchronisation is what prevents the
> interference. In the analogue domain it was seen in frequency terms
> rather than time - the synchronisation of the colour subcarrier
> (4.43361875MHz in PAL) dropped the chroma sidebands neatly between the
> luma sidebands and the system thus worked.
Interference "between" NTSC and PAL? Since when are NTSC and PAL signals
transmitted on adjacent channels?
Words have meanings.
Don Pearce[_3_]
August 21st 11, 01:01 PM
On Sun, 21 Aug 2011 04:43:07 -0700, "William Sommerwerck"
> wrote:
>"Don Pearce" > wrote in message
...
>> On Sat, 20 Aug 2011 17:01:42 -0700, "William Sommerwerck"
>> > wrote:
>>
>> >"Don Pearce" > wrote in message
>> ...
>> >> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
>> >> > wrote:
>> >
>> >>>> It was the rate that maintained low-interference compatibility
>> >>>> between both PAL and NTSC television systems.
>> >
>> >>> "Interference" has nothing to do with it.
>> >
>> >>> PAL and NTSC systems transmit about the same number of lines per
>second
>> >(30
>> >>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second can
>> >easily
>> >>> be formatted as three sample-pairs per line in both systems, and will
>fit
>> >>> within the nominal luminance bandwidths of both systems.
>> >
>> >> Sure it is an interference issue. If you don't synchronize the data to
>> >> the line the sync pulses drift through and wipe out bits.
>> >
>> >That isn't what the original statement (above) says.
>
>> That's exactly what it says. The synchronisation is what prevents the
>> interference. In the analogue domain it was seen in frequency terms
>> rather than time - the synchronisation of the colour subcarrier
>> (4.43361875MHz in PAL) dropped the chroma sidebands neatly between the
>> luma sidebands and the system thus worked.
>
>Interference "between" NTSC and PAL? Since when are NTSC and PAL signals
>transmitted on adjacent channels?
>
>Words have meanings.
>
Huh? That wasn't what I meant. I was saying that 44.1kHz avoided
interference on both NTSC and PAL. I just didn't word it terribly
well. As it happens, PAL systems that carry digital stereo transmit it as
NICAM on a separate carrier, so the issue doesn't actually arise there.
d
William Sommerwerck
August 21st 11, 01:21 PM
"Don Pearce" > wrote in message
...
> On Sun, 21 Aug 2011 04:43:07 -0700, "William Sommerwerck"
> > wrote:
>
> >"Don Pearce" > wrote in message
> ...
> >> On Sat, 20 Aug 2011 17:01:42 -0700, "William Sommerwerck"
> >> > wrote:
> >>
> >> >"Don Pearce" > wrote in message
> >> ...
> >> >> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
> >> >> > wrote:
> >> >
> >> >>>> It was the rate that maintained low-interference compatibility
> >> >>>> between both PAL and NTSC television systems.
> >> >
> >> >>> "Interference" has nothing to do with it.
> >> >
> >> >>> PAL and NTSC systems transmit about the same number of lines per
> >second
> >> >(30
> >> >>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second
can
> >> >easily
> >> >>> be formatted as three sample-pairs per line in both systems, and
will
> >fit
> >> >>> within the nominal luminance bandwidths of both systems.
> >> >
> >> >> Sure it is an interference issue. If you don't synchronize the data
to
> >> >> the line the sync pulses drift through and wipe out bits.
> >> >
> >> >That isn't what the original statement (above) says.
> >
> >> That's exactly what it says. The synchronisation is what prevents the
> >> interference. In the analogue domain it was seen in frequency terms
> >> rather than time - the synchronisation of the colour subcarrier
> >> (4.43361875MHz in PAL) dropped the chroma sidebands neatly between the
> >> luma sidebands and the system thus worked.
> >
> >Interference "between" NTSC and PAL? Since when are NTSC and PAL signals
> >transmitted on adjacent channels?
> >
> >Words have meanings.
> >
>
> Huh? That wasn't what I meant. I was saying that 44.1kHz avoided
> interference on both NTSC and PAL. I just didn't word it terribly
> well.
Which was my point.
Don Pearce[_3_]
August 21st 11, 01:26 PM
On Sun, 21 Aug 2011 05:21:40 -0700, "William Sommerwerck"
> wrote:
>
>"Don Pearce" > wrote in message
...
>> On Sun, 21 Aug 2011 04:43:07 -0700, "William Sommerwerck"
>> > wrote:
>>
>> >"Don Pearce" > wrote in message
>> ...
>> >> On Sat, 20 Aug 2011 17:01:42 -0700, "William Sommerwerck"
>> >> > wrote:
>> >>
>> >> >"Don Pearce" > wrote in message
>> >> ...
>> >> >> On Sat, 20 Aug 2011 14:03:59 -0700, "William Sommerwerck"
>> >> >> > wrote:
>> >> >
>> >> >>>> It was the rate that maintained low-interference compatibility
>> >> >>>> between both PAL and NTSC television systems.
>> >> >
>> >> >>> "Interference" has nothing to do with it.
>> >> >
>> >> >>> PAL and NTSC systems transmit about the same number of lines per
>> >second
>> >> >(30
>> >> >>> x 525 is about equal to 25 x 625). 44.1k sample pairs per second
>can
>> >> >easily
>> >> >>> be formatted as three sample-pairs per line in both systems, and
>will
>> >fit
>> >> >>> within the nominal luminance bandwidths of both systems.
>> >> >
>> >> >> Sure it is an interference issue. If you don't synchronize the data
>to
>> >> >> the line the sync pulses drift through and wipe out bits.
>> >> >
>> >> >That isn't what the original statement (above) says.
>> >
>> >> That's exactly what it says. The synchronisation is what prevents the
>> >> interference. In the analogue domain it was seen in frequency terms
>> >> rather than time - the synchronisation of the colour subcarrier
>> >> (4.43361875MHz in PAL) dropped the chroma sidebands neatly between the
>> >> luma sidebands and the system thus worked.
>> >
>> >Interference "between" NTSC and PAL? Since when are NTSC and PAL signals
>> >transmitted on adjacent channels?
>> >
>> >Words have meanings.
>> >
>>
>> Huh? That wasn't what I meant. I was saying that 44.1kHz avoided
>> interference on both NTSC and PAL. I just didn't word it terribly
>> well.
>
>Which was my point.
>
If that was your point, why didn't you say so? What you actually
replied was "Interference has nothing to do with it". That, of course,
makes no sense in context.
d
Mike Rivers
August 21st 11, 02:50 PM
On 8/21/2011 7:43 AM, William Sommerwerck wrote:
I thought that was chosen because it was a way that they
could use available technology (video cassette recording) to
fit Beethoven's 9th Symphony on a single disk.
This may not be the only reason, but I know that it
contributed to the choice of the digital format.
--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson
http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff
Frank
August 21st 11, 08:13 PM
On Sun, 21 Aug 2011 09:50:50 -0400, in 'rec.audio.pro',
in article <Re: How was 44.1/16 format decided on for CD?>,
Mike Rivers > wrote:
>On 8/21/2011 7:43 AM, William Sommerwerck wrote:
>
>I thought that was chosen because it was a way that they
>could use available technology (video cassette recording) to
>fit Beethoven's 9th Symphony on a single disk.
>
>This may not be the only reason, but I know that it
>contributed to the choice of the digital format.
FWIW...
http://stason.org/TULARC/pc/cd-recordable/2-35-Why-44-1KHz-Why-not-48KHz.html
http://en.wikipedia.org/wiki/44.1_kHz
--
Frank, Independent Consultant, New York, NY
[Please remove 'nojunkmail.' from address to reply via e-mail.]
Read Frank's thoughts on HDV at http://www.humanvalues.net/hdv/
[also covers AVCHD (including AVCCAM & NXCAM) and XDCAM EX].