#1   Posted to rec.audio.high-end

Can anyone give me a clue how lossless encoders work?

My iPod, using Apple Lossless, can store CDs in about 60% of their
original size.

What's curious is that a lot of my classical music reduces to 40% or
less, while my rock music is typically around 70% or more. I suspect
rock music has broader spectral content more of the time (with bass
drum, bass, and cymbals going constantly).

I know that the principle will involve the idea that there is some
redundancy in the original signal, because it is not a purely
unpredictable signal. Music has some patterns.

For example, music often has a beat. Theoretically it takes fewer bits
to describe a bass drum hit when that hit recurs at almost exactly the
same interval. I doubt it works at that level, however. Probably it's
something like LPC with a small number of bits reserved for deltas.

In theory, it seems to me, whenever the music can be divided into bands
with some space between them, you can devote more bits to the bands
with signal and fewer to cover the noise in between.

Mike
#2   Posted to rec.audio.high-end
MC

I don't know any details, but it may just be lossless file compression
(similar to ZIP).

This is based on using concise codes for repeated strings of bytes.

#3   Posted to rec.audio.high-end

MC wrote:
> I don't know any details, but it may just be lossless file compression
> (similar to ZIP).
>
> This is based on using concise codes for repeated strings of bytes.


I'm sure it is sound-specific, whatever algorithm is used. Or more
accurately, music-specific. I'm sure whatever it is, it makes use of
the kinds of patterns which occur in digitized music, which are
definitely NOT repeated bytes, but they ARE related to the spectrum of
the music. Probably something like fitting a curve to the music, after
dividing into segments, and then reserving a small number of bits to
express the difference between the fitted curve and the actual data.
But let's have someone who knows tell us.

Mike
#4   Posted to rec.audio.high-end


Well, everyone is sort of right, but there are some important
principles that are missing.

First, normal compression schemes such as ZIP (which uses a
form of LZ compression) are very ineffective at compressing audio.
They do look for repeated patterns, but only within a limited window
of data (32 kB in ZIP's deflate), and an individual match can only be
a few hundred bytes long. The patterns in audio, when they exist,
are MUCH longer than that. For example, look at the example given:
the rhythmic beats in a bass line. Let's pretend, for the moment, that
the basic beat is at a rate of 120 beats/minute. That's 2 beats per
second, meaning about 22,000 samples (at the 44.1 kHz CD sample
rate), or some 44,000 bytes, per beat. That is longer than ZIP's
entire search window. And such an algorithm assumes there is
NO variation AT ALL, not even a single bit, from beat to beat.
Such is unheard of in music.

Using ZIP to compress music almost NEVER results in any
substantial reduction in file size, and as often as not results in
"compressed" files that are actually bigger. They are, however,
truly lossless.
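
You can see this for yourself with a few lines of Python; zlib's deflate
is the same LZ-plus-Huffman scheme that ZIP uses. The test signal below
is a made-up stand-in for music, so treat the exact numbers as
illustrative only:

import math
import random
import struct
import zlib

rate = 44100
# One second of a music-like test signal: two tones plus a little noise,
# so that, as on any real recording, nothing repeats bit-for-bit.
samples = []
for n in range(rate):
    t = n / rate
    s = 0.5 * math.sin(2 * math.pi * 220 * t) + 0.3 * math.sin(2 * math.pi * 447 * t)
    s += random.gauss(0, 0.01)
    samples.append(max(-32768, min(32767, int(s * 30000))))

raw = struct.pack('<%dh' % rate, *samples)     # 16-bit PCM, as on a CD
deflated = zlib.compress(raw, 9)
print('raw: %d bytes  deflated: %d bytes (%.0f%%)'
      % (len(raw), len(deflated), 100.0 * len(deflated) / len(raw)))
# Typical result: only a few percent smaller, if that. The byte-exact
# repeats deflate needs simply are not there.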

One way to compress music files that's very different from a
simple LZ-based scheme: if the dynamic range is wide enough
or, more specifically, if there are enough quiet passages, the upper
bits of each sample go unused, and that is redundant data which
can be removed losslessly. For example, take the simple case where
a passage of music is consistently more than 48 dB below full
level: the top 8 bits of each sample carry no information (they are
all just copies of the sign bit). That means we can get 50%
compression right there by simply throwing away the top 8 bits of
each 16-bit sample. Take a somewhat more realistic case where the
music is consistently more than, say, 18 dB below full level. That's
3 bits of redundant information that can be eliminated without any
loss, saving about 19% of the space. In cases like this, a section of
audio that meets such a criterion is blocked together with a header
that indicates the next n bytes are m dB below full level. The
decoder sees this header and knows how to reconstruct the
original data exactly.
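
Here's a bare-bones sketch of that idea in Python. The one-byte block
header and the block length are inventions of mine purely for
illustration; real codecs do the same thing with far more finesse:

import math

def bits_needed(block):
    # Width in two's complement: magnitude bits plus one sign bit.
    return max(abs(s) for s in block).bit_length() + 1

def packed_size(samples, block_len=4096):
    # An 8-bit header per block says how wide that block's samples are.
    total_bits = 0
    for i in range(0, len(samples), block_len):
        block = samples[i:i + block_len]
        total_bits += 8 + bits_needed(block) * len(block)
    return (total_bits + 7) // 8

# One second of audio sitting about 18 dB below full scale
# (peaks near 4000 out of 32767):
quiet = [int(4000 * math.sin(2 * math.pi * 440 * n / 44100.0))
         for n in range(44100)]
print('raw: %d bytes  packed: %d bytes' % (2 * len(quiet), packed_size(quiet)))
# 13 bits per sample instead of 16: roughly the 19% saving computed above.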

Now, take these two methods, looking for repeated patterns and
getting rid of unneeded high bits, and see how they apply to
different kinds of music.

In classical music, where the spectrum CAN exhibit line-like
properties (and there are many exceptions), the signal is highly
correlated and thus, to a great degree, redundant, making a
broad redundancy compressor reasonably effective.

However, in, say, rock music, where the spectrum is much denser,
redundant patterns are far less likely to occur, so there is less
chance of taking advantage of them. In fact, a lot of rock music has
spectral characteristics that are closer to random noise than most
classical music.

Now let's look at the ability to eliminate high-bit redundancies.
Classical music can have widely varying dynamics (a high crest
factor), and thus there is the opportunity to eliminate unneeded
high bits.

Rock music, often as heavily compressed as it is (having a very low
crest factor), uses all the bits most of the time, and thus there is
little or no opportunity to eliminate unused high-order bits.

Now, these are broad generalizations used to illustrate SOME of
the underlying principles, and there are, for certain, exceptions
at both extremes, but they illustrate why these variations occur.
#5   Posted to rec.audio.high-end
bob

 wrote:
> Probably something like fitting a curve to the music, after
> dividing into segments, and then reserving a small number of bits to
> express the difference between the fitted curve and the actual data.


Not even close. MC's basically right--it's data compression, looking
for patterns and repetitions in the digits. Audio files (not
specifically music) tend to be highly random, however, so standard data
compression doesn't compress them very much. Lossless audio codecs use
various methods of transforming the digits to create a file with more
repetitions/patterns that lend themselves to more efficient
compression. That's a gross oversimplification. Here's a somewhat less
gross oversimplification:

http://en.wikipedia.org/wiki/Audio_d...ss_compression

It's all data manipulation. It has to be. Anything else, and you
wouldn't be able to reconstruct the exact digital file. Which means it
would be lossy compression.

bob


#6   Posted to rec.audio.high-end
Harry Lavo


Beautifully explained and very illuminating. Thanks Dick.


#7   Posted to rec.audio.high-end

bob wrote:

> Not even close.


Thanks for the encouragement. :-)

snip

> It's all data manipulation. It has to be. Anything else, and you
> wouldn't be able to reconstruct the exact digital file. Which means it
> would be lossy compression.


What makes you think I wasn't talking about data manipulation? In
computer modeling of sounds and voices, a common technique is to fit
an approximate curve via linear predictive coding. I simply pointed
out that such a fit, approximate though it is, together with some bits
reserved to give the deltas from the fitted curve to the actual data,
would be lossless. I don't know whether that would work, but I assume
it's not the best solution, since others are used.

Mike
#10   Posted to rec.audio.high-end

bob wrote:

> Well, this appears to be much closer to your field than mine, but what
> you are describing is not lossless encoding--it's lossy encoding. The
> point of lossless encoding is to get back the exact digital file.
> There's no way that the approach you're suggesting would do that. You
> might be able to get equivalent resolution that way, but (I think) only
> if you used at least as many bits as in the original file. Which would
> kinda defeat the point of compression.


MC explained it too, but the idea is that LPC curves often capture the
main frequency bands and modulations. They are an approximation, but if
it's a good enough approximation, then you won't need many bits to
express the difference between the approximation and the original
curve. The total data at that point would be the coefficients for the
LPC curve plus the bits used to express the deltas. If the music has
patterns that fit LPC curves well, then not many bits will be needed
for the deltas, so you might end up compressing the data while staying
lossless. However, I assume this is not a good solution, or not as good
as the ones you and Dick described, since it's not being used.
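
In rough Python, the scheme I have in mind would look something like
this. It's only a toy, with a fixed second-order predictor standing in
for a real LPC fit and the deltas stored raw instead of entropy-coded,
but it shows how an approximate predictor plus exact deltas stays
lossless:

import math

def encode(samples):
    # Predict each sample from the previous two: pred = 2*x[n-1] - x[n-2],
    # and keep only the prediction error (the "delta").
    deltas = list(samples[:2])                # first two samples stored raw
    for n in range(2, len(samples)):
        pred = 2 * samples[n - 1] - samples[n - 2]
        deltas.append(samples[n] - pred)
    return deltas

def decode(deltas):
    out = list(deltas[:2])
    for n in range(2, len(deltas)):
        pred = 2 * out[n - 1] - out[n - 2]
        out.append(pred + deltas[n])          # exact inverse, hence lossless
    return out

tone = [int(20000 * math.sin(2 * math.pi * 440 * n / 44100.0))
        for n in range(44100)]
deltas = encode(tone)
assert decode(deltas) == tone                 # bit-exact reconstruction
print('peak sample: %d  peak delta after warm-up: %d'
      % (max(abs(s) for s in tone), max(abs(d) for d in deltas[2:])))
# The deltas fit in about half the bits of the raw samples; that is
# where the compression would come from.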

Mike


#12   Posted to rec.audio.high-end
vlad



Mike,

If you are interested in the subject, why don't you search on Google
or go to the library and find a textbook on compression schemes? Then
you can find out how it works.

Instead, you are speculating here about how it can be done without
real knowledge of the subject.

Apple is doing lossless compression on the iPod now. Dolby Labs came
up with the Dolby TrueHD lossless audio compression scheme for the
HD DVD format. So it is perfectly possible, and people are doing it
now. Just educate yourself.

Gosh, if only the average high-ender would learn a little bit about
the underlying technology of LPs and CDs. A lot of bandwidth would
be saved.

vlad
#15   Posted to rec.audio.high-end
Stewart Pinkerton

On 12 Mar 2006 15:47:34 GMT, Scott wrote:

> To put it simply, there are no truly lossless audio compression
> techniques. They all lose something in the reconstruction process.


Utter nonsense! Zip is probably the best known, but for audio you have
MLP and Apple Lossless in the commercial field, plus several 'open
source' schemes such as FLAC. Every single one of these will produce a
bit-perfect reconstruction of the original data file.

> I've gone back to using mp3 with the latest lame3 encoder, set for a
> high variable bitrate and the highest quality standard. The music
> plays louder and cleaner than cda files, because the speakers aren't
> trying to produce multiple harmonics at the same time that human
> ears can't simultaneously distinguish anyway.


I have absolutely no idea where that idea came from, but you should
put it back, because it has no basis in reality.

> Just another thing to think about.


For a few milliseconds......

--

Stewart Pinkerton | Music is Art - Audio is Engineering


#16   Posted to rec.audio.high-end
not2cool4u

"Scott" wrote in message
...
To put it simply there are no truely lossless audio compression
techniques. They all lose something in the reconstruction process.
I've gone back to using mp3 using the latest lame3 encoder set for a
high variable bitrate, and set for the highest quality standard. The
Music plays louder and cleaner than cda files, because the speakers
aren't trying to produce multiple harmonics at the same time that a
humans ears can't simultaneously distinguish anyway.

Just another thing to think about.

I think you need to think about what you just said, since there are
several bit-for-bit encoders/decoders: Apple Lossless, FLAC, and others.




#17   Posted to rec.audio.high-end
gofab.com

On 12 Mar 2006 15:47:34 GMT, Scott stated:

> To put it simply, there are no truly lossless audio compression
> techniques. They all lose something in the reconstruction process.


Absolutely untrue.

And even in lossy compression, the "something" is lost when the data is
compressed, not when it is "reconstructed" for playback.

snip

> The music plays louder and cleaner than cda files, because the
> speakers aren't trying to produce multiple harmonics at the same
> time that human ears can't simultaneously distinguish anyway.


Huh????

> Just another thing to think about.


Not really.



