Louder IS Better (With Lossy)

Lord Hasenpfeffer wrote:
wrote:


I would be somewhat surprised that any current CD was not NORMALISED
to somewhere around (just below) 0dBFS.


Yes, as that has apparently been "all the rage" for several years
running now. Actually, *abuse* of the 0dBFS threshold is really "all
the rage" these days. I personally am only interested in "normalizing"
older, quieter CDs mastered during the stone and middle ages before
encoding them as MP3s so that my final files sound somewhat "modern" to
my ears and not all tinny and weak when juxtaposed with MP3s encoded
from more recently mastered CDs.


I would be somewhat surprised to learn that any mainstream commercial
release has ever gone out without being normalised (using the strict
definition of the term) to somewhere within 1 dB of FS. Given this, it
sounds like you are actually compressing or limiting to raise the RMS
level.
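
For the avoidance of doubt, normalising in the strict sense is nothing
more than one linear gain chosen so the loudest sample lands at (or just
below) full scale -- no compression, no limiting. A minimal sketch in
Python, assuming 16-bit PCM already loaded into a NumPy array (purely
illustrative, not what any particular mastering tool does):

# Strict-sense peak normalisation: a single linear gain, chosen so the
# loudest sample lands just below full scale.  No compression or limiting
# is involved, so the dynamics are untouched.
import numpy as np

def peak_normalize(samples: np.ndarray, headroom_db: float = 0.1) -> np.ndarray:
    """Scale 16-bit samples so the peak sits headroom_db below 0 dBFS."""
    x = samples.astype(np.float64) / 32768.0        # to the -1.0..1.0 range
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return samples                              # silence: nothing to do
    target = 10.0 ** (-headroom_db / 20.0)          # e.g. 0.1 dB below FS
    y = np.clip(x * (target / peak), -1.0, 1.0)
    return np.round(y * 32767.0).astype(np.int16)

Because it is one constant gain, the peak-to-RMS ratio comes out exactly
as it went in.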

In no way am I interested in brutally forcing any RMS levels to go
"through the roof". Anywhere from strictly normalized to "slightly hot"
levels (depending on genre) is all that I seek.


Well, if you are going for 'slightly hot' then you must be limiting or
clipping somewhere to keep within the actual hard limit of 0dBFS, so
irrespective of what the tool calls itself this cannot be said to be
normalising. I would also observe that a quieter track will tend to sound
'tinny and weak' when compared to a louder track even without being coded
to MP3 first; it is just how human hearing works. I am sure that lots of
people here have stories of finding the 'extra something' at the end of
a long mix session by tweaking the control room monitor volume up a dB or
two for the benefit of a client.

Many times I've
encountered WAVs from CDs that are, according to my standards, mastered
too loudly. When this is the case, I _do_ simply "rip-and-encode" the
unmodified, original WAVs (contrary to what dip**** said of me earlier
when he said I seek to indiscriminately normalize every CD I own). If
that were the case, I'd end up making half of what's in my library
*quieter* than it is on the original CDs, not louder!


Making them quieter does not undo the compression used to make them
boring to start with!

If normalization is _needed_ for a "mix-CD" sporting tracks by various
artists from various sources, I will "normalize" all the tracks
individually to bring them to a common loudness, across-the-board.


Well, this will get all the tracks to the same peak level, but they are
unlikely to all sound the same volume, since to a first approximation we
hear RMS level, not peak level. What you probably want to do is equalise
the average RMS levels first (being careful that no intermediate file
clips; 32-bit signed integers make a good intermediate format for this),
which will leave the peak levels all over the place, and then normalise
the entire collection as one unit to bring the peak level back into range.
Your choice of averaging function matters if you want good results.
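
Something like the following sketch, in Python with float samples as the
wide intermediate instead of 32-bit ints (the function names are mine,
not from any existing tool):

# Two-pass approach: equalise average RMS per track, then apply ONE common
# gain so the loudest peak across the whole collection just reaches full
# scale.  Floats are used as the wide intermediate, so nothing can clip
# between the two passes.
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def equalise_collection(tracks, target_rms=0.1):
    """tracks: list of float arrays scaled to -1.0..1.0 (silent tracks skipped)."""
    # Pass 1: bring every track to the same average RMS level.
    levelled = [t * (target_rms / rms(t)) for t in tracks if rms(t) > 0.0]
    # Pass 2: one shared gain so the collection's peak lands at full scale.
    collection_peak = max(np.max(np.abs(t)) for t in levelled)
    common_gain = 1.0 / collection_peak
    return [t * common_gain for t in levelled]

Note the plain whole-track RMS used here is only a crude loudness proxy;
a windowed or weighted average behaves better, which is what I mean about
the averaging function mattering.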

However, despite its name, the "normalize" application I use is capable
of doing more than just "textbook normalization" if I tell it to do so.
This has been a major stumbling block for me when I've previously
attempted to discuss its behaviour with others who have neither used it
nor even heard of it before.


Do you have a URL for a tarball?

"Normalize" can be made to limit the peaks
when instructed by the user to boost an RMS level beyond the textbook
normalization level. One of the guys over in the other newsgroup
suggested that we call this "limitizing" because there is no other,
readily available, predefined textbook term to describe it.


Sounds like fast-attack, hard-knee limiting (possibly with compression) to
me? Have you seen that compressor Dyson wrote when he was at FreeBSD?
It is excellent for this sort of thing.
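
For what it is worth, the bluntest possible version of that behaviour is
just a fixed gain followed by clipping whatever would exceed full scale --
a toy sketch only (a real limiter has attack and release envelopes rather
than flattening samples outright):

# Crude "limitize": push the RMS level up with a fixed gain, then hard-limit
# anything that would exceed full scale.  This is the bluntest possible form
# of fast-attack, hard-knee limiting.
import numpy as np

def limitize(x: np.ndarray, gain_db: float) -> np.ndarray:
    """x: float samples in -1.0..1.0; gain_db: boost beyond peak normalisation."""
    gain = 10.0 ** (gain_db / 20.0)
    return np.clip(x * gain, -1.0, 1.0)   # peaks above FS are simply flattened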

Further, the psychoacoustic model IIRC derives its 'too quiet' threshold
from the RMS level of the signal, not from an absolute threshold (i.e. at
some frequency it may be 30 dB below the current RMS level, NOT 40 dB
below 0dBFS).


Mmm-HMMM.... Now *that's* something of which I was not previously aware.
If that's true then my hypothesis for boosting amps to save freqs may
indeed be fatally flawed.


Are you sure you're not thinking of frequency masking when you say this?


Yes, to a first approximation frequency masking weights towards energy in
nearby bands when calculating thresholds; what I am thinking of looks
at the overall energy.
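
A toy calculation shows why that matters for your hypothesis: if the 'too
quiet' threshold tracks the RMS level, then any gain applied before
encoding moves the signal and the threshold up together, so nothing extra
gets saved. (The 30 dB offset below is purely illustrative, not the real
model.)

# Threshold that tracks the signal's RMS level rather than an absolute level.
# Boosting the whole file by G dB raises the signal AND the threshold by G dB,
# so the set of components judged "too quiet" does not change.
def too_quiet(component_dbfs: float, rms_dbfs: float, offset_db: float = 30.0) -> bool:
    return component_dbfs < rms_dbfs - offset_db

track_rms = -20.0      # track RMS, dBFS
quiet_bit = -55.0      # some low-level component, dBFS
print(too_quiet(quiet_bit, track_rms))             # True: 35 dB below RMS
print(too_quiet(quiet_bit + 6, track_rms + 6))     # still True after a +6 dB boost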

It is worth noting that whatever you do to the input data, it is only
possible to fit a fixed amount of information into the output
stream,


Now, I do understand that; however, I haven't contemplated it very much.


It is worth working through the implications of this, as it really brings
home the absence of a free lunch.

if you force the encoder to include more frequency data (higher
resolution or more bands active), then time or amplitude resolution
MUST necessarily suffer.


Excellent information. By time resolution you mean in that the file
would have to be made to play slower in order to accommodate the
increased amount of data?


No, not slower (which would imply a higher effective bit rate), but
that the information about when something happens may have to be stored
less precisely.

Also, would not such effects be greater at lower bitrates than at higher
ones?


All the compromises apply more at lower bitrates!
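
To put rough numbers on it: an MPEG-1 Layer III frame covers 1152 samples,
so at 44.1 kHz you get about 38 frames a second, and the bit budget per
frame is fixed by the bitrate alone -- that budget is all there is to
split between frequency, time and amplitude detail:

# Fixed bit budget per MP3 frame.  Standard frame arithmetic, nothing
# encoder-specific: bits per frame depend only on bitrate and sample rate.
SAMPLE_RATE = 44100
SAMPLES_PER_FRAME = 1152

for bitrate in (64_000, 128_000, 320_000):
    frames_per_second = SAMPLE_RATE / SAMPLES_PER_FRAME    # ~38.28
    bits_per_frame = bitrate / frames_per_second
    print(f"{bitrate // 1000} kbps -> {bits_per_frame:.0f} bits per frame")
# 64 kbps -> ~1672 bits, 128 kbps -> ~3344 bits, 320 kbps -> ~8359 bits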

And if so, would this not mean that my hypothesis would actually become
more appropriately applied towards higher bitrate MP3s than at lower ones?


Because I'm guessing here, by reversing your logic, that a large enough
bitrate could eventually be employed which would cause the encoder to
either "pad the file with zeroes" or store the additional data depending
on the normalized status of the WAV being encoded.


Depends on whether your hypothesis holds at all; I am yet to be convinced
that anything beyond psychological effects is at play here (louder is
usually perceived as better).

Consider that a 'perfect' data compression tool would simply store the
gain used in the normalisation once (after all, it does not change during
playback); thus normalising has little effect on the amount of
*information* in the .wav file.

And if that's true, what bitrate may I be talking about?


There are a few lossless wav file compression tools around, and some of
them are even reasonably good; find one, then see how much it can reduce
the size of a typical wave file. This will give you some idea of how much
data is actually redundant in a wav file and of how much *information*
is required to represent that file.
It will be program dependent: a file containing a single 1 kHz tone can
be losslessly represented in very few bits, while a thrash metal gig will
take rather more (but why anyone would bother....).
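
If you want a quick and dirty feel for it before hunting down a proper
tool, even a general-purpose compressor puts a lower bound on the
redundancy -- a dedicated lossless audio codec (FLAC, Shorten and friends)
will do considerably better because it actually models the waveform. A
rough Python sketch:

# Rough redundancy check: run the raw PCM through a general-purpose
# compressor and compare sizes.  Treat the result only as a lower bound on
# how much of the file is redundant; real lossless audio codecs do better.
import sys
import wave
import zlib

with wave.open(sys.argv[1], "rb") as w:      # path to a .wav file
    pcm = w.readframes(w.getnframes())

packed = zlib.compress(pcm, 9)
print(f"raw PCM: {len(pcm)} bytes")
print(f"zlib -9: {len(packed)} bytes "
      f"({100.0 * len(packed) / len(pcm):.1f}% of original)")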

Regards, Dan.
--
** The email address *IS* valid, do NOT remove the spamblock
And on the evening of the first day the lord said...........
..... LX 1, GO!; and there was light.