John LeBlanc
 
Louder IS Better (With Lossy)


"David Morgan (MAMS)" wrote in message
...

Stop thinking, for a moment, of amplitude (the file's RMS power) as the
determining factor. Move your train of thought from the vertical scale to one
of 'depth'. Stop thinking of frequencies as being louder or softer, but rather
as being in front of or behind others. Some call that a form of masking.

No matter what the amplitude of the .wav file, the encoding process still
looks for frequencies that are potentially hidden *behind* others.
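A toy way to see why absolute level doesn't change the decision: what drives
masking is the *ratio* between a loud component and a quieter neighbor, and
that ratio is unchanged when the whole file is scaled up or down. A minimal
sketch, assuming a made-up 13 dB threshold purely for illustration (it is not
a value from any real codec):

```python
import math

def db(amplitude):
    """Linear amplitude to decibels."""
    return 20.0 * math.log10(amplitude)

def masked(masker_db, maskee_db, threshold_db=13.0):
    """A component is treated as 'masked' when it sits more than
    threshold_db below a nearby, louder component -- the level
    difference, not the absolute level, drives the decision."""
    return masker_db - maskee_db > threshold_db

# Two components, one well behind the other (20 dB apart).
snare, xylo = 1.0, 0.1                        # linear amplitudes
print(masked(db(snare), db(xylo)))            # True

# Scale the whole "file" up 10x -- the gap, and so the
# encoder's decision, is unchanged.
print(masked(db(snare * 10), db(xylo * 10)))  # still True
```

The same comparison at any overall gain gives the same answer, which is the
point of the paragraph above.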


Exactly. Though I'd throw in another factor: length of time. Each unit of
time-frequency is broken up into bands, and the software deals with those
bands one slice at a time.

If a snare hit at 1046Hz (ATRAC band 9, 1080-1270Hz) occurs at the same
moment as, say, a mallet striking a xylophone at 2637Hz (ATRAC band 10,
1270-1480Hz), and the snare hit's amplitude exceeds the xylophone's by the
margin the software treats as its threshold, then the band matching the
xylophone hit is given less attention -- masking -- (introducing quantization
noise in the process) for as long as the snare hit's amplitude controls the
threshold. That's less data for the encoder to produce, since it doesn't have
to give much attention to band 10, which makes for a smaller sound file at
the other end of the encoding process.
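The per-slice decision described above can be sketched in a few lines. This is
a toy model, not ATRAC: the band layout, energy measure, and 10 dB threshold
are all invented for illustration, and real codecs allocate bits on a much
finer scale than a fine/coarse switch.

```python
import math

def band_energies_db(frame, bands):
    """frame: per-bin linear magnitudes for one time slice;
    bands: list of (lo, hi) bin ranges."""
    out = []
    for lo, hi in bands:
        e = sum(x * x for x in frame[lo:hi]) or 1e-12
        out.append(10.0 * math.log10(e))
    return out

def bit_allocation(frame, bands, threshold_db=10.0):
    """For one time slice, mark each band 'coarse' (fewer bits, more
    quantization noise) when an adjacent band's energy exceeds it by
    more than threshold_db -- i.e. when it is judged masked."""
    energies = band_energies_db(frame, bands)
    alloc = []
    for i, e in enumerate(energies):
        neighbors = energies[max(0, i - 1):i] + energies[i + 1:i + 2]
        is_masked = any(n - e > threshold_db for n in neighbors)
        alloc.append("coarse" if is_masked else "fine")
    return alloc

bands = [(0, 4), (4, 8)]                  # two toy bands of 4 bins each
loud_snare = [9.0] * 4 + [0.5] * 4        # band 0 dominates band 1
print(bit_allocation(loud_snare, bands))  # ['fine', 'coarse']
```

Run slice by slice, this is also where the timing point comes in: once the
dominant band's energy drops, the masked band flips back to "fine" in the
next slice, and its content (the xylophone's ringing tail) returns.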

When the amplitude drops back below the threshold as we move through the
time-frequency slices, we get the ringing tail of the xylophone back. Which,
we are told, is perfectly fine anyway, because we couldn't really hear the
xylophone strike behind the snare hit. Psychoacoustic bull****, in my
opinion. But it does help explain why some audio files convert more
acceptably than others. It's all in the timing.

More bands, with center frequencies and widths that adapt from one
time-frequency slice to the next, still remove the subtlety a recording and
mixing engineer could have put in there to begin with. Is it acceptable to
screw up audio in this manner? Evidently the overwhelming response from MP3
fans indicates that the answer is, "Yes." I happen to disagree, but then I
represent just one man's opinion.

Lowering the bar for acceptable audio quality to such a degree (actually, to
any degree) -- 128kbps MP3 -- is such an odd thing, given the lengths to
which some engineers and equipment manufacturers go to increase their ability
to produce quality audio. It's amazing how little effort and consideration it
takes to marginalize into oblivion a careful studio upgrade to 24-bit/192kHz.

John