Mark & Mary Ann Weiss wrote:
My hunch? The sample rate was smearing subtle time arrival cues.
It is the bit depth and sample rate together that control
time resolution. Divide the sample period by the number of
discrete levels (65,536 for 16 bit) to get approximately
the time resolution. I think 16/44.1, at about 0.35 nsec
of jitter, is finer than the ear can detect.
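For anyone who wants to check that figure, here is a minimal
sketch of the arithmetic I mean (Python; the variable names are
just my own illustration):

sample_rate_hz = 44100
bit_depth = 16

sample_period_s = 1.0 / sample_rate_hz        # about 22.7 microseconds
levels = 2 ** bit_depth                       # 65,536 discrete levels
time_resolution_s = sample_period_s / levels  # about 0.35 nanoseconds

print("time resolution: %.2f ns" % (time_resolution_s * 1e9))
# prints: time resolution: 0.35 ns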
I think higher sample rates and greater bit depths, such as 24/96, are the
solution to this difference.
Yes, I've heard the dither in very soft passages of classical recordings.
There are only 65,536 loudness levels with 16 bits. That increases to 16.7
million levels with 24-bit recording.
And this translates purely to white noise, hiss. Stairstep
audibility is a myth. A pair of samples was posted here
some time ago: music at 4-bit resolution compared to full
16-bit resolution, with the same amount of random noise
simply added, and the sound is identical. If a recording is
anywhere near 0 dB, the -96 dB white noise due to 16-bit
samples (without dither) is absolutely inaudible.
There really isn't any valid argument for distribution and
consumer playback gear to be better than 16 bits. The wider
word length is a headroom consideration for the recording
and mixing engineer only.
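To put rough numbers on the noise floor mentioned above, here
is a quick sketch (Python with NumPy; the 997 Hz test tone and
the names are only my illustration, not the samples that were
posted): quantize a signal to N bits with TPDF dither and the
error comes out as featureless hiss at roughly -(6.02 * N) dB
below full scale.

import numpy as np

def quantize(x, bits, dither=True):
    step = 2.0 / (2 ** bits)   # LSB size for a +/-1.0 full-scale signal
    if dither:
        # non-subtractive TPDF dither: two uniform variables, +/-1 LSB peak
        d = (np.random.uniform(-0.5, 0.5, x.size) +
             np.random.uniform(-0.5, 0.5, x.size)) * step
        x = x + d
    return np.round(x / step) * step

fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 997 * t)     # a -6 dBFS test tone

for bits in (4, 16):
    err = quantize(tone, bits) - tone
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print("%2d bits: error floor about %.1f dBFS" % (bits, rms_db))
# 4 bits lands near -24 dBFS and 16 bits near -96 dBFS -- in both
# cases the error is just hiss, with no signal-correlated
# "stairstep" content left.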
There is an argument that the reconstruction filters for
44.1 kHz introduce some small phase shift at the highest
frequencies that is audible, but AFAIC that is very
equivocal. Most who claim to hear it are invested in being
known for having golden ears. Double-blind testing has not
substantiated that audibility.
Bob
--
"Things should be described as simply as possible, but no
simpler."
A. Einstein