Posted to rec.audio.tech
From: dpierce@cartchunk.org
Subject: IM distortion: why related to level?

On Oct 24, 6:14 pm, MRC01 wrote:
>>> So the simplistic square wave sequence is a valid
>>> sequence of samples that defines a unique properly
>>> bandwidth filtered wave.


If by "simplistic square wave" you mean the alternating
sequence of positive and negative values, then no, it
most assuredly IS NOT a "unique, properly bandwidth
filtered wave." It has, in essence, infinite bandwidth for
the purpose of this discussion,

> But that wave isn't a square wave.


Yes, it is; it just has a bandwidth that well exceeds
the Nyquist limit. And sampled in that fashion, it
violates the Nyquist criterion. In doing so, it is "broken" in that
all the out-of-bandwidth products are aliased down
into the baseband.

> The interesting concept here is that I assumed that
> if you sample a waveform, the spacing/frequency of
> the samples would simply skip over anything that was
> moving too fast / at too high a frequency to be seen.


But they ARE "seen", they are aliased down into the
baseband where they can be readily "seen" (heard,
measured) as spurious information.
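You can see this sample for sample. Here's a minimal sketch (my own
illustration, Python standard library only; the specific frequencies
are just an example): at a 44.1 kHz sample rate, a 30 kHz cosine and
its 14.1 kHz alias produce numerically identical samples, so once
sampled they cannot be told apart.

```python
import math

FS = 44100.0  # sample rate, Hz

def sampled(freq_hz, n_samples):
    """Sample a unit-amplitude cosine at FS; return the first n_samples values."""
    return [math.cos(2 * math.pi * freq_hz * n / FS) for n in range(n_samples)]

above = sampled(30000.0, 100)   # 30 kHz: above the 22.05 kHz Nyquist frequency
alias = sampled(14100.0, 100)   # 14.1 kHz: its alias (44.1 kHz - 30 kHz)

# The two sequences agree to within floating-point rounding error.
worst = max(abs(a - b) for a, b in zip(above, alias))
print(worst)
```

Nothing downstream of the sampler can distinguish the two, which is
exactly why the spurious information is "seen" in the baseband.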


Here's a completely apt analogy. Say you have a movie
camera running at 24 frames per second. Your concept
of "the spacing/frequency ... moving too fast to be seen"
can be shown to break down when you look at a scene
from an old Western where the bad guys are chasing
the good guys riding in wagons with spoked wheels.
They're racing along forward at breakneck speed,
yet there are the wheels, quite visibly turning BACKWARDS
at a low speed. You can see exactly the same phenomenon
in the rotor of a helicopter, where we know the blade
is rotating fast, but a movie of it shows the blades almost
stationary or turning very slowly.

Why? Because due to the discrete time-sampling of
the camera, and the fact that the blades or spokes are
moving at a frequency well above half the frame
rate, the image we get is quite an incorrect picture
of physical reality. Why? Because of the aliasing caused
by things moving too fast in a discrete-time sampled
stream.

Let's see how this works. Take our helicopter, which we
will assume has two blades. Let's, for simplicity's sake,
set our frame (sampling) rate at 25 fps. Now, let's assume
the blade is rotating at, oh, 11.25 revolutions per second.
I pick that number for two reasons:

1. It's a plausible value for the rotational speed of the blade.

2. Because the two blades look identical, the blade pattern
repeats at twice the rotation rate, 22.5 Hz, which is above
the Nyquist frequency of 12.5 Hz (half the frame rate of
25 Hz) and thus deliberately violates the Nyquist criterion
for the purpose of the demonstration.

Now, what happens? Frame 1 captures the blade at 0 deg.
Frame 2 captures it having rotated 0.45 of a revolution,
about 10% less than half. But a two-bladed rotor looks
identical after exactly half a revolution, so frame 2 looks
as if the pattern had slipped BACKWARDS by 5% of a
revolution, frame 3 by 10%, and so on.

String all those frames together and view them, and what do
we see? We DON'T see the blade moving forward at 675
RPM (11.25 rev/s). We REALLY see it moving BACKWARDS at
75 RPM (0.05 rev per frame times 25 frames per second is
1.25 rev/s). The forward-rotating blade was ALIASED by the
sampling process to appear to rotate backwards.
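Here's a tiny sanity check of that wagon-wheel arithmetic (my own
sketch, taking a two-bladed rotor at 11.25 rev/s filmed at 25 fps as
illustrative numbers): fold the blade-pattern frequency into the
range of plus or minus half the frame rate, and the sign tells you
the perceived direction.

```python
def apparent_pattern_hz(pattern_hz, frame_rate_hz):
    """Alias a repeating pattern's frequency into (-fs/2, fs/2].
    A negative result means the pattern appears to move backwards."""
    k = round(pattern_hz / frame_rate_hz)  # nearest whole pattern-cycles per frame
    return pattern_hz - k * frame_rate_hz

BLADES = 2
ROTOR_REV_S = 11.25              # illustrative rotor speed, rev/s
FRAME_RATE = 25.0                # film frames per second

pattern = BLADES * ROTOR_REV_S                      # 22.5 Hz blade-pattern frequency
seen = apparent_pattern_hz(pattern, FRAME_RATE)     # aliased pattern frequency, Hz
seen_rpm = seen / BLADES * 60.0                     # apparent rotor speed, RPM
print(seen, seen_rpm)
```

The 22.5 Hz pattern aliases to -2.5 Hz, i.e. the rotor appears to turn
backwards at 75 RPM, even though it is really turning forward at 675 RPM.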

EXACTLY the same principle applies to discrete time-sampled
digital audio.

Think of your "mathematically pure" square wave, the
sequence of plus and minus samples, as a bunch of
those rotating helicopter blades all connected together
by gear trains, so that one is rotating at, oh, 10 kHz,
another at 30, another at 50, and others at 70, 90, 110,
130, 150 kHz and so on. Sample that at 44.1 kHz, which is
essentially what you've done by simply alternating a
sequence of plus and minus values, and what do you get?
All those "blades" get aliased down:

10 kHz  ->  10 kHz
30 kHz  ->  14.1 kHz
50 kHz  ->  5.9 kHz
70 kHz  ->  18.2 kHz
90 kHz  ->  1.8 kHz
110 kHz ->  21.8 kHz
130 kHz ->  2.3 kHz
150 kHz ->  17.7 kHz

And so on.
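That fold-into-baseband arithmetic is easy to reproduce. A sketch of
the standard folding rule (my own illustration, not code from the
original post): each frequency is shifted by the nearest multiple of
the sample rate and reflected into the 0 to fs/2 baseband.

```python
def alias_khz(f_khz, fs_khz=44.1):
    """Fold a frequency into the baseband [0, fs/2]."""
    k = round(f_khz / fs_khz)        # nearest multiple of the sample rate
    return abs(f_khz - k * fs_khz)

for f in (10, 30, 50, 70, 90, 110, 130, 150):
    print(f, "kHz ->", round(alias_khz(f), 1), "kHz")
```

Every entry in the table above comes straight out of this one rule.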

The problem is that those aliases ARE ALREADY BUILT IN
TO THE SAMPLED DATA STREAM as an intrinsic result
of sampling.

> For example, suppose the waveform contains frequencies above Nyquist,
> so your sampling points are far apart relative to changes in the
> waveform. So you are skipping over a lot of information simply based
> on the spacing. But each sampling point has to land *somewhere*, and
> it will frequently happen that it lands on a certain bump in the
> waveform that wouldn't exist except for frequencies above Nyquist.
> These frequencies have to be eliminated BEFORE sampling, because once
> sampled they MUST be interpreted by the playback DAC as frequencies
> below Nyquist - which they aren't - so the wave reconstructed
> from them MUST be different from the one sampled.


Yup, you got it!

> Now *that* is an AHA experience. Thanks!


Indeed.