
Nyquist filters at different sample rates.


Tobiah[_4_]
August 15th 11, 04:52 PM
One of the advantages of recording at 96000 Hz is that
there is more room between the highest needed captured
frequency and the Nyquist frequency, and so it does not
need to have so steep a curve, which is of some benefit
of a nature that escapes me at the moment.

Do most devices actually change filters then, based on
current sample rate, or do they just have one that works
with 44100, and call that good for everything?

Thanks,

Tobiah

Tobiah[_4_]
August 15th 11, 04:55 PM
> and so it does not
> need to have so steep a curve,

I meant the curve of the low-pass filter that is used
to keep anything above the Nyquist frequency from getting
into the ADC.

Don Pearce[_3_]
August 15th 11, 06:44 PM
On Mon, 15 Aug 2011 08:52:26 -0700, Tobiah >
wrote:

>One of the advantages of recording at 96000 Hz is that
>there is more room between the highest needed captured
>frequency and the Nyquist frequency, and so it does not
>need to have so steep a curve, which is of some benefit
>of a nature that escapes me at the moment.
>
>Do most devices actually change filters then, based on
>current sample rate, or do they just have one that works
>with 44100, and call that good for everything?
>
>Thanks,
>
>Tobiah

I have an Arcam CD player that changes the nature of the filter
dynamically in response to the programme material. It is so good that
I have never heard a difference between it and a normal CD player.
Recovering audio from digital sources is by now such a done deal that
you really need not worry about exotic sampling rates etc.

d

Arny Krueger[_4_]
August 15th 11, 07:40 PM
"Tobiah" > wrote in message
...
> One of the advantages of recording at 96000 Hz is that
> there is more room between the highest needed captured
> frequency and the Nyquist frequency, and so it does not
> need to have so steep a curve, which is of some benefit
> of a nature that escapes me at the moment.
>
> Do most devices actually change filters then, based on
> current sample rate, or do they just have one that works
> with 44100, and call that good for everything?

Most modern DACs implement the brick-wall low-pass filter (the sharp cutoff
just below the Nyquist frequency) as a digital filter, whose corner frequency
naturally scales with the sampling frequency.
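
A minimal Python sketch of why that works (255 taps and a corner at 90% of
Nyquist are illustrative choices, not values from any particular DAC): a
digital filter's response is fixed in normalized frequency, so one set of
taps puts the corner at the same fraction of fs/2 at every sample rate.

import numpy as np
from scipy import signal

# Design once, in normalized frequency: a cutoff of 0.9 means
# 0.9 * (fs / 2), so the corner always sits just below Nyquist.
taps = signal.firwin(numtaps=255, cutoff=0.9)

for fs in (44100, 96000, 192000):
    w, h = signal.freqz(taps, worN=2048, fs=fs)      # response on a Hz axis
    corner = w[np.argmin(np.abs(np.abs(h) - 0.5))]   # roughly the -6 dB point
    print(fs, "->", round(corner), "Hz")             # corner tracks fs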

Scott Dorsey
August 18th 11, 06:51 PM
Tobiah > wrote:
>One of the advantages of recording at 96000 Hz is that
>there is more room between the highest needed captured
>frequency and the Nyquist frequency, and so it does not
>need to have so steep a curve, which is of some benefit
>of a nature that escapes me at the moment.

Well, that was true in the 1980s, but today we use oversampling and we
don't have to worry about that junk.

>Do most devices actually change filters then, based on
>current sample rate, or do they just have one that works
>with 44100, and call that good for everything?

If they are properly made. Some (Panasonic) equipment traditionally
did not change the filters properly and so sounded dramatically different
at different sample rates. But today we use oversampling and we don't
worry about it.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

William Sommerwerck
August 20th 11, 10:07 PM
> One of the advantages of recording at 96000 Hz is that
> there is more room between the highest needed captured
> frequency and the Nyquist frequency, and so it does not
> need to have so steep a curve, which is of some benefit
> of a nature that escapes me at the moment.

CDs are cut at 44.1kHz, so 88.2kHz and 176.4kHz are preferable to 96kHz and
192kHz, as no compensation for a non-integral change in sample rate is
needed (e.g., interpolation).

Of course, BD and more recent formats natively support 96kHz and 192kHz, so
recordings made for them would naturally use the higher rates.

Mike Rivers
August 21st 11, 12:09 PM
On 8/20/2011 5:07 PM, William Sommerwerck wrote:

> CDs are cut at 44.1kHz,

That much is correct.

> so 88.2kHz and 176.4kHz are preferable to 96kHz and
> 192kHz, as no compensation for a non-integral change in sample rate is
> needed (e.g., interpolation).

That's not correct, unless you want to do a half-assed job
of sample rate conversion. You always need to re-sample; you
can't just leave out every other sample and end up with the
same waveform minus any content above the Nyquist frequency
limit.

Don't press me for a citation, I just know these things
because I'm smart. ;)
I'm surprised that you don't know this, because you're
pretty smart yourself.

--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson

http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff

William Sommerwerck
August 21st 11, 12:49 PM
"Mike Rivers" > wrote in message
...
> On 8/20/2011 5:07 PM, William Sommerwerck wrote:

>> CDs are cut at 44.1kHz,

> That much is correct.

> > so 88.2kHz and 176.4kHz are preferable to 96kHz and
> > 192kHz, as no compensation for a non-integral change in sample rate is
> > needed (e.g., interpolation).

> That's not correct, unless you want to do a half-assed job
> of sample rate conversion. You always need to re-sample; you
> can't just leave out every other sample and end up with the
> same waveform minus any content above the Nyquist frequency
> limit.

> Don't press me for a citation, I just know these things
> because I'm smart. ;)
> I'm surprised that you don't know this, because you're
> pretty smart yourself.

If one is down-converting to a rate that's an integral fraction (yes, that
sounds dumb), there's no need to create "in-between" samples. But when you
go from (say) 96kHz to 44.1kHz, you have to generate and interpolate
appropriate samples.

Regardless of the sample rate, the original digital data have to be low-pass
filtered at the Nyquist frequency of the lower rate before resampling.
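
A hedged sketch of that in Python, using SciPy's polyphase resampler (the
1 kHz test tone is made up; resample_poly applies the low-pass at the Nyquist
of the lower rate internally as part of the rate change):

import numpy as np
from scipy import signal

fs_in, fs_out = 96000, 44100
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)     # one second of a 1 kHz tone at 96 kHz

# 44100/96000 reduces to 147/320; the anti-alias filter is built in.
y = signal.resample_poly(x, up=147, down=320)
print(len(x), "->", len(y))          # 96000 -> 44100 samples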

Sean Conolly
August 21st 11, 07:41 PM
"William Sommerwerck" > wrote in message
...
> "Mike Rivers" > wrote in message
> ...
>> On 8/20/2011 5:07 PM, William Sommerwerck wrote:
>
>>> CDs are cut at 44.1kHz,
>
>> That much is correct.
>
>> > so 88.2kHz and 176.4kHz are preferable to 96kHz and
>> > 192kHz, as no compensation for a non-integral change in sample rate is
>> > needed (e.g., interpolation).
>
>> That's not correct, unless you want to do a half-assed job
>> of sample rate conversion. You always need to re-sample; you
>> can't just leave out every other sample and end up with the
>> same waveform minus any content above the Nyquist frequency
>> limit.
>
>> Don't press me for a citation, I just know these things
>> because I'm smart. ;)
>> I'm surprised that you don't know this, because you're
>> pretty smart yourself.
>
> If one is down-converting to a rate that's an integral fraction (yes, that
> sounds dumb), there's no need to create "in-between" samples. But when you
> go from (say) 96kHz to 44.1kHz, you have to generate and interpolate
> appropriate samples.

Or more precisely, the minimum interpolation rate is the same as the
starting rate when it's a 2:1 downsample. There's no need to interpolate up
by a factor of 2 so you can decimate by a factor of 4.

> Regardless of the sample rate, the original digital data have to be low-pass
> filtered at the Nyquist frequency of the lower rate before resampling.

For quick and dirty work (like telephony codecs) simply averaging the
samples removes some of the artifacts for a 2:1 resample, which is usually
good enough for those applications. But agreed, you always need a digital
low pass filter to remove all of the artifacts.

Sean
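
A small Python sketch of both of Sean's points, on made-up 88.2 kHz material
(the tap count is arbitrary): the proper 2:1 downsample filters at the new
Nyquist and then keeps every other sample, while the telephony shortcut just
averages adjacent pairs, which is itself a crude 2-tap low-pass.

import numpy as np
from scipy import signal

x = np.random.randn(88200)           # stand-in for one second at 88.2 kHz

# Proper: FIR at 22.05 kHz (half the old Nyquist), then decimate by 2.
taps = signal.firwin(127, 0.5)       # 0.5 of Nyquist = 22.05 kHz here
proper = signal.lfilter(taps, 1.0, x)[::2]

# Quick and dirty: average each pair of samples (a 2-tap boxcar filter).
cheap = (x[0::2] + x[1::2]) / 2.0

print(len(proper), len(cheap))       # 44100 and 44100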

William Sommerwerck
August 21st 11, 08:10 PM
>> If one is down-converting to a rate that's an integral fraction (yes, that
>> sounds dumb), there's no need to create "in-between" samples. But
>> when you go from (say) 96kHz to 44.1kHz, you have to generate and
>> interpolate appropriate samples.

> Or more precisely, the minimum interpolation rate is the same as the
> starting rate when it's a 2:1 downsample. There's no need to interpolate up
> by a factor of 2 so you can decimate by a factor of 4.

I'm not sure the point is coming across.

Data at 96k samples/second cannot be /directly/ downconverted to 44.1k,
because the higher rate is not an integral multiple of the lower. The
samples only line up every (LCM of 96 and 44.1) samples. You have to create
"in-between" samples. Their values will vary, depending on whether you use
linear interpolation or something else.
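
Working that arithmetic through in Python (standard library only):

from math import gcd

fs_in, fs_out = 96000, 44100
g = gcd(fs_in, fs_out)               # 300
up = fs_out // g                     # 147: interpolation factor
down = fs_in // g                    # 320: decimation factor
lcm = fs_in * fs_out // g            # 14112000, i.e. 14.112 MHz

# 96000 * 147 / 320 == 44100, so a rational resampler interpolates up
# by 147 and decimates by 320; the two sample grids only coincide at
# the LCM rate.
print(up, down, lcm)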

Paul[_13_]
August 22nd 11, 03:14 AM
On 8/21/2011 12:10 PM, William Sommerwerck wrote:
>>> If one is down-converting to a rate that's an integral fraction (yes, that
>>> sounds dumb), there's no need to create "in-between" samples. But
>>> when you go from (say) 96kHz to 44.1kHz, you have to generate and
>>> interpolate appropriate samples.
>
>> Or more precisely, the minimum interpolation rate is the same as the
>> starting rate when it's a 2:1 downsample. There's no need to interpolate up
>> by a factor of 2 so you can decimate by a factor of 4.
>
> I'm not sure the point is coming across.
>
> Data at 96k samples/second cannot be /directly/ downconverted to 44.1k,
> because the higher rate is not an integral multiple of the lower. The
> samples only line up every (LCM of 96 and 44.1) samples. You have to create
> "in-between" samples. Their values will vary, depending on whether you use
> linear interpolation or something else.
>
>

From Wiki:

http://en.wikipedia.org/wiki/Sample_rate_conversion

It appears the main advantage of having one sample
rate be an integer multiple of the other is that the
least common multiple is simply the larger of the two numbers
(using method "a").

For 96 and 44.1, although 96/44.1 is still a rational
number (960/441, or 320/147 in lowest terms), the least common
multiple frequency is some huge number. So in this case, it would
appear the linear interpolation of method "b" would be preferred.

But since in method "a", a digital FIR filter must be applied at
the Nyquist of the lower frequency, I don't know if there is
a significant difference in computation time between the two.
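
For reference, a minimal sketch of method "b" in Python (illustrative tone;
as noted above, a careful downsampler would still low-pass at 22.05 kHz
first):

import numpy as np

fs_in, fs_out = 96000, 44100
t_in = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t_in)  # one second of a 1 kHz tone

# Evaluate the output instants on the input time axis and interpolate
# linearly between the two nearest input samples.
t_out = np.arange(fs_out) / fs_out
y = np.interp(t_out, t_in, x)
print(len(y))                        # 44100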

Paul[_13_]
August 22nd 11, 09:23 AM
On 8/21/2011 7:14 PM, Paul wrote:
> On 8/21/2011 12:10 PM, William Sommerwerck wrote:
>>>> If one is down-converting to a rate that's an integral fraction (yes, that
>>>> sounds dumb), there's no need to create "in-between" samples. But
>>>> when you go from (say) 96kHz to 44.1kHz, you have to generate and
>>>> interpolate appropriate samples.
>>
>>> Or more precisely, the minimum interpolation rate is the same as the
>>> starting rate when it's a 2:1 downsample. There's no need to interpolate up
>>> by a factor of 2 so you can decimate by a factor of 4.
>>
>> I'm not sure the point is coming across.
>>
>> Data at 96k samples/second cannot be /directly/ downconverted to 44.1k,
>> because the higher rate is not an integral multiple of the lower. The
>> samples only line up every (LCM of 96 and 44.1) samples. You have to create
>> "in-between" samples. Their values will vary, depending on whether you use
>> linear interpolation or something else.
>>
>>
>
> From Wiki:
>
> http://en.wikipedia.org/wiki/Sample_rate_conversion
>
> It appears the main advantage of having one sample
> rate be an integer multiple of the other is that the
> least common multiple is simply the larger of the two numbers
> (using method "a").
>
> For 96 and 44.1, although 96/44.1 is still a rational
> number (960/441, or 320/147 in lowest terms), the least common
> multiple frequency is some huge number.


The least common multiple frequency of 96k and 44.1k is
14.112MHz:

http://www.mathsisfun.com/least-common-multiple-tool.html

So you'd take the 96k signal, insert 146 zeros after each sample,
apply the FIR filter at the Nyquist of 44.1k (cut-off at 22.05kHz),
and then take only every 320th sample.

But with modern computing power, perhaps this wouldn't be
that much slower than the interpolation method "b".
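
That recipe written out literally in Python, as a sketch (a polyphase
implementation such as scipy.signal.resample_poly computes the same result
without ever materializing the 14.112 MHz stream, and a serious design would
use far more taps):

import numpy as np
from scipy import signal

up, down = 147, 320                  # 96k * 147 / 320 = 44.1k
x = np.random.randn(9600)            # a short stand-in 96 kHz excerpt

hi = np.zeros(len(x) * up)           # zero-stuff: 146 zeros per sample
hi[::up] = x * up                    # scale by 147 to preserve gain

# FIR cutoff at 22.05 kHz relative to the 14.112 MHz rate:
# 22050 / 7056000 is exactly 1/320 of Nyquist.
taps = signal.firwin(2001, 1.0 / down)
y = signal.lfilter(taps, 1.0, hi)[::down]   # keep every 320th sample

print(len(x), "->", len(y))          # 9600 -> 4410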

hank alrich
August 22nd 11, 04:58 PM
William Sommerwerck > wrote:

> > One of the advantages of recording at 96000 Hz is that
> > there is more room between the highest needed captured
> > frequency and the Nyquist frequency, and so it does not
> > need to have so steep a curve, which is of some benefit
> > of a nature that escapes me at the moment.
>
> CDs are cut at 44.1kHz, so 88.2kHz and 176.4kHz are preferable to 96kHz and
> 192kHz, as no compensation for a non-integral change in sample rate is
> needed (e.g., interpolation).

> Of course, BD and more recent formats natively support 96kHz and 192kHz, so
> recordings made for them would naturally use the higher rates.

From a thread at PRW:

"88.2 divides into 44.1 with a nice, simple 2 but 96 does not, so SRC
for 96Khz must be more complex, right? However, that is not how SRC
works. All the upper rates are upsampled even further until a number
that can be commonly divided between it and the desired lower sample
rate is reached. THEN the math is done and the sample is converted. So
in the end, the math is simple(r) than it appears, even for apparently
non divisible rates."

--
shut up and play your guitar * http://hankalrich.com/
http://www.youtube.com/walkinaymusic
http://www.sonicbids.com/HankandShaidri

Scott Dorsey
August 22nd 11, 07:40 PM
In article >, Paul > wrote:
> So you'd take the 96k signal, insert 146 zeros after each sample,
>apply the FIR filter at the Nyquist of 44.1k (cut-off at 22.05kHz),
>and then take only every 320th sample.

This is the old-style "filter and decimate" algorithm. It requires little
CPU. I am using it on an 8051 taking low frequency data right now.

> But with modern computing power, perhaps this wouldn't be
>that much slower than the interpolation method "b".

Not to mention that we have plenty of dedicated interpolation hardware
now, so it doesn't even need to be done in software. The AD1890
has spawned many children.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Paul[_13_]
August 22nd 11, 08:28 PM
On 8/22/2011 11:40 AM, Scott Dorsey wrote:
> In >, > wrote:
>> So you'd take the 96k signal, insert 146 zeros after each sample,
>> apply the FIR filter at the Nyquist of 44.1k (cut-off at 22.05kHz),
>> and then take only every 320th sample.
>
> This is the old-style "filter and decimate" algorithm. It requires little
> CPU. I am using it on an 8051 taking low frequency data right now.
>
>> But with modern computing power, perhaps this wouldn't be
>> that much slower than the interpolation method "b".
>
> Not to mention that we have plenty of dedicated interpolation hardware
> now, so it doesn't even need to be done in software. The AD1890
> has spawned many children.
> --scott
>

Sample rate conversion is still done in software for DAWs,
but yeah, there are hardware things like this:

http://www.cirrus.com/en/pubs/proDatasheet/CS8420_F4.pdf

Interesting stuff....

Don Pearce[_3_]
August 22nd 11, 09:04 PM
On Mon, 22 Aug 2011 12:28:10 -0700, Paul > wrote:

>On 8/22/2011 11:40 AM, Scott Dorsey wrote:
>> In >, > wrote:
>>> So you'd take the 96k signal, insert 146 zeros after each sample,
>>> apply the FIR filter at the Nyquist of 44.1k (cut-off at 22.05kHz),
>>> and then take only every 320th sample.
>>
>> This is the old-style "filter and decimate" algorithm. It requires little
>> CPU. I am using it on an 8051 taking low frequency data right now.
>>
>>> But with modern computing power, perhaps this wouldn't be
>>> that much slower than the interpolation method "b".
>>
>> Not to mention that we have plenty of dedicated interpolation hardware
>> now, so it doesn't even need to be done in software. The AD1890
>> has spawned many children.
>> --scott
>>
>
> Sample rate conversion is still done in software for DAWs,
>but yeah, there are hardware things like this:
>
> http://www.cirrus.com/en/pubs/proDatasheet/CS8420_F4.pdf
>
> Interesting stuff....

Most hardware converters are in fact software - simply permanently
blown into a micro. It is actually a pretty blurred line between
hardware and software.

d

Paul[_13_]
August 22nd 11, 10:15 PM
On 8/22/2011 1:04 PM, Don Pearce wrote:
> On Mon, 22 Aug 2011 12:28:10 -0700, > wrote:
>
>> On 8/22/2011 11:40 AM, Scott Dorsey wrote:
>>> In >, > wrote:
>>>> So you'd take the 96k signal, insert 146 zeros after each sample,
>>>> apply the FIR filter at the Nyquist of 44.1k (cut-off at 22.05kHz),
>>>> and then take only every 320th sample.
>>>
>>> This is the old-style "filter and decimate" algorithm. It requires little
>>> CPU. I am using it on an 8051 taking low frequency data right now.
>>>
>>>> But with modern computing power, perhaps this wouldn't be
>>>> that much slower than the interpolation method "b".
>>>
>>> Not to mention that we have plenty of dedicated interpolation hardware
>>> now, so it doesn't even need to be done in software. The AD1890
>>> has spawned many children.
>>> --scott
>>>
>>
>> Sample rate conversion is still done in software for DAWs,
>> but yeah, there are hardware things like this:
>>
>> http://www.cirrus.com/en/pubs/proDatasheet/CS8420_F4.pdf
>>
>> Interesting stuff....
>
> Most hardware converters are in fact software - simply permanently
> blown into a micro. It is actually a pretty blurred line between
> hardware and software.
>

True...

Mike Rivers
August 22nd 11, 11:56 PM
On 8/22/2011 4:04 PM, Don Pearce wrote:

> Most hardware converters are in fact software - simply permanently
> blown into a micro. It is actually a pretty blurred line between
> hardware and software.

That's true, but it's a black box that you can't screw up
too badly and it never needs an updated driver because the
operating system or the I/O ports never change.

--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson

http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff

Arny Krueger[_4_]
August 23rd 11, 01:34 PM
"Don Pearce" > wrote in message
...
> On Mon, 22 Aug 2011 12:28:10 -0700, Paul > wrote:

> Most hardware (sample rate) converters are in fact software - simply
> permanently
> blown into a micro. It is actually a pretty blurred line between
> hardware and software.

The increasing power and programmability of CPUs at low prices and small
sizes is only going to make that more and more true. Not too many people
have thought seriously about using a general purpose CPU chip as a component
of a sub $100 audio widget until lately, but the
price/size/power/performance revolution fostered by ARM processors has
changed all of that.

Don Pearce[_3_]
August 23rd 11, 08:43 PM
On Tue, 23 Aug 2011 08:34:23 -0400, "Arny Krueger" >
wrote:

>"Don Pearce" > wrote in message
...
>> On Mon, 22 Aug 2011 12:28:10 -0700, Paul > wrote:
>
>> Most hardware (sample rate) converters are in fact software - simply
>> permanently
>> blown into a micro. It is actually a pretty blurred line between
>> hardware and software.
>
>The increasing power and programmability of CPUs at low prices and small
>sizes is only going to make that more and more true. Not too many people
>have thought seriously about using a general purpose CPU chip as a component
>of a sub $100 audio widget until lately, but the
>price/size/power/performance revolution fostered by ARM processors has
>changed all of that.
>

The "hard wired software" aspect is even greater now that some of
these micros are no longer programmed in native machine code, but in a
more or less high level language. This presupposes some built in
operating system, which may well be blown into the micro along with
the working code.

d

Don Pearce[_3_]
August 23rd 11, 08:45 PM
On Mon, 22 Aug 2011 18:56:40 -0400, Mike Rivers >
wrote:

>On 8/22/2011 4:04 PM, Don Pearce wrote:
>
>> Most hardware converters are in fact software - simply permanently
>> blown into a micro. It is actually a pretty blurred line between
>> hardware and software.
>
>That's true, but it's a black box that you can't screw up
>too badly and it never needs an updated driver because the
>operating system or the I/O ports never change.

Do you remember "permanently greased bearings"? They were identical to
ordinary bearings, but made cheaper by not having grease nipples. This
is exactly the same thing.

d

Mike Rivers
August 23rd 11, 10:31 PM
On 8/23/2011 3:45 PM, Don Pearce wrote:

> Do you remember "permanently greased bearings"? They were identical to
> ordinary bearings, but made cheaper by not having grease nipples. This
> is exactly the same thing.

I had to replace some of those in a car that I apparently
kept too long. When I commented about them being "lifetime
lubricated" the mechanic said "that was for the life of the
bearing, not the life of the car."



--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson

http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff

Sean Conolly
August 24th 11, 01:37 AM
"William Sommerwerck" > wrote in message
...
>>> If one is down-converting to a rate that's an integral fraction (yes, that
>>> sounds dumb), there's no need to create "in-between" samples. But
>>> when you go from (say) 96kHz to 44.1kHz, you have to generate and
>>> interpolate appropriate samples.
>
>> Or more precisely, the minimum interpolation rate is the same as the
>> starting rate when it's a 2:1 downsample. There's no need to interpolate up
>> by a factor of 2 so you can decimate by a factor of 4.
>
> I'm not sure the point is coming across.

Referring to your first point above, about going from 88.2 to 44.1.

Sean

Arny Krueger[_4_]
August 24th 11, 11:24 AM
"Mike Rivers" > wrote in message
...
> On 8/23/2011 3:45 PM, Don Pearce wrote:
>
>> Do you remember "permanently greased bearings"? They were identical to
>> ordinary bearings, but made cheaper by not having grease nipples. This
>> is exactly the same thing.
>
> I had to replace some of those in a car that I apparently kept too long.
> When I commented about them being "lifetime lubricated" the mechanic said
> "that was for the life of the bearing, not the life of the car."

When was the last time you had to replace a wheel bearing in a car, before
that?

I've logged well over 100,000 miles in at least a half dozen cars, close to
200,000 in several, and had to replace exactly one wheel bearing. The one
bearing I replaced was arguably abused when I encountered severe wheel hop
trying to climb an icy road maybe a decade before it failed.

If you are old enough and were hands-on with cars enough, you can remember
repacking all 4 wheel bearings. I can't remember the exact interval back in
the 60s, maybe 40,000 miles. At one time the interval was 1,000 miles, if
memory of studying really old car maintenance manuals serves.

Mike Rivers
August 24th 11, 12:19 PM
On 8/24/2011 6:24 AM, Arny Krueger wrote:

> When was the last time you had to replace a wheel bearing in a car, before
> that?

I don't even think of wheel bearings any more, but I
remember repacking them by hand. I did have a "permanent"
rear wheel bearing go out on a car that had less than
100,000 miles on it. It was covered under the warranty.

I was thinking about the suspension and steering links that
used to get greased every time you changed the oil. I
haven't lubricated one of those, and had only one that was
worn enough to worry about (but I got rid of the car before
fixing it) in the last 20 years. I think the last car I had
that had grease fittings was my 1972 240Z. It came with
plugs, and at 20,000 miles or so, you were supposed to take
them out, put in grease fittings, and lubricate it. Then you
were supposed to remove the fittings and put the plugs back
in, but I just left the fittings in so it could be maintained.

Kind of like installing a socket when replacing an IC.


--
"Today's production equipment is IT based and cannot be
operated without a passing knowledge of computing, although
it seems that it can be operated without a passing knowledge
of audio." - John Watkinson

http://mikeriversaudio.wordpress.com - useful and
interesting audio stuff