#1
Posted to rec.audio.tech
I've measured IM distortion on various devices using a digital waveform
consisting of 19 & 20 kHz at 0 dB. Even on high quality devices I always get
pretty high numbers - the highest IM artifacts in a spectrum analysis usually
peak at -40 to -60 dB below the original two waves. As an experiment, I
changed the waveform so the 19 & 20 kHz are at -1 dB instead of 0 dB. On the
same devices, the measured IM distortion dropped to -80 to -100 dB and lower.

What's happening here? Is this expected? If so, why? Or is it some kind of
flaw in my testing? If so, what?
#2
Posted to rec.audio.tech
On Tue, 23 Oct 2007 10:38:35 -0700, MRC01 wrote:

> I've measured IM distortion on various devices using a digital waveform
> consisting of 19 & 20 kHz at 0 dB. Even on high quality devices I always
> get pretty high numbers - the highest IM artifacts in a spectrum analysis
> usually peak at -40 to -60 dB below the original two waves. As an
> experiment, I changed the waveform so the 19 & 20 kHz are at -1 dB instead
> of 0 dB. On the same devices, the measured IM distortion dropped to -80 to
> -100 dB and lower. What's happening here? Is this expected? If so why? Or
> is it some kind of flaw in my testing? If so, what?

Your DAC just ran out of performance. You don't say what you are actually
testing, but on the assumption that the DAC is simply being used as the
source of the test signals, and you aren't testing the DAC itself, keep it
at -1 dB, or even a bit lower if it gets better still.

d

--
Pearce Consulting
http://www.pearce.uk.com
#3
Posted to rec.audio.tech
On Oct 23, 1:38 pm, MRC01 wrote:

> I've measured IM distortion on various devices using a digital waveform
> consisting of 19 & 20 kHz at 0 dB. Even on high quality devices I always
> get pretty high numbers - the highest IM artifacts in a spectrum analysis
> usually peak at -40 to -60 dB below the original two waves. As an
> experiment, I changed the waveform so the 19 & 20 kHz are at -1 dB instead
> of 0 dB. On the same devices, the measured IM distortion dropped to -80 to
> -100 dB and lower.

"0 dB" and "-1 dB" relative to what?

> What's happening here? Is this expected? If so why? Or is it some kind of
> flaw in my testing? If so, what?

Without seeing more data, I'd bet you're right near the upper limit of the
dynamic range of your measurement system, and the change from "-1" to "0" dB
is just enough to push you into limiting the test system. If, for example,
by "0 dB" you mean the highest level waveform the system can generate before
clipping, then consider what happens when you add a 0 dB 19 kHz and a 0 dB
20 kHz waveform: the result is a combined waveform whose average level is
+3 dB, and whose peak level can be +6 dB: you could be clipping.
#4
Posted to rec.audio.tech
"MRC01" wrote in message oups.com...

> I've measured IM distortion on various devices using a digital waveform
> consisting of 19 & 20 kHz at 0 dB. Even on high quality devices I always
> get pretty high numbers - the highest IM artifacts in a spectrum analysis
> usually peak at -40 to -60 dB below the original two waves. As an
> experiment, I changed the waveform so the 19 & 20 kHz are at -1 dB instead
> of 0 dB. On the same devices, the measured IM distortion dropped to -80 to
> -100 dB and lower. What's happening here?

It is not unusual for digital devices to be far less linear over the last dB
before FS.

> Is this expected?

Yes.

> If so why?

We live in an imperfect world.

> Or is it some kind of flaw in my testing? If so, what?

Testing digital equipment at FS - 0 dB is sort of like testing amplifiers
right at clipping. Most manufacturers spec amplifiers so that rated output
is 0.5-2 dB below actual clipping. Some standards recommend testing digital
equipment at FS -3 dB.
#5
Posted to rec.audio.tech
On Oct 23, 11:03 am, wrote:

> "0 dB" and "-1 dB" relative to what?

0 dB meaning the sample values at the peaks and troughs of the waveform
reach 32767 and -32767 respectively.

I suspect this is a limitation of the DAC, not of the analog outputs. I've
tried with multiple devices all having different DACs, and all exhibit this
behavior to varying degrees, so it seems to be a limitation common to DACs
in general.

I suspect a more useful and realistic test waveform for measuring whatever
IM distortion might be audible during actual listening of real music would
be to use two tones in the more sensitive area of human hearing - say 1 kHz
and 2 kHz encoded at digital -6 dB.
#6
Posted to rec.audio.tech
On Oct 23, 2:43 pm, MRC01 wrote:

>> "0 dB" and "-1 dB" relative to what?
> 0 dB meaning the sample values at the peaks and troughs of the waveform
> reach 32767 and -32767 respectively.

That means, then, that it's quite possible on a regular basis, when the
peaks of each coincide, for the values to WANT to be 65534 and -65534:
clearly this WILL be a problem for a 16-bit system.

> Because of this, I suspect this is a limitation of the DAC, not of the
> analog outputs. I've tried with multiple devices all having different
> DACs, and all exhibit this behavior to varying degrees. Because of this,
> it seems to be a limitation common to DACs in general.

No, it's not even a DAC problem. You're trying to represent a number in the
generation itself which is outside the realm of the number system, assuming
you're using 16-bit signed integers. If I try, for example, in a language
like C, to do the following:

    short int a, b, c;
    a = 32767;
    b = 32767;
    c = a + b;

the addition overflows: c cannot hold 65534, and on most platforms the
stored value silently wraps around to a large negative number. Do it in
floats:

    float a, b, c;
    a = (float) 32767;
    b = (float) 32767;
    c = a + b;

and you get no error, but now try to stuff that into the 16-bit input
register of a DAC, and something is going to break. If you're trying to do
this on your typical soundcard, my only surprise is that the results are
not worse than you're reporting.

> I suspect a more useful and realistic test waveform for measuring whatever
> IM distortion might be audible during actual listening of real music would
> be to use two tones in the more sensitive area of human hearing - say
> 1 kHz and 2 kHz encoded at digital -6 dB.

No, it will be pretty useless, for a couple of reasons, but the most
important one is that if there is intermodulation, the result will be sum
and difference frequencies, so you'll get:

    1 kHz + 2 kHz = 3 kHz   and   2 kHz - 1 kHz = 1 kHz

The 3 kHz sum, unless it's fairly high in level, is not going to be audible
because of the masking of the 2 kHz, and the 1 kHz difference? How are you
going to distinguish that 1 kHz from the input 1 kHz?

The reasons tones like 19 kHz and 20 kHz are used are:

1. The difference frequency is way down where things are audible, at 1 kHz.
2. It's not likely that any products are going to be masked by the input
   frequencies, because they are so widely separated.
3. None of the sum and difference products will be mistaken for the input
   tones.
4. If you're going to listen, the ear isn't very sensitive to the input
   frequencies to begin with.
5. Many devices are more NONlinear at higher frequencies and are thus
   easier to send into nonlinearity.

But, generating them digitally, you run the risk of generating aliasing
products unless both your generation algorithm and your DAC are properly
implemented. Not only will 19 kHz and 20 kHz generate a difference of
1 kHz, they will also generate a sum of 39 kHz, and that, if not done
properly, can alias down to 5.1 kHz (assuming a 44.1 kHz sample rate) or
9 kHz (assuming a 48 kHz sample rate).

An example where your waveform-generating algorithm, implemented in an
obvious fashion, can go awry like this: many people attempt to generate a
digital square wave as simply a series of alternating values, e.g., 22
values at, oh, 20000 followed by 22 values at -20000, and so on. This will
NOT sound very much like a square wave, because the algorithm is ignorant
of the fact that such a representation has harmonics that extend far above
the Nyquist frequency (to infinity, in fact), and all those harmonics WILL
get aliased back into the baseband. Better instead to simply compute the
first 11 terms of the Fourier series and sum them together: such a waveform
is inherently band-limited.
#7
Posted to rec.audio.tech
On Oct 23, 12:43 pm, wrote:

> No, it's not even a DAC problem. You're trying to represent a number in
> the generation itself which is outside the realm of the number system,
> assuming you're using 16-bit signed integers...

This assumes the waveform is incorrectly encoded. That may be true, but it
isn't necessarily true. What if it is correctly encoded, for example the
19 kHz and 20 kHz components are reduced in level by 6 dB so their sum
peaks at zero with no overload? When I look at this raw waveform in Adobe
Audition it is perfectly smooth and symmetric with no evident clipping.

> No, it will be pretty useless, for a couple of reasons, but the most
> important one is that if there is intermodulation, the result will be sum
> and difference frequencies...

OK, that makes sense. Actually I knew that IM distortion was based on
difference tones but didn't think it through. Why not shift the test tones
up to 21 kHz and 22 kHz? This would make them inaudible for most people,
yet still below Nyquist. That makes a useful test: play it back and
whatever you hear is IM distortion. Just don't fry your tweeters!

> An example where your waveform-generating algorithm, implemented in an
> obvious fashion, can go awry like this: many people attempt to generate a
> digital square wave as simply a series of alternating values, e.g., 22
> values at, oh, 20000 followed by 22 values at -20000, and so on. This
> will NOT sound very much like a square wave, because the algorithm is
> ignorant of the fact that such a representation has harmonics that extend
> far above the Nyquist frequency (to infinity, in fact), and all those
> harmonics WILL get aliased back into the baseband. Better instead to
> simply compute the first 11 terms of the Fourier series and sum them
> together: such a waveform is inherently band-limited.

It sounds like you're saying to low-pass filter the raw data of the
waveform instead of relying on the playback digital filter to do the same.
If so, it seems the results would depend on the playback digital filter,
which could make it a good test of the digital filter. In other words, you
can pre-filter the raw data to achieve a reasonable result on most systems
- extraneous frequencies already removed so you don't need the playback
filter to do much of anything - or you can encode it with no filtering to
test how well the playback filter does.
#8
Posted to rec.audio.tech
On Oct 23, 4:18 pm, MRC01 wrote:

>> No, it's not even a DAC problem. You're trying to represent a number in
>> the generation itself which is outside the realm of the number system,
>> assuming you're using 16-bit signed integers...
> This assumes the waveform is incorrectly encoded.

No, it makes no such assumption. Again, consider the code:

    long int t;
    float a, b, c, ampl = 32767.0;
    short int r;

    // big loop, where t is incremented and scaled
    // accordingly, for each sample period
    for (t = 0; t < someBigNumber; t += 1) {
        a = ampl * sin(19000 * t);
        b = ampl * sin(20000 * t);
        c = a + b;

I think you would agree that, so far, c is being computed correctly. The
next step:

        r = (short int) c;

breaks, because r will overflow if c > 32767, for example, which it WILL be
when the two peaks coincide.

> What if it is correctly encoded, for example the 19 kHz and 20 kHz
> components are reduced in level by 6 dB so their sum peaks at zero with
> no overload? When I look at this raw waveform in Adobe Audition it is
> perfectly smooth and symmetric with no evident clipping.

Because what you see in Adobe Audition is a VISUAL representation of the
waveform, and there is no assurance whatsoever it represents what the
waveform itself is actually doing. But, just from the standpoint of the
basic properties of the algorithm, reducing both by 6 dB will ensure proper
encoding. Not reducing the levels invites breakage. It has nothing to do
with what the waveforms LOOK like, it has everything to do with what they
ARE.

>> No, it will be pretty useless, for a couple of reasons, but the most
>> important one is that if there is intermodulation, the result will be
>> sum and difference frequencies...
> OK, that makes sense. Actually I knew that IM distortion was based on
> difference tones but didn't think it through. Why not shift the test
> tones up to 21 kHz and 22 kHz? This would make them inaudible for most
> people, yet still below Nyquist.

Because both will probably be at or above the actual cutoff frequency of
the brickwall filter, and at 21 kHz, you're sailing awfully close to the
Nyquist wind anyway. What I would do, if you were concerned about the
audibility of the input tones, is run your 19 kHz and 20 kHz through
whatever you're testing, take the output, put a 10 kHz low-pass filter in
place, and get rid of the original stuff, leaving only the difference
tones.

> It sounds like you're saying to low-pass filter the raw data of the
> waveform instead of relying on the playback digital filter to do the
> same.

No, not only that: it's also about any algorithms that follow which might
be sensitive to aliasing products. The playback reconstruction filter is
only the very last thing you have to worry about. A very important
principle of sampling, and one that many people have a hard time grasping,
is that a time-sampled stream contains all the aliases from infinity
crammed into the baseband, and contains all images of the baseband out to
infinity. ANY process must deal with that, whether it's digital or analog.

That's one reason why, for example, a sample rate conversion algorithm
ALWAYS starts FIRST with an anti-imaging filter (even though you might not
think it necessary), then does the conversion, and then has an
anti-aliasing filter, even though everything is done in the digital domain.
#9
Posted to rec.audio.tech
On Oct 23, 4:18 pm, MRC01 wrote:
>> This assumes the waveform is incorrectly encoded.

On Oct 23, 12:43 pm, wrote:
> No, it makes no such assumption. Again, consider the code:

I understand your code. I call it "incorrect" because it overflows. What
I'm saying is that this code is not necessarily the way the waveform was
created. If you take that same code and normalize the final value to a
range of 32767 to -32767, while it is still in a wider type, BEFORE
converting it to a short int, then it doesn't overflow. But that's the same
thing I was suggesting - cutting it in half or scaling it back -6 dB.

Since I only have to attenuate it by 1 dB for the distortion to drop to
levels around -100 dB, it seems that the waveform is properly constructed
without overflow. In other words, if the overflow was there as you are
suggesting, I would have to attenuate it a lot more - at least 6 dB -
before the distortion would be eliminated.

> Because what you see in Adobe Audition is a VISUAL representation of the
> waveform, and there is no assurance whatsoever it represents what the
> waveform itself is actually doing.

I can also look at the actual sample values in Adobe Audition. Though it's
true the curve it draws through the samples is likely not filtered the same
way the DAC works.

> No, not only that: it's also about any algorithms that follow which might
> be sensitive to aliasing products. The playback reconstruction filter is
> only the very last thing you have to worry about. A very important
> principle of sampling, and one that many people have a hard time
> grasping, is that a time-sampled stream contains all the aliases from
> infinity crammed into the baseband, and contains all images of the
> baseband out to infinity. ANY process must deal with that, whether it's
> digital or analog. That's one reason why, for example, a sample rate
> conversion algorithm ALWAYS starts FIRST with an anti-imaging filter
> (even though you might not think it necessary), then does the conversion,
> and then has an anti-aliasing filter, even though everything is done in
> the digital domain.

It depends on the goal. If the goal is to reliably produce the closest
thing to a square wave that 44.1 kHz samples allow, then what you describe
makes sense. But if the goal is to test how well the DAC interprets a
difficult waveform then one *should* use the simpler mathematically pure
square wave that you described. Theoretically, an ideal DAC should output a
proper looking square wave when fed that signal. It should be able to
filter out everything above 22.05 kHz with minimal passband distortion.
Nothing in the real world is perfect; one should expect to see some
distortion, but the goal is to compare the distortion generated by
different DACs.
#10
Posted to rec.audio.tech
On Oct 23, 11:48 pm, MRC01 wrote:

> It depends on the goal. If the goal is to reliably produce the closest
> thing to a square wave that 44.1 kHz samples allow, then what you
> describe makes sense. But if the goal is to test how well the DAC
> interprets a difficult waveform then one *should* use the simpler
> mathematically pure square wave that you described. Theoretically, an
> ideal DAC should output a proper looking square wave when fed that
> signal. It should be able to filter out everything above 22.05 kHz with
> minimal passband distortion.

No, that's what you're not getting: the waveform I described is NOT
"mathematically pure." It's already broken BEFORE it gets to the DAC:
because it is discrete-time sampled, and because it was NOT band-limited
before it was generated, it already contains all of the aliases folded down
into the baseband.

> Nothing in the real world is perfect; one should expect to see some
> distortion, but the goal is to compare the distortion generated by
> different DACs.

Then the step-generated square wave I described is NOT the way to do it,
because it is intrinsically distorted before it hits the DAC.

Let's try a different approach: whatever code you are using to generate the
waveform can be viewed as THE analog-to-digital conversion process for that
waveform. The code IS sampling a waveform. And to prevent ANY aliases from
finding their way into the sampled stream, the waveform MUST be low-pass
filtered to less than 1/2 the sample rate BEFORE sampling.

Therefore, a sample sequence of the type I described first, where you have
some number of samples at some positive level, followed by the same number
of samples at the same negative value, IS ALREADY BROKEN in that it was not
properly anti-alias filtered before sampling. A 10 kHz square wave
generated this way will have, in its "pure mathematical" form, harmonics at
30 kHz, 50 kHz, 70 kHz and so on. In the sampled stream at 44.1 kHz, those
harmonics will already be folded back to 14.1 kHz, 5.9 kHz, 18.2 kHz and so
on. Those aliases ARE ALREADY IN THE SAMPLED STREAM. A PERFECT DAC could
NEVER filter them out: they're within the passband of a perfect
anti-imaging filter.

Now, what would that same 10 kHz square wave really look like in the
sampled stream if captured by a perfect ADC? Well, since 30 kHz and
everything else above the Nyquist frequency gets filtered, the sampled
stream would consist ONLY of the sampling of a 10 kHz SINE wave.

Let's repeat: a step-generated sampled square wave is a BAD test for a DAC,
because that sampled stream is already heavily distorted with the aliases,
because the Nyquist criterion was violated by the very sampling process
used to generate it. If a DAC playing such a waveform sounds distorted, the
DAC is doing its job correctly, because the sampled waveform is distorted.

It's far from intuitive, to be sure. But it's another case where intuition
about things is simply wrong.
#11
Posted to rec.audio.tech
On Oct 24, 5:41 am, wrote:

> Let's repeat: a step-generated sampled square wave is a BAD test for a
> DAC, because that sampled stream is already heavily distorted with the
> aliases, because the Nyquist criterion was violated by the very sampling
> process used to generate it. If a DAC playing such a waveform sounds
> distorted, the DAC is doing its job correctly, because the sampled
> waveform is distorted.

Are you saying that certain combinations of samples are invalid because
there does not exist a unique curve that passes through them which is
properly bandwidth limited? If so that is an interesting proposition. A
"correct" set of sampling points is a set in which there exists a single,
unique curve that passes through all the points and is bandwidth limited
below Nyquist.

Thus I can see two possible classes of invalidity. One, where there doesn't
exist *any* curve (properly bandwidth limited) that passes through the
points. Two, where multiple different curves may pass through the same
points. In either case, the output of the DAC is undefined.
#12
Posted to rec.audio.tech
In article .com, wrote:

> Let's try a different approach: whatever code you are using to generate
> the waveform can be viewed as THE analog-to-digital conversion process
> for that waveform. The code IS sampling a waveform. And to prevent ANY
> aliases from finding their way into the sampled stream, the waveform MUST
> be low-pass filtered to less than 1/2 the sample rate BEFORE sampling.
>
> Therefore, a sample sequence of the type I described first, where you
> have some number of samples at some positive level, followed by the same
> number of samples at the same negative value, IS ALREADY BROKEN in that
> it was not properly anti-alias filtered before sampling. A 10 kHz square
> wave generated this way will have, in its "pure mathematical" form,
> harmonics at 30 kHz, 50 kHz, 70 kHz and so on. In the sampled stream,
> those harmonics will already be folded back to 14.1 kHz, 5.9 kHz,
> 18.2 kHz and so on. Those aliases ARE ALREADY IN THE SAMPLED STREAM. A
> PERFECT DAC could NEVER filter them out: they're within the passband of a
> perfect anti-imaging filter.

Yup. Another way of looking at it, is that the "mathematically pure" sample
sequence is *the* correct and legitimate sampled representation of a valid
(properly-bandlimited) analog waveform - a waveform which could have been
presented to an ADC for conversion to digital form. And, that waveform is
*not* a bandwidth-limited square wave.

If you feed a good ADC a bandlimited analog waveform with the frequency
components that Dick indicates (in the correct amplitudes and phase
relationships, of course), the sample sequence which comes out of the ADC
will be the "mathematically pure" pattern of values originally discussed.
If you feed these samples into a properly-functioning DAC, it will
re-create this bandlimited analog waveform... *not* an attempted recreation
of a sharp-edged square wave with frequencies lying above half of the
sample rate.

I'm sure that it's possible to deliberately tweak the design of a DAC's
reconstruction filter in order to make the output "look better" in this
case and cases like this... i.e. to create an analog waveform which looks
more like a square wave. However, doing so *reduces* the DAC's actual
accuracy, and will introduce linear or nonlinear distortion into every
signal which goes through the DAC... it makes the DAC worse rather than
better. If I recall correctly, there have been a couple of DAC designs
which did this (Wadia, and the Pioneer Legato Link design come to mind, but
my memory could well be wrong about this).

> It's far from intuitive, to be sure. But it's another case where
> intuition about things is simply wrong.

Perhaps not quite as counter-intuitive as quantum physics, but it does
share the same violation-of-apparent-reasonableness problem :-)

--
Dave Platt AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott
any company which has the gall to send me such ads!
#13
Posted to rec.audio.tech
In article . com, MRC01 wrote:

>> Let's repeat: a step-generated sampled square wave is a BAD test for a
>> DAC, because that sampled stream is already heavily distorted with the
>> aliases, because the Nyquist criterion was violated by the very sampling
>> process used to generate it. If a DAC playing such a waveform sounds
>> distorted, the DAC is doing its job correctly, because the sampled
>> waveform is distorted.
>
> Are you saying that certain combinations of samples are invalid because
> there does not exist a unique curve that passes through them which is
> properly bandwidth limited?

Every combination of samples is valid, and corresponds to a unique (within
the system's quantization limits) bandwidth-limited analog waveform.

The "mathematically pure" sample pattern at issue (N samples at one value,
followed by another N samples at a different value, lather/rinse/repeat)
doesn't correspond to a bandwidth-limited square wave of period 2N samples,
though. It corresponds to a bandwidth-limited signal with the
characteristics that Dick describes... one which has a sinewave fundamental
of period 2N, plus a whole bunch of non-harmonically-related components
that correspond to folded-back aliases of the harmonics of the fundamental.

> Thus I can see two possible classes of invalidity. One, where there
> doesn't exist *any* curve (properly bandwidth limited) that passes
> through the points. Two, where multiple different curves may pass through
> the same points. In either case, the output of the DAC is undefined.

It's not a problem of the sample sequence being invalid. It isn't. It's
perfectly valid. It just doesn't happen to correspond with the sort of
analog signal that intuition suggests that it does.

--
Dave Platt AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott
any company which has the gall to send me such ads!
#14
Posted to rec.audio.tech
"MRC01" wrote in message ups.com...

> Are you saying that certain combinations of samples are invalid because
> there does not exist a unique curve that passes through them which is
> properly bandwidth limited? If so that is an interesting proposition. A
> "correct" set of sampling points is a set in which there exists a single,
> unique curve that passes through all the points and is bandwidth limited
> below Nyquist.
>
> Thus I can see two possible classes of invalidity. One, where there
> doesn't exist *any* curve (properly bandwidth limited) that passes
> through the points. Two, where multiple different curves may pass through
> the same points. In either case, the output of the DAC is undefined.

There's a third interesting case that could be relevant to the OP. There
can sometimes be a unique curve that passes through the samples and has a
maximum amplitude that significantly exceeds FS. Such cases arise from time
to time in the real world, and can exceed 120% of FS.
#15
Posted to rec.audio.tech
MRC01 wrote:

>> "0 dB" and "-1 dB" relative to what?
> 0 dB meaning the sample values at the peaks and troughs of the waveform
> reach 32767 and -32767 respectively.

Which is FSD. So if you add 2 such waveforms you'll clip the DAC, it would
seem. Are BOTH frequencies at '0 dB' level?

Graham
#16
Posted to rec.audio.tech
On Oct 24, 11:38 am, (Dave Platt) wrote:

> Every combination of samples is valid, and corresponds to a unique
> (within the system's quantization limits) bandwidth-limited analog
> waveform.

That's what I thought... which is why I was confused by Pierce's
description.

> The "mathematically pure" sample pattern at issue (N samples at one
> value, followed by another N samples at a different value,
> lather/rinse/repeat) doesn't correspond to a bandwidth-limited square
> wave of period 2N samples, though. It corresponds to a bandwidth-limited
> signal with the characteristics that Dick describes... one which has a
> sinewave fundamental of period 2N, plus a whole bunch of
> non-harmonically-related components that correspond to folded-back
> aliases of the harmonics of the fundamental.

Here is where that is counterintuitive. A perfect square wave cannot exist
in nature - the first derivative of the waveform curve is discontinuous,
implying an infinite rate of change of air pressure, which is impossible.
But suppose you generate a very sharp "square" wave with frequencies up to,
say, 100 kHz. Then you pick it up with a super fancy microphone with
bandwidth to 100 kHz. That microphone produces an analog signal which is a
very sharp square wave. Now you sample it digitally at 44.1 kHz.

The sampling points are measuring the amplitude every 22.7 microseconds.
This square wave is so sharp, containing 100 kHz elements, it jumps full
scale in less time than that. And it has such high frequencies, the
overshoot and ringing is practically invisible to sampling points spaced so
far apart in time. So how could the sampling points produced by the ADC
*not* be this simplistic wave?
#17
Posted to rec.audio.tech
On Wed, 24 Oct 2007 13:52:28 -0700, MRC01 wrote:
On Oct 24, 11:38 am, (Dave Platt) wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform. That's what I thought... which is why I was confused by Pierce's description. The "mathematically pure" sample pattern at issue (N samples at one value, followed by another N samples at a different value, lather/rinse/repeat) doesn't correspond to a bandwidth-limited square wave of period 2N samples, though. It corresponds to a bandwidth-limited signal with the characteristics that Dick describes... one which has a sinewave fundamental of period 2N, plus a whole bunch of non-harmonically-related components that correspond to folded-back aliases of the harmonics of the fundamental. Here is where that is counterintuitive. A perfect square wave cannot exist in nature - the first derivative of the waveform curve is discontinuous, implying an infinite rate of change of air pressure, which is impossible. But suppose you generate a very sharp "square" wave with frequencies up to, say, 100 kHz. Then you pick it up with a super fancy microphone with bandwidth to 100 kHz. That microphone produces an analog signal which is a very sharp square wave. Now you sample it digitally at 44.1 kHz. The sampling points are measuring the amplitude every 22.7 microseconds. This square wave is so sharp, containing 100 kHz elements, it jumps full scale in less time than that. And it has such high frequencies, the overshoot and ringing is practically invisible to sampling points spaced so far apart in time. So how could the sampling points produced by the ADC *not* be this simplisitic wave? Because you have neglected that vital component - the anti-aliasing filter. That will turn your steep-sided 100kHz square wave into a sloping sided, rounded cornered wave which can be successfully sampled without producing alias signals. I've written a paper on aliasing that should explain things. 
http://www.pearce.uk.com/papers/index.htm

d
--
Pearce Consulting
http://www.pearce.uk.com
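Pearce's point can be sketched numerically: sample a hard-edged square wave with no anti-alias filter and in-band alias lines appear at frequencies that are not harmonics of the fundamental. The sketch below is mine, not from the thread (Python with NumPy; the 997 Hz fundamental and one-second record are choices made for illustration): the square's 23rd harmonic at 22931 Hz folds down to 44100 - 22931 = 21169 Hz.

```python
import numpy as np

fs = 44100
f0 = 997          # Hz; chosen so fs is not an integer multiple of f0
N = fs            # one second of samples, so FFT bins are 1 Hz apart
n = np.arange(N)

# "Naive" square wave: sign of a sine, sampled with NO anti-alias filter
x = np.sign(np.sin(2 * np.pi * f0 * n / fs))

X = np.abs(np.fft.rfft(x)) / N    # one-sided magnitude spectrum (~A/2 per tone)

# The fundamental is where it should be...
print(X[997] > 0.5)               # True (fundamental amplitude ~4/pi)
# ...but the 23rd harmonic (22931 Hz) has folded down to 21169 Hz,
# which is NOT a harmonic of 997 Hz (21169/997 is ~21.23)
print(X[21169] > 0.02)            # True
```

With the anti-alias filter in place, the components above 22.05 kHz would have been removed before sampling, and no such non-harmonic line would exist.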
#18
Posted to rec.audio.tech
In article .com,
MRC01 wrote: Here is where that is counterintuitive. A perfect square wave cannot exist in nature - the first derivative of the waveform curve is discontinuous, implying an infinite rate of change of air pressure, which is impossible. But suppose you generate a very sharp "square" wave with frequencies up to, say, 100 kHz. Then you pick it up with a super fancy microphone with bandwidth to 100 kHz. That microphone produces an analog signal which is a very sharp square wave. Now you sample it digitally at 44.1 kHz. The sampling points are measuring the amplitude every 22.7 microseconds. This square wave is so sharp, containing 100 kHz elements, it jumps full scale in less time than that. And it has such high frequencies, the overshoot and ringing is practically invisible to sampling points spaced so far apart in time. So how could the sampling points produced by the ADC *not* be this simplistic wave?

Well, there are two answers to your question, based on whether this process did, or did not, "follow the rules". By "follow the rules", I mean whether there was a step that you did not specifically mention: properly band-limiting the signal just prior to sampling.

If you didn't band-limit the signal - if you sampled it when it was still at its full bandwidth - then I'd say:

- The sample values you see *will* be the "simplistic" sequence, and
- The sample values are not a *meaningful* representation of the original square wave, within the constraints of a sampled system, because you've broken the cardinal rule - you're sampling a signal which contains frequency components outside of the system's bandwidth limit. "Garbage in, garbage out".

If you did bandwidth-limit the signal, then it will no longer have the extremely fast rise-time of the original square wave. The higher-order odd harmonics will be missing. With them absent, the "square wave" will no longer have a flat top... it will exhibit ripple, and thus the sampler won't produce the same uniform sequence of values.
You can think of it in another way. Yes, feeding a non-bandwidth-limited square wave to a sampler will produce a "mathematically pure square wave" in sampled form. However, there are a literally infinite number of *other* non-bandwidth-limited waveforms which will produce the exact same sequence of samples. How can the DAC possibly decide between them?

There's only one "legal" (properly bandwidth-limited) signal (within quantization limits) which will produce this particular pattern of samples... and that's the only one which a properly-designed DAC can reproduce.

--
Dave Platt AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott any company which has the gall to send me such ads!
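Platt's "no flat top" claim can be checked directly: build the band-limited version of a ~1 kHz square wave by summing only the odd harmonics that would survive an idealized brick-wall anti-alias filter at Nyquist, and the samples are no longer a two-level run - Gibbs ripple appears. This is an illustrative sketch (Python with NumPy); the brick-wall filter and the exact fundamental are assumptions for the demo.

```python
import numpy as np

fs = 44100
f0 = fs / 44            # ~1002 Hz, so one period spans exactly 44 samples
n = np.arange(44)
t = n / fs

# Band-limited square: only odd harmonics below fs/2 survive an
# (idealized) brick-wall anti-alias filter.
x = np.zeros_like(t)
k = 1
while k * f0 < fs / 2:
    x += np.sin(2 * np.pi * k * f0 * t) / k
    k += 2
x *= 4 / np.pi

# The samples are NOT 22 copies of +c followed by 22 copies of -c:
print(len(np.unique(np.round(x, 4))) > 2)   # True - many distinct levels
```

The ripple on the "flat" top is exactly why a properly filtered square wave cannot sample to the mathematically pure two-level pattern.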
#19
So the simplistic square wave sequence is a valid sequence of samples
that defines a unique properly bandwidth-filtered wave. But that wave isn't a square wave.

The interesting concept here is that I assumed that if you sample a waveform, the spacing / frequency of the samples would simply skip over anything that was moving too fast / too high frequency to be seen. But that is not the case.

For example, suppose the waveform contains frequencies above Nyquist, so your sampling points are far apart relative to changes in the waveform. So you are skipping over a lot of information simply based on the spacing. But each sampling point has to land *somewhere*, and it will frequently happen that it lands on a certain bump in the waveform that wouldn't exist except for frequencies above Nyquist. These frequencies have to be eliminated BEFORE sampling because once sampled they MUST be interpreted by the playback DAC as frequencies below Nyquist - which they aren't, so the wave constructed from them MUST be different from the one sampled.

Now *that* is an AHA experience. Thanks!
#20
On Oct 24, 6:14 pm, MRC01 wrote:
So the simplistic square wave sequence is a valid sequence of samples that defines a unique properly bandwidth filtered wave.

If by "simplistic square wave" you mean the alternating sequence of positive and negative values, then no, it most assuredly IS NOT a "unique, properly bandwidth filtered wave." It has, in essence, infinite bandwidth for the purpose of this discussion.

But that wave isn't a square wave.

Yes, it is, it just has a bandwidth that well exceeds the Nyquist criteria. And sampled in that fashion, it violates said criteria. In doing so, it is "broken" in that all the out-of-bandwidth products are aliased down into the baseband.

The interesting concept here is that I assumed that if you sample a waveform, the spacing / frequency of the samples would simply skip over anything that was moving too fast / too high frequency to be seen.

But they ARE "seen"; they are aliased down into the baseband where they can be readily "seen" (heard, measured) as spurious information.

Here's a completely apt analogy. Say you have a movie camera running at 24 frames per second. Your concept of "the spacing/frequency ... moving too fast to be seen" can be shown to break down when you look at a scene from an old Western where the bad guys are chasing the good guys riding in wagons with spoked wheels. They're racing along forward at a breakneck speed, yet there are the wheels, quite visibly turning BACKWARDS at a low speed. You can see exactly the same phenomenon in the rotor of a helicopter, where we know the blade is rotating fast, but a movie of it shows the blades almost stationary or turning very slowly. Why? Because due to the discrete time-sampling of the camera, and the fact that the blades or spokes are moving at a frequency much higher than the sampling frequency, the image we get is quite an incorrect picture of physical reality. Why? Because of the aliasing caused by things moving too fast in a discrete-time sampled stream. Let's see how this works.
Take our helicopter, which we will assume has two blades. Let's, for simplicity's sake, set our frame (sampling) rate at 25 fps. Now, let's assume the blade is rotating at, oh, 13.9 revolutions per second. I pick that number for two reasons:

1. It's a plausible value for the rotational speed of the blade,
2. It's above the Nyquist frequency of 12.5 Hz (half the frame rate of 25 Hz) and thus deliberately violates the Nyquist criteria for the purpose of the demonstration.

Now, what happens? Frame 1 captures the blade at 0 deg. Frame 2 captures it having rotated about 10% less than half a revolution, frame 3 about 20% less, and so on. String all those frames together and view them, and what do we see? We DON'T see the blade moving forward at 13.9 revolutions per second; we REALLY see it moving BACKWARDS at about 11 revolutions per second. The forward-rotating blade was ALIASED by the sampling process to appear rotating backwards.

EXACTLY the same principle applies to discrete time-sampled digital audio. Think of your "mathematically pure" square wave, the sequence of plus and minus samples, as a bunch of those rotating helicopter blades all connected together by gear trains so that 1 is rotating at, oh, 10 kHz, another at 30, another at 50, and at 70, 90, 110, 130, 150 kHz and so on. Sample that at 44.1 kHz, which is EXACTLY what you've done by simply alternating a sequence of plus and minus values, and what do you get? All those "blades" get aliased down:

10 kHz - 10 kHz
30 kHz - 14.1 kHz
50 kHz - 5.9 kHz
70 kHz - 18.2 kHz
90 kHz - 1.8 kHz
110 kHz - 21.8 kHz
130 kHz - 2.3 kHz
150 kHz - 17.7 kHz

And so on. The problem is that those aliases ARE ALREADY BUILT IN TO THE SAMPLED DATA STREAM as an intrinsic result of sampling.

For example, suppose the waveform contains frequencies above Nyquist, so your sampling points are far apart relative to changes in the waveform. So you are skipping over a lot of information simply based on the spacing.
But each sampling point has to land *somewhere*, and it will frequently happen that it lands on a certain bump in the waveform that wouldn't exist except for frequencies above Nyquist. These frequencies have to be eliminated BEFORE sampling because once sampled they MUST be interpreted by the playback DAC as frequencies below Nyquist - which they aren't, so that means the wave constructed from them MUST be different from the one sampled.

Yup, you got it!

Now *that* is an AHA experience. Thanks!

Indeed.
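The folding arithmetic in the table above reduces to a two-line rule: take the frequency modulo the sample rate, then reflect anything above Nyquist back down. A sketch in Python (the function name is mine, not from the post):

```python
def alias(f, fs):
    """Apparent frequency of a tone at f Hz after sampling at rate fs Hz."""
    f = f % fs                          # fold by whole multiples of fs
    return f if f <= fs / 2 else fs - f # reflect anything above Nyquist

# Reproduces the table: 10, 14.1, 5.9, 18.2, 1.8, 21.8, 2.3, 17.7 kHz
for f_khz in [10, 30, 50, 70, 90, 110, 130, 150]:
    print(f_khz, "kHz ->", alias(f_khz * 1000, 44100) / 1000, "kHz")
```

The same function also covers the helicopter example: alias(13.9, 25) gives 11.1, the apparent backwards rotation rate.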
#21
On Oct 25, 6:03 am, wrote:
On Oct 24, 6:14 pm, MRC01 wrote: So the simplistic square wave sequence is a valid sequence of samples that defines a unique properly bandwidth filtered wave. If by "simplistic square wave" you mean the alternating sequence of positive and negative values, then no, it most assuredly IS NOT a "unique, properly bandwidth filtered wave." It has, in essence, infinite bandwidth for the purpose of this discussion. But that wave isn't a square wave. Yes, it is, it just has a bandwidth that well exceeds the Nyquist criteria. And sampled in that fashion, it violates said criteria. In doing so, it is "broken" in that all the out-of-bandwidth products are aliased down into the baseband.

No, that's not what I meant. Let me rephrase in more precise terms.

*IF* every possible sequence of samples is valid - meaning every sequence has exactly 1 unique Nyquist bandwidth-limited waveform that passes through all the sample points,

*THEN* the sequence that consists of 22 samples of, say, 30,000 followed by 22 samples of -30,000, lather rinse repeat, must be a valid sequence - by definition there must exist *some* analog waveform that, after being properly anti-aliased, produces these values from the ADC.

*BUT* the analog waveform that produces this sequence is not a square wave. It is something else.

Actually this leads to another question. Is the DAC limited in amplitude as it is in bandwidth? That is, does the DAC necessarily have to clip the analog wave it produces just because the samples appear to be clipped?

Example: Suppose you digitize a pure sine wave but you overshoot the levels so the peaks are clipped. But the frequency of the sine wave is high enough - yet still below Nyquist - that none of the clipped portions of the wave happened to be sampled. There may not even be any samples at digital zero (full scale), but there will be some very close to that.
When the DAC reconstructs this wave, will it generate a wave whose amplitude is greater than full scale, which matches the original? Or will it be required to produce a wave whose amplitude never exceeds full scale - which will be a different wave, thus distorted?
#22
MRC01 wrote:
OK that makes sense. Actually I knew that IM distortion was based on difference tones but didn't think it through. Why not shift the test tones up to 21 kHz and 22 kHz? This would make them inaudible for most people, yet still below Nyquist. That makes a useful test: play it back and whatever you hear is IM distortion. Just don't fry your tweeters!

You seem to assume that all can hear up to exactly 20000 Hz? Also, assuming a 44.1 or even 48 kHz sampling rate, I tend to assume that the anti-aliasing filters will matter.

Kind regards

Peter Larsen
#23
On Oct 25, 1:38 pm, MRC01 wrote:
On Oct 25, 6:03 am, wrote: On Oct 24, 6:14 pm, MRC01 wrote: So the simplistic square wave sequence is a valid sequence of samples that defines a unique properly bandwidth filtered wave. If by "simplistic square wave" you mean the alternating sequence of positive and negative values, then no, it most assuredly IS NOT a "unique, properly bandwidth filtered wave." It has, in essence, infinite bandwidth for the purpose of this discussion. But that wave isn't a square wave. Yes, it is, it just has a bandwidth that well exceeds the Nyquist criteria. And sampled in that fashion, it violates said criteria. In doing so, it is "broken" in that all the out-of-bandwidth products are aliased down into the baseband.

No, that's not what I meant. Let me rephrase in more precise terms. *IF* every possible sequence of samples is valid - meaning every sequence has exactly 1 unique Nyquist bandwidth limited waveform that passes through all the sample points,

And that assumption itself is false: Every possible sequence of sample values IS NOT valid, because the example you give below is one example of an entire class of sequences that violates the Nyquist criteria.

*THEN*, the sequence that consists of 22 samples of, say, 30,000 followed by 22 samples of -30,000 lather rinse repeat, must be a valid sequence - by definition there must exist *some* analog waveform that, after being properly anti-aliased, produces these values from the ADC.

Yes, there does exist SOME function that passes through these points. The problem is there is not a UNIQUE function that passes through these points, because the waveform you describe HAS NOT BEEN ANTI-ALIAS FILTERED BEFORE IT WAS SAMPLED.

*BUT*, the analog waveform that produces this sequence is not a square wave. It is something else.

It makes absolutely no difference what the original was: the waveform YOU describe above itself is intrinsically NOT a valid sequence of samples.
Your assumption that every sequence of samples is a valid sequence itself is intrinsically flawed. We keep describing, over and over, an example of a sequence WHICH IS NOT VALID because it violates Nyquist.

Actually this leads to another question. Is the DAC limited in amplitude as it is in bandwidth? That is, does the DAC necessarily have to clip the analog wave it produces just because the samples appear to be clipped?

Nope. You can have a valid sequence of samples whose reconstructed waveform exceeds the output voltage produced by, say, a constant DC value of 32767. And that waveform would be valid.

Example: Suppose you digitize a pure sine wave but you overshoot the levels so the peaks are clipped. But the frequency of the sine wave is high enough - yet still below Nyquist - that none of the clipped portions of the wave happened to be sampled.

Well, you keep making the mistake of not filtering before sampling. Let's say it's a 15 kHz sine wave. And let's say full scale (+-32767) represents a voltage of +-1 volt. If we put in a sine wave whose amplitude is +-1.2 volts, ANY clipping in the analog domain is irrelevant, because the 20 kHz anti-aliasing filter will prevent ANY harmonics from ever reaching the sampler. But you could well have two tones that intermodulate before the filter, and whose products are BELOW Nyquist, and they will be sampled and digitized quite nicely, thank you.

When the DAC reconstructs this wave, will it generate a wave whose amplitude is greater than full scale, which matches the original?

It's certainly possible.

Or will it be required to produce a wave whose amplitude never exceeds full scale - which will be a different wave, thus distorted?

It's also certainly possible that a DAC could produce a waveform that has peak levels in excess of the DAC's 0 dB level and have a downstream buffer or line driver clip, but that's another issue.
The short answer is that it's quite possible for a sampled, quantized encoding system to encode waveforms whose continuous representation exceeds, at peaks, the nominal maximum output level. It's unusual simply because this represents a signal with a LOT of energy near the Nyquist limit, which is a relatively rare occurrence in the things we generally like to record and listen to.
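The "reconstructed peak above full scale" case is easy to construct: a sine at exactly fs/4 with a 45-degree phase offset has every sample at about 0.707 of its true peak, so scaling the samples to full scale leaves the underlying waveform peaking about 3 dB over. Below is a sketch (Python with NumPy), using FFT zero-padding as an idealized, periodic-model reconstruction filter - an assumption standing in for a real DAC.

```python
import numpy as np

N = 64
n = np.arange(N)

# Sine at exactly fs/4 with a 45-degree phase: every sample is +-0.7071...
x = np.cos(np.pi * n / 2 - np.pi / 4)
x /= np.abs(x).max()        # scale so the *samples* peak at exactly 1.0 ("0 dBFS")

# Idealized DAC: band-limited interpolation via FFT zero-padding
up = 16
X = np.fft.fft(x)
Xp = np.zeros(N * up, dtype=complex)
Xp[:N // 2] = X[:N // 2]
Xp[-(N // 2):] = X[-(N // 2):]
y = np.fft.ifft(Xp).real * up

print(np.abs(x).max())      # 1.0    - no sample exceeds full scale
print(np.abs(y).max())      # ~1.414 - the reconstructed waveform peaks ~3 dB over
```

These "inter-sample overs" are exactly the case where a downstream stage with no headroom above 0 dBFS would be forced to clip.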
#24
On Oct 25, 5:36 pm, wrote:
... Every possible sequence of sample values IS NOT valid, because a very large number of them, the example you give below, is one example of an entire class of sequences that violates the Nyquist criteria.

But on Oct 24, 11:38 am, Dave Platt wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform.

This is an interesting mathematical question. Do there exist some combinations of samples that are invalid? Or does every combination of samples define a unique anti-aliased Nyquist-limited wave? Hmmm....

It may be tunnel vision to assume that because this particular example: 22 samples of 30,000 followed by 22 samples of -30,000 doesn't represent a properly anti-aliased square wave, that it doesn't represent any wave at all. I'm pondering the idea that *some* kind of wave, after being anti-aliased, might just so happen to produce this exact set of samples, even though that wave might not even resemble a square wave.

If there *are* combinations of samples that are invalid - that no anti-aliased waveform could produce - then what is the DAC supposed to do when it encounters one? It's going to fit a waveform... if the algorithm or function it uses has an inverse then that would show us the waveform that would have produced that set of samples. Of course it doesn't necessarily have an inverse. In that sense, the mathematical question about whether there exist "invalid" sequences of samples, may be the same as asking whether the ADC / DAC functions or algorithms are bijections.
#25
In article .com,
MRC01 wrote: But on Oct 24, 11:38 am, Dave Platt wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform. This is an interesting mathematical question. Do there exist some combinations of samples that are invalid?

Nope.

Or does every combination of samples define a unique anti-aliased Nyquist limited wave? Hmmm....

Yes (within the quantization resolution limit of the system).

It may be tunnel vision to assume that because this particular example: 22 samples of 30,000 followed by 22 samples of -30,000 doesn't represent a properly anti-aliased square wave, that it doesn't represent any wave at all.

This sample sequence *does* correspond to a perfectly-legitimate, properly-antialiased (Nyquist-limited) continuous waveform. So, yeah, I'd say that you're suffering from intuitive tunnel vision. The fact that the sample sequence does *not* correspond to the sort of continuous signal that your intuition leads you to believe it should, is causing you to (mistakenly) believe that it doesn't correspond to *any* continuous waveform. That's a mis-conclusion, that you should strive to let float away on the breeze :-)

I'm pondering the idea that *some* kind of wave, after being anti-aliased, might just so happen to produce this exact set of samples, even though that wave might not even resemble a square wave.

Oh, it will. This sample sequence is *not* invalid. It's perfectly legal.

If there *are* combinations of samples that are invalid - that no anti-aliased waveform could produce - then what is the DAC supposed to do when it encounters one?

Since the "If" you state isn't true, the "then what" is irrelevant.

It's going to fit a waveform... if the algorithm or function it uses has an inverse then that would show us the waveform that would have produced that set of samples. Of course it doesn't necessarily have an inverse.
If you feed this set of samples to a properly-designed DAC, the DAC will output a waveform. Because this waveform is coming out of a properly-designed DAC with a proper reconstruction filter, it will contain no frequency components lying above the Nyquist limit. Hence, the output of the DAC has no aliases to remove.

Now, feed this waveform into another sampler (an ADC). If you lock the sampler's timing to that of the DAC accurately, so that you sample at precisely the right moments... and if you've set the gain correctly and have low-enough noise... then the sequence of samples that you take will replicate the original ones which were fed into the DAC!

Hence, you've just shown that this particular continuous waveform, and the "square-wave-like" sequence of samples, have a 1-to-1 correspondence.

--
Dave Platt AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott any company which has the gall to send me such ads!
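Platt's DAC-then-locked-ADC thought experiment can be simulated: treat FFT zero-padding as an ideal reconstruction filter, then resample the reconstructed waveform at the original instants and the samples come back exactly. A sketch (Python with NumPy, under a periodic-signal model; the specific lengths are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, up = 64, 16
x = rng.standard_normal(N)          # an arbitrary sequence of sample values

# Idealized DAC / reconstruction filter: band-limited (periodic) interpolation
X = np.fft.fft(x)
Xp = np.zeros(N * up, dtype=complex)
Xp[:N // 2] = X[:N // 2]
Xp[-(N // 2):] = X[-(N // 2):]
y = np.fft.ifft(Xp).real * up       # the reconstructed, oversampled waveform

# Idealized ADC locked to the DAC: resample at the original instants
x_back = y[::up]
print(np.allclose(x_back, x))       # True - the samples round-trip exactly
```

Note this round-trip demonstrates recoverability at the sample instants; the exactly-fs/2 boundary case, where phase information is lost, is a separate matter.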
#27
MRC01 writes:
[...] In that sense, the mathematical question about whether there exist "invalid" sequences of samples, may be the same as asking whether the ADC / DAC functions or algorithms are bijections.

I think they are, but I can't come up with a proof quickly.

--
Randy Yates, Fuquay-Varina, NC, 919-577-9882
"With time with what you've learned, they'll kiss the ground you walk upon." - '21st Century Man', *Time*, ELO
http://www.digitalsignallabs.com
#28
On Oct 25, 9:13 pm, (Dave Platt) wrote:
In article .com, MRC01 wrote: But on Oct 24, 11:38 am, Dave Platt wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform. This is an interesting mathematical question. Do there exist some combinations of samples that are invalid?

Nope.

Yes, there is. The most trivial example is an alternating positive and negative stream of constant values. It's a waveform which is at precisely 1/2 the sample rate, which violates the Nyquist criteria.

Show us an input function to a properly implemented sampler that would result in such a waveform. I'm willing to concede the point, but you'll have to explain away cases such as the one illustrated above.
#29
In article .com,
wrote: Nope. Yes, there is. The most trivial example is an alternating positive and negative stream of constant values. It's a waveform which is at precisely 1/2 the sample rate, which violates the Nyquist criteria.

Erp. You're quite right... I'd forgotten that particular set of boundary cases. Shame on me :-(

Show us an input function to a properly implemented sampler that would result in such a waveform.

Isn't one. You're right.

--
Dave Platt AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott any company which has the gall to send me such ads!
#30
On Oct 24, 1:13 pm, Eeyore
wrote: MRC01 wrote: wrote: "0 dB" and "-1 dB" relative to what? 0 dB meaning the sample values at the peaks and troughs of the waveform reach 32767 and -32767 respectively.

Which is FSD. So if you add 2 such waveforms you'll clip the DAC it would seem. Are BOTH frequencies at '0dB' level?

No. I just double checked this waveform. A spectrum analysis shows the 19 kHz and 20 kHz components at -6 dB each. The overall waveform (sum) samples peak at 0 dB. Theoretically, this wave could be played back with no clipping or distortion. But the DACs generate a good deal of distortion playing them back - spurious IM frequencies peak at -50 to -60 dB relative to the signal, depending on the device.

When I attenuate the waveform -1 dB - the spectrum analysis shows the components at -7 dB each - it plays with the IM frequencies at -80 to -100 dB relative to the signal. This IM distortion may go lower with even more attenuation, but -100 dB is low enough I'm not worried about it.

Based on this, it appears that this is a valid test waveform and it shows that the DACs are going non-linear in the last 1 dB of amplitude. FWIW, one DAC is the Wolfson WM-8718, the other is a Burr Brown PCM-1732, one is a Marantz CDR-630 (whatever DAC it uses, not sure) and the 4th is from a portable CD player.
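The levels described above are easy to verify: two tones at -6 dBFS each sum to a waveform whose samples reach 0 dBFS, so the -1 dB version keeps the whole test out of the converter's last-dB region. A sketch of the test signal (Python with NumPy; the sample rate, duration, and cosine phase are assumptions for illustration):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                     # one second at 44.1 kHz
# 19 kHz and 20 kHz, each at -6 dBFS (amplitude 0.5); cosine phase makes the
# two components line up at t = 0
x = 0.5 * np.cos(2 * np.pi * 19000 * t) + 0.5 * np.cos(2 * np.pi * 20000 * t)

peak_db = 20 * np.log10(np.abs(x).max())
print(round(peak_db, 2))                   # 0.0 - the summed samples hit full scale

x_1db = x * 10 ** (-1 / 20)                # the "-1 dB" version of the test file
print(round(20 * np.log10(np.abs(x_1db).max()), 2))   # -1.0
```

Note this is the peak of the *samples*; the reconstructed analog waveform of such a near-Nyquist two-tone signal can peak slightly higher between samples, which is one plausible contributor to the last-dB non-linearity observed.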
#31
On Oct 26, 1:45 pm, (Dave Platt) wrote:
wrote: Yes, there is. The most trivial example is an alternating positive and negative stream of constant values. It's a waveform which is at precisely 1/2 the sample rate, which violates the Nyquist criteria. Erp. You're quite right... I'd forgotten that particular set of boundary cases. Shame on me :-(

I wonder if there are other invalid sequences of samples. And what does the DAC do if it encounters one of these invalid sequences? It has to create *some* kind of waveform. Is it undefined? Different DACs might generate totally different waves?
#32
writes:
On Oct 25, 9:13 pm, (Dave Platt) wrote: In article .com, MRC01 wrote: But on Oct 24, 11:38 am, Dave Platt wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform. This is an interesting mathematical question. Do there exist some combinations of samples that are invalid? Nope.

Yes, there is. The most trivial example is an alternating positive and negative stream of constant values. It's a waveform which is at precisely 1/2 the sample rate, which violates the Nyquist criteria.

I had forgotten this type of case. The Nyquist criteria is not an "if and only if." That is, it says that IF an input signal satisfies the criteria, then it can be converted to digital without losing any information. It does NOT say that if a digital signal represents an input signal without losing information, it satisfies the Nyquist criteria.

Show us an input function to a properly implemented sampler that would result in such a waveform.

f(t) = cos(2*pi*(F_s / 2) * t), assuming the sampler samples at times n*T_s, where n is integer and T_s = 1 / F_s.

--
Randy Yates, Fuquay-Varina, NC, 919-577-9882
"Though you ride on the wheels of tomorrow, you still wander the fields of your sorrow." - '21st Century Man', *Time*, ELO
http://www.digitalsignallabs.com
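The boundary case is worth seeing numerically: at exactly Fs/2, a cosine-phased tone samples to the alternating +1, -1 sequence, while the sine-phased tone at the *same frequency* samples to all zeros. The samples cannot distinguish amplitude or phase at Fs/2, which is why the sampling theorem requires strictly less than Fs/2. A sketch (Python with NumPy):

```python
import numpy as np

fs = 48000
n = np.arange(8)
t = n / fs

x_cos = np.cos(2 * np.pi * (fs / 2) * t)   # samples: +1, -1, +1, -1, ...
x_sin = np.sin(2 * np.pi * (fs / 2) * t)   # samples: all (numerically) zero

print(x_cos.round(6))
print(x_sin.round(6))
```

Any mix a*cos + b*sin at Fs/2 samples to a*(+1, -1, ...) regardless of b, so infinitely many analog signals at that one frequency share each sample sequence.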
#33
Randy Yates writes:
writes: On Oct 25, 9:13 pm, (Dave Platt) wrote: In article .com, MRC01 wrote: But on Oct 24, 11:38 am, Dave Platt wrote: Every combination of samples is valid, and corresponds to a unique (within the system's quantization limits) bandwidth-limited analog waveform. This is an interesting mathematical question. Do there exist some combinations of samples that are invalid? Nope. Yes, there is. The most trivial example is an alternating positive and negative stream of constant values. It's a waveform which is at precisely 1/2 the sample rate, which violates the Nyquist criteria. I had forgotten this type of case. The Nyquist criteria is not an "if and only if." That is, it says that IF an input signal satisfies the criteria, then it can be converted to digital without losing any information. It does NOT say that if a digital signal represents an input signal without losing information, it satisfies the Nyquist criteria. Show us an input function to a properly implemented sampler that would result in such a waveform. f(t) = cos(2*pi*(F_s / 2) * t), assuming the sampler samples at times n*T_s, where n is integer and T_s = 1 / F_s.

Let me be quick to add that I know this signal violates the Nyquist criteria because it is NOT below F_s/2. However, the point I am attempting to make is that all possible digital sequences produce valid analog signals.

--
Randy Yates, Fuquay-Varina, NC, 919-577-9882
"Bird, on the wing, goes floating by but there's a teardrop in his eye..." - 'One Summer Dream', *Face The Music*, ELO
http://www.digitalsignallabs.com
#34
On Fri, 26 Oct 2007 13:52:54 -0700, MRC01 wrote:
I wonder if there are other invalid sequences of samples. And what does the DAC do if it encounters one of these invalid sequences? It has to create *some* kind of waveform. Is it undefined? Different DACs might generate totally different waves?

"DAC" is almost an undefined term, but in the rawest case of a simple ladder with summed output, *any* data sequence is valid, and an unambiguous output is generated. Aliasing, in particular, doesn't apply here - it's just us chickens. In modern usage the term "DAC" might be expected to be a plastic package and to include sample rate conversion and filtering, so your question is very deep indeed.

To the OP's observations: many other folks have reported surprisingly large "anomalies" in consumer-level DACs at peak outputs, especially in modern DVD players' audio. Haven't heard any especially credible explanations, but there're lots of reasonable theories.

Thanks to all for a very interesting thread,

Chris Hornbeck
#35
On Nov 2, 7:10 pm, Chris Hornbeck
wrote: To the OP's observations: many other folks have reported surprisingly large "anomalies" in consumer-level DACs at peak outputs, especially in modern DVD players' audio. Haven't heard any especially credible explanations, but there're lots of reasonable theories.

Can you define "consumer level"? One of the devices that does this is a Marantz CDR-630 which was sold by Marantz as "pro" gear. I bought mine used from a recording studio in Chicago.

The DACs in most players - pro or consumer - often seem to be the same DACs made by the same companies. They're not designing their own chips; they're buying off the shelf from Burr Brown, Wolfson, or whoever. Because of this I wouldn't expect to see any difference between high end consumer versus pro gear.
#36
![]() "MRC01" wrote in message oups.com... The DACs in most players - pro or consumer - often seem to be the same DACs made by the same companies. Pretty much true today, but not always true. In the early days Sony CD players had Sony-made converter chips, for example. They're not designing their own chips; they're buying off the shelf from Burr Brown, Wolfson, or whoever. Seems to be true. Price/performance has a lot to do with that. Because of this I wouldn't expect to see any difference between high end consumer versus pro gear. The various vendor chip offerings do differ in terms of bandwidth and dynamic range. |
#37
MRC01 wrote: Chris Hornbeck wrote To the OP's observations: many other folks have reported surprisingly large "anomalies" in consumer-level DACs at peak outputs, especially in modern DVD players' audio. Haven't heard any especially credible explanations, but there're lots of reasonable theories.

Can you define "consumer level"? One of the devices that does this is a Marantz CDR-630 which was sold by Marantz as "pro" gear. I bought mine used from a recording studio in Chicago.

What exactly does it do? Marantz are hardly a well-known 'pro' brand.

The DACs in most players - pro or consumer - often seem to be the same DACs made by the same companies. They're not designing their own chips; they're buying off the shelf from Burr Brown, Wolfson, or whoever. Because of this I wouldn't expect to see any difference between high end consumer versus pro gear.

The only audio DACs of any quality to be bought are indeed typically the ones you mention plus AKM, Cirrus/Crystal and Analog Devices. It's simply not practical for many companies to make their own converters. Not sure what Philips, Sony, Yamaha are currently doing.

Graham
#38
On Nov 5, 1:46 pm, Eeyore
wrote: MRC01 wrote: One of the devices that does this is a Marantz CDR-630 ... What exactly does it do? Marantz are hardly a well-known 'pro' brand.

It's a rack mount CD burner. Inputs are coax & optical digital or XLR or unbalanced RCA. It's a dated model but a solid reliable performer - good specs and clean, natural sound. Having come from a recording studio, mine in particular had already burned literally thousands of CDs before I got it and I've done another thousand or so since then - and still trucking along.
#39
![]() "Eeyore" wrote in message ... MRC01 wrote: Chris Hornbeck wrote To the OP's observations: many other folks have reported surprisingly large "abnormalies" in consumer-level DAC's at peak outputs, especially in modern DVD players' audio. Haven't heard any especially creditable explanations, but there're lots of reasonable theories. Can you define "consumer level"? One of the devices that does this is a Marantz CDR-630 which was sold by Marantz as "pro" gear. I bought mine used from a recording studio in Chicago. What exactly does it do ? Marantz are hardly a well-known 'pro' brand. In the US Marantz have largely gone underground, but they retain a relatively large presence in pro audio. I think its the follow-on to their old pro cassette recorder line. |