#1
Posted to rec.audio.pro,uk.rec.audio
Hi
Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? E.g., is it better to remove hiss, clicks, etc. before normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible.

Thanks,
Chaz Cotton
#2
Posted to rec.audio.pro,uk.rec.audio
Chaz Cotton wrote:
> Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it better to remove hiss, clicks, etc before Normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible.

Spend 90% of your time getting the azimuth on the tape playback right and getting the Dolby levels so the damn thing pumps as little as possible. Any digital processing is gravy. I would not trust the noise removal on Audacity, but your goal is to deal with that before it ever goes into the computer.

--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#3
Posted to rec.audio.pro,uk.rec.audio
On 17/07/2012 16:35, Chaz Cotton wrote:
> Hi
> Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it better to remove hiss, clicks, etc before Normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible.

I'd say remove clicks, then noise, and use multiple passes to partially remove them each time until you're happy with the sound, rather than one heavy-handed pass. Then normalise. Just be aware that no matter how you do it, you will lose some of the musicality of the performance, which you may find more annoying than the noise.

A quick and dirty method to clean up cassettes and tapes is to use a free Winamp plugin called Tape Restorer Live!, currently at version 1.20. It has settings to allow for incorrectly adjusted Dolby B & C, presets for removal of 19KHz pilot tone and hum from recordings, and even lets you time-align the channels on tapes where the head gaps aren't in the right place. Use the WAV writer output plugin to record the output to HD.

http://www.winamp.com/plugin/tape-restore-live/154246

--
Tciao for Now!

John.
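A rough sketch of that "several light passes, then normalise" idea, using the SoX command-line tool driven from Python. This is only an illustration, not the Winamp plugin workflow described above: the file names, the half-second hiss-only lead-in and the 0.15 reduction amounts are assumptions to be adjusted by ear, and click repair is assumed to have been done first in an editor.

    import subprocess

    SRC = "cassette_side_a.wav"

    def sox(*args):
        subprocess.run(["sox", *args], check=True)

    # Build a noise profile from a section assumed to contain only tape hiss.
    sox(SRC, "-n", "trim", "0", "0.5", "noiseprof", "hiss.prof")

    # Two gentle noise-reduction passes rather than one heavy-handed one,
    # re-profiling between passes.
    sox(SRC, "pass1.wav", "noisered", "hiss.prof", "0.15")
    sox("pass1.wav", "-n", "trim", "0", "0.5", "noiseprof", "hiss2.prof")
    sox("pass1.wav", "pass2.wav", "noisered", "hiss2.prof", "0.15")

    # Normalise last, leaving a little headroom.
    sox("pass2.wav", "cleaned.wav", "norm", "-3")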
#4
Posted to rec.audio.pro
On Tue 2012-Jul-17 12:04, Scott Dorsey (1:3634/1000) wrote to All:
> Chaz Cotton wrote:
>> Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it
>
> Spend 90% of your time getting the azimuth on the tape playback right and getting the Dolby levels so the damn thing pumps as little as possible.

Right, but if he's asking about normalize and asking the questions it's probably a given that he wouldn't know how to do the things you suggest.

> Any digital processing is gravy. I would not trust the noise removal on Audacity, but your goal is to deal with that before it ever goes into the computer.

For this though he's going to need a good oscilloscope and some other hardware.

Regards,
Richard

... Remote audio in the southland: See www.gatasound.com

--
| Remove .my.foot for email
| via Waldo's Place USA Fidonet-Internet Gateway Site
| Standard disclaimer: The views of this user are strictly his own.
#5
Posted to rec.audio.pro,uk.rec.audio
![]() "Chaz Cotton" wrote in message ... Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it better to remove hiss, clicks, etc before Normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible. The order is pretty well unimportant when copying cassette to CD even if you used a Nakamichi Dragon for both the record and playback source. And if you are using anything less, then you have FAR more problems to worry about, and the digital processing order is NOT one of them these days. Trevor. |
#6
Posted to rec.audio.pro
On Jul 17, 6:18 pm, (Richard Webb) wrote:
> On Tue 2012-Jul-17 12:04, Scott Dorsey (1:3634/1000) wrote to All:
>>> Chaz Cotton wrote:
>>> Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it
>>
>> Spend 90% of your time getting the azimuth on the tape playback right and getting the Dolby levels so the damn thing pumps as little as possible.
>
> Right, but if he's asking about normalize and asking the questions it's probably a given that he wouldn't know how to do the things you suggest.
>
>> Any digital processing is gravy. I would not trust the noise removal on Audacity, but your goal is to deal with that before it ever goes into the computer.
>
> For this though he's going to need a good oscilloscope and some other hardware.

A scope doesn't really help align azimuth unless the tape has high-frequency test tones recorded on it. For aligning azimuth on program, there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono).

Peace,
Paul
#7
Posted to rec.audio.pro
PStamler wrote:
> (Richard Webb) wrote:
>> For this though he's going to need a good oscilloscope and some other hardware.
>
> A scope doesn't really help align azimuth unless the tape has high-frequency test tones recorded on it. For aligning azimuth on program, there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono).

This is true, although once you have peaked for maximum treble it's good to be able to make fine adjustments listening to how centered the stereo image is. And you're still going to need the scope to align the machine back to the standard tape after you've futzed with it.

--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#8
Posted to rec.audio.pro,uk.rec.audio
On Tue, 17 Jul 2012 17:06:24 +0100, John Williamson wrote:
> On 17/07/2012 16:35, Chaz Cotton wrote:
>> Hi
>> Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it better to remove hiss, clicks, etc before Normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible.
>
> I'd say remove clicks, then noise, and use multiple passes to partially remove them each time until you're happy with the sound, rather than one heavy-handed pass. Then normalise. Just be aware that no matter how you do it, you will lose some of the musicality of the performance, which you may find more annoying than the noise.
>
> A quick and dirty method to clean up cassettes and tapes is to use a free Winamp plugin called Tape Restorer Live!, currently at version 1.20. It has settings to allow for incorrectly adjusted Dolby B & C, presets for removal of 19KHz pilot tone and hum from recordings, and even lets you time-align the channels on tapes where the head gaps aren't in the right place. Use the WAV writer output plugin to record the output to HD.
>
> http://www.winamp.com/plugin/tape-restore-live/154246

Thanks for this.
#9
Posted to rec.audio.pro,uk.rec.audio
"Chaz Cotton" wrote in message
... Hi Is there a recommended order to process a digital audio signal using Audacity or similar so that optimum results are obtained? Eg, is it better to remove hiss, clicks, etc before Normalizing? I have a number of old cassettes I need to copy to CDs and I want to get the best results possible. You shouldn't have clicks for a commercial cassette. Maybe if it was copied from an LP. You should do your final normalization (to set the level on CD) after you've removed any clicks or applied any EQ, since these steps can alter the peak levels in the recording. FWIW, the order I normally do is: * Capture (analog to digital), being fairly conservative about peak levels. Saved as 24 bit or 32 bit files. * Normalize to -3 dbfs for convenience when I'm working in the wave editor * Apply whatever restorative (de-noise) or additive (EQ to taste) processing * Normalize & hard limit (if needed) to the final levels I want for CD. I hard limit sparingly, just enough to bring up the average level if there's a few peaks that stand out. * Convert to 16 bit for burning to CD I'm usually putting a mix of songs on my CD's for listening, so I will do some adjustments to the volume and EQ to roughly match the other songs in the set. Hope this helps, Sean |
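As a rough illustration of that order (capture aside), here is a minimal sketch in Python using the numpy and soundfile packages. The file names, the crude clip standing in for a proper limiter, and the absence of dither before the 16-bit write are my assumptions, not Sean's actual tools.

    import numpy as np
    import soundfile as sf

    def normalize(x, target_dbfs):
        """Scale so the largest sample sits at target_dbfs (sample-peak normalization)."""
        peak = np.max(np.abs(x))
        return x if peak == 0 else x * (10 ** (target_dbfs / 20) / peak)

    # Capture stage assumed already done: a conservatively levelled 24/32-bit file.
    audio, rate = sf.read("capture_24bit.wav", dtype="float64")

    # Working normalization to -3 dBFS for convenience in the editor.
    audio = normalize(audio, -3.0)

    # Restorative (de-noise) or additive (EQ to taste) processing would go here.

    # Final CD level: shave a few stand-out peaks (a crude hard limit), then
    # normalize. -1 dBFS is used here; later posts argue for 2-3 dB of headroom.
    ceiling = 0.9 * np.max(np.abs(audio))   # arbitrary threshold; use sparingly
    audio = np.clip(audio, -ceiling, ceiling)
    audio = normalize(audio, -1.0)

    # Write 16-bit PCM for burning; a real workflow would dither before truncation.
    sf.write("for_cd_16bit.wav", audio, rate, subtype="PCM_16")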
#10
Posted to rec.audio.pro,uk.rec.audio
In article , Sean Conolly wrote:
> * Normalize & hard limit (if needed) to the final levels I want for CD. I hard limit sparingly, just enough to bring up the average level if there's a few peaks that stand out.

Why on earth would you want to do that?

--
*Most people have more than the average number of legs*

Dave Plowman London SW
To e-mail, change noise into sound.
#11
Posted to rec.audio.pro
Paul writes:
>> Right, but if he's asking about normalize and asking the questions it's probably a given that he wouldn't know how to do the things you suggest.
>>
>> For this though he's going to need a good oscilloscope and some other hardware.
>
> A scope doesn't really help align azimuth unless the tape has high-frequency test tones recorded on it. For aligning azimuth on program, there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono).

Yep, I was assuming a tape with the proper test tones, of course. Ears work if you know how to use 'em for most purposes <grin>. Still, doing this setup before playing the tapes into the computer, whether with properly done test tones or by setting up the deck by ear, will go a long way toward eliminating the need for further band-aid fixes.

Regards,
Richard

--
| Remove .my.foot for email
| via Waldo's Place USA Fidonet-Internet Gateway Site
| Standard disclaimer: The views of this user are strictly his own.
#12
Posted to rec.audio.pro,uk.rec.audio
Sean Conolly wrote:
* Normalize & hard limit (if needed) to the final levels I want for CD. I hard limit sparingly, just enough to bring up the average level if there's a few peaks that stand out. * Convert to 16 bit for burning to CD Why in the world would you hard limit when moving from a medium with maybe 50 or 60 dB of dynamic range to one of 96 dB of dynamic range? Significant peaks probably didn't survive recording to cassette in the first place. -- shut up and play your guitar * http://hankalrich.com/ http://www.youtube.com/walkinaymusic http://www.sonicbids.com/HankandShaidri |
#13
Posted to rec.audio.pro
![]() "Richard Webb" wrote in message ... Paul writes: Right, but if he's asking about normalize and asking the questions it's probably a given that he wouldn't know how to do the things you suggest. For this though he's going to need a good oscilloscope and some other hardware. A scope doesn't really help align azimuth unless the tape has high- frequency test tones recorded on it. For aligning azimuth on program, there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono). Yep, was assuming the tape with the proper test tones of course. Ears work if you know how to use 'em for most purposes grin. Still, doing this setup before playing the tapes into the computer, whether with properly done test tones or setting up the deck by ear will go a long way toward eliminating the need for further band aid fixes. The point was to match the heads to the *particular* tape you are copying for best response. Adjusting them to a test tape is what you do before you record, and why would anybody want to do that on a cassette deck these days? (unless of course you are trying to copy the test tape to CD, which would be even more pointless!) *Only* if the tape was recorded on the same machine with the heads adjusted to a test tape before the recording you are copying will there be any point in doing that again now, and for most cassettes decks, not even then! Trevor. |
#14
Posted to rec.audio.pro,uk.rec.audio
In article , Sean Conolly wrote:
> "Chaz Cotton" wrote in message ...
>
> * Normalize & hard limit (if needed) to the final levels I want for CD. I hard limit sparingly, just enough to bring up the average level if there's a few peaks that stand out.
> * Convert to 16 bit for burning to CD.

Like others, I'd suggest *avoiding* normalizing or limiting to anything close to 0dBFS. If you normalise to within a dB or so of 0dBFS you'd need to check that any CD player you use can then handle intersample excursions produced by the DAC/reconstruction filter that rise above 0dBFS. And for the same reason I'd certainly be cautious about downsampling to 16 bit *after* normalizing to near 0dBFS. Again, that might lead to problems.

Unless you are *really* pushed for covering ultra-wide-range material I'd not 'normalise' to a level where any peak sample went above about -2 to -3dBFS. I appreciate that would not suit some in the 'biz' though, because of their obsession with LOUDNESS and the assumption that the listener is incapable of adjusting the volume.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
#15
Posted to rec.audio.pro
On Wednesday, 18 July 2012 08:12:56 UTC+2, PStamler wrote:
> there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono).

What is maximum treble? Where the high mids and highs are strongest, or where the highest frequencies are best heard, although lower in level?
#16
Posted to rec.audio.pro,uk.rec.audio
"Jim Lesurf" wrote in message
... Like others, I'd suggest *avoiding* normalizing or limiting to anything close to 0dBFS. if you normalise to within a dB or so of 0dBFS you'd need to check any CD player you use can then handle intersample excursions produced by the DAC/reconstruction filter that rise above 0dBFS. Is that *really* an existing problem ur an urban myth? The Red Book specifies 16 bit data. Why would they do that if a player would not be able to reconstruct full 16 bit data to an analog signal? to me this would really be a severe design flaw in a CD player! If I were the designer of a DAC and I would expect that an interpolation between two samples would rise above the maximum dynamic range, I'd add the necessary bits to take care of that or scale the input of that process down. In my DSP work I always checked the input range and output range of calculations to see if it would fit in the 'width' of calculations and scale accordingly. This is standard design practice. Meindert |
#17
Posted to rec.audio.pro,uk.rec.audio
In article , Meindert Sprang wrote:
> "Jim Lesurf" wrote in message ...
>> Like others, I'd suggest *avoiding* normalizing or limiting to anything close to 0dBFS. If you normalise to within a dB or so of 0dBFS you'd need to check that any CD player you use can then handle intersample excursions produced by the DAC/reconstruction filter that rise above 0dBFS.
>
> Is that *really* an existing problem or an urban myth?

I can't say for all players. But when I was investigating this area for Hi-Fi News a few years ago, one of the people I was discussing it with did some checks on some then-current players that he had. The measured results showed that some of them did alter/distort waveform excursions above 0dBFS due to intersample peaks.

> The Red Book specifies 16-bit data. Why would they do that if a player would not be able to reconstruct full 16-bit data to an analog signal?

The snag here is the usual one. In theory, theory and practice agree. But in practice they may not... at least some of the time.

> To me this would really be a severe design flaw in a CD player! If I were the designer of a DAC and I expected that an interpolation between two samples would rise above the maximum dynamic range, I'd add the necessary bits to take care of that or scale the input of that process down. In my DSP work I always checked the input range and output range of calculations to see whether they would fit in the 'width' of the calculations, and scaled accordingly. This is standard design practice.

For me, also. But any finite-state system with finite value representations will have a limit, and the evidence is that some designers haven't catered for this. For a CD player you should only need one more bit, as the maximum possible overshoot is of the order of 3dB so far as I was able to determine. Not a very demanding requirement, but it still needs to be implemented in the design.

So a *good* designer making a *good* machine will allow for intersample peaks. But are all designers and machines "good"?... And given how clipped some pop/rock CDs are, how much difference would it make, given what has been done to the music before the player reads it from the disc?!

I'll see if I can find the data and I'll ask the person who gave it to me if he minds it being made public. At the time it was just sent to me as part of our discussions about the topic.

One of the related discussions I've had with others is the speculation that 'NOS' DACs and players may be liked by some people because they avoid this by having no digital values generated in between input samples, and can have a following analogue filter for reconstruction that can cope with the peaks.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
#18
Posted to rec.audio.pro
In article , Luxey wrote:
> On Wednesday, 18 July 2012 08:12:56 UTC+2, PStamler wrote:
>> there's nothing better than a pair of ears listening for maximum treble (and a monitoring system that can be switched to mono).
>
> What is maximum treble? Where the high mids and highs are strongest, or where the highest frequencies are best heard, although lower in level?

You'll know it when you hear it. You're basically moving a comb filter forward and back... you want to be listening for the highest frequencies you can, and peaking them. How easy this is to do and how accurate you can be depends somewhat on the source material.

--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#19
Posted to rec.audio.pro,uk.rec.audio
In article , Meindert Sprang wrote:
> "Jim Lesurf" wrote in message ...
>> Like others, I'd suggest *avoiding* normalizing or limiting to anything close to 0dBFS. If you normalise to within a dB or so of 0dBFS you'd need to check that any CD player you use can then handle intersample excursions produced by the DAC/reconstruction filter that rise above 0dBFS.
>
> Is that *really* an existing problem or an urban myth?

It does exist on a few model players.

> The Red Book specifies 16-bit data. Why would they do that if a player would not be able to reconstruct full 16-bit data to an analog signal?

Because the Red Book was written before any of the players existed.

> To me this would really be a severe design flaw in a CD player! If I were the designer of a DAC and I expected that an interpolation between two samples would rise above the maximum dynamic range, I'd add the necessary bits to take care of that or scale the input of that process down. In my DSP work I always checked the input range and output range of calculations to see whether they would fit in the 'width' of the calculations, and scaled accordingly. This is standard design practice.

Consumer equipment is designed to be as cheap as absolutely possible.

--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#20
Posted to rec.audio.pro,uk.rec.audio
In article , Scott Dorsey wrote:
> Consumer equipment is designed to be as cheap as absolutely possible.
>
> --scott

My take on that is slightly different: that consumer equipment is made to be *sold* (at a profit), not to be *used*. Some designers/makers may be more concerned for the end-user (rather than the customer) than others. But I'm not sure price is always a good indicator...

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
#21
Posted to rec.audio.pro
On Thu 2012-Jul-19 01:37, Trevor writes:
>> Yep, I was assuming a tape with the proper test tones, of course. Ears work if you know how to use 'em for most purposes <grin>. Still, doing this setup before playing the tapes into the computer, whether with properly done test tones or by setting up the deck by ear, will go a long way toward eliminating the need for further band-aid fixes.
>
> The point was to match the heads to the *particular* tape you are copying for best response. Adjusting them to a test tape is what you do before you record, and why would anybody want to do that on a cassette deck these days? (Unless of course you are trying to copy the test tape to CD, which would be even more pointless!) *Only* if the tape was recorded on the same machine, with the heads adjusted to a test tape before the recording you are copying, will there be any point in doing that again now, and for most cassette decks, not even then!

Good point! That's what we get for assumptions. I'm assuming that the recording was made on a properly set up deck in the first place <grin>. 'Tain't necessarily so <huge grin>.

Regards,
Richard

--
| Remove .my.foot for email
| via Waldo's Place USA Fidonet-Internet Gateway Site
| Standard disclaimer: The views of this user are strictly his own.
#22
Posted to rec.audio.pro,uk.rec.audio
"hank alrich" wrote in message
... Sean Conolly wrote: * Normalize & hard limit (if needed) to the final levels I want for CD. I hard limit sparingly, just enough to bring up the average level if there's a few peaks that stand out. * Convert to 16 bit for burning to CD Why in the world would you hard limit when moving from a medium with maybe 50 or 60 dB of dynamic range to one of 96 dB of dynamic range? Simply to somewhat match the levels of the other material on the CD, so I'm not adjusting the levels per song when I'm listening. Just my own preference. Trimming a few DB off a few transients is inaudible. Shaving the entire track like a lawnmower is not the idea, but as you know is a common (and bad) practice. Significant peaks probably didn't survive recording to cassette in the first place. Agreed. Sean |
#23
Posted to rec.audio.pro,uk.rec.audio
"Meindert Sprang" wrote in message
... "Jim Lesurf" wrote in message ... Like others, I'd suggest *avoiding* normalizing or limiting to anything close to 0dBFS. if you normalise to within a dB or so of 0dBFS you'd need to check any CD player you use can then handle intersample excursions produced by the DAC/reconstruction filter that rise above 0dBFS. Is that *really* an existing problem ur an urban myth? The Red Book specifies 16 bit data. Why would they do that if a player would not be able to reconstruct full 16 bit data to an analog signal? to me this would really be a severe design flaw in a CD player! If I were the designer of a DAC and I would expect that an interpolation between two samples would rise above the maximum dynamic range, I'd add the necessary bits to take care of that or scale the input of that process down. In my DSP work I always checked the input range and output range of calculations to see if it would fit in the 'width' of calculations and scale accordingly. This is standard design practice. No, it's a legitmate problem with some gear, and I'm inclined to agree with Jim on why. For demos that I'm going to distribute I set the peaks to -1 db, which is fine with all the players I've encountered. Sean |
#24
Posted to rec.audio.pro,uk.rec.audio
Scott Dorsey wrote:
> Consumer equipment is designed to be as cheap as absolutely possible.

Some is, and some is designed to be as expensive as absolutely possible, in the hope that there will be a fool who pays the money. And then there is us, in a range across the middle, with slightly varying criteria.

geoff
#25
Posted to rec.audio.pro,uk.rec.audio
![]() "geoff" wrote in message ... Scott Dorsey wrote: Consumer equipment is designed to be as cheap as absolutely possible. Some is, and some is designed to be as expensive as absolutely possible, on the hope that there will be a fool who pays the money. Nope, that is still designed to be as cheap as possible *for the manufacturer* (obviously NOT for the purchaser) so the manufacturer can spend money on advertising a very low volume item whilst still making a very large profit. Trevor. |
#26
Posted to rec.audio.pro,uk.rec.audio
Trevor wrote:
"geoff" wrote in message ... Scott Dorsey wrote: Consumer equipment is designed to be as cheap as absolutely possible. Some is, and some is designed to be as expensive as absolutely possible, on the hope that there will be a fool who pays the money. Nope, that is still designed to be as cheap as possible *for the manufacturer* (obviously NOT for the purchaser) so the manufacturer can spend money on advertising a very low volume item whilst still making a very large profit. Trevor. And you can buy a high-priced Loewe that is actually LG inside a pretty box ! geoff |
#27
Posted to rec.audio.pro,uk.rec.audio
In article , Jim Lesurf wrote:

[snip discussion of players distorting inter-sample peaks]

> I'll see if I can find the data and I'll ask the person who gave it to me if he minds it being made public. At the time it was just sent to me as part of our discussions about the topic.

He has just replied and said this is OK. So I'll publish his results on my website when I get a 'round tuit'. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
#28
Posted to rec.audio.pro,uk.rec.audio
On 23 Jul, wrote:
> In article , Jim Lesurf wrote:
>
> [snip discussion of players distorting inter-sample peaks]
>
>> I'll see if I can find the data and I'll ask the person who gave it to me if he minds it being made public. At the time it was just sent to me as part of our discussions about the topic.
>
> He has just replied and said this is OK. So I'll publish his results on my website when I get a 'round tuit'. :-)

I've now done this and the results can be seen at

http://www.audiomisc.co.uk/HFN/OTTre...d/results.html

My thanks to Keith Howard for kindly agreeing to let me publish his measured results. Note that they were made in 2007, so they only cover the players and DACs he had to hand at that time. But - rather depressingly - all but one of the nine he tried showed problems coping with the waveform that has peaks at +3dBFS.

FWIW, personally I'd love to see all reviews of DACs or players use the 'waveform from hell' I devised for the original article (a link to that is on the above page) as a test of how they cope - or not! :-) I have the uncomfortable feeling that whilst reviews continue to overlook this area, problems will continue to afflict some new designs without anyone knowing.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
#29
Posted to rec.audio.pro,uk.rec.audio
"Jim Lesurf" wrote in message
... I've now done this and the results can be seen at http://www.audiomisc.co.uk/HFN/OTTre...d/results.html My thanks to Keith Howard for kindly agreeing to let me publish his measured results. Note they were made in 2007 so only covered the players and DACs he had to hand at that time. But - rather depressingly - all but one of the nine he tried showed they had problems coping with the waveform that has peaks at +3dBFS. FWIW personally I'd love to see all reviews on DACs or player use the 'waveform from hell' I devised for the original article (a link to that is on the above page) as a test of how they cope - or not! :-) I have the uncomfortable feeling that whilst reviews continue to overlook this area, problems will continue to afflict some new designs without anyone knowing. This test is flawed. The sample points on the right represent a sinewave of +3dBFS. If you would sample a sinewave on the tops, you would indeed get the picture as shown on the left. If you move the sample points to the positions as shown in the picture on the right, the sample points would be 3dB down and they would still represent a sinewave of 0dBFS. Following this reasoning, one could say that if a DAW normalizes by simply measuring the highest sample and scaling all others accordingly, that DAW is flawed too. It should at least "reconstruct" the whole waveform to be able to determine the *real* maximum amplitude. So the question is: do we know how a DAW measures the maximum amplitude? Is it documented in the manual/specs? If not, using -3dBFS is always safe because this is the worst case we could encounter as seen in the test. In reality, a piece of audio with thousands of samples would probably have a few that are nearly on the top of a loudest sinewave, statistically speaking, so -1dBFS would *probably* do as well as someone else already mentioned. Interesting stuff when you think about it.... Meindert |
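The geometry both posts refer to can be checked numerically. The short Python/numpy sketch below is only an illustration (the 8x polyphase resampling is a stand-in for a player's reconstruction filter, not anything from the thread): an fs/4 sine whose sample instants all land 45 degrees away from the waveform peaks has a largest sample about 3dB below the true peak, so scaling those samples up to 0dBFS implies a reconstructed peak near +3dBFS.

    import numpy as np
    from scipy.signal import resample_poly

    n = np.arange(4096)
    # fs/4 sine whose sample instants all fall halfway between the waveform peaks.
    x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
    print("largest sample: %.3f (about -3 dB re the sine peak)" % np.max(np.abs(x)))

    # "Normalize" using the sample values alone, as the hypothetical DAW would.
    x_norm = x / np.max(np.abs(x))

    # Approximate the reconstructed waveform by 8x oversampling.
    x_os = resample_poly(x_norm, 8, 1)
    print("reconstructed peak: %+.2f dBFS" % (20 * np.log10(np.max(np.abs(x_os)))))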
#30
Posted to rec.audio.pro,uk.rec.audio
In article , Meindert Sprang wrote:
> "Jim Lesurf" wrote in message ...
>> I've now done this and the results can be seen at
>> http://www.audiomisc.co.uk/HFN/OTTre...d/results.html
>>
>> My thanks to Keith Howard for kindly agreeing to let me publish his measured results. Note that they were made in 2007, so they only cover the players and DACs he had to hand at that time. But - rather depressingly - all but one of the nine he tried showed problems coping with the waveform that has peaks at +3dBFS.
>>
>> FWIW, personally I'd love to see all reviews of DACs or players use the 'waveform from hell' I devised for the original article (a link to that is on the above page) as a test of how they cope - or not! :-) I have the uncomfortable feeling that whilst reviews continue to overlook this area, problems will continue to afflict some new designs without anyone knowing.
>
> This test is flawed.

Depends what you mean by "flawed". Cf. below...

> The sample points on the right represent a sinewave of +3dBFS.

Correct. That is what sampling theory, etc, indicates the samples should mean when reconstructed, if we assume the sampling/filtering at the ADC was done using standard-theory time-symmetric sinc-like filters.

> If you were to sample a sinewave on the tops, you would indeed get the picture as shown on the left. If you move the sample points to the positions shown in the picture on the right, the sample points would be 3dB down and they would still represent a sinewave of 0dBFS.
>
> Following this reasoning, one could say that if a DAW normalizes by simply measuring the highest sample and scaling all others accordingly, that DAW is flawed too.

I tend to agree. However this once again depends on how you choose to define "flawed". Information theory is quite happy to accept the sample values as representing a waveform that peaks at +3dBFS. And the critical point is that some CDs, etc, do carry sample values that will require excursions above 0dBFS.

So I don't think the *test* is "flawed", given that players may often be presented with series of samples that require peaks above 0dBFS to reconstruct as part of the music. Saying the *discs* are "flawed" might be a better argument. But in theory they should be fine *provided* the player/DAC designer realised the implications of sampling theory, etc, and made a player/DAC that copes as theory indicates.

> It should at least "reconstruct" the whole waveform to be able to determine the *real* maximum amplitude.

I agree. However the reality is that some Audio CDs, etc, *do* give series of sample values that reconstruct to have excursions above 0dBFS. Hence the test exposes that some players may have a problem reconstructing such waveforms.

The problem then becomes a real one for those who buy CDs/files. Should their player/DAC cope with such material or not? My view is that it should, for two reasons:

1) Such material exists, so a player/DAC should reconstruct it in an orderly way in accord with the sampling theorem, etc.

2) It is simple enough these days for the designer to arrange (1) if they have a clue. So why should they refuse, given the existence of such material?

The snag being: how to tell, if no one checks using such a test or examines the CDs to see if any require this? The test exposes a problem area.

In an ideal world, though, I'd certainly *much* prefer those who release CDs and files (and stream) to avoid any intersample peaks reaching 0dBFS or more. Alas, that world isn't the one we live in. :-/

> So the question is: do we know how a DAW measures the maximum amplitude?

Nielsen and Lund have looked at such issues, IIRC, in AES papers. However I assume that it would depend on the details of the specific cases and equipment. The actual maximum possible intersample peak values will depend on the bandwidth/shape/etc of the source material being processed, as well as the sample rates, etc. So the complication is that a precise figure will depend on the material as well as the details of the system used. Overall, though, I'd expect allowing around 3dB to be generally 'safe'. Cf. below.

One problem, alas, is the obsession some CD/file 'mastering' people have with LOUDNESS. To avoid intersample peak problems, that would have to be tackled, I think. TBH I wonder if some of those 'mastering' have a clue about any of these questions, or give a hoot... :-/

FWIW, if you look at my website and many other places you will find plenty of examples of commercial CDs 'mastered' with plenty of samples in the range above -1dBFS. My experience is that 'classical' types of music avoid these problems because those who make the CDs take more care and aren't obsessed with loudness. But pop/rock material is often LOUD and the assumption seems to be "that is what sells".

> Is it documented in the manual/specs? If not, using -3dBFS is always safe, because that is the worst case we could encounter as seen in the test.

The peak excursions can potentially be much higher than 3dB. E.g. if you look at

http://www.audiomisc.co.uk/HFN/OverTheTop/OTT.html

you can see where I got over +5dBFS for a deliberately chosen 'waveform from hell'. 8-] So if you wish to be *really* careful you'd need to avoid samples going above -5dBFS. But I would agree that such extreme examples would be unlikely in real practice with acoustic music. (Not so sure for 'electronic' creations for pop/rock!)

> In reality, a piece of audio with thousands of samples would probably have a few that land nearly on the top of the loudest sinewave, statistically speaking, so -1dBFS would *probably* do as well, as someone else already mentioned.

FWIW, my own opinion is that for material at rates like 48k or 44.1k it is wisest to ensure no samples go above about -3dBFS, although more like -1 or -2 may often be OK, depending on the material. That was why I raised this issue when people were discussing 'normalisation'. The bottom line is that simply normalising so the biggest sample sits at 0dBFS is unwise.

IIRC, if you look at BBC Radio 3 material, they generally try to keep peak samples below about -4dBFS. (This is even more important for 'lossy' encoding methods like AAC, which may alter the size of peaks.) So BBC R3's caution puts them in between the 3dB value from the simple sinewave example Keith used and my 'waveform from hell'.

BTW, if anyone wants to experiment with the 'waveform from hell' it is still available as a zipped wave file from

http://jcgl.orpheusweb.co.uk/temp/WaveFromHell.zip

(Actually a pair of wave files with different sign relationships left/right.) Use with care, though; I'm not sure how happy some amps or speakers would be with that waveform. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
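One way a DAW or mastering step could act on the advice above is to normalise against an oversampled (approximate true-peak) estimate instead of the raw sample maximum. The sketch below is an illustration of that idea in Python/numpy/scipy, not a description of how any particular DAW works; the 4x oversampling factor and the -1 dBFS default are assumptions.

    import numpy as np
    from scipy.signal import resample_poly

    def true_peak_normalize(x, target_dbfs=-1.0, oversample=4):
        """Scale x so its approximate reconstructed (inter-sample) peak sits at target_dbfs."""
        est_peak = np.max(np.abs(resample_poly(x, oversample, 1)))
        return x * (10 ** (target_dbfs / 20) / est_peak)

    # Example use: safe_audio = true_peak_normalize(audio, target_dbfs=-1.0)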
#31
Posted to rec.audio.pro,uk.rec.audio
On Thu, 19 Jul 2012 15:13:45 +0100, Jim Lesurf wrote:
> In article , Scott Dorsey wrote:
>> Consumer equipment is designed to be as cheap as absolutely possible.
>>
>> --scott
>
> My take on that is slightly different: that consumer equipment is made to be *sold* (at a profit), not to be *used*. Some designers/makers may be more concerned for the end-user (rather than the customer) than others. But I'm not sure price is always a good indicator...

Apologies for the lateness of my response, but I've only just subscribed to this NG and I feel that I have a useful contribution to make...

Almost certainly cost will be a major factor (even when, as in the case of Akai's flagship GX630DBm at that time, it is only a matter of shaving a few pence off the cost of a piece of kit priced close to 500 quid). In this case, looking at the plots, it seems more likely (especially so for the Pioneer DV-939A) that the analogue output stage of the DAC or the following stages were starved of sufficient DC bias voltage to encompass the full peak-to-peak swing, i.e. they were clipping the output waveform.

The extra 5dB required for the 'waveform from Hell' is pretty close to the 6dB mark (which I suspect would only just exceed that of the _ultimate_ 'waveform from Hell'), which suggests a doubling of bias voltage over and above that required to prevent clipping at 0dB. If the 0dB voltage level is, as is commonly the case, based on the 0dBm 600R reference level (775mV rms sine), the peak-to-peak voltage swing requirement at +6dB becomes 4.3834V. That is theoretically possible to achieve with a single-ended buffer amp powered from a 5 volt rail (quite possibly just achievable with a FET-based amplifier stage).

In this case, I suspect the real culprit isn't so much the extra cost of doubling the output buffer amp rail voltage as a lack of attention to detail, since the internal comparator reference voltage setting can be arbitrarily adjusted[1] to avoid such interpolated excursions exceeding whatever the peak-to-peak clipping limit happens to be in the analogue output stage(s)[2].
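A quick sanity check of that peak-to-peak figure in Python, treating the 0dB level as the 0dBm/600R reference of 775mV rms and "+6dB" as a straight doubling of voltage, as in the post (my arithmetic, not the original poster's):

    import math

    v_ref_rms = 0.775                      # 0 dB taken as 775 mV rms
    v_rms_plus6 = 2 * v_ref_rms            # +6 dB treated as a voltage doubling
    v_pp = v_rms_plus6 * 2 * math.sqrt(2)  # rms -> peak -> peak-to-peak for a sine
    print("%.4f V peak-to-peak" % v_pp)    # about 4.384 V, in line with the figure above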
Getting back to the order of processing of digitised vinyl recordings (whether direct or from tape dubs): I would suggest you look at the whole waveform and home in on suspicious peaks manually, and apply clip/pop filtering to any that are obviously pops or clicks rather than musical transients.

Once you've cleaned up such blatant pops and clicks, scan the waveform to determine whether or not you now have headroom for normalising the tracks, and assess whether the gain values look reasonable for how the level appears by eye in the waveform editor display (it's still possible that some strong clicks remain without showing in the edit display). Once you've done this, you can apply normalisation and verify that, in bulk, the result still looks (and sounds) OK. You could home in on any peaks to verify whether they represent musical transients or previously missed clicks or pops.

I tend not to apply declicking to the whole recording, since it can have distorting effects on certain musical instruments (trumpets can be very badly mangled by this process). I find such processing is best concentrated on the quieter parts (typically intros and outros). Aside from very loud clicks or pops, the music in the louder parts will mask modest scratch noise and clicks that would otherwise become objectionable in the very quiet parts of the recording.

I don't think there are any cleanup filters that can be trusted to automatically de-click / de-pop / de-scratch a complete vinyl recording in one go without risk of objectionable distortions arising from the processing, but I could well be out of touch with the latest developments in such DSP technology[3]. If you are going to attempt an all-in-one cleanup process, audition the result thoroughly before deleting the original unprocessed wav file.

Talking of which, deleting such files after producing a cleaned-up version was quite common back in the day, some ten to fifteen years ago, but with the much larger disk drive capacities now available there's no longer any need for such space savings. As long as you archive the original wav files, you can be as cavalier as you like with the post-digitising processing.

[1] Of course, all bets are off if the DAC chip employed offers no such adjustment.

[2] Assuming the manufacturer isn't totally obsessed with the 0dBm 600R reference, or else is prepared to adjust to this reference by adding the extra gain required in a separate buffer amp fed from a voltage rail that allows a peak-to-peak swing without clipping.

[3] After re-researching the subject of DSP software to bypass the Dolby decoder stage in the tape deck itself and apply the Dolby B decoding function in software, I don't hold out much hope for any improvements in such de-clicking software.

--
Regards, J B Good