#41
In article ,
Leonid Makarovsky wrote:
>Scott Dorsey wrote:
>: But for any other level changes that are NOT powers of two, there will be
>: some rounding error introduced with the multiplication, and that is what
>: folks are trying to avoid.
>Is 6 a power of 2? Did you mean that the odd numbers would introduce
>rounding error?

No. A 6 dB increase is a doubling of level. If you multiply all the elements
in your data file by two, the meters jump 6 dB. In binary, multiplying by two
is just done with shifting, the same way that in base 10, multiplying by ten
is just done with shifting.

>Say if I went from -5 dB to 0 dB, I would've had a problem? Well, I guess I
>was going from some negative number to 0 dB. I wish I knew better.

Well, the chances that you'll ever want to do anything in precise 6 dB
increments are pretty small.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
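A minimal sketch of the point Scott is making, in Python (illustrative only,
with made-up sample values, not anything from the thread's software):
multiplying 16-bit samples by two is the same as shifting each value left by
one bit, and a doubling of amplitude corresponds to about 6.02 dB on the
meters.

    import math

    samples = [1000, -2500, 12000, -15000]            # hypothetical 16-bit PCM values

    doubled_by_multiply = [s * 2 for s in samples]    # multiply every element by two
    doubled_by_shift = [s << 1 for s in samples]      # one-bit left shift, same result
    assert doubled_by_multiply == doubled_by_shift    # exact, no rounding error

    # Doubling the amplitude raises the level by 20*log10(2) ~= 6.02 dB
    print(round(20 * math.log10(2), 2))               # 6.02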
#42
Scott Dorsey wrote:
: No. A 6 dB increase is a doubling of level. If you multiply all the elements
: in your data file by two, the meters jump 6 dB.
: In binary, multiplying by two is just done with shifting, the way in base 10,
: multiplying by ten is just done with shifting.

Ok, I got it, thanks. So let me make sure I understand it: when I normalize,
I don't add to the volume, but multiply by some number. So it looks more like
Y(x) = n * x rather than Y(x) = x + n.

Thanks.
--Leonid
#44
In article ,
Leonid Makarovsky wrote:
>Scott Dorsey wrote:
>: No. A 6 dB increase is a doubling of level. If you multiply all the elements
>: in your data file by two, the meters jump 6 dB.
>: In binary, multiplying by two is just done with shifting, the way in base 10,
>: multiplying by ten is just done with shifting.
>Ok, I got it, thanks. So let me make sure I understand it: when I normalize,
>I don't add to the volume, but multiply by some number. So it looks more like
>Y(x) = n * x rather than Y(x) = x + n.

Right. If you added, what you would get would be a DC offset.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
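To make the distinction concrete, a small illustrative Python sketch (the
sample values are invented for the example): multiplying scales the waveform,
which is a gain change, while adding a constant leaves the waveform shape
alone and just shifts its mean, which is exactly a DC offset.

    samples = [100, -100, 250, -250, 0]        # hypothetical zero-mean snippet

    gained = [2 * s for s in samples]          # Y(x) = n*x : amplitude doubles, mean stays 0
    offset = [s + 500 for s in samples]        # Y(x) = x+n : amplitude unchanged, mean moves

    mean = lambda xs: sum(xs) / len(xs)
    print(mean(gained))                        # 0.0   -> louder signal, no DC offset
    print(mean(offset))                        # 500.0 -> same loudness, pure DC offset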
#46
Scott Dorsey wrote:
:don't add to the volume, but multiply by some number. So it looks more like
:Y(x) = n * x rather than Y(x) = x + n
: Right. If you added, what you would get would be a DC offset.

Then it does make sense to normalize with multiples of 2. Now in SoundForge 5,
how do I make sure I normalize with multiples of 2? I don't even normalize
both channels at the same time; I normalize each channel individually by peak,
making sure that the average volume level is about the same.

--Leonid
#48
Leonid Makarovsky wrote:
Scott Dorsey wrote:
:don't add to the volume, but multiply by some number. So it looks more like
:Y(x) = n * x rather than Y(x) = x + n
: Right. If you added, what you would get would be a DC offset.
Then it does make sense to normalize with multiples of 2. Now in SoundForge 5,
how do I make sure I normalize with multiples of 2? I don't even normalize
both channels at the same time; I normalize each channel individually by peak,
making sure that the average volume level is about the same.

You can't. So you have to do the normalizing as late as possible and live with
whatever rounding you get. Life is like that.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#50
Leonid Makarovsky wrote:
Scott Dorsey wrote:
:don't add to the volume, but multiply by some number. So it looks more like
:Y(x) = n * x rather than Y(x) = x + n
: Right. If you added, what you would get would be a DC offset.
Then it does make sense to normalize with multiples of 2.

Close, except that it actually makes sense to normalize with multiples of 1
rather than 2 (i.e. positive integers). And except that others seem to be
saying that in the real world, software doesn't support it.

The best solution by far is to normalize at some higher sample size (24-bit
or 32-bit) and then convert down to 16-bit later.

- Logan
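As a rough sketch of the workflow Logan suggests (illustrative Python using
NumPy; the function name, the -1 dB target, and the simple TPDF dither are
assumptions for the example, not anything a particular editor documents): do
the gain math at high precision, and only dither back to 16-bit at the very
end.

    import numpy as np

    def normalize_then_dither(samples_i16, target_dbfs=-1.0):
        # Sketch: apply gain in 64-bit float, then TPDF-dither back to 16-bit
        x = samples_i16.astype(np.float64)                   # work at high precision
        target_peak = (10.0 ** (target_dbfs / 20.0)) * 32767.0
        gain = target_peak / np.max(np.abs(x))               # almost never a nice integer
        y = x * gain
        tpdf = (np.random.uniform(-0.5, 0.5, y.shape) +      # triangular dither,
                np.random.uniform(-0.5, 0.5, y.shape))       # ~2 LSB peak-to-peak
        return np.clip(np.round(y + tpdf), -32768, 32767).astype(np.int16)

    quiet = np.array([1200, -3400, 8000, -8100], dtype=np.int16)   # made-up low-level audio
    print(normalize_then_dither(quiet))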
#52
Arny Krueger wrote:
"Chris Hornbeck" wrote in message On Tue, 16 Nov 2004 22:09:24 -0500, "Arny Krueger" wrote: It may be advantageous to dither that; ie, sometimes shift in a 1 instead. Exactly. The problem is that the binary represenation of an analog voltage is rarely exact. There is almost always quantization error. When you double the data by means of simple shifting, you also double the quantization error. OK, but how about in a theoretical, ideal case that's properly dithered and has no quantization error. Does a shift cause any quantization error? The shift does not add or subtract quantization error. Instead, it multiplies the error that is already there. As was pointed out, the SNR does not change, but the noise level increases. Given that the final playback level will presumably be unchanged, the only practical effect of the shift is that subsequent processing MAY be more precise due to the newly available low-order bits. Whether that actually happens depends on the algorithms used. |
#54
"Ed Anson" wrote in message
Arny Krueger wrote:
"Chris Hornbeck" wrote in message
On Tue, 16 Nov 2004 22:09:24 -0500, "Arny Krueger" wrote:

It may be advantageous to dither that; ie, sometimes shift in a 1 instead.

Exactly. The problem is that the binary representation of an analog voltage is
rarely exact. There is almost always quantization error. When you double the
data by means of simple shifting, you also double the quantization error.

OK, but how about in a theoretical, ideal case that's properly dithered and
has no quantization error. Does a shift cause any quantization error?

The shift does not add or subtract quantization error. Instead, it multiplies
the error that is already there. As was pointed out, the SNR does not change,
but the noise level increases.

Given that the final playback level will presumably be unchanged, the only
practical effect of the shift is that subsequent processing MAY be more
precise due to the newly available low-order bits. Whether that actually
happens depends on the algorithms used.

IME these claims about the alleged technical superiority of doubling or
halving of levels have a lot more to do with emotion than with practical
technology. We recently deconstructed similar claims related to sample rate
conversions.
#56
Logan Shaw wrote:
: The best solution by far is to normalize at some higher sample
: size (24-bit or 32-bit) and then convert down to 16-bit later.

If I record at 24 bit (from a record), then yes. But what if I digitally
x-fer audio from, say, a LaserDisc player, which is 16 bit? Do you recommend
upsampling it to 32 bit, performing the operations, and then downsampling it
back to 16 bit?

--Leonid
#58
Leonid Makarovsky wrote:
Logan Shaw wrote:
: The best solution by far is to normalize at some higher sample
: size (24-bit or 32-bit) and then convert down to 16-bit later.
If I record at 24 bit (from a record), then yes. But what if I digitally
x-fer audio from, say, a LaserDisc player, which is 16 bit? Do you recommend
upsampling it to 32 bit, performing the operations, and then downsampling it
back to 16 bit?

Well, first of all, please be aware that I do pretty much just live sound and
not any kind of sophisticated recording, so anything I say here is based more
on my knowledge of computer science (in which I do have a degree...) and math
than on practical experience with real digital sound editing software.

But yes, even if you are just normalizing, I would probably convert to 24-bit
(or 32-bit) to do the work and then dither back to 16-bit. If you are doing a
sample rate conversion and normalizing, I would definitely do it.

The first and most obvious reason is that going to a higher bit depth is
certainly not going to hurt. There are no negative effects on the sound.

Plus, normalizing is basically going to trash the accuracy of the least
significant bit (unless you do it by multiplying by an integer, which as
discussed before is exceedingly unlikely), so it makes sense to add precision
beyond the (original) least significant bit so that you aren't throwing away
information. Things are a little clouded by the fact that the least
significant bit is probably already mostly trash, but there's no reason to
make it worse.

Sample rate conversion, if reducing the sample rate, could/should actually
give you in effect additional bits of information. That is, if you go from
16-bit 48 kHz down to 44.1 kHz, there is enough information there (assuming
the original 16-bit samples are not garbage in the lower-order bits) to
create more than 16 bits of information per sample. I think of it sort of
like what you do when you take a survey or a measurement in science: by
taking multiple measurements and combining the results, you can get a more
accurate value than any one of the samples gives you. So each of the samples
at the 44.1 kHz rate is composed of information gleaned from more than one
sample at the 48 kHz rate.

This is where my math background stretches very thin, but if you converted
from 88.2 kHz / 16-bit to 44.1 kHz, you should have enough info for maybe
another full bit of sample precision, i.e. you could get perhaps 44.1 kHz /
17-bit worth of information out of the 88.2 kHz / 16-bit original material.
When making a less dramatic conversion (e.g. 48 kHz to 44.1 kHz), you don't
gain that much extra, but the point is that the original sample size is not
big enough to contain the extra information you can get by combining multiple
samples into one sample, even if you only have 480 samples' worth of
information to use to build 441 new samples.

By the way, if anyone who really does know this stuff backwards and forwards
would like to comment on whether what I've said is accurate, that might be
nice. :-)

- Logan
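A rough numerical check of the averaging idea described above (illustrative
Python with NumPy; the 2:1 pair-averaging is a deliberately crude stand-in
for a real sample rate converter): averaging pairs of independently dithered
samples cuts the random error by about a factor of sqrt(2), roughly 3 dB,
i.e. on the order of half a bit of extra precision.

    import numpy as np

    rng = np.random.default_rng(0)
    true_level = 1234.56                       # an "analog" value, in 16-bit LSB units
    n = 100_000                                # pretend these are samples at 88.2 kHz

    # Quantize with TPDF dither, the way a well-behaved 16-bit source would be made
    tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
    quantized = np.round(true_level + tpdf)

    # Crude 2:1 "conversion": average neighbouring samples
    averaged = quantized.reshape(-1, 2).mean(axis=1)

    print(np.std(quantized - true_level))      # error of the raw 16-bit samples
    print(np.std(averaged - true_level))       # ~1/sqrt(2) of that: about 3 dB better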
#60
"Leonid Makarovsky" wrote in message
Logan Shaw wrote:
The best solution by far is to normalize at some higher sample size (24-bit
or 32-bit) and then convert down to 16-bit later.

If I record at 24 bit (from a record), then yes.

Agreed.

But what if I digitally x-fer audio from, say, a LaserDisc player, which is
16 bit?

As a rule you should be able to transcribe an existing high-quality recording
without a lot of editing or processing. This isn't an LP or cassette source;
it's actually a fairly modern format with decent dynamic range. Therefore
there should be no reason to do much of anything but simply re-record the
audio.
#62
Logan Shaw wrote:
: The first and most obvious reason is that going to a higher bit
: depth is certainly not going to hurt. There are no negative
: effects on the sound.

I see. Thanks.

: Sample rate conversion, if reducing the sample rate, could/should
: actually give you in effect additional bits of information.

In my case I was increasing the sample rate from 44.1 to 48. I wish DVDs had
44.1 audio in their specs.

Thanks.
--Leonid
#64
Arny Krueger wrote:
: But what if I digitally x-fer audio from, say, a LaserDisc player, which is
: 16 bit?
: As a rule you should be able to transcribe an existing high-quality
: recording without a lot of editing or processing. This isn't an LP or
: cassette source; it's actually a fairly modern format with decent dynamic
: range.
: Therefore there should be no reason to do much of anything but simply
: re-record the audio.

Sample rate conversion was needed to match the DVD format. As for
normalizing, the volume there was really low: the RMS was -24 dB and the
peak was -8 dB. I didn't do anything else.

--Leonid
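For reference, the arithmetic behind that kind of peak normalize (a Python
sketch; the -1 dB target is an assumption for the example, not something
stated in the post): raising a -8 dB peak to -1 dB is a +7 dB change, a
multiplier of about 2.24, which is nowhere near a power of two, so some
rounding is unavoidable at 16 bits.

    peak_dbfs = -8.0                             # measured peak
    target_dbfs = -1.0                           # assumed normalization target
    gain_db = target_dbfs - peak_dbfs            # +7 dB
    gain_linear = 10 ** (gain_db / 20.0)         # ~2.2387, not a power of two

    sample = 12345                               # an arbitrary 16-bit sample value
    print(gain_db, round(gain_linear, 4))        # 7.0 2.2387
    print(round(sample * gain_linear))           # ~27637: the unavoidable rounding step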
#68
Ryan wrote:
(Scott Dorsey) wrote in message ...
But for any other level changes that are NOT powers of two, there will be
some rounding error introduced with the multiplication,

Powers of two are 2, 4, 8, 16, 32, 64, etc., yes? Do you mean multiple of
two?

Even "multiple of two" is not correct, although it's closer. The set of
integers is closed under the multiplication operator[1]. Therefore, any
integer will do. (Although zero and negative integers will tend to have OTHER
negative effects.)

- Logan

[1] In other words, for all A and B in the set (of integers), A*B is also in
the set.
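A small Python illustration of that closure point (made-up sample values):
any integer gain keeps integer samples exact, while a non-integer gain
forces a rounding step.

    samples = [101, -250, 4001]

    times_three = [s * 3 for s in samples]            # integer gain: always exact
    print(times_three)                                # [303, -750, 12003]

    gain = 10 ** (2 / 20)                             # a +2 dB gain, ~1.2589: not an integer
    scaled = [s * gain for s in samples]
    print(scaled)                                     # fractional values...
    print([round(v) for v in scaled])                 # ...so each sample must be rounded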
#70
Ryan wrote:
(Scott Dorsey) wrote in message ...
Yes. And for a 6 dB increase all you need to do is a left shift, so there is
no loss of precision. But for any other level changes that are NOT powers of
two, there will be some rounding error introduced with the multiplication,
and that is what folks are trying to avoid.

Powers of two are 2, 4, 8, 16, 32, 64, etc., yes?

Right.

Do you mean multiple of two?

No. The first shift doubles it. The second shift doubles that, giving you
four times. The third shift doubles it again, giving you eight. The doubling
operation is the only thing you can do that has a guarantee of never having
rounding error.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
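In code form (an illustrative Python snippet, not from the thread): each
additional one-bit left shift is another exact doubling, which is why only
the power-of-two gains are guaranteed rounding-free on integer samples.

    sample = 1375                                  # arbitrary 16-bit value
    print(sample << 1, sample << 2, sample << 3)   # 2x, 4x, 8x: 2750 5500 11000, all exact
    assert sample << 3 == sample * 8               # shifting k bits == multiplying by 2**k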
#72
Mike Rivers wrote:
The reason to normalize is because the average listener is too lazy to reach
over (or get up) and turn up the volume. And on a lot of those portable
players that people use today, it's damn inconvenient to adjust the volume
because you don't have a knob, you have up/down buttons. I'm sure someone
could come up with a situation where full scale on the CD is desirable.

Say a user with some cheap discman with low-sensitivity earbuds in a noisy
environment. In that case, a CD that is peaking at -18 dB may not produce
sufficient output even with the volume knob maxed out.

I'm by no means approving of the "CD volume war" that has been going on for
the last 10 years, but simple normalizing provides several benefits for only
a minute degradation in sound quality, imho.

--
Eric (Dero) Desrochers
http://homepage.mac.com/dero72
Hiroshima 45, Tchernobyl 86, Windows 95
#74
"Eric Desrochers" wrote in message
Mike Rivers wrote:
The reason to normalize is because the average listener is too lazy to reach
over (or get up) and turn up the volume. And on a lot of those portable
players that people use today, it's damn inconvenient to adjust the volume
because you don't have a knob, you have up/down buttons. I'm sure someone
could come up with a situation where full scale on the CD is desirable.

Coming up with a reason to have peaks that come within a few dB of full scale
is pretty easy, but coming up with a reason to have peaks that go to exactly
FS is pretty hard. After all, if you miss FS by 1 dB people can hardly hear
the difference between that and FS. OTOH, it's not unusual to find converters
that act strange at some point within that last 1 dB before FS.

Say a user with some cheap discman with low-sensitivity earbuds in a noisy
environment. In that case, a CD that is peaking at -18 dB may not produce
sufficient output even with the volume knob maxed out.

The 21st-century real-world version of that story is typified by a European
iPod with Etymotic ER-4 or ER-6 earphones plugged into it. The problem was so
bad that Etymotic came out with a special high-output model of the ER-6.

I'm by no means approving of the "CD volume war" that has been going on for
the last 10 years, but simple normalizing provides several benefits for only
a minute degradation in sound quality, imho.

Normalizing to -1 dB can work and provide few sonic disadvantages, if any.
Of course, not all music is optimized artistically by being played at the
highest reasonable levels.
#76
I used the term "full scale" but really intended to mean "near full
scale"! I know of those converters with FS problems... -- Eric (Dero) Desrochers http://homepage.mac.com/dero72 Hiroshima 45, Tchernobyl 86, Windows 95 |