  #121
Posted to rec.audio.tubes
Patrick Turner
 
All right, Patrick



Henry Pasternack wrote:

"Patrick Turner" wrote in message ...
Alright, but with 22.55k and 0.13 µF the product is 2,931 µs, not
3,180 µs, so the -3 dB point is 54.2 Hz.


I neglected to consider the effect of the 3.2K resistor, which adds
to the total series resistance seen by C1. This lowers the corner
frequency.


At 50 Hz the 0.1 µF has Z = 31.8k. The impedance of 3.2k in series with a reactance of 31,800 ohms
is the square root of the sum of the squares of XC and R, so the total is about 31.96k.

The 3.2k therefore has little effect on the 50 Hz pole, but much more effect at 500 Hz, where the 0.1 µF's reactance is only 3,180 ohms.
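
In code, that impedance check looks like this (a minimal Python sketch; component values as quoted above):

import math

R = 3_200       # ohms, the resistor in the shunt branch
C = 0.1e-6      # farads

for f in (50, 500):
    Xc = 1 / (2 * math.pi * f * C)    # capacitive reactance
    Z = math.hypot(R, Xc)             # |Z| = sqrt(R^2 + Xc^2)
    print(f"{f:4d} Hz: Xc = {Xc/1e3:6.2f}k, |Z| = {Z/1e3:6.2f}k")

# 50 Hz:  Xc = 31.83k, |Z| = 31.99k  -> the 3.2k barely matters
# 500 Hz: Xc =  3.18k, |Z| =  4.51k  -> here it matters a lot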




If C2 wasn't there, the attenuation at the R1 output should be negligible
at 10 Hz, but by 10 kHz should be exactly -20 dB; you should have
poles at 50 Hz and 500 Hz. So R1 = 9 × R2, and if R2 = 3.2k, R1 = 28.8k.
Can you show by first-principles reasoning that I am wrong?


But C2 is there, and the network never hits a flat response of -20dB.
At midband (1000Hz), where RIAA is specified to be -19.911dB, the
reactance of C2 is a non-negligible 4.8K Ohms.


I like to think about what happens if C2 isn't there, then what the effect is when it is.
C2 puts the poles for 3180 µs and 318 µs where they should be, but if C2 wasn't there
the poles would shift a bit; you should still get an ultimate 20 dB attenuation without C2 present.

Hence where the source R is very low, R1 should always be 9 × R2 to make a 20 dB divider, since C1 has very low Z at,
say, 22 kHz.



I ain't very good with equations with reactive j quantities, vector
analysis etc. I just measure and trim; it's quicker than calculations.


It's certainly quicker if you're no good with equations.

Everywhere I look around the web there are different supposedly relevant
values of R&C which are claimed to give the same error-free outcome, but
they are just not all right; especially RIAA feedback networks.
They all blather on with schematics and calculations, but few
measurements.


If you account for all strays and second-order effects, calculations
will be exact. It's easier to use a reasonable approximation to get
within, say, 0.5dB and trim the network on the bench to get it
spot-on.

Well, that is about true. But trimming should not be necessary when
a computer is trying to see us right; we employ computers to be
accurate within 0.1%.


We would have no RIAA curve in the first place without calculations.
I don't deny that you can get good results without math, but in this
case I think your lack of math background causes you to be unduly
suspicious of computers and calculations.


We shouldn't be having this discussion. The program should just work without question and be 0.1% accurate,
and no trimming should be required unless it is to make sure the parts from the supplier are exactly the values
specified by the program. I think my capacitance meter does not lie to me too much.



The noise generated by the network after the first gain stage is
quite negligible compared to the noise already in the amp from
V1 grid input noise and flicker noise and shot noise.


The 20dB loss introduced by the network adds directly to the noise
figure measured at the input to the second stage. If the first stage
had only 20dB of gain, at midband the signal level at the input to
the second stage would be the same as at the first stage, and so
the second stage noise would be as significant as the first's.


But the RIAA network attenuates its own noise. The noise at the RIAA filter input
has, say, a 20 kHz bandwidth, but the bandwidth at its output is only 50 Hz, so the noise voltage is reduced by the square root of the
bandwidth reduction of 400 times, i.e. noise is reduced 20 times. Only LF noise gets through.
Tubes and mosfets and so on have flicker or popcorn noise with LF content, so that gets through,
plus the LF noise of the RIAA series R, but with anything up to R1 = 100k it's not a problem.
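
The square-root-of-bandwidth scaling is easy to verify. A minimal sketch, assuming T = 300 K and using the 100k worst-case series resistance mentioned above:

import math

k, T, R = 1.38e-23, 300.0, 100e3   # Boltzmann, kelvin, ohms

def thermal_noise(bw_hz):
    # rms thermal noise voltage of R over a bandwidth: e = sqrt(4kTRB)
    return math.sqrt(4 * k * T * R * bw_hz)

wide, narrow = thermal_noise(20_000), thermal_noise(50)
print(f"20 kHz BW: {wide*1e6:.2f} uV rms, 50 Hz BW: {narrow*1e6:.2f} uV rms")
print(f"ratio: {wide/narrow:.1f}x")   # sqrt(400) = 20, as claimed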

There
doesn't seem to be any advantage to degeneration in the cathode
(or source) of the first stage except as required to set the bias
current and to ensure sufficient input overload margin. Every dB of
gain in the first stage lessens the effective noise contribution of
the second stage.


Where you have a CCS-loaded tube for V1, the bias R does not reduce gain, which is still near µ.
V1 gain should be as high as possible for the SNR to be good further along.
The unbypassed Rk for V1 is a source of noise to the tube, but would only be a bother if
you have a low-level MC input connected. Rk should be bypassed with 1,000 µF.

Patrick Turner.



-Henry


  #122
Posted to rec.audio.tubes
Henry Pasternack
 
All right, Patrick

"Patrick Turner" wrote in message ...
I neglected to consider the effect of the 3.2K resistor, which adds
to the total series resistance seen by C1. This lowers the corner
frequency.


At 50 Hz the 0.1 µF has Z = 31.8k. The impedance of 3.2k in series with a reactance
of 31,800 ohms is the square root of the sum of the squares of XC and
R, so the total is about 31.96k.

The 3.2k therefore has little effect on the 50 Hz pole, but much more effect at
500 Hz, where the 0.1 µF's reactance is only 3,180 ohms.


No, that's not the right calculation. With respect to C1, the 22.1K and
the 3.2K resistors are in series, so they add. The truly right thing to do
is to work out the polynomial expression for the amplitude response in
terms of Zsource, R4, R5, R6, C1, C2, and f. Then, if you factor the
denominator, you'll be able to see exactly which component values
appear in the expressions for each time constant.

Or just trust Lipshitz and Jones.
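
For the numerically inclined, the factoring can be skipped entirely by evaluating the divider against the ideal curve. A minimal sketch, assuming the topology described in this thread -- a series 22.1k feeding a node with C2 = 0.03 µF in parallel with [3.2k + 0.1 µF] to ground, and a near-zero source impedance; those values and that topology come from the posts here, not from the Pimm schematic itself:

import numpy as np

Zs, R4, R5 = 0.0, 22.1e3, 3.2e3    # source, series R, shunt-branch R
C1, C2 = 0.1e-6, 0.03e-6

f = np.array([20., 50., 100., 500., 1e3, 5e3, 1e4, 2e4])
s = 2j * np.pi * f

Zc1, Zc2 = 1/(s*C1), 1/(s*C2)
Zshunt = Zc2 * (R5 + Zc1) / (Zc2 + R5 + Zc1)   # C2 || (R5 + C1)
H = Zshunt / (Zs + R4 + Zshunt)                # plain voltage divider

t1, t2, t3 = 3180e-6, 318e-6, 75e-6            # ideal RIAA replay shape
ideal = (1 + s*t2) / ((1 + s*t1) * (1 + s*t3))

err = 20*np.log10(abs(H / ideal))
err -= err[f == 1e3]                           # normalise at 1 kHz
for fi, e in zip(f, err):
    print(f"{fi:7.0f} Hz  error {e:+6.2f} dB")

Whatever the printout says about these particular values, the method is the point: any candidate network can be checked in a few lines without touching a soldering iron.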

I like to think about what happens if C2 isn't there, then what the effect is when it is.
C2 puts the poles for 3180 µs and 318 µs where they should be,
but if C2 wasn't there the poles would shift a bit; you should still get
an ultimate 20 dB attenuation without C2 present.

Hence where the source R is very low, R1 should always be 9 × R2 to
make a 20 dB divider, since C1 has very low Z at, say, 22 kHz.


There are many unpleasant things in life that we would like just to
disappear, so that everything else would be much easier. Alas, in
most cases (including this one), things just aren't that simple.

We shouldn't be having this discussion. The program should just work
without question and be 0.1% accurate, and no trimming should be
required unless it is to make sure the parts from the supplier are
exactly the values specified by the program. I think my capacitance meter
does not lie to me too much.


And I should be able to stack a pile of wood on my table saw, turn on
the switch, walk away, and come back a few minutes later to find all
of the pieces cut to the right size.

Actually, there are tools like that. They're CNC machines used by big
manufacturers to mass-produce furniture, pianos, loudspeaker boxes,
and so on. They cost a lot of money to design and build and program,
and each program is only good for one specific part.

An RIAA calculator is a tool. If you want to invest the time and effort
in designing and programming one to calculate values to 0.1%, then
nothing is stopping you. For hobby purposes, it's economical to use
a simpler tool that helps you get to the final result, but still requires
some effort on the part of the operator. Like my table saw.

But the RIAA network attenuates its own noise. The noise at the RIAA
filter input has, say, a 20 kHz bandwidth, but the bandwidth at its output is only 50 Hz, so
the noise voltage is reduced by the square root of the bandwidth reduction of
400 times, i.e. noise is reduced 20 times. Only LF noise gets through.
Tubes and mosfets and so on have flicker or popcorn noise with LF
content, so that gets through, plus the LF noise of the RIAA series R, but with
anything up to R1 = 100k it's not a problem.


There are three sources of noise to consider here. The first is the thermal
noise generated by the RIAA network itself due to its non-zero resistance.
The second is the noise mixed into the signal at the output of the first
tube. The third is the input-referred noise added to the signal by the
second tube.

Assume we can ignore the RIAA network's thermal noise.

If the RIAA network were lossless and the two tube stages had the same
input-referred noise, then the noise voltage contribution of the second stage,
referred to the input, would be less by a factor equal to the gain of the first
stage. At a given frequency, the RIAA network attenuates the noise
and the signal at its input equally. But the second stage's input-referred
noise, taken relative to the signal at the RIAA filter's output, is bigger
by a factor equal to the filter's loss. Therefore the filter loss
appears in the overall noise calculation as a direct increase in noise. This
is bog-standard first-year communications engineering theory that any
communications receiver designer knows by heart.
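
In numbers, the bookkeeping looks like this (a minimal sketch; the 2 µV figure and the gain of 70 are values quoted elsewhere in this thread, used here purely for illustration):

import math

e1 = 2.0e-6    # first-stage input-referred noise, V rms
e2 = 2.0e-6    # second-stage input-referred noise, V rms
G1 = 70.0      # first-stage voltage gain
L  = 10.0      # 20 dB midband filter loss as a linear factor

e2_at_input = e2 * L / G1               # loss multiplies, gain divides
total = math.sqrt(e1**2 + e2_at_input**2)
print(f"stage 2 referred to input: {e2_at_input*1e6:.2f} uV")
print(f"total: {total*1e6:.2f} uV vs {e1*1e6:.1f} uV for stage 1 alone")

With that much first-stage gain the second stage barely matters; with only 20 dB of gain (G1 = 10) its contribution would equal the first stage's, which is the point being made.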

The total noise at the output of the preamplifier is a consideration, but it's
not necessarily a meaningful number because it doesn't account for the
shape of the noise spectrum, nor the weighting curve for how the ear
perceives noise at different frequencies.

Where you have a CCS-loaded tube for V1, the bias R does not reduce
gain, which is still near µ. V1 gain should be as high as possible for the SNR to
be good further along. The unbypassed Rk for V1 is a source of noise to
the tube, but would only be a bother if you have a low-level MC input connected.
Rk should be bypassed with 1,000 µF.


I was referring to your design and Allen's, which don't use CCSs, and which
will see a gain variation with Rk.

I prefer not to use electrolytic capacitors anywhere in the signal path.

I think this discussion is starting to get a bit too nit-picky for me now. You're
welcome to reply, but I think I'll check out for the moment and get back when
I've figured out which preamp I'm going to build.

Thanks for the feedback.

-Henry


  #123
Posted to rec.audio.tubes
John Byrns
 
All right, Patrick

In article , "Henry Pasternack"
wrote:

"Patrick Turner" wrote in message

...
I missed the µ follower bit. His tube schematic does not show exactly
what is there. With Rout very low, the 0.1 and 22k give 2,200 µs???


There's an RIAA calculator here:

http://www.kabusa.com/frameset.htm?/phonpre.htm

It confirms the values in the Pimm schematic. Supposedly, the calculator
is based on math by Stanley Lipshitz. I haven't had a chance to check the
calculations, but I suspect it's correct. I guess there's interaction between
the sections that accounts for the seeming discrepancy, although it isn't
intuitively obvious to me why right at this moment.


Yes, you are quite right that there is an interaction between the
equalizer sections in the Pimm design that makes perfect RIAA equalization
impossible. If the Lipshitz paper is the one that was published in the
JAES sometime in the 1970s, I believe Stanley discussed a method to
minimize the interaction. I don't have access to this paper and have been
wondering exactly what criterion Lipshitz used for minimizing the error
produced by this interaction. If anyone knows what Stanley's method was I
would be interested in hearing an explanation. It is easy enough to
derive the transfer function for the Pimm type network and compare it with
the theoretical RIAA transfer function, which presumes two independent
networks. Presumably you could use any of several different criteria for
defining the best way to minimize the interaction you mentioned.

Another question, which I think I have asked here before but must
confess I have forgotten the answer to, is what the time constant is for
the high frequency zero in the network; does anyone know? Also, does
anyone know when this high frequency zero was added to the specification
for the RIAA eq curve? I don't remember it being part of the original
specification. Some versions also seem to include an added low frequency
roll off somewhere around 30 Hz, but this seems to have either been
removed from the specification again, or it is ignored by most audiophiles.


Regards,

John Byrns


Surf my web pages at http://users.rcn.com/jbyrns/
  #124
Posted to rec.audio.tubes
Patrick Turner
 
All right, Patrick



Henry Pasternack wrote:

"Patrick Turner" wrote in message ...
I neglected to consider the effect of the 3.2K resistor, which adds
to the total series resistance seen by C1. This lowers the corner
frequency.


At 50 Hz the 0.1 µF has Z = 31.8k. The impedance of 3.2k in series with a reactance
of 31,800 ohms is the square root of the sum of the squares of XC and
R, so the total is about 31.96k.

The 3.2k therefore has little effect on the 50 Hz pole, but much more effect at
500 Hz, where the 0.1 µF's reactance is only 3,180 ohms.


No, that's not the right calculation. With respect to C1, the 22.1K and
the 3.2K resistors are in series, so they add. The truly right thing to do
is to work out the polynomial expression for the amplitude response in
terms of Zsource, R4, R5, R6, C1, C2, and f. Then, if you factor the
denominator, you'll be able to see exactly which component values
appear in the expressions for each time constant.

Or just trust Lipshitz and Jones.


I'll just build the network in about 15 minutes and test it using my reverse eq network,
driven from a cathode follower source I have....



I like to think about what happens if C2 isn't there, then what the effect is when it is.
C2 puts the poles for 3180 µs and 318 µs where they should be,
but if C2 wasn't there the poles would shift a bit; you should still get
an ultimate 20 dB attenuation without C2 present.

Hence where the source R is very low, R1 should always be 9 × R2 to
make a 20 dB divider, since C1 has very low Z at, say, 22 kHz.


There are many unpleasant things in life that we would like just to
disappear, so that everything else would be much easier. Alas, in
most cases (including this one), things just aren't that simple.


I knew you'd say that. BUT, we should be able to get a 20 dB attenuation between LF and
22 kHz if you have 22k and 2.44k in series, with a C between the 2.44k and 0V, and the output taken from the
join of the 22k and 2.44k. It's the classic shelved response. The value of C should be 0.144 µF to get -3 dB
at 50 Hz.
I dunno exactly how to calculate where F2 would be; somewhere near 500 Hz, yes?
At, say, 500 Hz, ZC = 2,208 ohms, and the response should be about -17 dB below the LF level,
and by 22 kHz very close to -20 dB.
Then you could have an amp stage after such a filter as a buffer, high Z in, low Z out, and have
a simple RC filter for the 75 µs, say 22k and 0.0034 µF. This would give 0 dB attenuation at 10 Hz, but by
2,122 Hz it is -3 dB. The combination of the two filters gives exactly the correct RIAA.
No interactions.

In the filter I have been using I have two such cascaded filters, rather than just having
the 0.03 µF strapped across the 3.2k and 0.1 µF as in that schematic.
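
The two cascaded filters can be checked in a few lines (a minimal sketch, assuming an ideal buffer between the two sections; note that solving the shelf for a pole at exactly 3180 µs gives C = 0.130 µF rather than 0.144 µF, which would put the pole nearer 45 Hz, and the zero then lands at 318 µs automatically because R2/(R1+R2) = 1/10):

import numpy as np

R1, R2 = 22e3, 2.44e3
C = 3180e-6 / (R1 + R2)        # pole at 3180 us -> C = 0.130 uF
R3, C3 = 22e3, 75e-6 / 22e3    # 75 us section -> C3 = 0.0034 uF

f = np.array([10., 50., 500., 1e3, 2122., 1e4, 2e4])
s = 2j * np.pi * f
shelf = (1 + s*C*R2) / (1 + s*C*(R1 + R2))   # 3180/318 us shelf
lp = 1 / (1 + s*R3*C3)                       # 75 us rolloff
for fi, h in zip(f, shelf * lp):
    print(f"{fi:7.0f} Hz  {20*np.log10(abs(h)):+7.2f} dB")

So F2 (the shelf's zero) comes out at 1/(2π × 318 µs), i.e. about 500 Hz, as guessed.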



We shouldn't be having this discussion. The program should just work
without question and be 0.1% accurate, and no trimming should be
required unless it is to make sure the parts from the supplier are
exactly the values specified by the program. I think my capacitance meter
does not lie to me too much.


And I should be able to stack a pile of wood on my table saw, turn on
the switch, walk away, and come back a few minutes later to find all
of the pieces cut to the right size.


Well, when I built the last subwoofer for a customer I had a local joiner cut the 33mm thick MDF panels
with his table saw, which can fit a 2.4m x 1.2m standard sheet easily. They fitted perfectly; there were no
adjustments to be done. Every cut was perfectly square.
All I had to do was assemble the pieces onto glue beads, let them cure for a day, and then
drill and fit the dowels to make sure the box wouldn't fall apart if it was dropped.
It was so easy to build.




Actually, there are tools like that. They're CNC machines used by big
manufacturers to mass-produce furniture, pianos, loudspeaker boxes,
and so on. They cost a lot of money to design and build and program,
and each program is only good for one specific part.

An RIAA calculator is a tool. If you want to invest the time and effort
in designing and programming one to calculate values to 0.1%, then
nothing is stopping you.


But filters are just applied basic maths; pianos etc. are much more complex.

For hobby purposes, it's economical to use
a simpler tool that helps you get to the final result, but still requires
some effort on the part of the operator. Like my table saw.

But the RIAA network attenuates its own noise. The noise at the RIAA
filter input has, say, a 20 kHz bandwidth, but the bandwidth at its output is only 50 Hz, so
the noise voltage is reduced by the square root of the bandwidth reduction of
400 times, i.e. noise is reduced 20 times. Only LF noise gets through.
Tubes and mosfets and so on have flicker or popcorn noise with LF
content, so that gets through, plus the LF noise of the RIAA series R, but with
anything up to R1 = 100k it's not a problem.


There are three sources of noise to consider here. The first is the thermal
noise generated by the RIAA network itself due to its non-zero resistance.


As I mentioned, the Johnson noise of the resistances is very low compared to the signal levels.

The second is the noise mixed into the signal at the output of the first
tube.


The third is the input-referred noise added to the signal by the
second tube.


The amplified input noise of the triode dominates the noise by at least an order of magnitude.

It's not unusual to have 2 µV of noise at the triode input, so if the gain of V1 is 70x then you have 140 µV
at the V1 anode, and it swamps all noise contributions from the following networks and tubes.



Assume we can ignore the RIAA network's thermal noise.

If the RIAA network were lossless and the two tube stages had the same
input-referred noise, then the noise voltage contribution of the second stage,
referred to the input, would be less by a factor equal to the gain of the first
stage. At a given frequency, the RIAA network attenuates the noise
and the signal at its input equally. But the second stage's input-referred
noise, taken relative to the signal at the RIAA filter's output, is bigger
by a factor equal to the filter's loss. Therefore the filter loss
appears in the overall noise calculation as a direct increase in noise. This
is bog-standard first-year communications engineering theory that any
communications receiver designer knows by heart.


Yes, you are right, but in practice it's the noise of the V1 tube which creates the noise we hear
when we turn up the gain in Phono with a grounded-grid input. If the grid is only grounded via 47k, noise is
a lot worse.
If you have a fet input, noise is typically 0.14 µV instead of the 2 µV that a tube produces, so the noise heard
from the amp turned up with a grounded gate input is MUCH lower, even with the same RIAA network and V2.



The total noise at the output of the preamplifier is a consideration, but it's
not necessarily a meaningful number because it doesn't account for the
shape of the noise spectrum, nor the weighting curve for how the ear
perceives noise at different frequencies.


Use a fet input in cascode; you'll be amazed how quiet they are compared to just about any tube.

Equivalent input resistance determines input noise, and for triodes EIR = 2.5 / gm, with gm in A/V.
So you'd think paralleling 3 x 6DJ8, so you have 6 triode sections, instead of using 1/2 a 12AX7
would give you less noise. Unfortunately, the outcome isn't necessarily as quiet as you'd expect.
Pigs don't fly either, and the formula is BS. But using a fet just works, and
the formula for a fet is EIR = 0.7 / gm.

So for a 12AX7, EIR = 2.5 / 0.0016 = 1,500 ohms approx. Using 3 x 6DJ8, EIR = 2.5 / 0.04 = 62.5 ohms, and
noise should be 5 times lower than the 12AX7, but I have never seen this. For a 2SK369, EIR = 0.7 / 0.04 = 18 ohms,
so noise should be nearly 1/10 of the 12AX7, and that's what I do get, unweighted........
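
The arithmetic, for anyone who wants to rerun it with other tubes (a minimal sketch using the EIR rules of thumb and gm figures quoted above; noise voltage scales as the square root of EIR):

import math

stages = {                       # name: equivalent noise resistance, ohms
    "12AX7 (gm 1.6 mA/V)":   2.5 / 1.6e-3,
    "3x 6DJ8 (gm 40 mA/V)":  2.5 / 40e-3,
    "2SK369 (gm 40 mA/V)":   0.7 / 40e-3,
}
ref = stages["12AX7 (gm 1.6 mA/V)"]
for name, eir in stages.items():
    rel = math.sqrt(eir / ref)   # noise voltage relative to the 12AX7
    print(f"{name:24s} EIR = {eir:7.1f} ohms, noise = {rel:.2f}x a 12AX7")

That reproduces the 5x and roughly 1/10 figures above; as noted, the paralleled-triode prediction is the one that tends not to survive the bench.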



Where you have a CCS-loaded tube for V1, the bias R does not reduce
gain, which is still near µ. V1 gain should be as high as possible for the SNR to
be good further along. The unbypassed Rk for V1 is a source of noise to
the tube, but would only be a bother if you have a low-level MC input connected.
Rk should be bypassed with 1,000 µF.


I was referring to your design and Allen's, which don't use CCSs, and which
will see a gain variation with Rk.

I prefer not to use electrolytic capacitors anywhere in the signal path.


I don't think they are the sonic monsters they are made out to be.




I think this discussion is starting to get a bit too nit-picky for me now. You're
welcome to reply, but I think I'll check out for the moment and get back when
I've figured out which preamp I'm going to build.


Happy soldering.

Patrick Turner.



Thanks for the feedback.

-Henry


  #125
Posted to rec.audio.tubes
Stewart Pinkerton
 
All right, Patrick

On Sun, 09 Apr 2006 16:36:21 -0500, John Byrns wrote:

Another question, which I think I have asked here before but must
confess I have forgotten the answer to, is what the time constant is for
the high frequency zero in the network; does anyone know? Also, does
anyone know when this high frequency zero was added to the specification
for the RIAA eq curve? I don't remember it being part of the original
specification. Some versions also seem to include an added low frequency
roll off somewhere around 30 Hz, but this seems to have either been
removed from the specification again, or it is ignored by most audiophiles.


The high frequency zero is at 50kHz, a time constant of 3.18
microseconds. I believe it is not actually part of the official RIAA
spec, but was introduced by Neumann to protect their cutting heads,
and therefore became the de facto industry standard.
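
For reference, the time-constant-to-frequency conversion for all four corners mentioned in this thread is just f = 1/(2πτ):

import math

for tau_us in (3180, 318, 75, 3.18):
    f = 1 / (2 * math.pi * tau_us * 1e-6)
    print(f"tau = {tau_us:7.2f} us  ->  f = {f:9.1f} Hz")

# 3180 us -> 50.05 Hz, 318 us -> 500.5 Hz, 75 us -> 2122.1 Hz,
# 3.18 us -> 50047 Hz, i.e. the 50kHz corner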

The LF rolloff (actually at 20Hz) is an IEC addition to the original
RIAA spec, and is optional, since it is not used by cutting lathes and
is a replay-only feature, intended to reduce problems caused by record
warps. Those audiophiles who use clamps or vacuum hold-down on their
turntables may omit it, if preferred.
--

Stewart Pinkerton | Music is Art - Audio is Engineering



  #126
Posted to rec.audio.tubes
John Byrns
 
All right, Patrick

Stewart Pinkerton wrote:

On Sun, 09 Apr 2006 16:36:21 -0500, John Byrns wrote:

Another question, which I think I have asked here before but must
confess I have forgotten the answer to, is what the time constant is for
the high frequency zero in the network; does anyone know? Also, does
anyone know when this high frequency zero was added to the specification
for the RIAA eq curve? I don't remember it being part of the original
specification. Some versions also seem to include an added low frequency
roll off somewhere around 30 Hz, but this seems to have either been
removed from the specification again, or it is ignored by most audiophiles.


The high frequency zero is at 50kHz, a time constant of 3.18
microseconds. I believe it is not actually part of the official RIAA
spec, but was introduced by Neumann to protect their cutting heads,
and therefore became the de facto industry standard.


Hi Stewart,

Thanks for the input. Actually, I think you may have also been the one that
answered my earlier question about the high frequency zero, and as it
turns out I remembered the previous answer correctly as 50 kHz. Are you
sure that the purpose of the 50 kHz pole in the Neumann RIAA recording
equalizer was to protect their cutting heads? It is impractical for the
recording equalizer to just keep boosting forever, a pole is required at
some point, and 50 kHz is exactly the frequency I would choose if I were
designing the equalizer.

I can't believe that the tape used to drive the cutting amplifier had any
energy above 50 kHz that a zero at 50 kHz would usefully remove! I would
think that frequencies within the upper audio band would be more likely to
be the cause of cutter head damage. A high frequency limiter circuit
would seem to be the best way to prevent cutter head damage, not a pole in
the recording characteristic at 50 kHz.

The LF rolloff (actually at 20Hz) is an IEC addition to the original
RIAA spec, and is optional, since it is not used by cutting lathes and
is a replay-only feature, intended to reduce problems caused by record
warps. Those audiophiles who use clamps or vacuum hold-down on their
turntables may omit it, if preferred.


For some reason 30 Hz sticks in my mind; I wonder if a 30 Hz roll off had
been used before the IEC standardized it at 20 Hz? The 20 Hz frequency is
an interesting coincidence: about 25 years ago I added a high pass filter
to the output of my preamp to allow recording from LP to cassette without
the low frequency saturation problems caused by record warp, and as it happens
I chose 20 Hz as the cut off frequency for that first order high pass
filter. Ultimately I found that a better solution was to use a less
compliant stylus, moving the arm/cartridge resonance above the record warp
frequency.


Regards,

John Byrns


Surf my web pages at http://users.rcn.com/jbyrns/
  #127
Posted to rec.audio.tubes
Stewart Pinkerton
 
All right, Patrick

On Mon, 10 Apr 2006 20:36:06 -0500, John Byrns wrote:

Stewart Pinkerton wrote:

On Sun, 09 Apr 2006 16:36:21 -0500, John Byrns wrote:

Another question, which I think I have asked here before but must
confess I have forgotten the answer to, is what the time constant is for
the high frequency zero in the network; does anyone know? Also, does
anyone know when this high frequency zero was added to the specification
for the RIAA eq curve? I don't remember it being part of the original
specification. Some versions also seem to include an added low frequency
roll off somewhere around 30 Hz, but this seems to have either been
removed from the specification again, or it is ignored by most audiophiles.


The high frequency zero is at 50kHz, a time constant of 3.18
microseconds. I believe it is not actually part of the official RIAA
spec, but was introduced by Neumann to protect their cutting heads,
and therefore became the de facto industry standard.


Hi Stewart,

Thanks for the input. Actually, I think you may have also been the one that
answered my earlier question about the high frequency zero, and as it
turns out I remembered the previous answer correctly as 50 kHz. Are you
sure that the purpose of the 50 kHz pole in the Neumann RIAA recording
equalizer was to protect their cutting heads? It is impractical for the
recording equalizer to just keep boosting forever, a pole is required at
some point, and 50 kHz is exactly the frequency I would choose if I were
designing the equalizer.


The 50kHz rolloff was certainly introduced by Neumann, and it simply
stops the HF boost, so it's a common-sense engineering item. It would
also protect the head against such things as tape bias oscillator
breakthrough, this signal often being as low as 90kHz. With unlimited
HF boost, it wouldn't take a particularly high amount of breakthrough
to cause significant heating of the cutter head. I guess Neumann's
point was that, since you can't keep the HF boosting *forever*, you
might as well define the cutoff point.

I can't believe that the tape used to drive the cutting amplifier had any
energy above 50 kHz that a zero at 50 kHz would usefully remove! I would
think that frequencies within the upper audio band would be more likely to
be the cause of cutter head damage. A high frequency limiter circuit
would seem to be the best way to prevent cutter head damage, not a pole in
the recording characteristic at 50 kHz.


Perhaps it was a 'belt and braces' item, and as noted above, you have
to stop HF boost at some point in any case. HF limiters are another
matter, and are part of the basic limitation of vinyl, often being
introduced to keep groove velocities inside the playable range of
available cartridges, rather than because of cutter head problems.

The LF rolloff (actually at 20Hz) is an IEC addition to the original
RIAA spec, and is optional, since it is not used by cutting lathes and
is a replay-only feature, intended to reduce problems caused by record
warps. Those audiophiles who use clamps or vacuum hold-down on their
turntables may omit it, if preferred.


For some reason 30 Hz sticks in my mind; I wonder if a 30 Hz roll off had
been used before the IEC standardized it at 20 Hz? The 20 Hz frequency is
an interesting coincidence: about 25 years ago I added a high pass filter
to the output of my preamp to allow recording from LP to cassette without
the low frequency saturation problems caused by record warp, and as it happens
I chose 20 Hz as the cut off frequency for that first order high pass
filter. Ultimately I found that a better solution was to use a less
compliant stylus, moving the arm/cartridge resonance above the record warp
frequency.


Not everyone pays close enough attention to the fundamental
mass/compliance resonance of the arm and cartridge, and IIRC, the
'rumble filter' IEC addendum was pretty much synchronous with the
appearance of high-compliance carts such as the Shure V-15 and the
ADCs.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

  #129
Posted to rec.audio.tubes
Iain Churches
 
All right, Patrick


"Stewart Pinkerton" wrote in message
...
On Mon, 10 Apr 2006 20:36:06 -0500, (John Byrns) wrote:

In article , Stewart Pinkerton
wrote:

On Sun, 09 Apr 2006 16:36:21 -0500,
(John Byrns) wrote:


The 50kHz rolloff was certainly introduced by Neumann, and it simply
stops the HF boost, so it's a common-sense engineering item.


It would be nice to give the credit to a European company, but it was
RCA and Westrex who made the proposal. All other parties, including
Neumann, adopted it without exception.

It would
also protect the head against such things as tape bias oscillator
breakthrough, this signal often being as low as 90kHz.


The tape machine is in playback mode, not record, when the disc
is being cut, so the bias oscillator is not even running!!


With unlimited
HF boost, it wouldn't take a particularly high amount of breakthrough
to cause significant heating of the cutter head. I guess Neumann's
point was that, since you can't keep the HF boosting *forever*, you
might as well define the cutoff point.

I can't believe that the tape used to drive the cutting amplifier had any
energy above 50 kHz that a zero at 50 kHz would usefully remove!


Correct, John. There is none. Most good cutter heads could manage
25 kHz at -10 dBm.


I would
think that frequencies within the upper audio band would be more likely to
be the cause of cutter head damage. A high frequency limiter circuit
would seem to be the best way to prevent cutter head damage, not a pole in
the recording characteristic at 50 kHz.


The most problematic instruments were things like crash and rivet cymbals
(lots of 8 kHz).

In hindsight, it would have made sense originally to have
stipulated 75 µs to 50 kHz.

Regards to all
Iain



  #130
Posted to rec.audio.tubes
Stewart Pinkerton
 
All right, Patrick

On Wed, 12 Apr 2006 13:19:48 +0300, "Iain Churches" wrote:

"Stewart Pinkerton" wrote in message ...
On Mon, 10 Apr 2006 20:36:06 -0500, John Byrns wrote:

Stewart Pinkerton wrote:

On Sun, 09 Apr 2006 16:36:21 -0500, John Byrns wrote:


The 50kHz rolloff was certainly introduced by Neumann, and it simply
stops the HF boost, so it's a common-sense engineering item.


It would be nice to give the credit to a European company, but it was
RCA and Westrex who made the proposal. All other parties, including
Neumann, adopted it without exception.


Do you have any documentation to support this? The 50kHz cutoff seems
to be pretty generally referred to as the 'Neumann rolloff', in Europe at
least.

It would
also protect the head against such things as tape bias oscillator
breakthrough, this signal often being as low as 90kHz.


The tape machine is in playback mode, not record, when the disc
is being cut, so the bias oscillator is not even running!!


Yes, but some bias signal remains on the tape, albeit at very low
level. Probably not a real-world problem.

With unlimited
HF boost, it wouldn't take a particularly high amount of breakthrough
to cause significant heating of the cutter head. I guess Neumann's
point was that, since you can't keep the HF boosting *forever*, you
might as well define the cutoff point.

I can't believe that the tape used to drive the cutting amplifier had any
energy above 50 kHz that a zero at 50 kHz would usefully remove!


Correct, John. There is none. Most good cutter heads could manage
25 kHz at -10 dBm.

I would
think that frequencies within the upper audio band would be more likely to
be the cause of cutter head damage. A high frequency limiter circuit
would seem to be the best way to prevent cutter head damage, not a pole in
the recording characteristic at 50 kHz.


The most problematic instruments were things like crash and rivet cymbals
(lots of 8 kHz).

In hindsight, it would have made sense originally to have
stipulated 75 µs to 50 kHz.


The RIAA is of course not famous for common sense! :-)
--

Stewart Pinkerton | Music is Art - Audio is Engineering



  #131
Posted to rec.audio.tubes
Iain Churches
 
All right, Patrick


"Stewart Pinkerton" wrote in message
...
On Wed, 12 Apr 2006 13:19:48 +0300, "Iain Churches"
wrote:


"Stewart Pinkerton" wrote in message
. ..
On Mon, 10 Apr 2006 20:36:06 -0500, (John Byrns) wrote:

In article , Stewart
Pinkerton
wrote:

On Sun, 09 Apr 2006 16:36:21 -0500,
(John Byrns) wrote:


The 50kHz rolloff was certainly introduced by Neumann, and it simply
stops the HF boost, so it's a common-sense engineering item.


It would be nice to give the credit to a European company, but it was
RCA and Westrex who made the proposal. All other parties, including
Neumann, adopted it without exception.


Do you have any documentation to support this? The 50kHz cutoff seems
to be pretty generally referred to as the 'Neumann rolloff', in Europe at
least.


It is common knowledge among studio personnel. It was generally
referred to as "the fourth corner", which was the term used by Westrex.
I am still in contact with some of the former senior cutting staff at
RCA, so I will see what I can find out. I have worked
professionally in recording since 1965 and never heard the
term "Neumann rolloff" :-)) A member of my current recording
team worked at Studer from 1976 to 1984. He has never heard the
term either.

Compared with some of the other cutter heads, Scully for instance,
there was much less to fear with a Neumann, and one could meter the
cutter current constantly and cool with helium as required. It is
generally accepted by those in the studio industry that the reason for
the "fourth corner" was just to make the replay curve a little more
sensible (realistic), as John surmises.


It would
also protect the head against such things as tape bias oscillator
breakthrough, this signal often being as low as 90kHz.


The tape machine is in playback mode, not record, when the disc
is being cut, so the bias oscillator is not even running!!


Yes, but some bias signal remains on the tape, albeit at very low
level. Probably not a real-world problem.


Definitely not. You are also overlooking the fact that tape machines
in cutting suites were invariably replay-only, without any record
electronics. There was an advance replay head to drive the lead screw
amplifier. No erase or record head was fitted. Both Studer and
Philips built special models of tape machines designed for cutting
facilities.

With unlimited
HF boost, it wouldn't take a particularly high amount of breakthrough
to cause significant heating of the cutter head. I guess Neumann's
point was that, since you can't keep the HF boosting *forever*, you
might as well define the cutoff point.

I can't believe that the tape used to drive the cutting amplifier had any
energy above 50 kHz that a zero at 50 kHz would usefully remove!


Correct, John. There is none. Most good cutter heads could manage
25 kHz at -10 dBm.

I would
think that frequencies within the upper audio band would be more likely
to be the cause of cutter head damage. A high frequency limiter circuit
would seem to be the best way to prevent cutter head damage, not a pole
in the recording characteristic at 50 kHz.


The most problematic instruments were things like crash and rivet cymbals
(lots of 8 kHz).

In hindsight, it would have made sense originally to have
stipulated 75 µs to 50 kHz.


The RIAA is of course not famous for common sense! :-)


I think they did a pretty good job, especially in specifying a
*replay* curve, when you consider the previous confusion
with eight different record curves.

We of course have the benefit of hindsight :-)

Regards to all
Iain





  #132
Posted to rec.audio.tubes
John Byrns
 
All right, Patrick

In article , "Iain Churches"
wrote:

It is common knowledge among studio personnel. It was generally
referred to as "the fourth corner", which was the term used by Westrex.
I am still in contact with some of the former senior cutting staff at
RCA, so I will see what I can find out. I have worked
professionally in recording since 1965 and never heard the
term "Neumann rolloff" :-)) A member of my current recording
team worked at Studer from 1976 to 1984. He has never heard the
term either.

Compared with some of the other cutter heads, Scully for instance,
there was much less to fear with a Neumann, and one could meter the
cutter current constantly and cool with helium as required. It is
generally accepted by those in the studio industry that the reason for
the "fourth corner" was just to make the replay curve a little more
sensible (realistic), as John surmises.


I thought I only "surmised" that the high frequency pre-emphasis network
used in recording must of practical necessity stop boosting at some
point. There is little practical limit as to how far a passive high
frequency playback equalizer can go on cutting. I believe the feedback
type of playback equalizer does suffer limitations similar to those of the
HF boost equalizer used in recording, which I assume is what you are
speaking of.

If the 3.18 µs time constant is the "fourth corner", then is the 20 Hz
roll off the "fifth corner"?


Regards,

John Byrns


Surf my web pages at http://users.rcn.com/jbyrns/