#1
phase correction
hey out there, i'm in an audio production class and the instructor avoids a direct answer to this question: is there something in the nature of a slightly out of phase recording (such as that created by a difference in mic distance) that can make something sound "warmer" like analog? i know all about the "if it sounds good do it" philosophy, but i'm wondering to what extent do engineers delve into phase correction with their software? thanks -greg
#2
greggery peccary .@. wrote:
> hey out there, i'm in an audio production class and the instructor avoids a direct answer to this question: is there something in the nature of a slightly out of phase recording (such as that created by a difference in mic distance) that can make something sound "warmer" like analog? i know all about the "if it sounds good do it" philosophy, but i'm wondering to what extent do engineers delve into phase correction with their software? thanks

Group delay doesn't make things sound warmer. For the most part it is not even all that audible, to be honest, although it can make things sound smearier if it's really bad. Usually group delay is a symptom of something else gone wrong.

If you are mixing two microphone sources, it can be handy to have a box between them that adds group delay to help reduce the comb filtering when you sum feeds from slightly different positions. Little Labs makes such a box.

What you call a "slightly out of phase recording" is actually a recording with comb filtering _caused_ by the out of phase sources. The phase differences are not themselves audible; the comb filtering when they are summed is. I find comb filtering rather annoying myself, but you can use it as a fun effect if you want. Sweet Smoke's _Just a Poke_ has a very interesting drum solo done with mikes that change position throughout the solo.

--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
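[To put rough numbers on the comb filtering described above, here is a minimal sketch using numpy. The 1 ms arrival difference and the test frequencies are assumed illustration values, not figures from the thread. Summing a signal with a delayed copy of itself boosts some frequencies and nulls others:

import numpy as np

delay = 0.001                                   # assumed 1 ms arrival difference between the two feeds
freqs = np.array([250, 500, 1000, 1500, 2000])  # Hz, arbitrary test points

# Summing x(t) with x(t - delay) scales each frequency by |1 + exp(-j*2*pi*f*delay)|
gain_db = 20 * np.log10(np.maximum(np.abs(1 + np.exp(-2j * np.pi * freqs * delay)), 1e-12))

for f, g in zip(freqs, gain_db):
    print(f"{f:5d} Hz  {g:+7.1f} dB")

# Nulls fall at odd multiples of 1/(2*delay): 500 Hz, 1500 Hz, 2500 Hz, ...
print("first null near", 1 / (2 * delay), "Hz")

With a 1 ms offset the nulls land at 500 Hz, 1500 Hz, 2500 Hz and so on, which is why the summed frequency response plot looks like the teeth of a comb.]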
#3
In article .@. writes:

> hey out there, i'm in an audio production class and the instructor avoids a direct answer to this question: is there something in the nature of a slightly out of phase recording (such as that created by a difference in mic distance) that can make something sound "warmer" like analog?

I can understand why he avoids a direct answer, because "warmer" and "like analog" are pretty ambiguous, and for that matter, so is "a slightly out of phase recording."

When something is picked up by mics at different distances and those mics are combined in a mix, certain frequencies are attenuated. This is called "comb filtering" because there are multiple nulled frequencies, and in the perfect case each null is complete, making the frequency response plot look a bit like the teeth of a comb.

However, unless you're just lucky enough to reduce frequencies with comb filtering that make the recording sound harsh (if that's the opposite of "warm"), there's no way that this would warm up a recording. Analog tape recording does have some group delay that tends to alter the relationship between the fundamental and overtones, but that isn't really what gives what we know as "the warm analog sound." It's just a defect that we've had to live with.

--
I'm really Mike Rivers - )
However, until the spam goes away or Hell freezes over, lots of IP addresses are blocked from this system. If you e-mail me and it bounces, use your secret decoder ring and reach me he double-m-eleven-double-zero at yahoo
#4
"greggery peccary" .@. wrote in message
> hey out there, i'm in an audio production class and the instructor avoids a direct answer to this question: is there something in the nature of a slightly out of phase recording (such as that created by a difference in mic distance) that can make something sound "warmer" like analog?

Time delays can cause two different audible effects at the same time.

One effect is pretty straightforward. A sound from a given source shows up in the recording twice or more, with the instances showing up at different times and with different amplitudes and colorations.

The second effect is a little trickier to understand. When a sound from a source mixes with a time-delayed version of itself, there can be some pretty strong frequency-domain effects, often in the form of what is commonly called comb filtering.

The same time delays due to differences in mic position and the like can create both effects. There is a guide for mic positioning called the 3:1 rule, which tends to help us manage both of these effects.

Just because these effects are different from straight-wire response doesn't mean that they are a bad thing. I've recently been getting my nose shoved into the beneficial effects of relatively close reflections, in the 5-25 millisecond range. If they are properly scaled in the amplitude domain and dispersed correctly in the time domain, they can be both euphonic and also helpful so that listeners correctly perceive both vocal and instrumental information in the music.

> i know all about the "if it sounds good do it" philosophy, but i'm wondering to what extent do engineers delve into phase correction with their software?

Phase correction and time correction are pretty much the same thing as related to differences in mic distances. Digital consoles and DAW software generally have tools for managing time differences. For example, I've heard comments that time-correcting the arrivals from multiple close mics can further "tighten" up the perceived focus of the singing of a well-trained vocal group that is already pretty tight.
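[As a rough illustration of the 3:1 rule mentioned above, here is a minimal sketch in plain Python. The distances are assumed example values, and simple free-field 1/distance amplitude falloff (6 dB per doubling of distance) is assumed. It estimates how far down the leakage into the more distant mic sits, and how much that leakage can move the summed response:

import math

d_near = 0.3   # metres from the source to its own mic (assumed example value)
d_far  = 0.9   # metres from the same source to the neighbouring mic (3x farther)

# With 1/distance amplitude falloff, the leakage into the far mic is down by:
level_diff_db = 20 * math.log10(d_far / d_near)
print(f"leakage is {level_diff_db:.1f} dB down")          # about 9.5 dB

# Bleed that far down can only move the summed response a few dB at the comb peaks and dips
ratio = 10 ** (-level_diff_db / 20)
peak_db = 20 * math.log10(1 + ratio)
dip_db = 20 * math.log10(1 - ratio)
print(f"worst-case ripple: {peak_db:+.1f} dB / {dip_db:+.1f} dB")

That is the usual rationale for the rule: keep the bleed roughly 9-10 dB down and the comb-filter ripple stays within a few dB.]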
#5
"greggery peccary" .@. wrote in message ...
> hey out there, i'm in an audio production class and the instructor avoids a direct answer to this question: is there something in the nature of a slightly out of phase recording (such as that created by a difference in mic distance) that can make something sound "warmer" like analog? i know all about the "if it sounds good do it" philosophy, but i'm wondering to what extent do engineers delve into phase correction with their software? thanks -greg

There is certainly a different sound when using a coincident pair of mics vs. a spaced pair of mics, even if the exact same pair of mics is used in the same room on the same acoustic source. The difference is due to TOA (time of arrival) differences, but this phase difference is not constant across all frequencies or all parts of the source.

Imagine a flute sitting to the front left of a spaced microphone array, playing a melody. The sound from that flute arrives at the left microphone first, then at the one on the right. So there is a TOA difference between the two outputs of the mics. However, the *phase* difference varies with frequency, since phase is a sine *angle* issue. In other words, the higher frequencies are more "out of phase" than the lower frequencies (i.e. a greater angle of phase displacement). For this situation, you actually could adjust the TOA between the two signals by delaying the signal from the left microphone to match the signal from the one on the right.

Now, where the problem comes in is that usually there are musicians all the way across the front of the array, so some musicians are to the left, some in the middle, and some to the right. All producing a wide range of frequencies, and with a huge number of TOA differences (some signals even arrive at both mics at the same time - those from sources directly between the two mics). And this does not include the reflections, which come to the mic array from all angles. So no, it is not possible to correct the TOA differences of a spaced pair, across the entire frequency range, to bring this pair of mics "in phase". But it is just these TOA differences that give the listener a greater sense of "space" than if a coincident pair is used.

Perhaps some people characterize this as "warmth". However, I would guess that it is more about the type of microphones typically used for the two types of mic placement. For coincident pairs, usually directional mics are used (cardioid is the most common choice for XY, for instance), and omni mics are most often chosen for spaced arrays. The thing here is that directional mics have proximity effect, and the low end response is dependent on distance from the source. Most people equate this to "boosting the bass when the source is close to the mic", but what many people forget is that the opposite is also true: greater distance from the source *reduces* the bass response. Most mics are measured at 1 meter, and even at this distance you can see that the majority of directional mics show reduced LF response. So imagine what the LF response is at typical acoustic orchestra recording distances - 2, 3, 5 meters, perhaps. Omni mics do not exhibit proximity effect, and thus are not dependent on distance for the LF response. If the mic is flat to 30 Hz at 1 meter, it will be flat to 30 Hz at 10 meters. It is THIS factor I think probably contributes more to the subjective "warmth" of recordings done with spaced pairs.

One further issue... then I'll stop. Coincident pairs use "intensity" (differences in volume between left and right, depending on the direction of the source) to generate a stereo signal. Spaced pairs rely on TOA, as mentioned above. Despite subwoofer manufacturers' claims that low-frequency energy does not give a sense of direction, one very nice factor of spaced pair recordings with omnis is that they give you *stereo bass*. The combination of this with the extended and natural LF response of omnis is a great effect when done correctly. All this being said, spaced pair recordings do not usually sum well to mono... OK, I'm done.

Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com
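[To make the "one delay can't fix a spaced pair" point concrete, here is a minimal sketch in plain Python. The 0.6 m spacing, the source angles, and the far-field approximation are assumptions for illustration only. Each source position across the front of the array produces its own arrival difference, so no single correction delay lines them all up:

import math

spacing = 0.6    # metres between the two omnis (assumed example value)
c = 343.0        # speed of sound in m/s

for angle in (-60, -30, 0, 30, 60):                         # source positions across the front
    extra_path = spacing * math.sin(math.radians(angle))    # far-field path-length difference
    toa_ms = 1000.0 * extra_path / c
    print(f"source at {angle:+3d} deg: arrival difference {toa_ms:+.2f} ms")

# A single correction delay can zero only one of these; every other position
# (and every reflection) stays misaligned.

Dialing in a delay that zeroes the difference for the player at +30 degrees leaves every other position, and every reflection, still offset.]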
#9
(Mike Rivers) wrote in message news:znr1098278336k@trad...
> In article writes:
>> I agree that such a situation is hypothetical. However, if the only difference between the two mics is their distance from the source (say, a flute) and there are no reflections (i.e. this is an anechoic chamber), then the two signals would be nearly identical at the two mics (if the two mics are a foot or two apart, say), except for time displacement (about 1 ms). In other words, it's not a specific phase angle at a specific frequency, but all frequencies, in the same phase relationships, arriving at the two mics at different times.
>
> I know you know what you're trying to say <g> but if there's a time difference between signals, there's a phase relationship (not phase DIFFERENCE, as people are prone to say) that isn't 0 degrees at all frequencies. With a constant time delay, the phase shift will increase from zero to several thousand degrees (depending on how far apart the mics are and how high up in frequency you wish to measure it). However, the time difference will be constant, obviously.
>
> When we "correct" an "out of phase" microphone by reversing the polarity, we add 180 degrees of phase shift to whatever it happens to be at whatever frequency we choose to look at. Sometimes that sounds better, sometimes it sounds worse, sometimes it's a tossup. When we move a mic a few inches relative to another mic (or delay one signal relative to the other electronically), we change all the phase shift numbers. By finding the right amount of delay, we can make them all zero, which usually is a good thing, or at least the most theoretically correct thing. But at times, there's some aesthetic benefit to letting a phase difference reduce the amplitude of certain frequencies.

Mike,

Am I wrong in the following, or did I misunderstand your post? The delay created by mic distance introduces different degrees of phase shift across the frequency spectrum IN ADDITION to the time delay, whereas a delay created by an electronic delay line (or by shifting a digitally recorded track) introduces only a time delay and preserves the phase relationship of the original, and hence the second cannot be used to compensate fully for the first.

-Andy
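[A quick worked example of the point above, in plain Python. The 1 ms arrival difference is an assumed illustration value. A fixed time delay produces a phase angle that grows with frequency, while a polarity flip just adds the same 180 degrees everywhere:

delay = 0.001   # assumed 1 ms arrival difference between the two mic feeds

print(" freq    phase from delay   after polarity flip")
for f in (100, 500, 1000, 5000, 10000):
    phase = 360.0 * f * delay                     # degrees of shift the delay causes at f
    print(f"{f:6d} Hz {phase:12.0f} deg {phase + 180.0:14.0f} deg")

# A matching 1 ms delay on the other feed zeroes every one of these angles at
# once; a polarity flip only ever adds the same 180 degrees everywhere.]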
#12
(Mike Rivers) wrote in message news:znr1098314910k@trad...
> In article writes:
>> Mike, Am I wrong in the following, or did I misunderstand your post? The delay created by mic distance introduces different degrees of phase shift across the frequency spectrum IN ADDITION to the time delay, whereas a delay created by an electronic delay line (or by shifting a digitally recorded track) introduces only a time delay and preserves the phase relationship of the original, and hence the second cannot be used to compensate fully for the first.
>
> Nope. There's no functional difference between changing the phase between two mics by moving one relative to the other or by inserting an electronic delay. As to whether you can FULLY compensate for physical spacing with an electronic delay, probably not. If you move a mic, you will change what it hears slightly (in addition to when it hears it) due to changes in reflections and possibly changes in proximity to the source or to other objects (like the other microphone that's picking up the same source, for instance). However, in practice, if you align the two mics in time using electronic or computational means (like sliding one track relative to the other on a DAW), you'll be well ahead of the game.

Thought about this more and now I see where I was wrong. Thanks for the corrections.
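[For what it's worth, the "sliding one track relative to the other" step can be automated with a cross-correlation. A minimal sketch using numpy, with synthetic noise standing in for the two tracks; the 96-sample offset (about 2 ms at 48 kHz) is an assumed example value:

import numpy as np

sr = 48000                                  # sample rate
rng = np.random.default_rng(0)
n = 4800                                    # a tenth of a second of test "audio"
close_mic = rng.standard_normal(n)          # stand-in for the close mic track
far_mic = 0.7 * np.roll(close_mic, 96)      # same source, arriving 96 samples later and quieter

# Cross-correlate and take the lag where the two tracks match best
corr = np.correlate(far_mic, close_mic, mode="full")
lag = int(corr.argmax()) - (n - 1)          # positive lag: far_mic is late
print(f"far mic lags by {lag} samples ({1000 * lag / sr:.2f} ms)")

# Nudge the late track earlier by that amount, like sliding it in a DAW
aligned = np.roll(far_mic, -lag)]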