  #1   greggery peccary
phase correction

hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog? i know all
about the "if it sounds good do it" philosophy, but i'm wondering to what
extent do engineers delve into phase correction with their software? thanks
-greg


  #2   Scott Dorsey

greggery peccary .@. wrote:
hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog? i know all
about the "if it sounds good do it" philosophy, but i'm wondering to what
extent do engineers delve into phase correction with their software? thanks


Group delay doesn't make things sound warmer. For the most part it is
not even all that audible, to be honest, although it can make things sound
smearier if it's really bad. Usually group delay is a symptom of something
else gone wrong.

If you are mixing two microphone sources, it can be handy to have a box
between them that adds group delay to help reduce the comb filtering when
you sum feeds from slightly different positions. Little Labs makes such
a box.

What you call a "slightly out of phase recording" is actually a recording
with comb filtering _caused_ by the out of phase sources. The phase differences
are not themselves audible; the comb filtering when they are summed is.
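
A quick way to see what that summing does, as a rough sketch: add a signal
to a delayed copy of itself and look at the resulting magnitude response.
This assumes numpy, and the 1 ms delay is just an illustrative value
(roughly a foot of extra path between the two mics).

    import numpy as np

    delay = 0.001                     # arrival-time difference, seconds
    freqs = np.array([250.0, 500.0, 1000.0, 1500.0, 2000.0])

    # Response of "direct + delayed copy": magnitude of 1 + e^(-j*2*pi*f*delay)
    response = np.abs(1 + np.exp(-2j * np.pi * freqs * delay))
    gain_db = 20 * np.log10(np.maximum(response, 1e-3))   # floor avoids log(0) at the nulls

    for f, g in zip(freqs, gain_db):
        print(f"{f:6.0f} Hz: {g:6.1f} dB")

With a 1 ms offset the deep notches land at 500 Hz, 1500 Hz, 2500 Hz and so
on (odd multiples of 1/(2*delay)), with 6 dB bumps in between - the comb.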

I find comb filtering rather annoying myself, but you can use it as a fun
effect if you want. Sweet Smoke's _Just a Poke_ has a very interesting
drum solo done with mikes that change position throughout the solo.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
  #3   Mike Rivers


In article .@. writes:

hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog?


I can understand why he avoids a direct answer because "warmer" and
"like analog" are pretty ambiguous, and for that matter, so is "a
slightly out of phase recording."

When something is picked up by mics at different distances and those mics
are combined in a mix, certain frequencies are attenuated. This is called
"comb filtering" because the nulls occur at multiple, evenly spaced
frequencies, and in the perfect case each null is complete, making the
frequency response plot look a bit like the teeth of a comb.
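
As a side note on the "perfect case" point: how deep the notches actually
get depends on how closely the two arrivals match in level. Here is a rough
plain-Python sketch (the function name is just for illustration):

    import math

    def notch_depth_db(level_diff_db):
        """Worst-case dip, relative to the in-phase sum, when the two
        arrivals differ in level by level_diff_db."""
        a = 10 ** (-level_diff_db / 20)   # amplitude of the weaker arrival
        return 20 * math.log10((1 - a) / (1 + a))

    for diff in (0.1, 3, 6, 10):          # level difference between the two mics, dB
        print(f"{diff:>4} dB apart -> {notch_depth_db(diff):6.1f} dB notch")

Equal levels give total cancellation in theory; once one arrival is 10 dB
down, the worst notch is only about 6 dB deep and the comb is much less
obvious.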

However, unless you're just lucky enough that the comb filtering
attenuates the frequencies that make the recording sound harsh (if that's
the opposite of "warm"), there's no way that this would warm up a
recording. Analog tape recording does have some group delay that tends
to alter the relationship between the fundamental and overtones, but
that isn't really what gives what we know as "the warm analog sound."
It's just a defect that we've had to live with.


--
I'm really Mike Rivers - )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
  #4   Arny Krueger

"greggery peccary" .@. wrote in message


hey out there, i'm in an audio production class and the instructor
avoids a direct answer to this question: is there something in the
nature of a slightly out of phase recording (such as that created by
a difference in mic distance) that can make something sound "warmer"
like analog?


Time delays can cause two different audible effects at the same time.

One effect is pretty straightforward. A sound from a given source shows up
in the recording twice or more, with the instances arriving at different
times and with different amplitudes and colorations.

The second effect is a little trickier to understand. When a sound from a
source mixes with a time-delayed version of itself, there can be some pretty
strong frequency domain effects, often in the form of what is commonly
called comb filtering.

The same time delays due to differences in mic position and the like can
create both effects. There is a guide for mic positioning called the 3:1
rule, which tends to help us manage both of these effects.
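
For what it's worth, here is a back-of-the-envelope sketch of why 3:1
spacing helps, assuming simple inverse-distance (1/r) level falloff for a
point source and ignoring the room:

    import math

    d_near = 1.0            # source to its own mic (any unit)
    d_far = 3.0 * d_near    # the other mic is at least three times as far away

    leak_db = 20 * math.log10(d_far / d_near)   # leakage attenuation, ~9.5 dB
    a = 10 ** (-leak_db / 20)                   # leakage amplitude, ~1/3

    peak_db = 20 * math.log10(1 + a)            # in-phase reinforcement, ~ +2.5 dB
    dip_db = 20 * math.log10(1 - a)             # out-of-phase cancellation, ~ -3.5 dB
    print(f"leakage down {leak_db:.1f} dB, ripple about +{peak_db:.1f}/{dip_db:.1f} dB")

With the leakage 9.5 dB down, the summed comb-filter ripple stays within
roughly +2.5/-3.5 dB, which is usually tolerable.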

Just because these effects are different from straight-wire response doesn't
mean that they are a bad thing. I've recently been getting my nose shoved
into the beneficial effects of relatively close reflections, in the 5-25
millisecond range. If they are properly scaled in the amplitude domain and
dispersed correctly in the time domain, they can be both euphonic and
helpful, so that listeners correctly perceive both vocal and instrumental
information in the music.

i know all about the "if it sounds good do it"
philosophy, but i'm wondering to what extent do engineers delve into
phase correction with their software?


Phase correction and time correction are pretty much the same thing as
related to differences in mic distances. Digital consoles and DAW software
generally have tools for managing time differences. For example I've heard
comments that time-correcting the arrivals from multiple close mics can
further "tighten" up the perceived focus of the singing of a well-trained
vocal group that is already pretty tight.
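
One way the time-alignment part is commonly done (this is a generic sketch
assuming numpy and two mono tracks at the same sample rate, not any
particular console's or DAW's feature) is to slide one track against the
other and look for the best match:

    import numpy as np

    def find_lag(reference, delayed, max_lag):
        """Lag in samples at which `delayed` best lines up with `reference`,
        found by brute-force cross-correlation over 0..max_lag."""
        n = len(reference) - max_lag
        best_lag, best_score = 0, -np.inf
        for lag in range(max_lag + 1):
            score = float(np.dot(reference[:n], delayed[lag:lag + n]))
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag

    # Tiny demonstration: a synthetic source picked up 24 samples (0.5 ms at
    # 48 kHz) later by the more distant mic.
    sr = 48000
    t = np.arange(sr) / sr
    close_mic = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
    far_mic = np.concatenate([np.zeros(24), close_mic[:-24]])

    lag = find_lag(close_mic, far_mic, max_lag=100)
    print(lag, "samples =", 1000 * lag / sr, "ms")   # expect 24 samples, 0.5 ms

Once you know the lag, you delay the earlier (closer) mic by that many
samples so the two arrivals line up before summing.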


  #5   Karl Winkler

"greggery peccary" .@. wrote in message ...
hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog? i know all
about the "if it sounds good do it" philosophy, but i'm wondering to what
extent do engineers delve into phase correction with their software? thanks
-greg


There is certainly a different sound when using a coincident pair of
mics vs. a spaced pair of mics, even if the exact same pair of mics is
used in the same room on the same acoustic source. The difference is
due to TOA (time of arrival) differences, but this phase difference
is not constant across all frequencies or all parts of the source.
Imagine a flute sitting to the front left of a spaced microphone
array, playing a melody. The sound from that flute arrives at the left
microphone first, then at the one on the right. So there is a TOA
difference between the outputs of the two mics. However, the *phase*
difference varies with frequency, since phase is a sine *angle* issue.
In other words, the higher frequencies are more "out of phase" than
the lower frequencies (i.e. a greater angle of phase displacement).
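
A small worked version of that point, assuming an illustrative 1 ms
arrival-time difference (roughly a foot of extra path):

    toa_difference = 0.001   # seconds

    for f in (50, 100, 500, 1000, 5000, 10000):
        phase_deg = 360.0 * f * toa_difference
        print(f"{f:>5} Hz: {phase_deg:7.0f} degrees ({phase_deg % 360:5.0f} after wrapping)")

The same 1 ms gap is 18 degrees at 50 Hz but a full 180 degrees at 500 Hz
(where the two feeds would cancel if summed), and thousands of unwrapped
degrees up at the top of the band.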

For this situation, you actually could adjust the TOA between the two
signals by delaying the signal from the left microphone to match the
signal from the one on the right.

Now, where the problem comes in is that usually, there are musicians
all the way across the front of the array, so some musicians are to
the left, some in the middle, and some to the right. All producing a
wide range of frequencies, and with a huge number of TOA differences
(some signals even arrive at both mics at the same time - those
signals from sources directly between the two mics). And this does not
include the reflections, which come to the mic array from all angles.

So no, it is not possible to correct the TOA differences of a spaced
pair, across the entire frequency range, to bring this pair of mics
"in phase".

But, it is just these TOA differences that give the listener a greater
sense of "space" than if a coincident pair is used. Perhaps some
people characterize this as "warmth". However, I would guess that it
is more about the type of microphones typically used for the two types
of mic placement. For coincident pairs, usually directional mics are
used (cardioid is the most common choice for XY, for instance) and
omni mics are most often chosen for spaced arrays. The thing here is
that directional mics have proximity effect, and the low end response
is dependent on distance from the source. Most people equate this with
"boosting the bass when the source is close to the mic" but what many
people forget is that the opposite is also true: greater distance from
the source *reduces* the bass response. Most mics are measured at 1
meter, and even at this distance, you can see the majority of
directional mics show reduced LF response. So imagine what the LF
response is at typical acoustic orchestra recording distances - 2, 3, 5
meters, perhaps.

Omni mics do not exhibit proximity effect, and thus their LF response
does not depend on distance. If the mic measures flat at 30Hz at 1
meter, it will be flat at 30Hz at 10 meters. It is THIS factor that I
think probably contributes more to the subjective "warmth" of recordings
done with spaced pairs.

One further issue... then I'll stop. Coincident pairs use "intensity"
(differences in volume between left and right, depending on the
direction of the source) to generate a stereo signal. Spaced pairs
rely on TOA, as mentioned above. Despite subwoofer manufacturers' claims
that low-frequency energy does not give a sense of direction, one very
nice feature of spaced-pair recordings with omnis is that they give you
*stereo bass*. The combination of this with the extended and natural
LF response of omnis is a great effect when done correctly.

All this being said, spaced pair recordings do not usually sum well to
mono...

OK, I'm done.

Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com


  #6   apa

(Karl Winkler) wrote in message . com...
"greggery peccary" .@. wrote in message ...
hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog? i know all
about the "if it sounds good do it" philosophy, but i'm wondering to what
extent do engineers delve into phase correction with their software? thanks
-greg


There is certainly a different sound when using a coincident pair of
mics vs. a spaced pair of mics, even if the exact same pair of mics is
used in the same room on the same acoustic source. The difference is
due to TOA (time of arrival) differences, but this phase difference
is not constant across all frequencies or all parts of the source.
Imagine a flute sitting to the front left of a spaced microphone
array, playing a melody. The sound from that flute arrives at the left
microphone first, then the one on the right. So there is a TOA
difference between the two outputs of the mic. However, the *phase*
difference varies with frequency, since phase is a sine *angle* issue.
In other words, the higher frequencies are more "out of phase" than
the lower frequencies (i.e. a greater angle of phase displacement).

For this situation, you actually could adjust the TOA between the two
signals by delaying the signal from the left microphone to match the
signal from the one on the right.

Even if you delay the signal from the left microphone to adjust for
TOA, there will be phase differences between the microphones - it
doesn't matter that there is only one source. Delaying one signal can
work for one frequency (or rather a few mathematically related
frequencies), but not for a melody - there will be some frequencies
that will be 180 degrees out and will cancel if the mics are summed.
Because the waves of different frequencies have different physical
lengths and peak at different intervals, the phase relationships (what
point in its phase curve each of the frequencies is at when it arrives
at the mic, relative to what point other frequencies are in theirs) are
different in two mics at different distances from the source,
regardless of TOA.
  #7   Karl Winkler

(apa) wrote in message . com...
(Karl Winkler) wrote in message . com...
"greggery peccary" .@. wrote in message ...
hey out there, i'm in an audio production class and the instructor avoids a
direct answer to this question: is there something in the nature of a
slightly out of phase recording (such as that created by a difference in mic
distance) that can make something sound "warmer" like analog? i know all
about the "if it sounds good do it" philosophy, but i'm wondering to what
extent do engineers delve into phase correction with their software? thanks
-greg


There is certainly a different sound when using a coincident pair of
mics vs. a spaced pair of mics, even if the exact same pair of mics is
used in the same room on the same acoustic source. The difference is
due to TOA (time of arrival) differences, but this phase difference
is not constant across all frequencies or all parts of the source.
Imagine a flute sitting to the front left of a spaced microphone
array, playing a melody. The sound from that flute arrives at the left
microphone first, then the one on the right. So there is a TOA
difference between the two outputs of the mic. However, the *phase*
difference varies with frequency, since phase is a sine *angle* issue.
In other words, the higher frequencies are more "out of phase" than
the lower frequencies (i.e. a greater angle of phase displacement).

For this situation, you actually could adjust the TOA between the two
signals by delaying the signal from the left microphone to match the
signal from the one on the right.

Even if you delay the signal from the left microphone to adjust for
TOA, there will be phase differences between the microphones - it
doesn't matter that there is only one source. Delaying one signal can
work for one frequency (or rather a few mathematically related
frequencies), but not for a melody - there will be some frequencies
that will be 180 degrees out and will cancel if the mics are summed.
Because the waves of different frequencies have different physical
lengths and peak at different intervals, the phase relationships (what
point in its phase curve each of the frequencies is at when it arrives
at the mic, relative to what point other frequencies are in theirs) are
different in two mics at different distances from the source,
regardless of TOA.


I agree that such a situation is hypothetical. However, if the only
difference between the two mics is their distance from the source
(say, a flute) and there are no reflections (i.e. this is an anechoic
chamber) then the two signals would be nearly identical at the two
mics (if the two mics are a foot or two apart, say), except for time
displacement (about 1 ms). In other words, it's not a specific phase
angle at a specific frequency, but all frequencies, in the same phase
relationships, arriving at the two mics at different times. Sound
travels at the same speed regardless of frequency. So, delaying the
signal from the nearer microphone should allow you to match it up with
the signal from the further microphone, with very little phase
cancellation. This is done all the time with spot mics in orchestral
recordings, for instance.
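
The spot-mic case is the easy arithmetic version of this, assuming the
usual 343 m/s speed of sound and an example 5 m difference in distance:

    speed_of_sound = 343.0    # m/s at room temperature
    extra_distance = 5.0      # how much closer the spot mic is than the main pair, meters

    delay_ms = 1000.0 * extra_distance / speed_of_sound
    print(f"delay the spot mic by about {delay_ms:.1f} ms")   # ~14.6 ms

Many engineers add a little extra on top of that so the spot arrives just
after the main pair, but the distance divided by the speed of sound is the
starting point.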

Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com
  #8   Mike Rivers


In article writes:

I agree that such a situation is hypothetical. However, if the only
difference between the two mics is their distance from the source
(say, a flute) and there are no reflections (i.e. this is an anechoic
chamber) then the two signals would be nearly identical at the two
mics (if the two mics are a foot or two apart, say), except for time
displacement (about 1 ms). In other words, it's not a specific phase
angle at a specific frequency, but all frequencies, in the same phase
relationships, arriving at the two mics at different times.


I know you know what you're trying to say <g> but if there's a time
difference between signals, there's a phase relationship (not phase
DIFFERENCE as people are prone to say) that isn't 0 degrees at all
frequencies. With a constant time delay, the phase shift will increase
from zero to several thousand degrees (depending on how far apart the
mics are and how high up in frequency you wish to measure it).
However, the time difference will be constant, obviously.

When we "correct" an "out of phase" microphone by reversing the
polarity, we add 180 degrees of phase shift to whatever it happens to
be at whatever frequency we choose to look at. Sometimes that sounds
better, sometimes it sounds worse, sometimes it's a tossup. When we
move a mic a few inches relative to another mic (or delay one signal
relative to the other electronically), we change all the phase shift
numbers. By finding the right amount of delay, we can make them all
zero, which usually is a good thing, or at least the most
theoretically correct thing. But at times, there's some aesthetic
benefit to letting a phase difference reduce the amplitude of certain
frequencies.
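
A minimal sketch of the delay-versus-polarity distinction above, using an
illustrative 1 ms offset between the mics:

    mic_offset = 0.001   # seconds between the two arrivals

    print("  freq    raw shift   after a matching 1 ms delay   after polarity flip only")
    for f in (100, 1000, 10000):
        raw = 360.0 * f * mic_offset          # phase shift due to the spacing
        matched = raw - 360.0 * f * 0.001     # the same shift removed by a delay line
        flipped = (raw + 180.0) % 360.0       # polarity inversion, no delay
        print(f"{f:>6}   {raw:9.0f}   {matched:27.0f}   {flipped:24.0f}")

The delay-derived shift climbs into the thousands of degrees, a matching
delay zeroes it at every frequency at once, and a bare polarity flip just
offsets whatever is already there by 180 degrees.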



--
I'm really Mike Rivers )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
  #9   apa

(Mike Rivers) wrote in message news:znr1098278336k@trad...
In article
writes:

I agree that such a situation is hypothetical. However, if the only
difference between the two mics is their distance from the source
(say, a flute) and there are no reflections (i.e. this is an anechoic
chamber) then the two signals would be nearly identical at the two
mics (if the two mics are a foot or two apart, say), except for time
displacement (about 1 ms). In other words, it's not a specific phase
angle at a specific frequency, but all frequencies, in the same phase
relationships, arriving at the two mics at different times.


I know you know what you're trying to say <g> but if there's a time
difference between signals, there's a phase relationship (not phase
DIFFERENCE as people are prone to say) that isn't 0 degrees at all
frequencies. With a constant time delay, the phase shift will increase
from zero to several thousand degrees (depending on how far apart the
mics are and how high up in frequency you wish to measure it).
However, the time difference will be constant, obviously.

When we "correct" an "out of phase" microphone by reversing the
polarity, we add 180 degrees of phase shift to whatever it happens to
be at whatever frequency we choose to look at. Sometimes that sounds
better, sometimes it sounds worse, sometimes it's a tossup. When we
move a mic a few inches relative to another mic (or delay one signal
relative to the other electronically), we change all the phase shift
numbers. By finding the right amount of delay, we can make them all
zero, which usually is a good thing, or at least the most
theoretically correct thing. But at times, there's some aesthetic
benefit to letting a phase difference reduce the amplitude of certain
frequencies.


Mike,

Am I wrong in the following, or did I misunderstand your post?

The delay created by mic distance introduces different degrees of
phase shift across the frequency spectrum IN ADDITION to the time
delay, whereas a delay created by an electronic delay line (or by
shifting a digitally recorded track) introduces only a time delay and
preserves the phase relationships of the original, and hence the second
cannot be used to compensate fully for the first.

-Andy