  #1   Walter Harley
Is relative phase audible?

I know I'm hardly the first person to study whether relative phase is
audible :-) Probably not even the first person this week. Nonetheless, I
never actually tried it myself till now. So:

Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
880Hz sines at the same relative levels but with different phase
relationships sound pretty different, to me. (That is, I can reliably tell
them apart in blind randomized trials.)

Thought a few others might be interested to take a listen. So, I put the
.wav files and a bit of discussion up on my web page, at
http://www.cafewalter.com/cafewalter/signals/phase.htm.

By the way, does anyone know offhand whether MP3 encoders preserve relative
phase?

-walter
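[For anyone who wants to reproduce the test at home, here's a minimal sketch of how such files could be generated. This is my own illustration, not Walter's actual script; the file names and the particular phase offsets are made up, but both files have the same four equal-level partials and differ only in their phase relationships.]

```python
import math
import struct
import wave

RATE = 44100                            # sample rate, Hz
SECONDS = 2
FREQS = [220.0, 440.0, 660.0, 880.0]    # harmonics of 220 Hz, equal levels

def make_wav(filename, phases):
    """Write a mono 16-bit .wav of summed sines with the given phases (radians)."""
    n = RATE * SECONDS
    samples = []
    for i in range(n):
        t = i / RATE
        s = sum(math.sin(2 * math.pi * f * t + p) for f, p in zip(FREQS, phases))
        # Divide by the number of partials so the worst-case peak still fits in 16 bits.
        samples.append(int(32767 * s / len(FREQS)))
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % n, *samples))

# Same amplitude spectrum, different phase relationships:
make_wav("inphase.wav", [0.0, 0.0, 0.0, 0.0])
make_wav("shifted.wav", [0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

Both files measure identically on a spectrum analyzer; only the waveform shape differs.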


  #2   Don Pearce

On Thu, 9 Dec 2004 23:47:13 -0800, "Walter Harley" wrote:

> Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
> 880Hz sines at the same relative levels but with different phase
> relationships sound pretty different, to me. (That is, I can reliably tell
> them apart in blind randomized trials.)
>
> By the way, does anyone know offhand whether MP3 encoders preserve
> relative phase?


They do indeed sound very different. The one you call inphase has an
"aaaaahhh" sound, and the inverted an "ohhhhhh" sound. Roughly.

Saving them as MP3 preserves the phase information very nicely.

d

Pearce Consulting
http://www.pearce.uk.com
  #3   Stephen Sank

I feel like my ears are being sucked out when I'm between two sound sources out of phase with
each other.

--
Stephen Sank, Owner & Ribbon Mic Restorer
Talking Dog Transducer Company
http://stephensank.com
5517 Carmelita Drive N.E.
Albuquerque, New Mexico [87111]
505-332-0336
Auth. Nakamichi & McIntosh servicer
Payments preferred through Paypal.com
"Don Pearce" wrote in message ...

> They do indeed sound very different. The one you call inphase has an
> "aaaaahhh" sound, and the inverted an "ohhhhhh" sound.



  #4   Don Pearce

On Fri, 10 Dec 2004 04:09:34 -0700, "Stephen Sank"
wrote:

> I feel like my ears are being sucked out when I'm between two sound
> sources out of phase with each other.


They aren't out of phase with each other. It is the relative phase of
the harmonics from the fundamental that has been changed. But both
channels are identical.

d

Pearce Consulting
http://www.pearce.uk.com
  #5   David Satz

Walter,

I think your experiment is a good one, but any conclusions need to be
carefully drawn for at least two reasons that I can think of:

[a] It's crucial to define exactly what question is being answered.
Psychoacoustics describes (and the anatomy of human hearing supports)
an ability to hear relative phase in the sense that you're using the
term, but only below 1500 Hz or so. Above that range the ability
disappears, but all your test signals were well below that point. So
let's beware the false dichotomy: "we can hear relative phase" / "we
cannot hear relative phase" when it's not quite so simple.

[b] When you vary the phase relationships among signal components, you
alter the peak levels of the composite signal (sometimes greatly) even
though the effective values remain the same. Some of your test
equipment may well behave differently given these changes in peak
level--a power amplifier or a recorder used in the experiment may
produce audibly higher or lower distortion levels, for example.
Listeners may well respond differently to these differences in peak
level.
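[Satz's point [b] is easy to check numerically. The sketch below is my own illustration, with arbitrarily chosen phase offsets, not the ones from Walter's files: it builds the same four-sine composite two ways and shows that the effective (RMS) level is unchanged while the peak level moves a great deal.]

```python
import math

FREQS = [220.0, 440.0, 660.0, 880.0]
RATE = 44100

def waveform(phases, seconds=1):
    """Sum of equal-level sines at FREQS with the given phases, one value per sample."""
    n = int(RATE * seconds)
    return [sum(math.sin(2 * math.pi * f * i / RATE + p)
                for f, p in zip(FREQS, phases)) for i in range(n)]

def peak(x):
    return max(abs(v) for v in x)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

aligned = waveform([math.pi / 2] * 4)                # all partials peak together at t=0
scrambled = waveform([math.pi / 2, 2.1, 0.7, 4.0])   # arbitrary phase offsets

# RMS is identical; peak is not.
print("peak: aligned=%.2f  scrambled=%.2f" % (peak(aligned), peak(scrambled)))
print("rms:  aligned=%.3f  scrambled=%.3f" % (rms(aligned), rms(scrambled)))
```

With all four partials phase-aligned the peak hits the sum of the amplitudes (4.0); with the offsets scrambled it is well under 3, yet the RMS stays at sqrt(2) either way.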

Of course you can't be blamed for the impossibility of keeping both the
effective loudness and the peak levels constant while altering the
phase relationships. Point [b] can indeed be considered a valid reason
to maintain relative phase relationships carefully. But this
uncontrolled variable limits the conclusions that can fairly be drawn
from any such experiment.

--best regards



  #7   Karl Winkler

> > I know I'm hardly the first person to study whether relative phase is
> > audible :-)
>
> Certainly not, since, by definition, phase IS relative. There is no
> such thing as absolute phase.

I think it depends on a definition. To me "absolute phase" is often the
term used when someone really means "polarity". But really, polarity is
also a relative term, requiring some reference, i.e. "point of origin"
(such as "positive polarity is a positive voltage on pin 2 and a
positive excursion of the woofer").

But it has been determined that "absolute phase" can be heard, if the
original waveform is asymmetrical, such as from a single reed instrument
(sax, clarinet, etc.), human voice, drums and string instruments. In
other words, if the kick drum produces a compression of the local
volume of air on a hit, then the woofer reproducing it should also
create an increase in pressure, i.e. it is in "absolute phase" with the
point of origin.

Which brought me to a thought I had yesterday... many engineers use two
mics on the toms and snare of the drum kit. And to have the "absolute
phase" match for these two mics, they flip the bottom mic out of
polarity. However, I think it would be better to flip the *top* mic out
of polarity, since when the skin is first hit, it goes *down* on the
heads, thus pulling the diaphragm of the top mic out and pushing the
diaphragm of the bottom mic in. Thus to get a positive excursion at the
speaker, the bottom mic should be used as the reference, with the top
mic "flipped" to match.

Undoubtedly, there are engineers who are already doing this, and maybe
I've exposed a "secret". Sorry about that...
Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com

  #8   Michael Putrino


"Karl Winkler" wrote in message
oups.com...

> Which brought me to a thought I had yesterday... many engineers use two
> mics on the toms and snare of the drum kit. And to have the "absolute
> phase" match for these two mics, they flip the bottom mic out of
> polarity. However, I think it would be better to flip the *top* mic out
> of polarity, since when the skin is first hit, it goes *down* on the
> heads, thus pulling the diaphragm of the top mic out and pushing the
> diaphragm of the bottom mic in. Thus to get a positive excursion at the
> speaker, the bottom mic should be used as the reference, with the top
> mic "flipped" to match.


But why? When you sit and listen to a drummer play, you are at the side of
the snare. So, miking a snare the way you suggest (or even the other way)
would produce something other than what is heard in the room. That might be
why minimalist miking of drums is preferred by many...more realistic.

Mike


  #9   Mark

If you combine the fundamental with its harmonics and vary the phase of
the harmonics, the peak amplitude of the waveform will change. If you
are not careful with scaling etc., the peaks can be clipped. It is
because of this distortion and other non-linearities that you might be
able to tell. If the playback system (and your ears) were linear, you
should not be able to perceive a change in the phase of the harmonics
relative to the fundamental.

Mark
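[Mark's clipping scenario can be demonstrated directly. This sketch is my own, with made-up phase sets and a made-up fixed gain: a gain that is safe for one phase relationship of the same four partials drives another into 16-bit clipping, and clipping is a real nonlinearity that makes the two versions audibly different for reasons having nothing to do with the ear.]

```python
import math

FREQS = [220.0, 440.0, 660.0, 880.0]
RATE = 44100

def to_int16(phases, gain):
    """Render about one fundamental period the careless way: fixed gain, hard clip."""
    n = RATE // 220
    out, clipped = [], 0
    for i in range(n):
        s = gain * sum(math.sin(2 * math.pi * f * i / RATE + p)
                       for f, p in zip(FREQS, phases))
        if s > 32767 or s < -32768:     # the converter saturates -> distortion
            clipped += 1
            s = max(-32768, min(32767, s))
        out.append(int(s))
    return out, clipped

# Same spectrum, same gain; only the phases differ:
_, clips_aligned = to_int16([math.pi / 2] * 4, gain=10000)       # peak 4.0 -> clips
_, clips_scrambled = to_int16([math.pi / 2, 2.1, 0.7, 4.0], gain=10000)
print("clipped samples: aligned=%d  scrambled=%d" % (clips_aligned, clips_scrambled))
```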

  #10   Scott Dorsey

Mark wrote:
> If you combine the fundamental with its harmonics and vary the phase of
> the harmonics, the peak amplitude of the waveform will change. If you
> are not careful with scaling etc., the peaks can be clipped.


What the original poster is measuring is the audibility of group delay.
When someone says "relative phase" I figure they are talking about phase
differences between channels.

There is some good research on the audibility of group delay out there,
including Koray Oczam's paper, AES preprint 5740.
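[A standard way to hear group delay in isolation is a first-order allpass network, which shifts phase without touching amplitude. The sketch below is my own illustration, not from the paper Scott mentions: it runs sines through a first-order digital allpass and confirms the level is unchanged at every frequency while the delay through the filter depends on frequency, so the partials of a composite tone get shifted relative to one another.]

```python
import math

RATE = 44100

def allpass(x, a):
    """First-order digital allpass: H(z) = (a + z^-1) / (1 + a*z^-1).
    Magnitude is exactly 1 at every frequency; only phase changes."""
    y, x1, y1 = [], 0.0, 0.0
    for s in x:
        out = a * s + x1 - a * y1
        y.append(out)
        x1, y1 = s, out
    return y

def sine(f, n):
    return [math.sin(2 * math.pi * f * i / RATE) for i in range(n)]

def amp(x):
    return max(abs(v) for v in x)

def group_delay(a, f):
    """Group delay of the allpass in samples: (1 - a^2) / (1 + 2a*cos(w) + a^2)."""
    w = 2 * math.pi * f / RATE
    return (1 - a * a) / (1 + 2 * a * math.cos(w) + a * a)

A = -0.9   # pole well inside the unit circle -> strongly frequency-dependent delay
for f in (220.0, 880.0):
    out = allpass(sine(f, RATE), A)
    # Skip the start-up transient before measuring the steady-state level.
    print("f=%4.0f Hz  out-peak=%.4f  delay=%.1f samples"
          % (f, amp(out[2000:]), group_delay(A, f)))
```

The 220 Hz partial is delayed more than twice as long as the 880 Hz one, even though both come out at unity gain; that differential delay is exactly what scrambles the waveform shape.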
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."


  #11   David Morgan (MAMS)


"Michael Putrino" wrote in message ...

> But why? When you sit and listen to a drummer play, you are at the side of
> the snare. So, miking a snare the way you suggest (or even the other way)
> would produce something other than what is heard in the room. That might be
> why minimalist miking of drums is preferred by many...more realistic.



Unfortunately, 'realistic' is very often not do-able because of bad rooms.
Close miking a drum kit allows the producer to at least have a chance at
creating a space for the kit which actually fits a mix, rather than spend the
time figuring out how to eliminate the horrible-sounding room that came with
the minuscule number of tracks available, without destroying the source.

In my experience, minimal miking only works in a special set of circumstances,
which span every variable from the drummer's performance to the room itself.
With a little practice, more sources will allow a good engineer to re-create what
may have been missing from the tracking quality. Sound design is a big part of
putting together the mix... it doesn't have to be something bigger than life and
can still end up sounding "natural". Two or three sources that are crammed
with multiple and possibly unbalanced, awful tones with waaay too much 'room'
are often nigh on impossible to work with if drums are meant to cut through
the mix at all.

Most cases that I have witnessed (in my 30 years of doing this) where
minimal miking was used on the drums were very genre-specific, as in
jazz - but even more so lately, the main reason people are purporting to be
'minimalist' in technique is that they simply don't have the space, the mics,
the available tracks, or the experience to take a bigger picture to work with.

It may be preferred by many, but in the global scheme of things, in large
studios, that's actually very few.

--
David Morgan (MAMS)
http://www.m-a-m-s DOT com
Morgan Audio Media Service
Dallas, Texas (214) 662-9901
_______________________________________
http://www.artisan-recordingstudio.com





  #12   Karl Winkler

Mike, this is indeed a good point you are making. In the "for my
method" camp I would say that by presenting a positive waveform to the
listener, it would create an "impact" that is associated with the
snare. But what you say is true, and really, what would be needed is
something that recreates a realistic impression of *height* as well as
L-R information. And at this point, no one seems to be working on this
principle. Clearly, that's what Ambisonics was/is doing... but none of
the current 5.1, 7.1 etc. systems seem to take this into account.

I'm personally a huge fan of minimal drum miking when it can be done
right. In fact, for several years when I was touring with the Air Force
jazz big band, I used 3 mics for the drums: two overheads and a kick
mic. I would often get comments about how "realistic" the overall sound
was. My goal was to maintain the impact of the drums, and the natural
relationships between the different sources in the kit, rather than
trying to isolate and present each source.
Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com

  #13   Michael Putrino


"David Morgan (MAMS)" wrote in message news:EKlud.3709$mn6.3377@trnddc07...

> Unfortunately, 'realistic' is very often not do-able because of bad rooms.
> Close miking a drum kit allows the producer to at least have a chance at
> creating a space for the kit which actually fits a mix. [...]
>
> It may be preferred by many, but in the global scheme of things, in large
> studios, that's actually very few.


Yes, I agree...room is everything. What I was trying to convey, but did so
badly, was that I don't think it would make much difference whether you
switch the phase on the top mic or the bottom mic. An outward or inward
pressure from the miking is not what you would normally hear in a
performance anyway...so pick one...hopefully the one that sounds better to
you at the time...mixed in with the whole band. Record them on separate
tracks and you can flip to your heart's content.

Mike



  #14   DeserTBoB

On Thu, 9 Dec 2004 23:47:13 -0800, "Walter Harley"
wrote:

> Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
> 880Hz sines at the same relative levels but with different phase
> relationships sound pretty different, to me. (That is, I can reliably tell
> them apart in blind randomized trials.) <snip>


This has been known for years. The Hammond "organ", first sold in
1935, uses additive synthesis in a failed attempt to recreate the
sound of various organ stops. Due to the construction of the
tonewheel generator, all the tonewheels are in a different phase
relationship every time the organ starts up, since all the wheels are
clutch driven and slip slightly upon startup. The combination you
cite above would be equivalent to A below Middle C with the 8', 4', 2
2/3' and 2' drawbars pulled out. However, there's a catch here, since
Hammonds are roughly tuned (not exactly) to Equal Temperament, so the
2 2/3' pitch would be slightly flat of Just Temperament, and thus,
not exactly 660 Hz. In this case, the 2 2/3' pitch would be 659.255 Hz
in ET, and Hammonds "stretch and shrink" ET just a tad here and there
because of the limitations of the mathematics of the tonewheel
generator. In any event, there'll be some beating from this,
regardless of phase.
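[The 659.255 Hz figure checks out; a two-line sketch of the arithmetic, using names of my own choosing:]

```python
import math

# The 2 2/3' drawbar on A below Middle C (A3 = 220 Hz) sounds a twelfth above,
# i.e. seven equal-tempered semitones above A4 = 440 Hz.
A4 = 440.0
et_twelfth = A4 * 2 ** (7 / 12)   # E5 in 12-tone equal temperament
just_twelfth = 3 * 220.0          # 3rd harmonic of A3: exactly 660 Hz

print("ET:   %.3f Hz" % et_twelfth)    # 659.255 Hz, as stated above
print("Just: %.1f Hz" % just_twelfth)  # 660.0 Hz
print("Beat: %.3f Hz" % (just_twelfth - et_twelfth))
```

The sub-hertz gap between the just and equal-tempered pitches is the beating described above, present regardless of starting phase.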

Every time the organ is started up, this combination will sound
different to the ear. The same goes when you use tones derived
from a top octave generator, or a bank of free-running oscillators
locked with a PLL; vary the phase, and the tone will sound slightly
different here and there.

So, to answer your question, yes, phase angle can change the timbre of
harmonically complex tones. How much is a matter of conjecture.
Tests done by the Allen Organ Company showed that relative phase is
far less a determining factor in timbre "footprint" than is relative
amplitude of harmonics, but, contrary to what many had said in the
past, it IS discernible. However, as Messrs. Fletcher and Munson
learned at Bell Telephone Labs in the '20s, pitch recognition up above
the midrange area, say above 1800 Hz, becomes less accurate as
frequency increases.

Thus, it can be argued quite well that changes in phase angle that
mostly affect the top end, such as we see in digital PCM, do not
materially affect the tonality of the sound, but rather become mostly
indecipherable to the human ear. Again, amplitude is the prime
consideration, with phase angle ranking way down the list. At
frequencies above about 8 kHz, pitch recognition in most people goes
away, anyway, so phase is totally irrelevant UNLESS the change in
phase angle changes the difference products of IM distortion. THEN
you open a whole new kettle of fish!

dBdB
  #15   David Morgan (MAMS)


"Michael Putrino" wrote in message ...

> Yes, I agree...room is everything. What I was trying to convey, but did so
> badly, was that I don't think it would make much difference whether you
> switch the phase on the top mic or the bottom mic. An outward or inward
> pressure from the miking is not what you would normally hear in a
> performance anyway...so pick one...hopefully the one that sounds better to
> you at the time...mixed in with the whole band. Record them on separate
> tracks and you can flip to your heart's content.



Ahhh... gotcha'. In my main recording room, the owner has a dozen or so
polarity reversed cables which he uses religiously on toms and other things
wherein he believes that the downward motion of the instrument, due to the
initial attack being a movement away from the microphone, will make a better
and more accurate recording. I've tried it and could tell diddly-squat difference
in my results, but he lives by it and won't consider anything else.

So... point now well taken. ;-)

--
David Morgan (MAMS)
http://www.m-a-m-s DOT com
Morgan Audio Media Service
Dallas, Texas (214) 662-9901
_______________________________________
http://www.artisan-recordingstudio.com





  #16   Nathan West

"David Morgan (MAMS)" wrote:

> Unfortunately, 'realistic' is very often not do-able because of bad rooms.

OT to the subject, but here is my thought on *realistic* sounds and drums.

I don't think realistic-sounding drums are possible, since *realistic* is too relative a term
for anything as eclectic as drums. First, where do we hear drums? Clubs? Bars? Concerts?
Living rooms? What constitutes a realistic drum sound then? It can't be in a nice recording
studio room with great acoustics, since very few places that we listen to drums in are like
that, and rarely do people listen to drums in those kinds of rooms. But we don't record drums
to sound like the aforementioned locations either (well, mostly we don't).

So it appears recording *natural* sounding drums is not about letting them sound natural like
they really do in the majority of rooms we listen to them in, rather it is about eliminating
the bad acoustical drum noise to make room in a recording for some good acoustical drum
noise. Which BTW I think you do point out later in your post. Am I off base in this thinking
though?

--
Nathan

"Imagine if there were no Hypothetical Situations"


  #17   Mike Rivers


In article .com writes:

> I think it depends on a definition. To me "absolute phase" is often the
> term used when someone really means "polarity".


Spoken like an ex-microphone salesman. <g> I don't think I've ever
heard anyone use the term "absolute phase" other than when they (I)
tried to explain that phase is the measure of the time relationship
between two signals. If you have only one signal, you have nothing for
it to have any phase relationship with.

> But really, polarity is
> also a relative term, requiring some reference, i.e. "point of origin"
> (such as "positive polarity is a positive voltage on pin 2 and a
> positive excursion of the woofer").


For microphones and analog tape (don't know about speakers) there IS a
reference. Positive pressure on the diaphragm makes Pin 2 go positive
with respect to Pin 3. Anything that works the other way is either
miswired or built before the standard was universally adopted,
sometime in the early 1980s, I think.

> But it has been determined that "absolute phase" can be heard, if the
> original waveform is asymmetrical


That's what we call "absolute polarity" or more accurately and
snootily, "acoustical polarity."

> Which brought me to a thought I had yesterday... many engineers use two
> mics on the toms and snare of the drum kit. And to have the "absolute
> phase" match for these two mics, they flip the bottom mic out of
> polarity. However, I think it would be better to flip the *top* mic out
> of polarity, since when the skin is first hit, it goes *down* on the
> heads, thus pulling the diaphragm of the top mic out and pushing the
> diaphragm of the bottom mic in.


But that's the way you hear it. Hit the drum and it sucks your eardrum
out of your ear, so, to be accurate, you want the speaker to do the
same thing when playing the recording. But I've never heard of someone
inverting the polarity of a kick drum mic when it's placed on the
beater side of the head to make the speaker move in the same direction
as a mic more conventionally placed would make it move. Though I have
on occasion decided that the mix just worked a little better when I
inverted the polarity of the kick mic (no matter which side of the
head it was on) so I do it for the sake of sounding better rather than
making a recording that someone can watch and say "yup, he got the
polarity of the kick mic right."

But one thing you can't control in the studio is whether the listeners
at home have their speakers wired correctly.


--
I'm really Mike Rivers )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
  #18   Neil Henderson

"Karl Winkler" wrote in message
oups.com...
> However, I think it would be better to flip the *top* mic out
> of polarity, since when the skin is first hit, it goes *down* on the
> heads, thus pulling the diaphragm of the top mic out and pushing the
> diaphragm of the bottom mic in. Thus to get a positive excursion at the
> speaker, the bottom mic should be used as the reference, with the top
> mic "flipped" to match.
>
> Undoubtedly, there are engineers who are already doing this, and maybe
> I've exposed a "secret". Sorry about that...


No, I don't think so, anyway... the response of the head is fast enough that
unless you're both top & bottom mic'ing a given drum, the difference is
negligible, relative to the whole kit... unless by doing so you generate
other phase related issues such as how that mic's signal now relates to,
let's say, that of the overheads. Think about it... you've got all these
variables to consider:
1.) Is the mic ONLY picking up the sound generated by the center of the
head, where the stick (assumedly) strikes? No, it's also picking up the
sound from the shell, and from the outer edges of the head (which generate a
wave faster than the center does, since the edges have less distance to
travel on "recoil" than the center of the head does). So it's a pretty
complex sound that a drum mic is picking up, even apart from reflections
from room surfaces.
2.) Is the mic pointed with the capsule directly downward right at the
strike point? Never - except in the case of a mic stuck right in front of a
kick drum beater, perhaps... therefore you've got more "relative" phase
happening than "absolute" phase in every circumstance on each drum.
3.) The mic is also picking up reflections from the floor, and any
surrounding walls - how do these reflections relate to, for example, those
picked up from the overheads if you were to flip the phase on a top snare
mic?
4.) Is the few microseconds of difference in when the sound arrives at the
mic on a single close-mic'ed snare (again assuming top-micing only) if you
flip the polarity, going to make a detectable difference in phase relative
to - let's say - the kick mic, which is normally/often mic'ed so that the
head is in excursion relative to the mic diaphragm when it's "kicked"?
Probably not - what's more likely to happen is that the combination of the
waves generated by the kick shell & the floor & the surrounding wall
surfaces are going to have more of an impact as to whether it sounds more
in-phase or out of phase. Same goes for what the in-phase kick mic picks up
from the snare when the snare is struck (which is normally/often at a much
lower level if the mic is located inside the kick).

There's more, I'm sure, but off the top of my head, that's the stuff that
immediately comes to mind. I've tried messing around with what you mentioned
before, just out of curiosity, and to me it just makes more sense to keep
everything in the same relative polarity, but just maintain awareness of the
normal stuff that can cause phase issues (distances between mics, esp. how
the overheads are set up, dealing with reflections, etc).

Anyway, having said all that; Karl, have you tried that which you mentioned,
and do you prefer it that way?

Neil Henderson


  #19   Neil Henderson


"David Satz" wrote in message
oups.com...
> Walter,
>
> I think your experiment is a good one, but any conclusions need to be
> carefully drawn for at least two reasons that I can think of:
>
> [a] It's crucial to define exactly what question is being answered.
> Psychoacoustics describes (and the anatomy of human hearing supports)
> an ability to hear relative phase in the sense that you're using the
> term, but only below 1500 Hz or so. Above that range the ability
> disappears,


Hey David... since the ear has a natural presence peak at around 3k,
wouldn't one be able to detect things in that range even more readily?

Neil Henderson


  #20   Mark


DeserTBoB wrote:
> This has been known for years. The Hammond "organ", first sold in
> 1935, uses additive synthesis in a failed attempt to recreate the
> sound of various organ stops. [...] Every time the organ is started
> up, this combination will sound different to the ear. The same goes
> when you use tones derived from a top octave generator, or a bank of
> free-running oscillators locked with a PLL; vary the phase, and the
> tone will sound slightly different here and there.


This is not the same thing as the OP talked about. The OP talked about
one note and harmonics of that note. What you described are different
notes of an organ, and the fact that when they are not exactly equally
tempered it gives the organ body, which is a good thing. These are two
different things.

Your ear cannot perceive a change in phase between the fundamental and
the harmonics unless there is a non-linearity that distorts the
waveform, which then changes the amplitude of additional harmonics.
Mark



  #21   Walter Harley

"David Satz" wrote in message
oups.com...
> Walter,
>
> I think your experiment is a good one, but any conclusions need to be
> carefully drawn for at least two reasons that I can think of: [...]


Both very good points. I do discuss those in my writeup.

-w


  #22   Walter Harley

"Mark" wrote in message
oups.com...
> Your ear cannot perceive a change in phase between the fundamental and
> the harmonics unless there is a non-linearity that distorts the
> waveform, which then changes the amplitude of additional harmonics.


Your ear might not be able to, but I just demonstrated that my ear can. I
do not believe there is any substantive nonlinearity in the system on which
I explored this (described in my writeup).

It is precisely this (mis-)conception which I hoped to address.

Out of interest: Mark, can you hear the difference between the two .wav
files?

-walter


  #23   Walter Harley

"Scott Dorsey" wrote in message
...
What the original poster is measuring is the audibility of group delay.
When someone says "relative phase" I figure they are talking about phase
differences between channels.


Thanks for the correction in terminology. I'll update my web page.


There is some good research on the audibility of group delay out there.
Including Koray Oczam's paper, AES preprint 5740.


...and thanks for the reference. It was an article in the latest JAES that
motivated me to go explore this (I've always wondered about the schism
between people saying it's inaudible and people complaining about graphic
EQ's screwing up the sound, but never done anything about it). I've just
downloaded Oczam's paper and will check it out.

-walter


  #24   Report Post  
Scott Dorsey
 
Posts: n/a
Default

Walter Harley wrote:

...and thanks for the reference. It was an article in the latest JAES that
motivated me to go explore this (I've always wondered about the schism
between people saying it's inaudible and people complaining about graphic
EQ's screwing up the sound, but never done anything about it). I've just
downloaded Oczam's paper and will check it out.


Well, graphic EQs screw up the sound in enough different ways that the group
delay issue may not even be the most serious one. Just looking at the actual
frequency response of a graphic configured for a gradual rise will make you
feel queasy.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
  #25   Report Post  
Mark
 
Posts: n/a
Default


Walter Harley wrote:
"Mark" wrote in message
oups.com...
Your ear cannot perceive a change in phase between the fundamental and
the harmonics unless there is a non-linearity that distorts the
waveform which then changes the amplitude of additional harmonics.


Your ear might not be able to, but I just demonstrated that my ear can. I
do not believe there is any substantive nonlinearity in the system on which
I explored this (described in my writeup).

It is precisely this (mis-)conception which I hoped to address.

Out of interest: Mark, can you hear the difference between the two .wav
files?

-walter


Why, no, Walter, I could not hear a difference, but that means nothing.

Have you demonstrated that you hear a difference using a double blind
test as you describe in the article?

If you wish to address the (mis)conception as you say, then you need to
also verify the validity of the experiment by checking the 2 waveforms
with an oscilloscope and a spectrum analyzer. Please use the scope to
verify that neither waveform is being distorted, and use the SA to
verify that all the harmonics are at the same relative amplitude in
both cases. This will verify that the ONLY thing different is the
phase, and that there are no additional harmonics being added and that
the amplitude of the ones you put in is unchanged.


If you can verify the experiment is valid this way, and can still
statistically hear a difference in a double blind test, then you will
begin to get my attention.

I applaud you for questioning the party line, but you must apply
rigorous checks or else it's cold fusion.


Mark
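Mark's scope/spectrum-analyzer check can also be done in software. Below is a minimal sketch (assuming Python with NumPy; the phase offsets are arbitrary illustrations, not the phases in Walter's actual .wav files) showing that two sums of the same four tones can have identical magnitude spectra while their time-domain waveforms differ:

```python
import numpy as np

fs = 44100                      # sample rate, Hz
t = np.arange(fs) / fs          # one second of samples
freqs = [220, 440, 660, 880]    # the four tones from the experiment

# Version A: every component starts at phase 0.
a = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Version B: alternate components start 90 degrees later (arbitrary choice).
b = sum(np.sin(2 * np.pi * f * t + (np.pi / 2) * (i % 2))
        for i, f in enumerate(freqs))

# The "spectrum analyzer" view: magnitude spectra match to numerical error.
A = np.abs(np.fft.rfft(a))
B = np.abs(np.fft.rfft(b))
print(np.allclose(A, B, atol=1e-6 * A.max()))   # True

# The "oscilloscope" view: the waveforms themselves clearly differ.
print(np.max(np.abs(a - b)) > 0.1)              # True
</antml_code_interleaved>```

If the magnitudes match but the waveforms differ, the only remaining variable is phase, which is exactly the condition Mark asks Walter to verify.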



  #26   Report Post  
Mark
 
Posts: n/a
Default


Walter,

I just tried your experiment in my lab in a different way. I used 2
audio oscillators, set 1 to 440 and the other to 880. I summed them
and looked at the combination waveform on the scope and listened on my
monitors. Since the 880 is not exactly 2x the 440, the relative phase
drifts through slowly. To my surprise, I could hear a difference as
the waveform changed. But then I thought about it and realized that my 440
generator (and yours) is not perfect and generates some 880. This 880
combines with the 880 that I added from the other generator, and as the
phase relationship changes, the AMPLITUDE of the 880 changes. This in
fact is what it sounded like. So again, I believe you need to verify
your experimental setup to ensure that ALL the harmonics are at the
same AMPLITUDE in both waveforms. A change in the amplitude of the
harmonics will obviously change the "timbre" of the sound.
Thanks

Mark
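The effect Mark describes is plain vector addition: the 440 Hz generator's residual 880 Hz output and the deliberately added 880 Hz tone sum as phasors, so the net 880 Hz amplitude rises and falls as their relative phase drifts. A rough sketch (the leakage level A below is a hypothetical figure, not a measurement of anyone's oscillator):

```python
import numpy as np

A = 0.02   # hypothetical 880 Hz leakage from the 440 Hz oscillator (~ -34 dB)
B = 1.0    # the 880 Hz tone from the second oscillator

# Sweep the slowly drifting relative phase through a full cycle.
phi = np.linspace(0, 2 * np.pi, 1001)

# Net 880 Hz amplitude: the magnitude of the phasor sum.
net = np.abs(B + A * np.exp(1j * phi))

# It swings between B - A and B + A as the phase drifts:
swing_db = 20 * np.log10(net.max() / net.min())
print(round(swing_db, 2))   # about 0.35 dB
```

Even a harmonic 34 dB down produces a wobble of roughly a third of a dB in the combined level, which supports Mark's point that amplitude, not phase per se, may be what is actually heard.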

  #27   Report Post  
 
Posts: n/a
Default

Hi Walter,

This is an interesting experiment. However, I feel, as some other
posters do, that there are some sources of error in it.

1) The maximum peak to rms ratio becomes larger as the number of tones
added together is increased. Hence, to keep the maximum peak to rms
ratio to a minimum, I suggest that you use only two tones.

2) Sound generation equipment generates harmonic distortion. By choosing
tones that are harmonically related, you will augment or diminish the
harmonics generated by the equipment, thereby accentuating the effect.
So, I suggest that you not only use two tones but that they be not
related harmonically. Perhaps 220 Hz and pi*220 Hz may work.

3) Putting two or more tones through an amplifier results in intermods.
I suggest that you put one tone in the left channel and the other in
the right channel just to get this out of the picture.

Btw, I did listen to your experiment on junky cheap powered computer
speakers I bought many years ago and could not hear any difference.
(This being rec.audio.pro, should I run for cover?) But I do believe
that on better equipment the difference would be audible. And I did
see that the in-phase set had higher peaks than the out-of-phase set.

Joe
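Joe's point 1 — and Walter's observation that one set has visibly higher peaks — comes down to crest factor: the phase pattern changes the peak level while the RMS (and hence the spectrum) stays fixed. A small illustration (the two phase sets here are arbitrary examples, not the ones in Walter's files):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs              # one second
freqs = [220, 440, 660, 880]

def crest_db(phases):
    """Peak-to-RMS ratio, in dB, of a sum of unit sines at `freqs`."""
    x = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

aligned = crest_db([0, 0, 0, 0])                            # all sines in phase
scrambled = crest_db([0, np.pi / 2, np.pi, 3 * np.pi / 2])  # staggered phases

# Same four tones, same RMS, but the peak level differs by well over 1 dB.
print(round(aligned, 1), round(scrambled, 1))
```

The RMS is identical in both cases, so level-matched playback pushes the higher-crest version closer to clipping — one of the error sources Joe warns about.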

  #28   Report Post  
David Satz
 
Posts: n/a
Default

the ear has a natural presence peak at around 3k

That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
represent, but OK--your meaning is clear enough.


wouldn't one be able to detect things in that range even more readily?

That just doesn't turn out to be true in practice. So you've just
offered a highly intelligent explanation for a fact that doesn't exist.
--best regards

  #29   Report Post  
Neil Henderson
 
Posts: n/a
Default


"David Satz" wrote in message
oups.com...
the ear has a natural presence peak at around 3k


That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
represent, but OK--your meaning is clear enough.


wouldn't one be able to detect things in that range even more readily?

That just doesn't turn out to be true in practice. So you've just
offered a highly intelligent explanation for a fact that doesn't exist.
--best regards


Hmmm... I don't get what you're saying - can you run that by me again? That
was a serious question, BTW - not a flippant remark, if that's what you
thought.

Neil Henderson


  #30   Report Post  
Arny Krueger
 
Posts: n/a
Default

"Neil Henderson" wrote in message
. com
"David Satz" wrote in message
oups.com...
Walter,

I think your experiment is a good one, but any conclusions need to be
carefully drawn for at least two reasons that I can think of:

[a] It's crucial to define exactly what question is being answered.
Psychoacoustics describes (and the anatomy of human hearing supports)
an ability to hear relative phase in the sense that you're using the
term, but only below 1500 Hz or so. Above that range the ability
disappears,


Hey David... since the ear has a natural presence peak at around 3k,
wouldn't one be able to detect things in that range even more readily?


For what this "Me too" post is worth, David's got it exactly right.

The normal given frequency for the point where the ear is most sensitive
based on intensity is more like 4 KHz than 3. The reason usually given
is the ear-canal resonance that you seem to be referring to.

While the ear is most sensitive to sound based on intensity at about 4 KHz,
the ear is most sensitive to other aspects of sound at other frequencies.

For example, the ear is most sensitive to FM distortion when the FM
modulation occurs at very low frequencies, a few Hz. The ear is often most
sensitive to nonlinear distortion when the test signal is at some frequency
other than 4 KHz, but there is a spurious tone generated by the distortion
that appears around 4 KHz, and so on.




  #31   Report Post  
Ty Ford
 
Posts: n/a
Default

On Sun, 12 Dec 2004 01:20:50 -0500, David Satz wrote
(in article .com):

the ear has a natural presence peak at around 3k


That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
represent, but OK--your meaning is clear enough.


wouldn't one be able to detect things in that range even more readily?

That just doesn't turn out to be true in practice. So you've just
offered a highly intelligent explanation for a fact that doesn't exist.
--best regards


The Fletcher-Munson curves illustrate that at low levels the 3 kHz area is
dominant. As the SPL rises, however, the peak is not as dominant.

Regards,

Ty Ford



-- Ty Ford's equipment reviews, audio samples, rates and other audiocentric
stuff are at www.tyford.com

  #32   Report Post  
Neil Henderson
 
Posts: n/a
Default


"Arny Krueger" wrote in message
...

While the ear is most sensitive to sound based on intensity at about 4 KHz
, the ear is most sensitive to other aspects of sound at other
frequencies.


OK, I get what you're saying now. Thanks!

Neil Henderson


  #33   Report Post  
DeserTBoB
 
Posts: n/a
Default

On 10 Dec 2004 20:35:10 -0800, "Mark" wrote:

This is not the same thing as the OP talked about. snip


Yes, it is. Read this post.

The OP talked about
one note and harmonics of that note. snip


...which is what I was talking about.

What you described are different
notes of an organ and the fact that when they are not exactly equally
tempered it gives the organ body which is a good thing. These are two
different things. snip


Wrong on two counts. On a "real" organ, mutation stops, which
coincide with the third, fifth, and sixth (and on up) harmonics of the
fundamental, are tuned to Just Temperament. Unit organs, such as the
Wurlitzer used in theaters in the '20s, and other highly unified
organs, have derived mutations from their fundamental ranks, which are
tuned to Equal Temperament, and thus all dissonant mutation stops will
be technically "out of tune." The Hammond (except for the failed
G-100) uses this same sort of unification, and thus only the third,
fifth, sixth and the third harmonic of the suboctave fundamental are
mutations, or dissonant false "harmonics", which are actually tuned
approximately to Equal Temperament. All other tones (the consonant
harmonics) will be locked in phase once the tonewheel generator gets
up to speed, unless there is a defective tonewheel clutch somewhere.
You can easily prove this with a scope on the output.

Your ear cannot perceive a change in phase between the fundamental and
the harmonics unless there is a non-linearity that distorts the
waveform which then changes the amplitude of additional harmonics. snip


Sensitivity to phase angle of harmonics of a fundamental was thought
for years to be imperceptible to the human ear, but research in the
'80s proved otherwise, though nothing major enough to really change
any predating basic theories about timbral synthesis. As stated
earlier, Allen Organ, and developers of the Musicom system in the UK,
discovered that changes in relative phase DO influence the listener's
perception of "timbre," which, as Helmholtz defined, is the sum total
effect of a fundamental and its various harmonics at varying
amplitudes which makes up the signature sound of a tuned instrument.
However, this effect only happens in the most pitch-sensitive area of
human hearing, determined in the 1920s to be that area between 200 and
1800 Hz. Thus a 220 Hz (A below middle C) note on, say, a slender
scaled organ stop will have a harmonic train of as many as 50
harmonics, all of which will have their own peculiar phase relationship
to the fundamental. Now, the first three harmonics are the most
discernible as to pitch, whereas pitch acuity above that point
becomes progressively poorer. Not a worry, since these harmonics are
locked in tune with the fundamental...or are they?

In this example, they surely are NOT. Due to the resistance to the
propagating wave front within the slender scaled pipe body of such a
stop, these first three harmonics actually undulate in phase enough to
cause a very tiny, almost imperceptible "beating" amongst themselves.
Tests determined that the listener CAN determine when this phase shift
occurs, even though the relative amplitudes of all concerned harmonics
vary less than .5 dB, an almost imperceptible change in amplitude to
the human ear even under the best of conditions. As the pitch of each
related harmonic rises, this "off phase" behavior increases with
frequency until, in the upper regions of audibility, they even waver
off pitch. The ear, however, cannot discern this pitch change per se,
but CAN detect the sum and difference byproducts (think IM distortion)
and the changes in amplitude caused by beating and phase cancellation.
It was found by Fast Fourier Analysis that the phase relationships of
this harmonic train don't stay in relative phase with each other,
either, which is to say that a 30° negative shift in phase of one
particular harmonic doesn't necessarily guarantee a 60° negative shift
of the next consonant harmonic in the train.

In another example I gave, using a top octave generator/divider
scheme, the harmonics WILL indeed stay in absolute locked phase,
regardless of how high in frequency the harmonics extend, the limit of
which is the frequency outputs of the TOG itself. You can, however,
by using LC components, shift the phase of various harmonics after
they're "divided out" and brought out separately, and thus you can test
the question that way, as has also been done in the past. Locked
phase relationship is why "divider organs" (Thomas, Lowrey, others)
always sounded even more sterile than the Hammond, which was bad
enough. At least the imperfections of the Hammond due to tonewheel
magnetization and other noises would provide a LITTLE interest to the
composite tone; the divider organs had none. Example of a divider
organ that failed to live up to the Hammond name: the X-66.

All of this is why it took makers of electronic poseurs so long to
figure out why their imitation strings would sometimes sound like
reeds (or, worse, "frying bacon.") It's still very difficult indeed
for a digital system, locked in phase as they are unless using VERY
long samples/models and VERY fast clocks, to faithfully produce the
sound of an organ string stop, especially one of very fine scale and
bright timbre. For that matter, a real violin or viola is a challenge
not met by synthesizers either, for much the same reasons, except that
now we're dealing with a mechanically excited string instead of a
narrow air column, and you're also dealing with the technique of the
player.

Back in the box. Go into your room, fire up that old tonewheeled
"coffee grinder" Hammond, connect a scope to one of the G-G terminals
(yes, old Hammonds have a tip-ring output, but with no center tap),
and draw out the first, third, fourth, sixth and ninth drawbars to 8.
Observe. You will see LOTS of non-pitch related stuff going on there
(due to magnetized tonewheels, preamp distortion and noise, the works)
but you won't see much, if any, variation in relative phase between
the various consonant harmonics playing. Fine. If you have a storage
scope, take a snapshot now. Shut down the Hammond and let it coast
down. Fire it back up again, then look at the same note played the
same way. The waveform will be different, because all (or most) of
the tonewheels involved have slipped their clutches upon start up.
Does it sound the exact same? Close...but listen REAL close. It's a
LITTLE different now.

You have to be able to filter out all the "crap" that's in even good
Hammonds (most players in rock prefer really screwed up Hammonds), as
magnetized tonewheels will provide inharmonic "thumping" that's very
obvious in the output waveform. Concentrate on the sinusoids...they
will either be phase locked, or pretty damned close to it. Now, add a
touch of the fifth drawbar, which is a "borrowed" mutation at 2 2/3'
pitch. The waveform will start undulating wildly, as the Equal
Temperament of this pitch beats with the fundamental and all its
harmonics. Thus, you can't include any dissonant harmonics in such a
test of phase acuity when using a Hammond organ.

By the way, it's NOT that feature which provides the "warmth" of a
Hammond organ...it's that 22-H or 122 Leslie over there. Played
through the board (G-G goes to tip and ring, and in ya go...source
impedance: 220 ohms at around a +12 wide open...pad it down!) or even
through one of Hammond's notoriously lousy tone cabinets, the Hammond
sounds horrid...unmusical as all hell. I don't even like them through
a Leslie, but I'm a classically trained organist, so Hammonds to me
are an abortion for real organ music. For R&B, pop and rock, they
have a signature sound WITH the Leslie that's an unavoidable part of
the fabric of American popular music.

Only very recently have digital poseurs even come close to
emulating the sound of a tonewheel organ, because of 1.) failure to
realize that real Hammonds are NOT exactly tuned to Equal Temperament
due to the mathematics of running 91 tonewheels from only one 1200 RPM
motor shaft, and 2.) there's a lot more than just a bunch of added
sinusoids going on in that tonewheel generator. Best one I've heard
to date: the Voce. The Hammond products...wellll, they tried.
Roland's VK-7 and VK-77 missed the mark by a bit, too, but are pretty
good. The Hammond-Suzuki "new B-3" (not the XB-3...eh) is REAL good
(uses bifurcated palladium keying switches, just like the original),
but at $50K a copy, I wouldn't look for them in your local garage band
anytime soon. The XB-3? Well, let's just say it was a nice try.

dB
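The "undulating wildly" behavior described above when an Equal Temperament mutation is drawn against a true harmonic can be predicted directly: the 2 2/3' stop sounds the ET note nearest the third harmonic, and the small frequency offset sets the beat rate. A sketch (assuming A4 = 440 Hz reference tuning; the beat rate is a computed illustration, not a measurement of any particular Hammond):

```python
import numpy as np

# The third harmonic of A3 = 220 Hz is exactly 660 Hz; the Equal Temperament
# 2 2/3' mutation instead plays E5, which in ET (A4 = 440 Hz) sits 7
# semitones above A4.
harmonic = 3 * 220.0
mutation = 440.0 * 2 ** (7 / 12)          # ~659.255 Hz, slightly flat of 660

beat_hz = abs(harmonic - mutation)        # expected undulation (beat) rate
print(round(mutation, 3), round(beat_hz, 3))

# Summing the pair shows the slow amplitude undulation on a "scope":
fs = 44100
t = np.arange(4 * fs) / fs                # four seconds, a few beat cycles
x = np.sin(2 * np.pi * harmonic * t) + np.sin(2 * np.pi * mutation * t)
# the envelope swings between near 0 and near 2 at roughly beat_hz
```

A beat under 1 Hz is exactly the kind of slow waveform undulation a scope shows when the fifth drawbar is added.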

  #34   Report Post  
hank alrich
 
Posts: n/a
Default

DeserTBoB wrote:

By the way, it's NOT that feature which provides the "warmth" of a
Hammond organ...it's that 22-H or 122 Leslie over there.


Hey, those'll even warm up a Telecaster on bridge pickup. g

--
ha
  #35   Report Post  
Mark
 
Posts: n/a
Default


snipped lots of interesting information about tone wheels etc.

That was interesting and I thank you for it. However, if I understood
you correctly, the harmonics in a tone wheel organ are created by
various wheels which may slip at startup and therefore have arbitrary
phase relationships to the fundamental. Fine. But you also said
yourself that these wheel-generated harmonics combine with the "actual"
harmonics created by the fundamental wheel, and as the phase changes the
AMPLITUDE of the combined harmonics changes. There is no argument that
these amplitude changes are audible. You also said yourself that the
harmonic amplitudes changed about 0.5 dB. This is a lot of change for
the kind of subtle effects we are talking about here.


You also seem to be saying that if the harmonic is off pitch, that is
audible. OK, fine, that's a frequency change, and I certainly agree that a
frequency change can be audible as a pitch shift.

So I stand by my original contention: the phase relationship of the
harmonics to the fundamental is not audible.

Any AMPLITUDE changes to the harmonic are audible as a change in
timbre.

Any FREQUENCY changes to the harmonic can be audible as a pitch shift.

You have not cited a case where the organ generates a harmonic phase
change that was audible without also a change to the amplitude or
frequency of the harmonic.

thanks

Mark



  #36   Report Post  
ScotFraser
 
Posts: n/a
Default

I don't even like them through
a Leslie, but I'm a classically trained organist, so Hammonds to me
are an abortion for real organ music.

Nah, it's just a whole different art form, as different from classical organ as
ballet is from sculpture.

For R&B, pop and rock, they
have a signature sound WITH the Leslie that's an unavoidable part of
the fabric of American popular music.


And jazz, too.

Scott Fraser
  #37   Report Post  
ScotFraser
 
Posts: n/a
Default

it's that 22-H or 122 Leslie over there.

Hey, those'll even warm up a Telecaster on bridge pickup. g

They've done some great stuff to some violin & viola tracks of my
acquaintance, too.

Scott Fraser