Posted to rec.audio.high-end
From: Audio Empire
Subject: Mind Stretchers

On Sun, 17 Jun 2012 12:59:06 -0700, KH wrote:

On 6/13/2012 4:47 PM, Sebastian Kaliszewski wrote:
KH wrote:
On 6/12/2012 8:20 PM, Sebastian Kaliszewski wrote:
KH wrote:


snip


And phase, as Audio Empire points out.

Well yes, but what is phase except a temporal shift?


You're conflating phase and wavefront. You can have signals 180 degrees
out of phase arriving at the same moment.


What I meant to say was "phase differences". Two identical waves 180
degrees out of phase differ only by a temporal shift of half a period
(T/2, i.e. 1/(2f)), right?
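
(A quick numerical check of that relationship, as a short Python sketch;
the 1 kHz tone is just an arbitrary example:)

    f = 1000.0                # tone frequency, Hz (arbitrary example)
    period = 1.0 / f          # T = 1 ms at 1 kHz
    phase_offset_deg = 180.0  # the phase difference in question

    # time shift equivalent to a given phase offset: dt = (phi / 360) * T
    dt = (phase_offset_deg / 360.0) * period
    print(dt, period / 2.0)   # both 0.0005 s: a 180-degree offset is a T/2 shift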


in conjunction with the HRTF of the listener, create a spatial image.
That information was not, however, encoded into the recording except
as temporal and level information.

And possibly phase as well.

Ditto


See above. Phase is a property different from timing.


With respect to the specific discussion, I don't see how they can be
considered separate properties. For example, suppose we take two
instruments that can each (hypothetically) produce a single pure tone
and place them on a stage, one 3 m from the mic and one 5 m from the
mic, shifted laterally by 1 m. If both instruments started
simultaneously, calibrated to produce equal signal levels at the
microphone, then the wavefront at the microphone position would, from a
direct perspective, comprise two out-of-phase waves, right? Yet the only
difference between the signals is the arrival times of the respective
peaks and troughs, no?
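
(For concreteness, the distance term alone works out as follows. A small
Python sketch of the arithmetic, assuming c of about 343 m/s; note that
the same 2 m path difference produces a different phase offset at every
frequency:)

    c = 343.0               # speed of sound in air, m/s (assumed)
    d1, d2 = 3.0, 5.0       # the instrument-to-mic distances from the example, m
    delta_t = (d2 - d1) / c # arrival-time difference, about 5.8 ms

    for f in (85.75, 171.5, 1000.0):  # example frequencies, Hz
        phase_deg = (360.0 * f * delta_t) % 360.0
        print(f, phase_deg)           # 180.0, 0.0, then ~299.1 degrees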


Not necessarily. First of all, the two players might not start the note
at the same point in its waveform even if they start it at the same
time, meaning that their wavefronts would arrive at the diaphragm
shifted by their path difference but could be greater or less than 180
degrees out of phase. Also, a real flute waveform is more complex than a
simple sinewave, so there will be phase anomalies between the two
flutes. Now, if we use two speakers fed by sinewave generators, the
problem still exists: unless it's the same generator driving both
speakers, there will be a random phase component in addition to the
distance component. But that hardly tells us anything about the real
world.
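
(To put that in code: the offset at the diaphragm is the distance term
plus whatever starting-phase difference the two players introduce. A
rough Python sketch; the random starting phases are purely illustrative:)

    import random

    c = 343.0        # speed of sound, m/s (assumed)
    f = 440.0        # example tone, Hz
    delta_d = 2.0    # path difference from the example above, m

    distance_phase = (360.0 * f * delta_d / c) % 360.0

    # each player starts the note at some arbitrary point in its cycle
    start_a = random.uniform(0.0, 360.0)
    start_b = random.uniform(0.0, 360.0)

    total_offset = (distance_phase + (start_b - start_a)) % 360.0
    print(distance_phase, total_offset)  # total is almost never exactly 180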

So add a second microphone, and you have the same signals recorded from
a different position in space. As long as you know the microphone
positions, it's easy to determine the relative positions of the two
instruments, aurally or mathematically. Yet when you add in the effects
of the reverberant sound field, you have a whole new set of signals of
varying strengths and arrival times, and thus phase differences. As a
listener, in place of the microphones, even minor head movements allow
you to localize the instruments by sampling different angular
presentations (i.e., the HRTF effect) and analyzing multiple wavefronts.
This depth of information is simply not captured in a stereo recording.
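
(The arrival-time component of that is exactly what time-difference-of-
arrival methods recover. A minimal NumPy sketch using cross-correlation
between two mic channels; the signal and the 120-sample delay are
invented for illustration:)

    import numpy as np

    fs = 48000                      # sample rate, Hz
    rng = np.random.default_rng(0)
    src = rng.standard_normal(fs)   # one second of noise as a stand-in source

    true_delay = 120                # inter-mic delay in samples (invented)
    mic1 = src
    mic2 = np.concatenate([np.zeros(true_delay), src])[:len(src)]

    # cross-correlate and take the lag with the highest correlation
    corr = np.correlate(mic2, mic1, mode="full")
    lag = np.argmax(corr) - (len(mic1) - 1)
    print(lag)                      # 120: the delay, hence bearing, is recoverable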


Sure it is. It's captured by only two mikes (ideally), but in the right
circumstances, that's enough.

That is the information that is missing; that's the information that
allows us to establish accurate positional data.


I maintain that in a properly made recording, it's not missing.