Posted to rec.audio.high-end
From: KH
Subject: How pure is the signal when it reaches our ears?

On 1/22/2014 5:14 AM, ST wrote:
On Wednesday, 22 January 2014 11:12:32 UTC+8, KH wrote:
...
Well, the point is that once they leave the amplifier, all signals are
analog. Whatever front end you use, your speakers are oblivious, and
the physics of sound transmission from speaker to ear - for any given
room and ear - are identical irrespective of whether the signal started
as analog or digital.


Agreed. But the recorded sound is subjected to further distortion with vinyl. That’s where the difference lies.


But that difference has nothing to do with acoustic wave propagation.

The accepted practice when recording music is to place the microphones close to the source so that distortion (reverberation) is avoided.


You'll likely encounter some strong opposing opinions on that one.

The sound, together with the distortion, is actually what we hear all the time once it leaves the source. Here the term distortion is used to refer to anything that alters the original signal.
Therefore, the sound played back through the loudspeakers should sound identical to the source, because it goes through the same impediment while travelling from the speakers to the ears.


This is where there is confusion. Air is not an "impediment" that the
sound has to traverse. The movement of air *is* the sound.
Acoustical instruments are designed for their radiation pattern as well
as tone, and that is part and parcel of the sound that you hear. Hence
the argument against close-miking.

(Please ignore other limitations for now. I know recorded sound can never sound like live sound.)
That brings us to another question: what was captured by the microphones? We assume that all the sound emitted from the source can be captured by the microphone.


No, "we" don't assume that at all. Why would anyone assume that?

In reality, the microphones capture only a tiny fraction of the sound. So, too, do our ears.


In reality, they capture a great deal of the sound, as do our ears. I
don't know where you get the idea that they are so inefficient. They do
not, however, capture directional information with the exception of
left/right information.

The only difference here is the distance AND the inherent defects of our hearing mechanism. This is followed by the brain, which is capable of filtering, such as ignoring the early reverberation. The ear hears a fraction of a bigger “distorted” sound, whereas the microphones hear a fraction of clean, undistorted sound at close distance. Finally, recording engineers add the right amount of “distortion” (reverberation) to make the sound correct to our ears.
Now, going back to the playback: in either digital or analogue, what happens is that the tiny energy, which is probably less than 0.001% (maybe a couple more zeros?) of the original sound, is now amplified to represent the actual sound. In the case of vinyl, the amplification contains further distortion due to the mechanical process. Such distortion escapes digital. Now, which of these two versions would arrive at our ears with the right amount of “distortion” as heard by us at the live event? What kind of measurement should we take into consideration when judging sound quality as perceived by us? Here, I am referring to what the ears hear and what matters to us, not measurements using an oscilloscope or instruments, which are not what we hear in real life.


You need to do some research on HRTF - that is the huge difference
between ears and microphones. Adding reverb does create a sense of
space by adding delayed sound simulating, to an extent, the reverberant
field in a live venue. Vinyl and vinyl playback systems do add, to
varying degrees, some phasiness that can create a similar effect.
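
To make that "adding delayed sound" idea concrete, here is a toy
sketch in Python. The delay times and gains are invented for
illustration; a real reverb is far more sophisticated than a few
discrete echoes:

import numpy as np

def add_simple_reverb(dry, sample_rate, delays_ms=(23.0, 41.0, 67.0),
                      gains=(0.5, 0.35, 0.25)):
    # Mix a few attenuated, delayed copies of the dry signal back in,
    # crudely simulating reflections arriving after the direct sound.
    wet = np.copy(dry)
    for delay_ms, gain in zip(delays_ms, gains):
        n = int(sample_rate * delay_ms / 1000.0)  # delay in samples
        wet[n:] += gain * dry[:-n]
    return wet

# Example: one second of a decaying 440 Hz tone at 48 kHz
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
wet = add_simple_reverb(tone, sr)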


So whatever "wigglyness" happens between the speaker and your ear
happens to all such signals irrespective of origin.


Agreed. Since “wigglyness” is part of the equation, and important to our subjective assessment of sound, why are we eliminating it in the vinyl vs digital discussion by showing proof such as the video mentioned in the first post? The common argument used to be that analogue (vinyl) is too distorted to be accepted as the preferred sound, when the reality is that “distorted” sound is what we hear. Shouldn’t the question be how much and what type of “distortion” is needed for more realistic recorded-sound playback? Is vinyl somehow getting it right?


This "common argument" is likely what Scott was calling "Hogwash" on.
When you sit in a concert hall and listen to, for example, a string
quartet, *nothing* you hear is "distorted" by the intervening air.
You're hearing instruments as they were designed to sound, augmented by
the reverberant field they set up in the concert hall. It's simply a
false assumption that what you hear is a distortion because the "signal
is wiggly". That reverberant field, in conjunction with your HRTF, is
what allows you to spatially locate each instrument in 3-D. That is the
design of the human hearing system. The reverberant field is not
"distortion", it's just reality.

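If you want a feel for one of the cues the HRTF encodes, here is a
back-of-the-envelope interaural time difference (ITD) calculation in
Python. The spherical-head formula and the head radius are textbook
approximations, nothing specific to this thread:

import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    # Woodworth spherical-head approximation: ITD = (a/c) * (theta + sin(theta))
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print("azimuth %2d deg -> ITD %6.1f microseconds" % (az, itd_seconds(az) * 1e6))

That comes out to roughly 0 microseconds straight ahead and about 650
microseconds with the source hard left or right - one of the
differences between two ears and a single microphone.
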
The differences between vinyl and digital happen way upstream and are an
artifact of bandwidth limitations, filtering, and electromechanical
transduction issues, among others. Not due to "distortion in the air".
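
Purely as an illustration of that upstream band-limiting - the cutoff
frequency and filter order here are assumptions for the sketch, not a
model of any real cutter head or ADC:

import numpy as np
from scipy.signal import butter, lfilter

sr = 96000                                   # working sample rate, Hz
b, a = butter(4, 20000, btype="low", fs=sr)  # 4th-order low-pass at 20 kHz

rng = np.random.default_rng(0)
wideband = rng.standard_normal(sr)           # one second of white noise
band_limited = lfilter(b, a, wideband)       # what survives the chain

Everything above the cutoff is attenuated before the signal ever
reaches a speaker, and that happens in the electronics, not in the air.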

p.s. I do not play vinyl.


I seldom do either.