Current audio systems do not reproduce the entire wavefront that creates the listener's experience in the concert hall. At best, they capture a few channels and reproduce those, inexactly, through a few speakers. Yet the playback experience can be enjoyable and thrilling. Obviously something of the original sonic event is preserved; something of the original time-evolving spectrum of sonic energy is reproduced.

Ignoring for now the question of reproducing a wavefront, let's look at just how the signal in one channel is handled. It can be quite distorted and yet still recognizable. What aspects of a signal must be preserved for it to be recognizable? What aspects must be preserved for it to sound good, and to sound very much like the original signal? Engineers have addressed this question in many ways, for example by designing compression algorithms. Some details of the original signal can be thrown away without losing much, perceptually; MP3s sound sorta like the original files. I'm interested in addressing the question "what makes an accurate signal" at a higher level of quality than that.

For example, I've always preferred analog sources to digital, finding the former more lifelike. Does an analog recorder preserve some aspect of the signal better than a digital recorder? I know that many of you will say categorically not. Fine. Let's look anyway at one aspect of the signal.

Intuitively, a musical signal is made of many "events"... for example, attacks of notes. Intuitively I hear even sustained notes as made of events: little shifts of timbre, and so on. This idea is confirmed when we look at an audio signal and see periodic spikes, and also by the success of "granular synthesis," a technique for synthesizing sustained sounds by summing many individual wavelets. (There's a toy sketch of granular synthesis below.)

Perhaps an important dimension of accurate sound reproduction is the accurate reproduction of the *relative timing* of these events. To clarify: perhaps we could conceive of each event as being recognized by the neural machinery and triggering a neuron to fire, and something about the pattern of this firing, the timing contained therein, is important to defining the sound quality.

How does a particular recording/playback process affect the timing of transients? Recording processes are sometimes characterized in terms of frequency response. Digital has a very flat response across the audible band, meaning it introduces little amplitude distortion there. However, it does introduce some distortion. If we were somehow able to examine the relative firing times of neurons in response to a recorded/played-back signal, how much would a digital playback process distort those times? How much would an analog process?

This is not a question about jitter. Certainly jitter is one distortion mechanism in digital (and analog) playback, but this is more about how even a linear playback system will distort transients because it is band-limited. Changing the shape of the transient will likely have a small effect on neural timing. Both digital and analog recording processes distort the shape of the transient, but perhaps one of them does so in a way that better preserves the relative timing of neural events. (The second sketch below tries to make this band-limiting point concrete.)

My *suspicion* is that analog does in fact better preserve the timing of neural events. However, I would need to know more about neuroscience and nonlinear systems to have a good answer, but perhaps someone reading is interested.

Best,
Mike
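P.S. Since I leaned on granular synthesis above, here is a minimal sketch of the idea in Python: a sustained tone built by summing many short, Hann-windowed sine grains. The grain length, spacing, and jitter amounts are just values I picked for illustration, not anyone's reference design.

    import numpy as np

    fs = 44100            # sample rate, Hz
    dur = 2.0             # output length, seconds
    f0 = 220.0            # pitch of each grain, Hz
    grain_ms = 30.0       # grain length, ms
    hop_ms = 10.0         # nominal spacing between grain onsets, ms
    jitter_ms = 2.0       # random onset jitter, ms (the relative-timing knob)

    rng = np.random.default_rng(0)
    out = np.zeros(int(fs * dur))
    grain_len = int(fs * grain_ms / 1000)
    t = np.arange(grain_len) / fs
    grain = np.hanning(grain_len) * np.sin(2 * np.pi * f0 * t)

    onset = 0
    while onset + grain_len < len(out):
        out[onset:onset + grain_len] += grain      # add one wavelet
        # next onset: nominal hop plus a little random timing jitter
        hop_s = (hop_ms + rng.uniform(-jitter_ms, jitter_ms)) / 1000
        onset += max(int(fs * hop_s), 1)

    out /= np.abs(out).max()                       # normalize to avoid clipping

Turn jitter_ms up and the tone roughens even though every individual wavelet is identical, which is exactly the sense in which the relative timing of events carries part of the sound quality.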
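P.P.S. And here is a toy version of the band-limiting question, not a model of any real recorder: take an idealized click, band-limit it two ways (a steep 8th-order low-pass, loosely in the spirit of a digital anti-alias filter, and a gentle 1st-order low-pass, loosely in the spirit of an analog roll-off), and ask when a crude threshold detector says each version "happened." The threshold detector is only a stand-in for the neural trigger I speculated about, and the cutoff and threshold values are arbitrary.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 44100
    x = np.zeros(4096)
    x[1000] = 1.0                                  # idealized click at sample 1000

    sos_steep = butter(8, 18000, fs=fs, output='sos')    # steep band limit
    sos_gentle = butter(1, 18000, fs=fs, output='sos')   # gentle band limit
    steep = sosfilt(sos_steep, x)
    gentle = sosfilt(sos_gentle, x)

    def onset(sig, frac):
        # first sample where |sig| exceeds frac of its own peak
        return int(np.argmax(np.abs(sig) > frac * np.abs(sig).max()))

    for frac in (0.5, 0.1, 0.01):
        print(f"frac={frac}: click={onset(x, frac)}, "
              f"steep={onset(steep, frac)}, gentle={onset(gentle, frac)}")

The interesting part is that the two filtered versions cross the threshold at slightly different times, and the shift depends on where you set the threshold. Once the shape of a transient changes, "when it happened" stops being a single number, which is the crux of my question about neural timing.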