
I recall from a few weeks ago Stewart saying that he was just as moved
by a performance of the Elgar cello concerto on a car radio as on his
big system (I'm explaining from memory of reading it). Also, in a
recent post Bob said he didn't think the sound qualities of a system
(within normal ranges) influenced the experience of music. In the
thread "analog vs. digital--not" Stewart and "bear" said something to
the effect that a table radio can convey a musical performance as well
as anything else ("bear" was writing about what a conductor is
interested in). Since I don't have the exact posts to follow up,
please take these comments of mine as provisional until Stewart, Bob,
and "bear" confirm them. I just want to respond to the model implied
by this perspective (I'm sure SOMEBODY, SOMEWHERE holds this
perspective).

My experience is quite different, of course. In my experience, the
details of sound matter to the experience of music, to the experience
of a performance and what emotions it evokes, and so on. I thought I
might capture this disagreement in a revised model:

The model I believe Bob and Stewart and "bear" are using (and they may
confirm this or explain otherwise, of course):


       sound pressure waves
                |
                |
                V
               ear
                ^
                |
                |
                V
   representation of sound ------ abstracted "performance"
   in the brain (a la midi)
                |                          |
                |                          |
               [A]                        [B]
                |                          |
                |                          |
                V                          V
   CONSCIOUSNESS OF SOUND       CONSCIOUSNESS OF MUSIC

         O V E R A L L   C O N S C I O U S N E S S


Considering this "abstracted performance", let me first describe
MIDI. MIDI is a digital protocol for representing musical performances
at the level of notes, timing, rhythms, "timbre" (patch selection),
dynamics, and to some extent, dynamic shapes within a note. It doesn't
represent sound itself, but rather something like a "score" that must
be turned into music by a synthesizer or a program like CSound.
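To make that contrast concrete, here is a minimal sketch (in Python, my own illustration; the dict fields are hypothetical shorthand, since real MIDI encodes notes as note-on/note-off byte messages) of how few numbers a MIDI-style event contains compared with the audio a synthesizer must generate from it:

```python
import math

# A MIDI-style note event: just pitch, velocity, and duration.
# (Hypothetical field names; actual MIDI uses note-on/note-off bytes.)
note = {"pitch": 69, "velocity": 100, "duration_s": 1.0}  # A4

# Rendering it to sound: the synthesizer must invent every detail the
# abstraction leaves out. Here, the barest possibility -- a plain sine
# tone at a 44.1 kHz sample rate.
SAMPLE_RATE = 44100
freq = 440.0 * 2 ** ((note["pitch"] - 69) / 12)   # MIDI pitch number -> Hz
amp = note["velocity"] / 127.0                    # crude velocity -> amplitude
n_samples = int(note["duration_s"] * SAMPLE_RATE)
samples = [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
           for i in range(n_samples)]

print(len(note), "numbers in the event vs", len(samples), "audio samples")
```

Even this toy rendering turns three numbers into 44,100 samples, and a real performance differs from a sine tone in every one of those details.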

Likewise, a composer creates a score, which is an abstracted
representation of sound. It must be turned into actual sound by a
musician, who supplies the many additional details not mentioned in
the score. Manfred Clynes has written much about
this; in his estimation there is one thousand times more information
in the actual sound than in the score.

What I understand Stewart, Bob, and "bear" as saying, is that their
experience of the music is constructed from a highly abstracted
representation of the music, concerned mainly with pitches, durations,
rhythms, and so on. This is the way I'm trying to understand what they
write; I welcome their clarifications.

In other words, the consciousness of music is developed through
channel B, which throws away a lot of details. You will notice on my
original diagram that there is no similar filter in my model--the
brain systems that construct an experience of music (body movement,
emotions, etc.) can, potentially, respond to any feature of the sound.

All this "modeling" can get a bit theoretical, but I'm using it to
describe a simple, concrete fact, which is that my impression of a
musical performance--my understanding of what WORKS about it--changes
as the playback changes.

My model describes my experience quite well. And the other model, I
see no reason to doubt, describes Stewart's/Bob's/bear's
experience. In their model, note that channel A is a much richer
source of information than B, and degradations of the sound have
little effect on channel B. So of course they feel that audio
comparisons are mainly about the sound, not the music. (They also
probably believe that consciousness has complete access through
channel A, and that this access is fully conscious and fully subject
to will and awareness.)

What is curious to me is that each of us has arrived at a model
representing our own experience, and these models have very different
implications about how comparisons (of any type, sighted or blind)
should be done.

Mike