#23
Ryan
 

Bob Cain wrote in message ...

> The Ghost could address this in some detail if anyone could
> get him to do something besides insult people. When he was
> young he published with one of the pioneers in the field of
> hearing research, someone who I believe got a Nobel Prize
> for it.

The Ghost?

>> Is this what I'm asking for? I really don't know myself.
>
> I'm having trouble figuring that out exactly too. :-)
>
> In case you've received any new information that might help
> you frame it better, would you care to try again?
> Refinement to specs from vague ideas is not an uncommon
> process in the user/marketing/engineering cyclic process.


Well, I think you had the right idea the first time, before I
attempted to be more concise and confused you. I'll jot out a
basic algorithm for the software:

1. Analyze real instrument sound files. These files should include
every possible way every classical instrument can be played, from the
traditional to the avant-garde. For the viols, for example: from
plain-Jane arco to Bartók's snapping strings to harmonics to
different bow pressures to playing behind the bridge to the tapping
of fingers on the body of the instrument. There should be files that
represent the instruments at all possible dynamic levels. There
should be files that feature the instruments playing microtones if
they can do so. (Most classical instruments can.) Also, there should
be analysis of the instruments in "static form." By this I mean the
part of the sound after the initial attack, which can be looped over
and over again to give the impression the note is sustaining. This
is done in standard synthesis as well as good sample libraries. It
may take quite a while to amass all these samples, but once
collected, the analysis only has to be done once.
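
The "static form" idea above can be sketched in a few lines: keep the
recorded attack, then tile a short steady-state slice to sustain the
note to any length. Everything here (the synthetic 440 Hz "recording",
the function name, the loop points) is an illustrative assumption,
not any real sampler's implementation.

```python
import numpy as np

SR = 44100  # sample rate in Hz

# Stand-in for a recorded note: 0.5 s of a 440 Hz tone.
t = np.arange(int(0.5 * SR)) / SR
note = np.sin(2 * np.pi * 440.0 * t)

def sustain_by_looping(note, attack_samples, loop_len, total_samples):
    """Keep the attack, then tile a steady-state slice to any length."""
    attack = note[:attack_samples]
    loop = note[attack_samples:attack_samples + loop_len]
    reps = int(np.ceil((total_samples - attack_samples) / loop_len))
    sustained = np.concatenate([attack, np.tile(loop, reps)])
    return sustained[:total_samples]

# Stretch the half-second note to two seconds.
two_seconds = sustain_by_looping(note, attack_samples=4410,
                                 loop_len=2205, total_samples=2 * SR)
```

A real sampler would also pick loop points at zero crossings and
crossfade them to hide the seam, but the principle is the same.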

2. Derive from these analyses the prime aspects of each sound. If we
only have, say, ten frequencies to represent a sound, which ones
would be the most useful? Or would some other type of information
about the file be more important than its frequencies? So now we
have a set of data instead of just a PCM sound file. We can call
these data sets "fingerprints." This is mainly to help speed up the
math performed later during step 4, though it will compromise the
accuracy of the final product. Ideally, the user should be able to
select the amount of data to be derived from the samples.
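
A ten-frequency "fingerprint" might look like the sketch below:
reduce each analyzed sample to its N strongest frequency bins. The
ten-frequency budget comes from the step above; the function name and
the synthetic three-partial test tone are assumptions for
illustration.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fingerprint(signal, sr=SR, n_freqs=10):
    """Return the n_freqs strongest frequencies (Hz) with magnitudes."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    top = np.argsort(spectrum)[-n_freqs:][::-1]  # strongest bins first
    return list(zip(freqs[top], spectrum[top]))

# A fake instrument sample: fundamental at 220 Hz plus two overtones.
t = np.arange(SR) / SR
sample = (np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))
fp = fingerprint(sample)
```

The `n_freqs` knob is exactly the user-selectable accuracy/speed
trade-off described above: more retained frequencies, slower step 4.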

3. Analyze any given sound file. These would be the "real world"
sounds. Or anything at all. In fact, I was thinking last night that
the ultimate test for this software would be to feed it, say,
Beethoven's 9th, and see how closely it could approximate it.

4. Run a difference or correlation-coefficient comparison between
the "real world" sound file and all the "sound fingerprints" the
program created in step 2.
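
The comparison in step 4 could be sketched as a correlation-style
score (cosine similarity here, one plausible choice among several)
between the unknown recording's spectrum and every stored reference.
The tiny two-entry "library" and all names are invented for the
example.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def spectrum_of(signal, n=4096):
    """Magnitude spectrum of the first n samples (coarse bins on purpose)."""
    return np.abs(np.fft.rfft(signal[:n]))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

t = np.arange(SR) / SR
library = {  # instrument/style -> analyzed reference spectrum (step 2)
    "viola_arco_A3": spectrum_of(np.sin(2 * np.pi * 220 * t)),
    "viola_arco_A4": spectrum_of(np.sin(2 * np.pi * 440 * t)),
}

# A slightly detuned "real world" input (441 Hz) still matches A4 best.
unknown = spectrum_of(np.sin(2 * np.pi * 441 * t))
best = max(library, key=lambda name: cosine_similarity(library[name],
                                                       unknown))
```

In practice the comparison would run on short overlapping windows so
the program can track which fingerprints are active at each moment.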

5. Create a MIDI file. After the program has deduced the best
combination of instruments, in which playing styles, at what pitches
and dynamics, playing what kind of rhythmic figures, etc., it would
simply create a multiple-staff MIDI file with all said info scored
on it.
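
Writing a real multi-staff .mid file would need a MIDI library (mido,
for example) or the raw Standard MIDI File format, but the core of
step 5 is a simple mapping: convert each detected frequency to the
nearest MIDI note number. The event list below is invented example
data, not output from any real analysis.

```python
import math

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# (instrument, detected frequency Hz, dynamic 0-127, start beat, beats)
events = [
    ("viola",  220.0, 80, 0.0, 1.0),   # A3, mezzo-forte
    ("violin", 659.3, 96, 0.0, 0.5),   # E5, forte
]

# One (instrument, midi_note, velocity, start, duration) tuple per note;
# a MIDI writer would turn these into note_on/note_off messages, one
# track (staff) per instrument.
score = [(inst, freq_to_midi(f), vel, start, dur)
         for inst, f, vel, start, dur in events]
```

The rounding step is also where microtones get lost; a fancier
version would keep the fractional part and emit pitch-bend messages.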

Viola!