On Wed, May 26, 2010 at 5:14 PM, Paul Davis wrote:
True, there's a lot going on and a lot of factors to consider. I
should have said, "at some level of musical experience", because we
all have a lot in common. Barring amusia or any other major hearing
difficulty, people's sensory experience is much the same. Up to the
primary auditory cortex, the auditory system is highly specialized:
a neural architecture driven by evolution in pursuit of specific
auditory functions, one that precedes learning and exposure to music.
For example, pitch perception is no longer considered a learned,
template-matched response (as had been debated since the 60s and 70s),
but is intrinsic to the dynamics of neurons themselves (see the work
of Julyan (J. H. E.) Cartwright and Dante Chialvo, for reference).
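A quick sketch of the phenomenon at issue here, the "missing
fundamental": a complex tone whose partials are all multiples of an
absent frequency is still heard at that absent pitch. This toy uses a
GCD estimate purely for illustration; it is not Cartwright and
Chialvo's dynamical model, and the partial frequencies are invented
for the example.

```python
from functools import reduce
from math import gcd

def missing_fundamental(partials_hz):
    """Estimate the perceived pitch of a complex tone as the greatest
    common divisor of its (integer) partial frequencies in Hz."""
    return reduce(gcd, partials_hz)

# Partials at 600, 800 and 1000 Hz: listeners report a pitch of
# 200 Hz even though no energy is present at that frequency.
print(missing_fundamental([600, 800, 1000]))  # -> 200
```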
At some level of conscious experience, we all hear the same things.
However, a powerful model should also be able to explain why people
hear things differently.
>> As musicians and composers, we approach the "tiling problem"
I perhaps should have used the term "capable" rather than "able",
because what I was really getting at was a computer's complete
inability to say, "this sounds good" :)
I would like a computer to be able to say, "This would sound good, if
I were a human". Better yet, I'd like the computer to describe it to
me in numbers that I myself could not calculate.
There's certainly no point in having a computer tell me what I already
know, because I was there, I heard it, and I know what I like. But I
also can't listen to all the possibilities of music, though a
sufficiently powerful computer could do so.
Linux-audio-dev mailing list