Chris Cannam writes:
I've recently written a transcription system for polyphonic piano
music. It's based on the system described in "A Discriminative Model for
Polyphonic Piano Transcription" by Poliner and Ellis, which trains a
support vector machine for each piano note on spectral data to predict
whether that note is sounding in each short time frame, and then uses
hidden Markov models to temporally constrain those frame-wise
classification results. The results are promising, but not as good as I
had hoped (and sadly not as good as Poliner and Ellis's). I have a few
ideas for improving the system, but as I'm currently searching for a job
I have no time to implement them, so I'm thinking about making my
software open source (at the moment it's a Python library with
command-line tools).
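To give an idea of the HMM step described above, here is a minimal sketch (not the author's actual code) of smoothing one note's frame-wise predictions with a two-state on/off HMM via Viterbi decoding. The frame probabilities would in practice come from that note's SVM classifier (e.g. a probability-calibrated SVM over spectral features); here a toy array and the `p_stay` transition probability are assumptions for illustration.

```python
import numpy as np

def viterbi_smooth(frame_probs, p_stay=0.9):
    """Smooth per-frame P(note on) with a 2-state HMM (0 = off, 1 = on).

    frame_probs: array of P(note sounding) per frame, e.g. from a
    per-note SVM classifier. p_stay (assumed value) is the probability
    of remaining in the same state between consecutive frames.
    """
    n = len(frame_probs)
    # Log transition matrix: staying in a state is much more likely
    # than switching, which suppresses isolated spurious frames.
    trans = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    # Log emission probabilities for states (off, on) per frame.
    emit = np.log(np.clip(np.stack([1 - frame_probs, frame_probs], axis=1),
                          1e-12, None))
    delta = np.zeros((n, 2))       # best log-probability ending in each state
    back = np.zeros((n, 2), dtype=int)  # backpointers for path recovery
    delta[0] = np.log(0.5) + emit[0]    # uniform initial state distribution
    for t in range(1, n):
        for s in (0, 1):
            scores = delta[t - 1] + trans[:, s]
            back[t, s] = np.argmax(scores)
            delta[t, s] = scores[back[t, s]] + emit[t, s]
    # Backtrack the most likely state sequence.
    path = np.zeros(n, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: noisy frame probabilities for a single note. The brief
# dip to 0.4 mid-note is bridged by the temporal constraint.
probs = np.array([0.1, 0.2, 0.9, 0.4, 0.95, 0.9, 0.85, 0.2, 0.1])
print(viterbi_smooth(probs))  # → [0 0 1 1 1 1 1 0 0]
```

The real system runs one such HMM per note on top of that note's classifier outputs; the transition probabilities could also be estimated from note durations in the training data rather than fixed by hand.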
If trained on audio data other than piano recordings, it should also be
possible to transcribe other instruments, but the restriction to a
single instrument without slides, vibrato or the like remains.