[LAD] Audio2MIDI algorithms

To: LAD <linux-audio-dev@...>
Date: Friday, November 19, 2010 - 5:08 pm

There was a previous discussion, "Musescore "music trainer"?", about
polyphonic audio-to-MIDI recognition. I found a Windows program that
claims to achieve good results: TallStick TS-AudioToMIDI.

On this webpage: http://tallstick.com/webhelp/algorithm.htm,
they make some interesting claims:

- "They (3 of the 4 algorithms) all are based on the set of oscillator
circuits named sensors. Each sensor gets wave signal as input and
produces some reply. Sensor's reply is a value proportional to the
amplitude of component with frequency about equal to sensor's
resonance one."

This is what I call a "filtre en peigne" in French: a comb filter. Each
"tooth" of the comb tests for one frequency.
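The webpage does not publish TallStick's actual code, but a bank of
"sensors", each responding to one resonance frequency, can be sketched
with the Goertzel algorithm (a single-bin DFT). Everything below is a
hypothetical illustration: the semitone range, block size, and test
signal are my own choices, not TallStick's.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """One 'sensor': measure signal power near target_freq
    using the Goertzel algorithm (a single-bin DFT)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A hypothetical "sensor bank": one resonator per semitone over 3 octaves
sample_rate = 44100
freqs = [110.0 * 2 ** (i / 12) for i in range(37)]  # A2 .. A5

# Test signal: a pure 220 Hz sine; the sensor at 220 Hz should dominate
signal = [math.sin(2 * math.pi * 220 * t / sample_rate)
          for t in range(4096)]
powers = [goertzel_power(signal, sample_rate, f) for f in freqs]
best = freqs[powers.index(max(powers))]
print(best)  # -> 220.0
```

In this toy case the loudest sensor is the one at the input frequency;
with real polyphonic audio the harder part is what comes next, deciding
which maxima are fundamentals.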

- After each sensor's output is multiplied by the corresponding
Equalizer value, it arrives in the Spectrum Window. All these methods
analyze the spectrum data at each instant of time from left to right
(from low to high pitches). When a spectral maximum is detected, it is
assumed to be the fundamental frequency of a note. This assumption is
tested by comparing the spectrum to the Harmonic model setting. After
this, if the assumed note is greater than the Threshold value, the note
is accepted, otherwise rejected. If a note is accepted, all its spectral
components are subtracted from the corresponding components of the whole
spectrum.
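The scan-accept-subtract loop described above can be sketched like this.
This is my own reconstruction, not TallStick's implementation: the
spectrum representation (amplitude per MIDI pitch), the threshold value,
and the harmonic model (upper partials present but no louder than the
fundamental, subtracted with 1/n weights) are all assumptions.

```python
import math

N_PARTIALS = 4    # hypothetical harmonic model: check this many partials
THRESHOLD = 0.5   # hypothetical note-acceptance threshold

def partials(pitch):
    """MIDI pitches of the first N_PARTIALS harmonics of `pitch`
    (harmonic n lies ~12*log2(n) semitones above the fundamental)."""
    return [pitch + round(12 * math.log2(n))
            for n in range(1, N_PARTIALS + 1)]

def transcribe(spectrum):
    """Scan the spectrum from low to high pitch; accept a maximum as a
    note if its harmonics fit the model and it exceeds the threshold,
    then subtract the note's components from the residual spectrum."""
    spectrum = dict(spectrum)  # work on a copy; it becomes the residual
    notes = []
    for pitch in sorted(spectrum):
        amp = spectrum.get(pitch, 0.0)
        if amp < THRESHOLD:
            continue  # rejected: below the Threshold value
        harm = partials(pitch)
        # harmonic-model test: upper partials present but not louder
        if all(spectrum.get(p, 0.0) <= amp for p in harm):
            notes.append(pitch)
            # subtract this note's spectral components (assumed 1/n decay)
            for n, p in enumerate(harm, start=1):
                spectrum[p] = max(0.0, spectrum.get(p, 0.0) - amp / n)
    return notes

# Toy spectrum: C4 (MIDI 60) with partials at 72, 79, 84
spec = {60: 1.0, 72: 0.5, 79: 0.33, 84: 0.25}
print(transcribe(spec))  # -> [60]
```

The subtraction step is what lets the next pass treat 72, 79, and 84 as
explained-away harmonics rather than new fundamentals, which is the key
trick for polyphony.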

This shows that the whole algorithm is more complex than simple
recursive filtering. It takes the spectra of the music into account. You
can (and must) assign the instruments that play the music before
performing the conversion.


"We have the heroes we deserve."
Linux-audio-dev mailing list


This is the only confirmed message in this thread.