On Mon, Mar 22, 2010 at 04:13:54PM +0300, Louigi Verona wrote:
> I do not think you understood me. I am speaking about automating parameters
There are two ways to do this.
One could be called 'automation'. The term normally
means that an application (or HW) can remember how
you use its controls and is able to reproduce that
at the right time. Apart from mixers I don't know
of any applications or equipment (except some very
specialised ones) that do this.
Many synths allow their (GUI) controls to be remote
controlled (usually by MIDI), but they are lacking
the other required parts: the control should be able
to act as source of (e.g. MIDI) data as well, AND
have some means to switch between its different modes
(in automation terms: manual, read, write, update).
These missing parts would have to be added to each
and every control of an electronic instrument, and I
doubt very much whether the result would really be usable.
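To make concrete what those missing parts would amount to, here is a
minimal sketch (in Python, with purely illustrative names, not any
real synth's API) of a control that has the four automation modes and
can act as a source of control data as well as a sink:

```python
import bisect

class AutomatedControl:
    """Hypothetical control with automation modes: manual, read, write, update."""

    MODES = ("manual", "read", "write", "update")

    def __init__(self, value=0.0):
        self.mode = "manual"
        self.value = value
        self.events = []          # recorded (time, value) pairs, in time order

    def user_move(self, t, value):
        """Called when the user touches the control (GUI or MIDI remote)."""
        self.value = value
        if self.mode in ("write", "update"):
            # This is the missing part: the control acts as a data *source*.
            self.events.append((t, value))

    def playback(self, t):
        """Called by the transport; applies recorded data in 'read' mode."""
        if self.mode != "read" or not self.events:
            return self.value
        # Find the last event at or before time t.
        i = bisect.bisect_right(self.events, (t, float("inf"))) - 1
        if i >= 0:
            self.value = self.events[i][1]
        return self.value
```

Multiply this machinery by every knob and slider of a synth and the
scale of the problem becomes clear.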
The other way is programming, or writing a score in
some form. That is, you tell the application what to
do at a given time, not by manipulating the controls
you'd use to do it manually, but in some abstract
form, either textual or graphic. This way you can also
do things that would be impossible manually.
Sequencers do this, but the possibilities are usually
rather limited. Music programming languages such as
Csound and SuperCollider can do this to any level of
complexity you can imagine. What these tools are missing
(AFAIK) is some way to sync execution of a score (Csound)
or an Sclang program (SuperCollider) to e.g. Jack transport,
or even just any form of random access into their timeline.
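For illustration, the 'score' idea with random access could be
sketched like this (the score format and all names are invented for
the example, not Csound or SuperCollider syntax):

```python
# A score is just an abstract list of (time, parameter, value) rows.
score = [
    (0.0, "cutoff", 200.0),
    (2.0, "cutoff", 800.0),
    (2.0, "resonance", 0.7),
    (5.0, "cutoff", 400.0),
]
score.sort()   # rows ordered by time

def state_at(t):
    """Return the value of every parameter at time t, so playback can
    be (re)started from any point on the timeline instead of only from
    the beginning - the random access these tools usually lack."""
    state = {}
    for when, target, value in score:
        if when > t:
            break
        state[target] = value
    return state
```

With something like this, locating to bar 17 (or wherever Jack
transport jumps to) is just a matter of computing the state at that
time and resuming from there.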
A third solution would be a general-purpose 'control data
recorder and editor'. I've never seen such an app, but I'm sure
it would be useful.
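The core of such a recorder could be quite small; a rough sketch
(everything here is hypothetical, just to show the idea of recording
timestamped control events from any source, editing them offline, and
replaying a window of them):

```python
class ControlRecorder:
    """Hypothetical general-purpose control data recorder and editor."""

    def __init__(self):
        self.track = []                      # [(time, port, value), ...]

    def record(self, t, port, value):
        """Store a control event from any source (MIDI, OSC, GUI, ...)."""
        self.track.append((t, port, value))

    def edit(self, func):
        """Offline editing: map an arbitrary function over all events,
        e.g. to shift times, scale values, or remap ports."""
        self.track = sorted(func(e) for e in self.track)

    def replay(self, t0, t1):
        """Yield, in time order, the events inside the window [t0, t1)."""
        for t, port, value in sorted(self.track):
            if t0 <= t < t1:
                yield (t, port, value)
```

Because it knows nothing about what the events mean, the same tool
would work for any controllable application.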
O you who run so, what do you bring?
War and death!
Linux-audio-dev mailing list