2010/12/6 Maurizio De Cecco:
For some time I have actually had a somewhat related idea: when you
use physics-based modeling techniques, a crucial point is proper
scheduling of operations, which must NOT be done on a per-"processing
object" (i.e., plugin or whatever) basis, but by considering
dependencies among individual input/output expressions even inside
"processing objects" (this may sound unclear, I know).
In practice this means that if you deal with compiled code (a.k.a.
plugins), you just can't reasonably do it. Specialized tools can, in
particular dedicated audio programming languages that are either
interpreted or compiled to some sort of "understandable" bytecode that
preserves input/output time relationships (i.e., delays) and is then
run in a virtual machine that solves the scheduling issue.
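To make the scheduling point concrete, here is a toy sketch (names and graph invented for illustration, nothing to do with any actual implementation): treat each input/output expression as a node, drop the edges that go through a unit delay (a delayed value is read from the previous cycle's state, so it imposes no ordering), and the per-sample execution order is just a topological sort of the remaining delay-free graph.

```python
from graphlib import TopologicalSorter

# Hypothetical per-expression dependency graph: each key depends on
# the expressions in its value set, *within the same sample*.
# Edges through a unit delay (z^-1) are omitted, since a delayed
# value comes from the previous cycle and imposes no ordering.
deps = {
    "filter_out": {"amp_out"},  # filter reads the amp's current output
    "amp_out":    {"in"},       # amp reads the external input
    "in":         set(),
    # "amp_out" also reads "filter_out" through z^-1 -> edge dropped
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['in', 'amp_out', 'filter_out']
```

Note that this works across "processing object" boundaries only because the graph is built from single expressions; a whole plugin collapsed into one node would hide the delay that makes the feedback loop schedulable.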
An example of such a specialized tool is the audio DSP language I
wrote, called Permafrost. There, the scheduling issue is solved when
compiling the Permafrost source code to LV2 plugin source (C code and
Turtle/RDF metadata), but this is a one-way operation: you can't use
the output plugin code to build a new plugin that takes these issues
into account, whether at runtime or not.
My idea was to define an intermediate DSP bytecode, hopefully also
capable of including metadata, and a virtual machine that schedules,
optimizes and runs the whole chain/graph.
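Just to fix ideas, a toy sketch of what such a bytecode interpreter might look like (the opcodes, layout, and register names here are entirely invented, not a proposal for the actual format):

```python
# Toy stack-based DSP bytecode: a list of (opcode, operand) pairs
# executed once per sample. Everything here is invented for
# illustration; a real format would also carry metadata.
PUSH, LOAD, MUL, ADD, STORE = range(5)

def run(program, regs, consts):
    """Interpret one sample's worth of bytecode.

    regs maps signal/state names to values; consts is a constant pool.
    """
    stack = []
    for op, arg in program:
        if op == PUSH:
            stack.append(consts[arg])
        elif op == LOAD:
            stack.append(regs[arg])
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == STORE:
            regs[arg] = stack.pop()
    return regs

# y = 0.5 * x + z1, then update the delay register: z1 = y
program = [
    (LOAD, "x"), (PUSH, 0), (MUL, None),
    (LOAD, "z1"), (ADD, None),
    (STORE, "y"), (LOAD, "y"), (STORE, "z1"),
]
regs = run(program, {"x": 1.0, "z1": 0.25, "y": 0.0}, [0.5])
print(regs["y"])  # 0.75
```

The interesting part is of course not the interpreter itself, but that the VM sees the whole chain/graph at this granularity and can therefore reschedule and re-optimize it as a whole.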
A stupid example of what can be achieved with such a thing is the
following: suppose you have a physics-based simulator of a tube
amplifier that allows you to "plug" a physics-based loudspeaker model
into it (if you are into this kind of stuff, suppose it is WDF-based)
- you would be able to change that loudspeaker model while the system
is running and have the whole thing correctly scheduled, optimized and
run.
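The rescheduling part of that hot-swap could be sketched like this (module names invented; a real VM would splice bytecode, not a Python dict, but the graph operation is the same):

```python
from graphlib import TopologicalSorter

# Hypothetical running graph: tube amp feeding loudspeaker model A.
graph = {"amp": {"in"}, "speaker_a": {"amp"}, "out": {"speaker_a"}}

def schedule(g):
    """Recompute the execution order for the whole graph."""
    return list(TopologicalSorter(g).static_order())

before = schedule(graph)

# Hot-swap: unplug speaker_a, plug in speaker_b, reschedule. The VM
# would then re-optimize and keep running, no plugin rebuild needed.
del graph["speaker_a"]
graph["speaker_b"] = {"amp"}
graph["out"] = {"speaker_b"}

after = schedule(graph)
print(after)  # ['in', 'amp', 'speaker_b', 'out']
```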
On the other hand, I believe you are aiming for something different,
hence I suggest (as Paul did) that you contact the FAUST developers,
since they're pretty much into this kind of stuff, especially when it
comes to LLVM.
Linux-audio-dev mailing list