I'm not sure the gist of the thread has been adhered to, but as
someone who has sidetracked the odd thread in my enthusiasm, I'm no
stranger to that myself.
Nevertheless, citing Ardour as the ultimate answer doesn't address the
intent of the thread, which as I understand it was a question
concerning the continuity of data streaming across apps.
And although Ardour is a fine app, its automation can be frustrating
to use, for me at least. I've had a taste of an alternative, and in my
particular use case it's a lot easier to work with.
The weight of the other responses seems geared toward a precise
definition of automation for one particular use case, which sits
outside the workflow of some. That immediately narrows the options for
users, and forces those of us who work in a different way into "this
is the way it is, and you'll have to find a workaround."
Hardly the stuff of building a modular setup, when Linux is infinitely
variable in the opportunities it offers to bolt things together in
whatever manner the user wishes to exploit.
I've been hammering away at MIDI since it started, and in the process
of recording MIDI-driven projects, I discovered the following:
1.) Users do as much as they can in MIDI before they record to audio.
In the hands of an experienced user, a project can be "tuned" to a
fairly decent result before the recording to audio begins.
2.) No matter how much a user fine-tunes the MIDI, and there are some
fine MIDI charts out there, the user still has to edit the audio after
it's recorded, to tweak the result into a closer resemblance of human
playing. This succeeds to varying degrees, depending on the skills of
the user and the level of excellence he or she strives for. But in the
case of recording orchestral work, it's been my experience that
sorting out the velocities in the MIDI chart, and then using audio
automation to emulate swells and smooth volume transitions, gives the
best result, generally speaking.
3.) Contrary to popular belief, MIDI is not the panacea for automation
control. Its widespread adoption has more to do with decisions taken
by devs and companies in the commercial world, once MIDI finally got
past the angst of commercial operators having to agree on something.
As a protocol for rendering smooth automated lines, it too does its
job with varying degrees of success, but it doesn't effectively render
the smoothness associated with live playing. I know this as a former
orchestral player, and have on many occasions had to record dozens of
takes simply to layer together enough audio to manipulate into some
semblance of natural playing.
4.) MIDI is a multiplexed format. If the user wants to automate using
one single data stream, then like it or not, he is working with 1 port
= 16 channels = 128 possible values (0-127, i.e. 7 bits) of control
data. In my recent tests, the CV data I wrote and used to automate a
line gave a far smoother transition, from 0.0 to 1.0, and that was a
single data stream: no extra channels, and none of the stepped result
of a defined data jump from 0 to 1 to 2 to 3, etc. On top of this, the
user has to sort out which channel they're going to use, where they're
going to send it, and which channel on which port they're going to
attach it to.
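The 128-step ceiling is easy to demonstrate. A minimal sketch (the
function name is mine, purely for illustration) of what happens when a
continuous 0.0-1.0 control level is squeezed through a 7-bit MIDI CC
value:

```python
def midi_cc_quantize(level):
    """Quantize a 0.0-1.0 control level to the nearest 7-bit CC value
    (0-127), then map it back to the level a receiver would render."""
    cc = round(level * 127)
    return cc / 127

# A slow fade from 0.0 to 1.0 sampled at 8 points:
targets = [i / 7 for i in range(8)]
rendered = [midi_cc_quantize(t) for t in targets]

# Levels closer together than about 1/127 collapse onto the same CC
# number, so the fine gradation between them is simply lost:
assert midi_cc_quantize(0.500) == midi_cc_quantize(0.503)
# A float CV stream would carry 0.500 and 0.503 through unchanged.
```

That 1/127 granularity is the "stepped result" referred to above; a CV
stream has no such grid.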
5.) It's also my experience that the users I've communicated with over
many years, who record MIDI-driven projects into audio, are likely to
write automation for volume, and maybe a little pan from time to time.
This is very much dependent on the use case, but it's fair to say that
those who require as smooth a volume change as possible don't get it
using MIDI, at least not to a decent standard of excellence. Again,
relying on those years of use, and a fair degree of objectivity, I was
enthused by the smoothness of using a CV stream to render volume
transitions. One of those long-held wishes has come to life, and saved
me a stack of donkey work in the process.
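For concreteness, a CV volume lane amounts to nothing more than a
single stream of floats, one value per sample. A minimal sketch
(sample rate and function name are my own assumptions, not any app's
API) of rendering a linear swell that way:

```python
SAMPLE_RATE = 48000  # assumed; any rate works the same way

def cv_swell(start, end, seconds):
    """Render a linear volume swell as a per-sample CV stream of
    floats between 0.0 and 1.0."""
    n = int(SAMPLE_RATE * seconds)
    step = (end - start) / (n - 1)
    return [start + step * i for i in range(n)]

# A two-second swell: 96000 values, each consecutive pair differing by
# the same tiny increment -- far finer than the 1/127 jump a MIDI CC
# line imposes.
swell = cv_swell(0.0, 1.0, 2.0)
```

No channels, no ports to pick, no quantization: the stream is the
automation.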
6.) Crossfading is neither here nor there in a discussion about
automation. It's the province of an app to provide an effective
crossfade framework, something Ardour does well, I might add.
7.) For electronic music writers in particular, but also for the wider
user base, manipulating an external synth (and we are discussing a
modular setup, yes?) via CV lanes seems an opportunity waiting to
happen. The user adds a lane, names it after the control he wants to
manipulate, and goes to work, recording smooth automation lines and
getting the weird and sometimes original results from the synth.
That's it: add a lane, port it to a control, and name it for easy
recognition in the project. Couldn't be easier, and there are no
"steps" (0-127) when the control is changed.
8.) I'm not wasting any more valuable time on this, as it's clear any
effort to do so would be futile in the face of the current status quo,
and the intent to keep things as they are.
Linux-audio-dev mailing list