Re: [LAD] automation on Linux (modular approach)

To: <linux-audio-dev@...>
Date: Wednesday, March 24, 2010 - 1:24 pm

On Wednesday 24 March 2010, at 11.06.43, Ralf Mardorf wrote:

Not quite sure what "everyone" means by cv in this context ("cv" makes me
think of pitch control in synths, specifically), but here's my take on it: a
simple synth from a small prototyping testbed I hacked together the other day:
----------------------------------------------------------------------
function Create(cfg, preset)
{
    local t = table [
        .out    nil,
        .gate   0.,
        .vel    0.,
        .ff     cfg.fc0 * 2. * PI / cfg.fs,
        .ph     0.,
        .phinc  1.,
        .amp    0.,
        procedure Connect(self, aport, buffer)
        {
            switch aport
              case AOUTPUT
                self.out = buffer;
        }
        procedure Control(self, cport, value)
        {
            switch cport
              case GATE
              {
                self.gate = value;
                if value
                {
                    // Latch velocity and reset phase!
                    self.amp = self.vel;
                    self.ph = 0.;
                }
              }
              case VEL
                self.vel = value;
              case PITCH
                self.phinc = self.ff * 2. ** value;
        }
        function Process(self, frames)
        {
            if not self.out or not self.amp
                return false;
            local out, local amp, local ph, local phinc =
                    self.(out, amp, ph, phinc);
            local damp = 0.;
            local running = true;
            if not self.gate
            {
                // Linear fade-out over one period.
                damp = -self.vel * phinc / (2. * PI);
                if -damp * frames >= amp
                {
                    // We're done after this fragment!
                    damp = -amp / frames;
                    self.amp = 0.;
                    running = false;
                }
            }
            for local s = 0, frames - 1
            {
                out[s] += sin(ph) * amp;
                ph += phinc;
                amp += damp;
            }
            self.(ph, amp) = ph, amp;
            return running;
        }
    ];
    return t;
}
----------------------------------------------------------------------

So, it's just 1.0/octave "linear pitch", and here I'm using a configurable
"middle C" (at 261.625565 Hz by default) to define what a pitch value of 0.0
means. MIDI pitch 60 would translate to 0.0, pitch 72 would translate to 1.0
etc.
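
In C, that mapping could look something like this. Just a minimal sketch;
MIDDLE_C_HZ stands in for the cfg.fc0 default above, and the function names
are mine:
----------------------------------------------------------------------
#include <math.h>
#include <stdio.h>

/* Configurable "middle C"; linear pitch 0.0 maps here. */
#define MIDDLE_C_HZ 261.625565

/* MIDI note number -> 1.0/octave linear pitch, with note 60 at 0.0. */
static double midi_to_pitch(int note)
{
    return (note - 60) / 12.0;
}

/* Linear pitch -> frequency in Hz. */
static double pitch_to_hz(double pitch)
{
    return MIDDLE_C_HZ * pow(2.0, pitch);
}

int main(void)
{
    /* Note 60 -> 0.0 -> 261.626 Hz; note 72 -> 1.0 -> 523.251 Hz */
    printf("60 -> %f -> %f Hz\n", midi_to_pitch(60),
            pitch_to_hz(midi_to_pitch(60)));
    printf("72 -> %f -> %f Hz\n", midi_to_pitch(72),
            pitch_to_hz(midi_to_pitch(72)));
    return 0;
}
----------------------------------------------------------------------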

You could pass this around like events of some sort (buffer-splitting +
function calls as I do here for simplicity, or timestamped events), much like
MIDI, or you could use an audio rate stream of values, if you can afford it.
Just different transport protocols and (fixed or variable) sample rates...
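
For the timestamped variant, the splitting logic is simple enough; something
like this sketch, where the Voice type and the voice_*() calls are stand-ins,
not the testbed's actual API:
----------------------------------------------------------------------
/* Stand-ins for the testbed's Control()/Process() methods; the real
 * ones are the EEL code above. */
typedef struct { double gate, vel; } Voice;
static void voice_process(Voice *v, unsigned frames) { (void)v; (void)frames; }
static void voice_control(Voice *v, int cport, double value)
{
    (void)cport;
    v->gate = value;  /* stub: a real voice would dispatch on cport */
}

/* A timestamped control event, relative to the start of the block. */
typedef struct {
    unsigned frame;
    int cport;
    double value;
} Event;

/* Render one block of 'frames' samples, splitting the buffer at each
 * event's timestamp. Events must be sorted by frame. */
static void run_block(Voice *v, const Event *ev, int nev, unsigned frames)
{
    unsigned pos = 0;
    for (int i = 0; i < nev; ++i) {
        if (ev[i].frame > pos) {
            voice_process(v, ev[i].frame - pos);  /* audio up to event */
            pos = ev[i].frame;
        }
        voice_control(v, ev[i].cport, ev[i].value);  /* apply event */
    }
    if (pos < frames)
        voice_process(v, frames - pos);  /* rest of the block */
}
----------------------------------------------------------------------
An audio rate stream would replace the event list with one control value per
sample frame; more data, but no splitting logic.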

> Fons: "Another limitation of MIDI is its handling of context, the only

These issues seem orthogonal to me. Addressing individual notes is just a
matter of providing some more information. You could think of it as MIDI using
note pitch as an "implicit" note/voice ID. NoteOff uses pitch to "address"
notes - and so does Poly Pressure, BTW!

Anyway, what I do in that aforementioned prototyping thing is pretty much what
was once discussed for the XAP plugin API; I'm using explicit "virtual voice
IDs", rather than (ab)using pitch or some other control values to keep track
of notes.

You can't really see it in the code above, as synth plugins are monophonic
(they can have channel wide state, code and such, but those are implementation
details) - though that actually makes it easier to understand, as one synth
instance corresponds directly to one "virtual voice".
Here's a piece of the "channel" code that manages polyphony and voices within
a channel:
----------------------------------------------------------------------
// Like the instrument Control() method, but this adds
// "virtual voice" addressing for polyphony. Each virtual
// voice addresses one instance of the instrument. An instance
// is created automatically whenever a voice is addressed for the
// first time. Virtual voice ID -1 means "all voices".
procedure Control(self, vvoice, cport, value)
{
    // Apply to all voices?
    if vvoice == -1
    {
        local vs = self.voices;
        // This is channel wide; cache for new voices!
        self.controls[cport] = value;
        // Control transforms
        if cport == S.PITCH
        {
            self.pitch.#* 0.;
            self.pitch.#+ value;
            value += self.ccontrols[PITCH];
        }
        // Apply!
        for local i = 0, sizeof vs - 1
            if vs[i]
                vs[i]:Control(cport, value);
        return;
    }

    // Instantiate new voices as needed!
    local v = nil;
    try
        v = self.voices[vvoice];
    if not v
    {
        // New voice!
        v, self.voices[vvoice] = self.descriptor.
                Create(self.(config, preset));
        v:Connect(S.AOUTPUT, self.mixbuf);
        if self.chstate
            v:SetSharedState(self.chstate);

        // Apply channel wide voice controls
        local cc = self.controls;
        for local i = 0, sizeof cc - 1
            self:Control(vvoice, i, cc[i]);
    }

    // Control transforms
    if cport == S.PITCH
    {
        self.pitch[vvoice] = value;
        value += self.ccontrols[PITCH];
    }

    // Apply!
    v:Control(cport, value);
}

// Detach the physical voice from the virtual voice. The voice
// will keep playing until finished (release envelopes etc), and
// will then be deleted. The virtual voice index will
// immediately be available to control a new physical voice.
procedure Detach(self, vvoice)
{
    local v = self.voices;
    if not v[vvoice]
        return;
    self.dvoices.+ v[vvoice];
    v[vvoice] = nil;
}
----------------------------------------------------------------------

The Detach() feature sort of illustrates the relation between virtual voices
and actual voices. Virtual voices are used by the "sender" to define and
address contexts, whereas the actual management of physical voices is done on
the receiving end.

As to MIDI (which is what my keyboard transmits), I just use the MIDI pitch
values for virtual voice addressing. Individual voice addressing with
polyphonic voice management as a free bonus, sort of. ;-) (No voice stealing
here, but one could do that too without much trouble.)
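
In rough C terms, the MIDI side of that amounts to something like the
following; channel_control()/channel_detach() stand in for the EEL Control()
and Detach() methods above, and the port constants are illustrative:
----------------------------------------------------------------------
/* Illustrative port constants and channel API; Control()/Detach()
 * are the EEL methods above, wrapped for C here. */
enum { GATE, VEL, PITCH };
typedef struct Channel Channel;
void channel_control(Channel *ch, int vvoice, int cport, double value);
void channel_detach(Channel *ch, int vvoice);

static void handle_midi_note(Channel *ch, unsigned char status,
        unsigned char pitch, unsigned char velocity)
{
    int vvoice = pitch;  /* MIDI pitch doubles as virtual voice ID */
    if (((status & 0xf0) == 0x90) && velocity) {
        /* NoteOn */
        channel_control(ch, vvoice, PITCH, (pitch - 60) / 12.0);
        channel_control(ch, vvoice, VEL, velocity / 127.0);
        channel_control(ch, vvoice, GATE, 1.0);
    } else {
        /* NoteOff (or NoteOn with velocity 0): close the gate and
         * detach, so the ID is free while the release plays out. */
        channel_control(ch, vvoice, GATE, 0.0);
        channel_detach(ch, vvoice);
    }
}
----------------------------------------------------------------------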

BTW, the language is EEL - the Extensible Embeddable Language. Basically like
Lua with more C-like syntax, and intended for realtime applications. (Uses
refcounting instead of garbage collection, among other things.) The #*, #+ etc
are vector operators, and . is an in-place operation - so
'self.pitch.#* 0.' means "multiply all elements of the self.pitch vector by
0.". Typing is dynamic. A "table" is an associative array, and these are used
for all sorts of things, including data structures and OOP style objects. No
"hardwired" OOP support except for some syntactic sugar like the
object:Method(arg) thing, which is equivalent to object.Method(object, arg).

> Resp. Linux than only would

Well, you can translate back and forth between MIDI and cv + "virtual voice"
addressing, but since the latter can potentially express things that MIDI
cannot, there may be issues when translating data that didn't originate from
MIDI... I believe the user will have to decide how to deal with this: map
virtual voices to MIDI channels, use some SysEx extension, just drop or "mix"
the information that doesn't fit, or whatever.
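
For instance, the channel mapping fallback could be as simple as this sketch
(names are mine; purely illustrative):
----------------------------------------------------------------------
#define MIDI_CHANNELS 16

typedef struct {
    int owner[MIDI_CHANNELS];  /* voice owning each channel; -1 = free */
} VoiceMap;

static void voicemap_init(VoiceMap *m)
{
    for (int ch = 0; ch < MIDI_CHANNELS; ++ch)
        m->owner[ch] = -1;
}

/* Returns the MIDI channel carrying 'vvoice', claiming a free channel
 * if needed, or -1 if the voice doesn't fit and must be dropped. */
static int voicemap_get(VoiceMap *m, int vvoice)
{
    int free_ch = -1;
    for (int ch = 0; ch < MIDI_CHANNELS; ++ch) {
        if (m->owner[ch] == vvoice)
            return ch;
        if (m->owner[ch] == -1 && free_ch == -1)
            free_ch = ch;
    }
    if (free_ch != -1)
        m->owner[free_ch] = vvoice;
    return free_ch;
}
----------------------------------------------------------------------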

> I'm asking myself, if cv has advantages compared to MIDI, what is the

It's not easy to replace an existing standard that has massive support
everywhere and gets the job done "well enough" for the vast majority of
users...

[...]

Because no popular hosts can handle cv controlled synths properly...? And, how
many musicians ACTUALLY need this for their everyday work?

> Even if this is a PITA for me, I stay at Linux. Musicians now need to

Well, you do need a properly configured Linux kernel. Don't know much about
the latest Windows developments, but not long ago, I did some vocals recording
and editing on a Windows laptop with a USB sound card, and it was pretty much
rock solid down to a few ms of buffering. (After all those problems I've had
with Windoze, which actually drove me over to Linux, I was actually slightly
impressed! :-D) I've been lower than that with Linux, and that's WITH massive
system stress (which the Windows laptop couldn't take at all) - but sure, you
won't get that out of the box with your average Linux distro.

Either way, if you're having latency issues with Windows (like I had when I
first tried to do that job on another laptop...), you'll most likely have the
same issues with Linux, and vice versa. A hardware issue is a hardware issue.
A common problem is "super NMIs" (usually wired to BIOS code) freezing the
whole system for a few ms every now and then. Absolute showstopper if you're
running RT-Linux or RTAI. There are fixes for most of those for Linux... Maybe
Windows has corresponding fixes built-in these days...? Other than that, I
don't know where the difference could be, really.

> Are they interested in being compatible to

Are we talking about OS distros, external hardware support (ie MIDI devices),
file format (ie standard MIDI files for automation), APIs, or what is this
about, really...?

Supporting all sorts of PC hardware out of the box with any OS is a massive
task! Some Linux distros are trying, but without loads of testing, there will
invariably be problems with a relatively large percentage of machines. Then
again, I talked to a studio owner some time ago, who had been struggling for
weeks and months getting ProTools (software + hardware) to work on a Windoze
box until he discovered that the video card was causing the problems... In
short, regardless of OS, you need to buy a turn-key audio workstation if you
want any sort of guarantee that things will Just Work(TM). Nothing much we -
or Microsoft, for that matter - can do about this. Mainstream PC hardware is
just not built for low latency realtime applications, so there WILL be issues
with some of it.

I mean, standard cars aren't meant for racing either. You may find some that
accidentally work "ok", but most likely, you'll be spending some time in the
garage fixing various issues. Or, you go to Caterham, Westfield, Radical or
what have you, and buy a car that's explicitly built for the race track. Right
tools for the job.

[...]

Those issues will have to be solved either way. Having proper APIs, file
formats etc in the Linux domain will probably only make it MORE likely that
these issues will be solved, actually. Why spend time making various devices
work with Linux if you have no software that can make much use of them anyway?
A bit of a Catch-22 situation, maybe...

> Or do we need to buy special mobos,

Yes, or at least the "right" ones - but that goes for Windows too...

> do we need to use special MIDI interfaces etc.

If you can do cv<->MIDI mapping in the interface, you may as well do it
somewhere between the driver and the application instead.

If you want to network machines with other protocols, I don't think there's a
need for any custom hardware for that. Just use Ethernet, USB, 1394 or
something; plenty of bandwidth and supported hardware available for any OS,
pretty much.

Of course, supporting some "industry standards" would be nice, but we need
open specifications for that. NDAs and restrictive per-user licenses don't mix
very well with Free/Open Source software.

> to

Well, being able to wire Linux applications, plugins, machines etc together
would help, but I'm not sure how that relates to what you're thinking of
here...

> I do agree that everybody I know, me too, sometimes do have problems

Indeed. Like I said, it gets the job done "well enough" for the vast majority
of users. So, replacing MIDI is of little interest unless you want to do some
pretty advanced stuff, or just want to design a clean, simple plugin API or
something - and the latter has very little to do with connectivity to external
hardware devices.

> Networking of sequencers, sound

Still not quite sure I'm following, but looking at some other posts in this
thread, I get the impression that this cv thing is more about application
implementation, APIs and protocols, and not so much about interfacing with
external hardware.

From that POV, you can think of cv (or some Linux Automation Data protocol, or
whatever) as a way of making automation data easier to deal with inside
applications, and a way of making applications communicate better. Wiring that
to MIDI and other protocols is (mostly) orthogonal; you just need something
that's at least as expressive as MIDI. Nice bonus if it's much more
expressive, while being nicer and simpler to deal with in code.

--
//David Olofson - Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://olofson.net http://kobodeluxe.com http://audiality.org |
| http://eel.olofson.net http://zeespace.net http://reologica.se |
'---------------------------------------------------------------------'
