On Friday 11 November 2011, at 23.19.44, email@example.com wrote:
There are issues with both methods, depending on what you want to do.
Function calls add overhead that can become significant if you're doing very
frequent parameter changes.
Polling, which is how I understand the latter approach, might be a great idea if
you're only reading the parameter once per "block" of processing. You'll need
to get that number from *somewhere* no matter what; be it a private closure,
or some more public structure. However, if there are expensive calculations
between that parameter and what you actually need inside the DSP loop, it
might not be all that great. Things get even worse if you want to handle
parameter changes with sample accurate timing. (Function calls can handle that
just fine; just add a timestamp argument, and have plugins/units handle that
internally in whatever way is appropriate.)
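To make that concrete, here's a minimal sketch (all names hypothetical) of a unit that takes timestamped parameter calls and applies each change at the exact frame inside its own processing loop:

```c
#include <stddef.h>

#define MAX_PENDING 16

typedef struct {
    unsigned when;   /* frame index, relative to start of current block */
    float    value;
} ParamChange;

typedef struct {
    float       gain;                  /* the live parameter */
    ParamChange pending[MAX_PENDING];  /* changes waiting to be applied */
    size_t      npending;
} Unit;

/* The "function call with a timestamp argument": the host says when the
 * change should take effect; the unit decides how to apply it. */
int unit_set_gain(Unit *u, unsigned when, float value)
{
    if (u->npending >= MAX_PENDING)
        return -1;                     /* queue full */
    u->pending[u->npending].when  = when;
    u->pending[u->npending].value = value;
    u->npending++;
    return 0;
}

/* Process one block, applying each pending change at its exact frame.
 * (Assumes changes were queued in timestamp order.) */
void unit_process(Unit *u, const float *in, float *out, unsigned frames)
{
    size_t next = 0;
    for (unsigned i = 0; i < frames; ++i) {
        while (next < u->npending && u->pending[next].when == i)
            u->gain = u->pending[next++].value;
        out[i] = in[i] * u->gain;
    }
    u->npending = 0;
}
```

A real plugin might ramp toward the new value instead of jumping, but the timestamping mechanism is the same either way.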
Some sort of event queues can offer some of the advantages of both of these,
if designed properly. If they're delivered in timestamp order (either by
system design or by means of priority queues or similar), processing them
becomes very efficient and scales to large numbers of "control targets";
you only have to check the timestamp of the next event, then process audio
until you get there, and you only ever consider changes that actually occurred
- no polling.
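The "check the next timestamp, then process audio until you get there" loop might look something like this (a hypothetical sketch; events are assumed pre-sorted by timestamp):

```c
#include <stddef.h>

typedef struct {
    unsigned frame;   /* timestamp within the current block */
    float    value;
} Event;

typedef struct {
    float level;      /* the control target */
} State;

/* Render 'frames' frames, consuming 'nevents' timestamp-ordered events.
 * Audio is processed in runs between events, so the cost scales with the
 * number of changes that actually occurred - no per-sample polling. */
void render(State *s, const Event *ev, size_t nevents,
            float *out, unsigned frames)
{
    unsigned pos = 0;
    size_t   e   = 0;
    while (pos < frames) {
        /* apply all events due at the current position */
        while (e < nevents && ev[e].frame == pos)
            s->level = ev[e++].value;
        /* run until the next event, or the end of the block */
        unsigned until = (e < nevents && ev[e].frame < frames)
                       ? ev[e].frame : frames;
        for (; pos < until; ++pos)
            out[pos] = s->level;
    }
}
```

The inner run between events is also where compilers can actually vectorize, which is a nice side effect of splitting the block this way.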
All that said, how far do you need to take it? Unless you're going to throw
tens of thousands of parameter changes at your units while processing, this
overhead may not be as significant as one might think at first. It might be a
better idea to focus on features and interfaces first. Remember, premature
optimization is the root of all evil... :-)
> Along the same lines, say I have a single linked list of AudioElements, and
I tend to go with "connection" logic and some sort of direct references when
designing that sort of thing - but again, that depends on the application and
usage patterns you're designing for.
For example, in a physics engine (game or simulation), you have potentially
hundreds or even thousands of bodies moving around, and you have to rely on
spatial partitioning of some sort to figure out which bodies *can* potentially
collide within the time frame currently being evaluated. In a naïve design,
you essentially have to check every body against every other body, every
single frame, and that... doesn't scale very well at all. :-)
As a more relevant (I think) extreme in the other direction, we have musical
synthesis systems: Hundreds or even thousands of units processing audio in
various ways (I'm thinking modular synthesis here, obviously - no way any sane
person would use that many units otherwise... I think ;-) - but you won't
normally see random communication between arbitrary units! What you will
normally have is a number of (relatively) long "conversations" between units,
usually best abstracted as some sort of persistent connections. Obviously,
this saves a lot of time, as there is no overhead for looking units up, except
possibly when making new connections. (Probably no need for that if you wire
things as you build the graph.)
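As a rough sketch of what I mean by "direct references" (hypothetical names; a pull-style graph assumed), each unit just holds a pointer to its source, wired once at build time:

```c
#include <stddef.h>

#define BLOCK 64

typedef struct Unit Unit;
struct Unit {
    Unit *source;        /* persistent "connection"; NULL for generators */
    float buffer[BLOCK]; /* this unit's output for the current block */
    float gain;
};

/* Connections are made while building the graph, not per block. */
void unit_connect(Unit *sink, Unit *source)
{
    sink->source = source;
}

/* Processing reads straight through the pointer - no lookups. */
void unit_run(Unit *u, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i) {
        float in = u->source ? u->source->buffer[i] : 1.0f;
        u->buffer[i] = in * u->gain;
    }
}
```

The only lookup cost left is in unit_connect(), which runs when the graph is edited, not while audio is flowing.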
> I'm seeing downsides to each approach:
How are you going to avoid that anyway? Even if you do want to filter high
frequency control data down, you'll need to deal with all data there is, or
risk "random" behavior due to aliasing distortion. (Like downsampling audio
without filtering or interpolation.)
Or, if you're going to use a lot of potentially high frequency control data,
why not use audio rate control ports? Or some sort of hybrid, allowing you to
switch as needed - but that quickly becomes a complexity explosion...
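An audio rate control port is really just an ordinary buffer port, in the spirit of LADSPA's connect-a-pointer-per-port model - one control value per frame, so arbitrarily fast changes need no separate event path. A minimal sketch (hypothetical struct, not actual LADSPA API):

```c
typedef struct {
    const float *input;  /* audio input port */
    const float *gain;   /* control port, but at audio rate */
    float       *output; /* audio output port */
} Ports;

/* With one gain value per frame, "control" and "audio" are processed
 * identically; the distinction is purely semantic. */
void run(Ports *p, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i)
        p->output[i] = p->input[i] * p->gain[i];
}
```

The cost is that the host must fill a whole buffer per control per block even when nothing changes - which is exactly where the hybrid idea, and its complexity explosion, comes from.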
> 2: Request it every time it runs -> Keeping control over the many values &
Well, if designed properly, this should scale with the graph. Basically, each
connectable entity should only ever need to know what it's connected to - if
even that. (See LADSPA ports.) Also, keeping such connection state data along
with the state data of units might be a good idea performance wise, as it can
make memory access more cache friendly.
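Here's roughly what I mean by keeping connection state next to unit state (a hypothetical one-pole smoother; the point is the struct layout, not the DSP):

```c
typedef struct Node Node;
struct Node {
    /* connection state: this node only knows what it's connected to */
    const float *in;     /* where it reads from */
    float       *out;    /* where it writes */
    /* DSP state, right next to it in memory */
    float        z1;     /* one-pole filter memory */
    float        coeff;  /* smoothing coefficient, 0..1 */
};

/* One fetch pulls in both the connections and the state, so walking an
 * array of Nodes in processing order is cache friendly. */
void node_run(Node *n, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i) {
        n->z1 += n->coeff * (n->in[i] - n->z1);
        n->out[i] = n->z1;
    }
}
```

Contrast that with chasing pointers into separately allocated connection objects, where every unit costs you an extra cache miss or two per block.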
But of course, the ultimate answer to all such questions is: Benchmarking!
Though having a rough idea about how modern hardware works can help get
the initial design reasonably non-broken.
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
Linux-audio-dev mailing list