On Fri, Feb 19, 2010 at 01:59:34PM +0000, Simon Jenkins wrote:
True. The advantage is that if there is a 'standard' rate for
such control signals (e.g. 1/16 of the audio sample rate) it
becomes practical to store them as well. Of course you could
do that at audio rate, but just imagine the consequences if
you have at least 4 control signals for each audio channel,
as is the case in the WFS system here. There is a huge
difference between having to store 48 audio files of an hour
each (25 GB) and 240 of them (125 GB), in particular when
most of that storage is wasted anyway. In a mixdown session
there can easily be many more than 4 automation tracks per
audio track. Reducing the rate at least brings this back to
manageable levels.
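For reference, the figures above work out roughly like this
(a back-of-the-envelope sketch in Python, assuming 48 kHz,
24-bit mono files of one hour each; exact numbers depend on
the file format and the mail's 125 GB rounds up slightly):

```python
rate = 48000          # sample rate in Hz
width = 3             # bytes per sample (24-bit PCM)
hour = 3600           # seconds per file

per_file = rate * width * hour        # bytes in one hour-long mono file

audio = 48 * per_file                 # 48 audio channels only
full = 240 * per_file                 # plus 4 audio-rate control tracks each
deci = audio + 192 * per_file // 16   # same controls stored at 1/16 rate

print(audio / 1e9, full / 1e9, deci / 1e9)
# roughly 25 GB, 124 GB and 31 GB
```

The last figure shows the point: decimated control tracks add
only a quarter of the audio storage instead of quadrupling it.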
> If a receiving application, for example, wants to update
All true, but you are confusing two quite separate issues:
*internal update rate* and *useful bandwidth*.
- The internal update rate of e.g. a filter or gain control
would always have to be the audio rate, to avoid 'zipper'
effects. The filter could e.g. use linear interpolation over
segments of 16, 32, or 256 samples. This is an implementation
detail of the DSP code.
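As an illustration of that detail, a per-block linear gain
ramp could look like this (a minimal Python sketch; the
function name and block handling are my own, not from any
particular DSP library):

```python
def ramped_gain(block, g0, g1):
    # Apply a gain ramping linearly from g0 towards g1 across
    # the block. The ramp reaches g1 exactly at the start of
    # the next block, so successive blocks join without steps.
    n = len(block)
    return [s * (g0 + (g1 - g0) * i / n) for i, s in enumerate(block)]

# Usage: feed the previous target in as the new start value,
# so the control stays continuous across block boundaries.
out = ramped_gain([1.0] * 16, 0.0, 1.0)   # smooth fade-in over 16 samples
```

The control value itself only needs updating once per block;
the ramp removes the audible steps.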
- The useful bandwidth of control signals in audio is very
low. Even if the internal update rate is audio, there will
be no energy in the control signal above a few tens of Hz.
If you modulate a filter or gain stage with anything above
that bandwidth it is no longer just a filter or gain control
- you will be producing quite an obvious modulation effect
(e.g. vibrato or tremolo). That makes sense in synthesisers
etc., but in any normal audio processing it's something to
be avoided.
So with the exception of synth modules etc., control signals
never need to be high rate, and if they are, the DSP code
would have to filter out the HF parts anyway. Actually 1/16
of the sample rate (3 kHz at 48 kHz) would be more than
sufficient even for envelopes etc. in a synth - anything
faster would just generate clicks.
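If higher-rate control data does arrive, filtering out the HF
parts can be as simple as a one-pole lowpass on the control
value (a sketch; the 20 Hz cutoff is just an illustrative
choice for a typical automation bandwidth):

```python
import math

def make_smoother(fc, fs):
    # One-pole lowpass: y += a * (x - y), with the coefficient
    # chosen so the -3 dB point sits near fc for sample rate fs.
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y = 0.0
    def step(x):
        nonlocal y
        y += a * (x - y)
        return y
    return step

smooth = make_smoother(20.0, 48000.0)   # ~20 Hz control bandwidth
```

Running every incoming control value through such a smoother
removes steps and any energy well above the cutoff, whatever
rate the sender used.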
Linux-audio-dev mailing list