I like to see it the other way around:
Information about the data (format, timing) needs to be sent with the
audio stream to enable all listeners to decode it. This way, the audio
stream can be sent multicast.
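To make that concrete, here is a minimal sketch of what "information about the data sent with the stream" could look like: a small self-describing header (sample rate, channels, bit depth, sequence number, sample timestamp) prepended to every packet, so that any listener joining the multicast group can decode without prior negotiation. The field layout is my own assumption, not an existing format.

```python
import struct

# Hypothetical per-packet header so any multicast listener can decode
# the stream without negotiation: sample rate, channel count, bits per
# sample, a sequence number, and a sample-clock timestamp.
HEADER = struct.Struct(">IHHIQ")  # big-endian, 20 bytes

def pack_packet(rate, channels, bits, seq, timestamp, payload):
    return HEADER.pack(rate, channels, bits, seq, timestamp) + payload

def unpack_packet(packet):
    rate, channels, bits, seq, timestamp = HEADER.unpack_from(packet)
    return rate, channels, bits, seq, timestamp, packet[HEADER.size:]

# 20 bytes of metadata in front of each audio payload:
pkt = pack_packet(48000, 2, 16, 7, 123456, b"\x01\x02\x03\x04")
```

The 20-byte cost per packet is the price of letting listeners join mid-stream.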
In addition, it would be useful to be able to parametrise the sound card
in the way you describe.
Some ideas about using OSC:
- yes, there is some overhead in using OSC. But the protocol is very
simple, and the text-based address routing can be handled quite
efficiently if it is coded with care (e.g. by using short address
patterns)
- it could be very useful to have a bridge OSC<->jack
- OSC is transport agnostic. Usually it is implemented on top of UDP,
but it is equally valid to make it sit on top of TCP. It is thus possible
to use UDP for "best effort" scenarios, where latency must be minimized
but packets may get lost, and alternatively TCP for situations where
packet loss is not acceptable, such as for recording.
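To put a number on the OSC overhead mentioned above, here is a sketch of encoding one audio buffer as an OSC 1.0 message: a null-terminated address pattern padded to a 4-byte boundary, a type tag string, and the samples as a blob argument. The address "/audio/1" is just an assumed example, not an agreed convention.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings and blobs are zero-padded to a 4-byte boundary.
    return b + b"\x00" * (-len(b) % 4)

def osc_string(s: str) -> bytes:
    return osc_pad(s.encode("ascii") + b"\x00")

def osc_blob(data: bytes) -> bytes:
    # A blob is an int32 byte count followed by the padded data.
    return struct.pack(">i", len(data)) + osc_pad(data)

def osc_audio_message(address: str, samples: bytes) -> bytes:
    # One OSC message: address pattern, type tags ",b", one blob argument.
    return osc_string(address) + osc_string(",b") + osc_blob(samples)

msg = osc_audio_message("/audio/1", b"\x00" * 64)
# Overhead for 64 sample bytes: 12 bytes address + 4 bytes type tags
# + 4 bytes blob size = 20 bytes, i.e. the cost stays modest when the
# address pattern is kept short.
```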
To implement OSC transport for audio, there should be an agreement on
the formatting, especially the OSC address patterns used and what data
shall be sent in addition to the audio.
In a very naive approach, it would then suffice to send the data stream
and decode it on the other side. Timing / synchronisation is a
different matter, but it can be solved if the system clocks are synchronised.
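As a sketch of why synchronised clocks solve the timing problem: if each packet carries the sample index of its first frame, every listener can turn that into the same wall-clock playback deadline. The names and the 20 ms jitter budget below are my own assumptions.

```python
# Naive receive-side scheduling, assuming system clocks are in sync
# (e.g. via NTP or PTP). Each packet's first-sample index is converted
# to wall-clock time; a fixed buffering delay absorbs network jitter.
RATE = 48000            # assumed stream sample rate
JITTER_BUDGET = 0.020   # assumed 20 ms of buffering

def playback_deadline(stream_start, first_sample):
    # All listeners compute the identical deadline for the same packet.
    return stream_start + first_sample / RATE + JITTER_BUDGET

# The packet starting at sample 96000 of a stream begun at t = 100 s
# is due at t = 100 + 2.0 + 0.02 s on every listener.
deadline = playback_deadline(100.0, 96000)
```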
If UDP is used, there should be a mechanism to cope with missing
packets and with packets arriving out of order. There is a nice
mechanism for that in jacktrip.
I have not looked at the code of jacktrip, but maybe a lightweight
version of that could be implemented instead of the OSC approach?
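A lightweight version might look like the following reorder buffer: packets carry a sequence number, late arrivals are slotted into place, and a lost packet is replaced with silence rather than stalling playback. This is only my sketch of the general idea, not jacktrip's actual mechanism, whose code I have not read either.

```python
# Sketch of a minimal reorder buffer: sequence numbers restore packet
# order, and a gap is filled with silence instead of blocking.
SILENCE = b"\x00" * 8  # dummy packet of zeroed samples (assumed size)

def reorder(packets, first_seq, count):
    """packets: iterable of (seq, payload); returns payloads in order,
    substituting SILENCE for any sequence number that never arrived."""
    slots = {seq: payload for seq, payload in packets}
    return [slots.get(seq, SILENCE) for seq in range(first_seq, first_seq + count)]

# Packets 3 and 5 arrive swapped and packet 4 is lost entirely:
out = reorder([(5, b"E" * 8), (3, b"C" * 8)], 3, 3)
```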
Nedre Gate 5
N-0551 Oslo Norway
tlf.: +47 22358065
Linux-audio-user mailing list