Re: [LAD] how to store deferred events in a real time audio app?

To: <linux-audio-dev@...>
Date: Wednesday, December 21, 2011 - 12:50 am

Either I'm misunderstanding the answers, or I have not done a good job of
asking my question.

In more detail, here's what I'm curious about: how have people actually done the following?

The sequencer has a clock, so it knows what time 'now' is in bars:beats:ticks.
Events are stored somehow, encoded to a time in bars:beats:ticks. They may
be added on the fly at any time, and the sequencer must be able to hop
around non-linearly in time (looping, jumping to markers, etc.). How does the
sequencer engine find the events stored for 'now', quickly enough that we can
be reasonably deterministic about getting all the events for any given time?
('Now' may even be different on a track-by-track basis.)
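
To make the question concrete, here's roughly the shape of the lookup I mean,
as a minimal C++ sketch. Event, EventStore, and events_between are just names
I made up for illustration, and the body of events_between is exactly the part
I'm asking about:

#include <vector>

struct Event { long tick; int note; int velocity; };

struct EventStore {
    // the open question: how is this implemented so the audio thread can
    // call it quickly and (more or less) deterministically?
    std::vector<Event> events_between(long start_tick, long end_tick) {
        (void)start_tick; (void)end_tick;
        return {}; // stub: this is the part I'm asking about
    }
};

// called once per audio block from the process callback
void dispatch_block(EventStore& store, long now_tick, long block_ticks) {
    for (const Event& e : store.events_between(now_tick, now_tick + block_ticks)) {
        // send e to its track/synth at the right frame offset within the block
        (void)e;
    }
}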

Does it look up 'now' in some kind of hashed pile of events, where events
are keyed by time? This makes me worry about hashing algorithms, but it
would certainly be the easiest approach to implement.
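
Something like this is what I have in mind for the hash idea, assuming events
are quantized to integer ticks (all names here are made up for illustration,
not any real API):

#include <unordered_map>
#include <vector>

struct Event { int note; int velocity; };

// key = absolute tick, value = every event scheduled at that tick
using TickMap = std::unordered_map<long, std::vector<Event>>;

void add_event(TickMap& map, long tick, Event e) {
    // may allocate and rehash, so as written this would have to happen
    // outside the audio thread
    map[tick].push_back(e);
}

void play_tick(const TickMap& map, long tick) {
    auto it = map.find(tick); // average O(1), worst case depends on the hash
    if (it != map.end()) {
        for (const Event& e : it->second) {
            // dispatch e here
            (void)e;
        }
    }
}

Looping and jumping would be trivial since any tick can be looked up directly;
it's the per-lookup hashing cost in the audio thread that worries me.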

Is there some kind of master timeline array that events get attached to?
That seems like it would be quick to seek to a point, but it would use up a
lot of RAM for the timeline array, and I'm not sure how one would handle
unlimited-length timelines.
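
As a sketch of the timeline-array idea, assuming a fixed maximum length and
integer ticks (again, the names are made up):

#include <vector>

struct Event { int note; int velocity; };

struct Timeline {
    // slots[t] holds every event scheduled at absolute tick t
    std::vector<std::vector<Event>> slots;

    explicit Timeline(long max_ticks) : slots(static_cast<size_t>(max_ticks)) {}

    // may allocate, so again not something to do from the audio thread as written
    void add_event(long tick, Event e) {
        slots[static_cast<size_t>(tick)].push_back(e);
    }

    // seeking, looping, etc. is just changing the index; lookup is O(1)
    const std::vector<Event>& at_tick(long tick) const {
        return slots[static_cast<size_t>(tick)];
    }
};

Lookup and seeking are constant time, but at, say, 960 ticks per quarter note
and 120 bpm that's already 115,200 slots per minute of timeline, which is where
my RAM and unlimited-length worries come from.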

I'm not clear what the above has to do with communicating between threads
using ringbuffers; I'm just talking about how the audio callback stores
events for a given time and then finds them quickly at that time. But maybe
I'm totally missing something here.

I'd love to hear, in pseudocode, how others have tackled storing and
finding events in time.

thanks!
Iain
