On Tue, Jun 14, 2011 at 02:42:18PM +0300, Dan Muresan wrote:
> For sure, two user-space caches add a useless extra layer of copying.
Not only copying (which is cheap anyway), but also logic.
Your application layer cache knows what you want to achieve,
will have some ad-hoc strategies and try to do the right thing.
The intermediate one doesn't have that and may well work against
you. The same problem exists with the system level buffering,
but you can't do much about that.
> > One way to organise the buffer is to divide it in fragments
I don't know what your app is doing, so I'll assume it's some sort
of player. Now if you relocate, you send the commands to read the
data at the new position to your reader thread. Assume your buffer
is 2 seconds, so that's 8 commands, each reading 1/4 of a second.
You can't safely start playback again unless you have at least a
second or so buffered. Now assume you have a new relocate before that time.
Again you send the commands to read 8 blocks of 1/4 second. There
is some logic in the app that makes these cancel the previous ones
that have not yet started. So you end up with 1 you can't cancel
against 8 that have to be done anyway. That's not a big loss. One
of my players works this way. From a user point of view a relocate
happens almost instantly.
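To make that concrete, here is a minimal sketch (in Python, purely
illustrative; the class and method names are my own invention, not
anyone's actual player code) of a pending-command queue where a
relocate replaces all reads that have not yet started, while the
one currently in progress is allowed to complete:

```python
import collections
import threading

class ReadQueue:
    """Hypothetical sketch of a reader thread's command queue.
    A relocate drops commands that have not started yet; the
    read currently in progress cannot be cancelled."""

    def __init__(self):
        self.lock = threading.Lock()
        self.pending = collections.deque()  # read positions not yet started
        self.in_progress = None             # position being read right now

    def relocate(self, position, fragments=8, fragment_secs=0.25):
        # Replace every not-yet-started command with reads at the new
        # position (8 fragments of 1/4 second = a 2-second buffer, as
        # in the example above).  An in-progress read still finishes.
        with self.lock:
            self.pending.clear()
            for i in range(fragments):
                self.pending.append(position + i * fragment_secs)

    def next_command(self):
        # Called by the reader thread to pick up the next fragment.
        with self.lock:
            self.in_progress = self.pending.popleft() if self.pending else None
            return self.in_progress
```

A second relocate arriving before the first batch has been read
simply empties the deque and refills it, which is the "cancel the
previous ones that have not yet started" logic in the paragraph
above.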
If the read bandwidth is just above what is required for continuous
streaming, then very probably you can't support this sort of thing
without extra delays. But even in this case they don't need to be
very big. In the example above one extra fragment is read (the one
you can't cancel) compared to the four or so you need to have done
anyway before you can resume playing safely. So it takes at most
25% more time.
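With the example figures above, the arithmetic is simply:

```python
needed = 4         # fragments you must read anyway before resuming
uncancellable = 1  # fragment already in flight when the relocate arrives

# Extra read time relative to the reads you had to do anyway:
overhead = uncancellable / needed
# overhead == 0.25, i.e. at most 25% more time
```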
If the reads happen on an NFS volume, then even if the cancel
worked on the client side, that doesn't imply the data won't be
transmitted by the server anyway. So nothing would be gained.
Linux-audio-dev mailing list