*** wumpus has quit IRC | 00:29 | |
*** wumpus has joined #lv2 | 00:39 | |
*** unclechu has quit IRC | 01:39 | |
*** Spark[01] has quit IRC | 03:18 | |
*** oofus has quit IRC | 04:53 | |
*** oofus has joined #lv2 | 04:53 | |
*** Spark[01] has joined #lv2 | 06:00 | |
*** Spark[01] has quit IRC | 06:06 | |
*** edogawa has joined #lv2 | 06:09 | |
*** Spark[01] has joined #lv2 | 06:13 | |
*** Spark[01] has joined #lv2 | 06:14 | |
*** Spark[01] has quit IRC | 06:32 | |
*** dsheeler has quit IRC | 06:39 | |
*** dsheeler has joined #lv2 | 06:41 | |
*** edogawa has quit IRC | 07:14 | |
*** oofus has quit IRC | 07:40 | |
*** oofus has joined #lv2 | 07:45 | |
*** EntropySink has joined #lv2 | 08:00 | |
*** sigma6 has joined #lv2 | 08:07 | |
*** Spark[01] has joined #lv2 | 08:51 | |
*** ricardocrudo has joined #lv2 | 08:56 | |
*** drobilla has quit IRC | 09:01 | |
*** falktx|work has joined #lv2 | 09:08 | |
*** falktx|work has quit IRC | 09:08 | |
*** falktx|work has joined #lv2 | 09:09 | |
*** frinknet has quit IRC | 10:26 | |
*** Spark[01] has quit IRC | 10:37 | |
*** Spark[01] has joined #lv2 | 10:50 | |
*** unclechu has joined #lv2 | 10:51 | |
*** unclechu has quit IRC | 11:02 | |
*** unclechu has joined #lv2 | 11:04 | |
*** unclechu has joined #lv2 | 11:05 | |
*** oofus_ has joined #lv2 | 12:11 | |
*** oofus has quit IRC | 12:14 | |
*** LAbot has joined #lv2 | 12:39 | |
*** drobilla has joined #lv2 | 14:33 | |
*** frinknet has joined #lv2 | 14:34 | |
*** frinknet has quit IRC | 14:54 | |
*** frinknet has joined #lv2 | 15:45 | |
*** frinknet has quit IRC | 16:01 | |
*** sigma6 has quit IRC | 16:15 | |
rgareus | falktx|work, drobilla et al: can a plugin run() while state restore is called? | 16:42 |
drobilla | rgareus: Nope | 16:46 |
rgareus | drobilla: OK, so I'll need to fix ardour. | 16:47 |
drobilla | An RT or thread-safe restore mechanism would certainly be nice, but we went with the simple/safe thing | 16:47 |
rgareus | drobilla: worker thread is fine here | 16:47 |
rgareus | but more work for the plugin | 16:47 |
drobilla | Yeah, I vaguely recall there being an awful lot of incorrectness in implementations out there (some of it my fault) | 16:47
drobilla | Exposing some kind of state object the host can get in another thread, then apply in the audio one (in RT, probably by providing a garbage collector facility or promising to call a free function later) would have been a nice design. More onerous in the common case, though | 16:51 |
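As a sketch, the state-object design drobilla describes might look like the hypothetical interface below. None of this is part of any LV2 specification; every name here is invented for illustration:

    #include <lv2/lv2plug.in/ns/ext/state/state.h>  /* LV2_State_Retrieve_Function, LV2_Handle */

    /* Hypothetical three-phase state interface. The host drives each
       phase from the appropriate thread. */
    typedef void* State_Object;

    typedef struct {
        /* Non-RT thread: retrieve state, allocate and prepare everything. */
        State_Object (*prepare)(LV2_Handle                  instance,
                                LV2_State_Retrieve_Function retrieve,
                                LV2_State_Handle            handle);

        /* Audio thread, RT-safe: apply the prepared object (e.g. by
           swapping pointers) and hand back whatever must be freed. */
        State_Object (*apply)(LV2_Handle instance, State_Object prepared);

        /* Non-RT thread again: the "garbage collector" / deferred free. */
        void (*discard)(LV2_Handle instance, State_Object garbage);
    } Hypothetical_State_Object_Interface;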
rgareus | drobilla: so schedule_work() is safe to be called from state-restore? | 16:52 |
drobilla | um | 16:53 |
drobilla | Is that even possible? | 16:53 |
drobilla | Pretty sure no. What would be the point in scheduling non-RT work from a function with no RT constraints and a guarantee that nothing else is running concurrently anyway? | 16:54 |
drobilla | restore is basically like instantiate | 16:54 |
rgareus | drobilla: to keep restore fast | 16:57 |
rgareus | drobilla: if restore() blocks run() it should return quickly | 16:57 |
rgareus | and rather schedule work in the background | 16:57 |
rgareus | if restore() and run() are mutually exclusive, calling schedule_work() should work | 16:58
drobilla | rgareus: This is not currently possible. We don't have dropout-free state restoration, unfortunately | 16:59 |
drobilla | (or anything close if restore takes forever) | 16:59 |
drobilla | rgareus: I think it would be relatively straightforward to add if you're into that, though | 17:00 |
falktx|work | afaik state save can happen at anytime. but state restore cannot happen while processing | 17:01 |
drobilla | Not quite "any time", but close enough (basically just not concurrently with anything that defines itself to not be allowed to run concurrently with anything else, e.g. instantiation stuff) | 17:02 |
*** _FrnchFrgg_ has joined #lv2 | 17:02 | |
drobilla | Hm, we could use the features parameters to get actual click-free restore | 17:04 |
drobilla | I see 3 feasible options: | 17:04 |
rgareus | drobilla: in ardour it just works since restore writes to a ringbuffer. | 17:04 |
drobilla | rgareus: You mean the plugin does? and applies it in run later? | 17:05 |
rgareus | drobilla: jalv_worker_schedule is also safe -- as long as there's a single writer | 17:05
drobilla | oh, "it" being calling the worker in restore | 17:06 |
rgareus | yes | 17:06 |
rgareus | drobilla: restore() { prepare a background instance, schedule work to instantiate it;} | 17:06 |
drobilla | Well, the single writer thing is the issue there, but since run() is the writer and they aren't allowed to be run concurrently I guess that's safe | 17:06 |
rgareus | run() { continues to do its thing } while the worker works | 17:06
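In rough C, the pattern rgareus describes could look like the sketch below, against the standard state and worker headers. Note that at this point in the discussion the spec does not actually permit calling schedule_work() from restore(); the Plugin struct and MsgType are hypothetical:

    #include <stdint.h>
    #include <lv2/lv2plug.in/ns/ext/state/state.h>
    #include <lv2/lv2plug.in/ns/ext/worker/worker.h>

    typedef struct {
        LV2_Worker_Schedule* schedule;    /* worker feature from instantiate() */
        void*                background;  /* instance being prepared off-line */
    } Plugin;

    typedef enum { MSG_INSTANTIATE, MSG_FREE } MsgType;

    static LV2_State_Status
    restore(LV2_Handle                  instance,
            LV2_State_Retrieve_Function retrieve,
            LV2_State_Handle            handle,
            uint32_t                    flags,
            const LV2_Feature* const*   features)
    {
        Plugin* self = (Plugin*)instance;

        /* Fast part only: read the needed keys via retrieve() and set up
           a background instance descriptor, then return immediately.
           The slow part (file I/O, resampling) is left to the worker so
           run() is not blocked for long. */

        const MsgType msg = MSG_INSTANTIATE;
        self->schedule->schedule_work(self->schedule->handle,
                                      sizeof(msg), &msg);
        return LV2_STATE_SUCCESS;
    }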
drobilla | Using the worker for this seems a bit clunky to me, but maybe | 17:07 |
rgareus | in convo.lv2, restore loaded a file, resampled it, and then switched instances. due to a bug in Ardour, restore() is called in the GUI thread -- concurrent w/ run() | 17:08
rgareus | and that resulted in crashes.. | 17:08 |
drobilla | I suppose that does pretty much provide a facility to load your state as a separate object from your instance and get it in run() whenever it's ready | 17:08 |
drobilla | We would just need a feature or different state:interface predicate or something to say this is okay | 17:09 |
rgareus | drobilla: yes, well in work_response() which is called in RT context _after_ run. | 17:10 |
drobilla | Still, making restore() (non-RT anyway) itself fast by shunting the work to another non-RT slow thread is pretty roundabout | 17:10 |
rgareus | drobilla: the "Blocking" part is key | 17:11 |
rgareus | restore() blocks run() | 17:11
rgareus | if restore can take 10 seconds that's bad. | 17:11 |
drobilla | Sure | 17:11 |
rgareus | and loading huge sampler banks.. could even take more than 10 sec | 17:12 |
drobilla | I'm just thinking, using the worker means restore() could be actually RT and called in process(). That's the whole point of the worker. | 17:12 |
falktx|work | can't you do that yourself in the plugin? | 17:12 |
drobilla | Something in here is the right thing, and it's incredibly close to strictly click-free restore, but I'm not quite sure what it is | 17:12 |
falktx|work | if the host called restore, set a flag for later changing state | 17:12 |
drobilla | One reason I don't like an almost-there solution is that stopping run, aside from materially sucking for users, is annoying to implement in hosts (as evidenced by Ardour not doing it...) | 17:14
drobilla | If the plugin is going to pay the price of using worker mechanisms that make this no longer necessary, might as well go all out | 17:14 |
rgareus | drobilla: it'd be easy in ardour: just take the ProcessLock | 17:14
rgareus | drobilla: but that'd stop _all_ processing. | 17:14 |
drobilla | rgareus: Yeah, that's not... great | 17:15 |
rgareus | we do that for session load.. | 17:15 |
rgareus | it does not block jack. only ardour processing | 17:16 |
rgareus | still not great | 17:16 |
drobilla | Well, sure, but any other time... dropouts are really the ultimate thing that makes audio software feel like garbage. No quality. | 17:17
rgareus | I've worked around this in convo.lv2 now. it's fine with concurrent run() and restore() now | 17:19 |
drobilla | How? | 17:19 |
rgareus | it could potentially fall over if restore() and a patch-set message in run() arrive concurrently | 17:19 |
falktx|work | in zyn-dpf I make it silent while restore happens | 17:20 |
rgareus | drobilla: schedule work from restore() | 17:20 |
drobilla | rgareus: This is probably not okay if it's the worker passed to instantiate() | 17:20 |
rgareus | drobilla: how so? | 17:21 |
drobilla | rgareus: schedule() is explicitly to be called from run() context things | 17:21 |
drobilla | So it's okay if the host follows the rules which they aren't, being the problem :) | 17:22 |
falktx|work | we need a lv2-validation tool. not just for meta-data but for realtime stuff | 17:22 |
drobilla | yep | 17:22 |
rgareus | drobilla: in reality: I can't see how I would implement a work queue where schedule_work() would only be valid in one thread. | 17:23
rgareus | drobilla: as long as it's not called concurrently. | 17:23 |
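A sketch of why the single-writer assumption matters: a jalv-style lock-free ring only works because exactly one thread advances the write head. The Ring type below is illustrative, not jalv's actual code:

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct {
        uint8_t*         buf;
        uint32_t         size;        /* must be a power of two */
        _Atomic uint32_t write_head;  /* advanced by the one writer only */
        _Atomic uint32_t read_head;   /* advanced by the one reader only */
    } Ring;

    /* Safe for a single writer only: the read-modify-write of write_head
       is not atomic as a whole, so two threads calling this concurrently
       (e.g. run() and restore()) can load the same head and overwrite
       each other's data. */
    static int
    ring_write(Ring* r, const void* src, uint32_t len)
    {
        const uint32_t w  = atomic_load(&r->write_head);
        const uint32_t rd = atomic_load(&r->read_head);
        if (r->size - (w - rd) < len) {
            return -1;  /* not enough space */
        }
        for (uint32_t i = 0; i < len; ++i) {
            r->buf[(w + i) & (r->size - 1)] = ((const uint8_t*)src)[i];
        }
        atomic_store(&r->write_head, w + len);
        return 0;
    }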
falktx|work | you can do it on purpose and make it null after run :P | 17:24 |
rgareus | convo.lv2 is actually fine in that respect as well, since the background instance is a singleton. | 17:24
drobilla | Okay, what if we define a different pred/feature/whatever for restore that says "this function may be run concurrently with any other function, including run(), must be RT safe, and must use the passed worker feature to schedule any restore work. In practice this essentially means it may not modify the plugin instance but only have the effect of scheduling work" | 17:24
drobilla | Then the usual work() and work_response() mechanism will get your result to the run context eventually, at which point you apply it. | 17:25 |
drobilla | Garbage is the problem. | 17:25 |
rgareus | falktx|work: not directly; you can't change the pointers that the plugin knows.. but sure, you could make the host fail intentionally. | 17:26
rgareus | in any case for now it's a workaround for b0rked hosts | 17:26 |
drobilla | I guess you can just schedule yet more work to free things or whatever | 17:26 |
rgareus | drobilla: in convo.lv2 it's a 3 step process. | 17:27 |
drobilla | rgareus: Yeah, given the no concurrency it's probably not really a problem. Just seems obviously stupid if you try to write down the rules. Enforced dropouts for no reason. | 17:27 |
rgareus | work {allocate instance.. } work_response { swap instances; schedule free;} work { free instance } | 17:28 |
drobilla | Right | 17:28 |
drobilla | I thiiiink this is probably a great idea | 17:29 |
drobilla | In terms of rules I'm leaning towards two options: current situation (restore() stops the world, time and concurrency and everything else is irrelevant), and alternate interface (same API) with the above-mentioned very strict ones (don't screw with your instance whatsoever) | 17:29 |
drobilla | eg-sampler is a simple case to try it out on | 17:30 |
drobilla | Where the latter is truly dropout free, no need for the host to interrupt run calls | 17:30 |
drobilla | (Not that plugins are likely to be able to apply state without clicks, but in terms of the interface's power, anyway) | 17:31 |
rgareus | in convo.lv2 now: restore { allocate & configure instance (fast); schedule work } work { initialize instance (slow) } work_response { swap instances; schedule free; } work { free instance } | 17:33
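Continuing the earlier sketch, rgareus's three-step flow maps onto the standard LV2_Worker_Interface like this (helper names such as init_background_instance are placeholders for convo.lv2's internals):

    /* Non-RT worker thread: step 1 initializes the background instance
       that restore() prepared; step 3 frees the instance swapped out. */
    static LV2_Worker_Status
    work(LV2_Handle                  instance,
         LV2_Worker_Respond_Function respond,
         LV2_Worker_Respond_Handle   handle,
         uint32_t                    size,
         const void*                 data)
    {
        Plugin*       self = (Plugin*)instance;
        const MsgType msg  = *(const MsgType*)data;
        if (msg == MSG_INSTANTIATE) {
            init_background_instance(self);  /* slow: file I/O, resampling */
            respond(handle, 0, NULL);        /* notify the run() context */
        } else if (msg == MSG_FREE) {
            free_retired_instance(self);     /* slow: free() is not RT-safe */
        }
        return LV2_WORKER_SUCCESS;
    }

    /* RT context, called by the host after run(): step 2 swaps the
       instances and schedules step 3. Nothing here blocks or allocates. */
    static LV2_Worker_Status
    work_response(LV2_Handle instance, uint32_t size, const void* body)
    {
        Plugin* self = (Plugin*)instance;
        swap_instances(self);                /* RT-safe pointer exchange */
        const MsgType msg = MSG_FREE;
        self->schedule->schedule_work(self->schedule->handle,
                                      sizeof(msg), &msg);
        return LV2_WORKER_SUCCESS;
    }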
rgareus | _FrnchFrgg_ here discovered the issue by switching presets in Ardour rapidly. | 17:34
drobilla | Right | 17:35 |
drobilla | We'll need to fix that bug in Ardour anyway | 17:35 |
rgareus | ideally we'd only block the given plugin instance | 17:35 |
rgareus | with parallel processing other stuff could continue | 17:35 |
rgareus | and hopefully no x-runs. | 17:36 |
drobilla | Well, you don't need to literally block | 17:36 |
rgareus | the bad part: if it's not fast enough it will x-run and click | 17:36 |
drobilla | Set a don't actually run flag or some such | 17:36 |
rgareus | if we take the process lock and block everything: ardour would already de-click and not take down jack | 17:36 |
drobilla | Contiguous time guarantees get thrown out then though | 17:36 |
drobilla | I am more interested in defining the new unshitty way than that | 17:36
drobilla | ... actually, both | 17:37
drobilla | heh | 17:37
rgareus | drobilla: personally I think the best would be to change the spec and require restore() to be RT-safe and allow calling the worker from there | 17:37 |
drobilla | rgareus: I need to run out and do some irritating bureaucratic things, I'll try to amend the spec later | 17:37 |
drobilla | rgareus: We can't change-change it | 17:37 |
drobilla | rgareus: I'll probably define a state:realTimeInterface predicate or something for providing the interface with those rules | 17:38 |
rgareus | drobilla: and deprecate the current state interface? | 17:38 |
drobilla | rgareus: I dunno. Probably not actively. If having only that is the best, host authors can not support the other one and force plugin authors. Fight amongst yourselves :D | 17:39 |
drobilla | rgareus: I'll have a better idea of the burden when I implement it | 17:39 |
drobilla | Using the worker is a bit more annoying in some ways, but less in others, and we have that existing mechanism that does all the things which is nice | 17:39 |
rgareus | falktx|work: does carla currently allow calling schedule_work() from restore() ? | 17:40 |
drobilla | The zero thread restrictions rule lets hosts be clever and not actually use all the unnecessary ringbuffers. You could call restore() in some other thread and just immediately call plugin.work() there | 17:40 |
falktx|work | rgareus: no idea tbh. undefined behaviour | 17:41 |
rgareus | falktx|work: do you use a ringbuffer for the work queue (like jalv)? | 17:41 |
falktx|work | yes | 17:41 |
rgareus | falktx|work: then it should be fine, I suppose | 17:42 |
falktx|work | but if offline, it triggers work() right away | 17:42 |
falktx|work | offline/freewheel | 17:42 |
rgareus | right | 17:42 |
drobilla | oh yeah, this gets us sample-accurate / dropout-free state restore | 17:42
falktx|work | rgareus: hmm, actually carla handles work schedule from any thread | 17:42 |
drobilla | People automating a preset load every 2ms in 4, 3... | 17:43 |
rgareus | which in convo.lv2's case would do what it did before (initialize instance in restore()) | 17:43
falktx|work | rgareus: there's a lock around the atom ringbuffer used for those events | 17:43 |
drobilla | falktx|work: That'd make schedule non-RT though? | 17:43 |
rgareus | falktx|work: that's even better than jalv/ardour | 17:43 |
rgareus | falktx|work: so in your case there'll be no concurrency issues ever if it's called from another thread | 17:44
drobilla | Hardly! | 17:44 |
rgareus | as long as there's no contention. | 17:44 |
falktx|work | it's non-rt yes, but for a short time | 17:44 |
drobilla | Clearly our definitions of "better" differ dramatically :P | 17:44 |
rgareus | :) | 17:44 |
falktx|work | the non-rt thread allocates a same-size buffer and copies over the data | 17:44 |
rgareus | well, a lock-free ringbuffer --- with a lock around it --- is indeed odd | 17:45
falktx|work | this ringbuffer was for single master, single client use | 17:46 |
falktx|work | I re-used it for lv2. things got complicated with ui events and worker | 17:47 |
drobilla | IIRC jalv is incorrect in the same way Ardour is, though | 17:47 |
drobilla | There's something wrong about state, anyway, and I think that was it | 17:47 |
falktx|work | hmm but my worker does the non-rt stuff on the idle thread | 17:47 |
falktx|work | too many pending changes might be bad | 17:48 |
rgareus | falktx|work: so one worker for _all_ plugins? | 17:48 |
falktx|work | no | 17:48 |
falktx|work | 1 worker per plugin, if needed | 17:48 |
falktx|work | most plugins don't need it | 17:48 |
rgareus | falktx|work: in that case you would not need a lock for the work-queue (according to official specs anyway) | 17:49 |
falktx|work | I think there was some odd case I was worried about | 17:49 |
falktx|work | it's been some time since I worked on carla | 17:50
rgareus | falktx|work: does the mod-host support worker? | 17:50 |
falktx|work | yes | 17:50 |
falktx|work | hmm my lv2 ringbuffer does not expose the lock | 17:52 |
falktx|work | you can only put new things into the buffer (scoped lock), or copy data to a different object (also scoped lock) | 17:53 |
falktx|work | so I think that's good enough | 17:53 |
*** ricardocrudo has quit IRC | 17:54 | |
falktx|work | bbl | 17:54 |
*** falktx|work has quit IRC | 17:54 | |
drobilla | Hmmmmmmm... you need to call the state retrieve function from restore though... | 18:08 |
drobilla | Perhaps it is better if restore() still has no RT restrictions and can be slowish and expected to be called from another thread, just concurrently with run() for the new one. | 18:10 |
drobilla | So you retrieve your state and maybe do a bit of work and can shunt the result to run() via the worker. How you split that up is your choice but it makes no difference (the work part of the worker is pretty much redundant in this case, we just want the response mechanism, really) | 18:11 |
rgareus | drobilla: how to "shunt the result to run()" ? atomics ? | 18:11 |
drobilla | rgareus: "via the worker" | 18:12 |
rgareus | good | 18:12 |
drobilla | Though it might make more sense to just reuse some of the interface but skip the work() part | 18:12 |
rgareus | drobilla: so the work queue would need a lock -- to prevent corrupting the RB when scheduling from restore() and run() | 18:13 |
drobilla | Do the work in restore(), "schedule" the result to be delivered verbatim via work_response | 18:13 |
drobilla | rgareus: No. Different worker, passed as feature to restore() | 18:13 |
rgareus | drobilla: but identical work_response() | 18:13 |
drobilla | rgareus: Which a sane host would implement without all the unnecessary ring buffers and such anyway. | 18:13 |
drobilla | rgareus: Yeah | 18:13 |
drobilla | Not sure how weird using half the worker will be in practice until I try | 18:14 |
drobilla | But there seems to be no point (just PITA) to adding RT restrictions to restore() | 18:14 |
drobilla | So queueing things off to yet another thread to be processed then have that result queued to run() is just silly | 18:15 |
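Sketching this proposal: restore() receives its own worker schedule in its features array (reusing the existing LV2_Worker_Schedule struct and URI; none of this is in the spec as of this discussion), does the slow work inline, and "schedules" only the finished result, which the host delivers verbatim to work_response() in the run() context. prepare_new_state is hypothetical:

    #include <string.h>
    #include <lv2/lv2plug.in/ns/ext/state/state.h>
    #include <lv2/lv2plug.in/ns/ext/worker/worker.h>

    static LV2_State_Status
    restore(LV2_Handle                  instance,
            LV2_State_Retrieve_Function retrieve,
            LV2_State_Handle            handle,
            uint32_t                    flags,
            const LV2_Feature* const*   features)
    {
        /* Look for a schedule passed to restore() itself -- the proposed
           design, distinct from the one passed to instantiate(). */
        LV2_Worker_Schedule* sched = NULL;
        for (int i = 0; features && features[i]; ++i) {
            if (!strcmp(features[i]->URI, LV2_WORKER__schedule)) {
                sched = (LV2_Worker_Schedule*)features[i]->data;
            }
        }
        if (!sched) {
            return LV2_STATE_ERR_NO_FEATURE;  /* old stop-the-world path */
        }

        /* restore() is non-RT here, so do the slow work inline... */
        void* result = prepare_new_state(retrieve, handle);  /* hypothetical */

        /* ...and enqueue only the pointer; the host hands these bytes to
           work_response() in run() context, and work() is never involved. */
        return sched->schedule_work(sched->handle, sizeof(result), &result)
                   == LV2_WORKER_SUCCESS
                   ? LV2_STATE_SUCCESS
                   : LV2_STATE_ERR_UNKNOWN;
    }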
rgareus | drobilla: in reality that will make plugins a lot more complex | 18:15 |
rgareus | drobilla: it's theoretically possible to schedule work in both workers at the same time | 18:15 |
rgareus | drobilla: so a plugin would need to check for that | 18:15 |
rgareus | in which case you might just use one worker anyway | 18:16 |
drobilla | rgareus: I think the work() part needs to be skipped here one way or another, which avoids that problem | 18:17 |
rgareus | and the "work" routine in the plugin will in 99.9% - possibly even 100% - be identical for both cases. | 18:17 |
drobilla | Really we just want an enqueue_result(size, buf) to be called from restore(), and a way to get at that in the run context | 18:17 |
drobilla | Unless it would actually be nice for hosts to be able to call restore() in the audio thread | 18:18 |
drobilla | That requires retrieve() to be safe, and the plugin to not allocate or anything in retrieve() which seems a bit excessive | 18:18 |
rgareus | indeed | 18:19 |
drobilla | I don't think "multiple" workers in retrieve is an issue. If you're doing this, you must have some sort of type field in your payload which you dispatch on anyway | 18:19 |
drobilla | It might be confusing to use worker API things that work differently, though | 18:20 |
rgareus | drobilla: an Atom message would be nice. restore() could act like a plugin-GUI and tell run() about the change using a message | 18:20 |
rgareus | but that would require the DSP to have an atom port | 18:21 |
drobilla | rgareus: Indeed | 18:21 |
drobilla | run() has no access to the retrieve stuff though, so only works for big plugins that really just want a filename or whatever, unless we provided such a thing | 18:21 |
rgareus | which is probably fine. if you need fancy restore() you also need an Atom port. | 18:22 |
drobilla | anyway, bbl | 18:22 |
rgareus | which is probably already there in those cases anyway (file i/o) | 18:22 |
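The atom-port idea in sketch form: restore() forges a patch:Set message exactly as a UI would and delivers it to run() through the plugin's existing UI-to-DSP channel. The uris struct, forge member, and ring_write call are placeholders for whatever the plugin already has:

    #include <string.h>
    #include <lv2/lv2plug.in/ns/ext/atom/forge.h>
    #include <lv2/lv2plug.in/ns/ext/atom/util.h>

    static void
    announce_file_change(Plugin* self, const char* path)
    {
        uint8_t buf[1024];
        lv2_atom_forge_set_buffer(&self->forge, buf, sizeof(buf));

        /* Forge: [ a patch:Set ; patch:property <file> ; patch:value path ] */
        LV2_Atom_Forge_Frame frame;
        lv2_atom_forge_object(&self->forge, &frame, 0, self->uris.patch_Set);
        lv2_atom_forge_key(&self->forge, self->uris.patch_property);
        lv2_atom_forge_urid(&self->forge, self->uris.file_property);
        lv2_atom_forge_key(&self->forge, self->uris.patch_value);
        lv2_atom_forge_path(&self->forge, path, (uint32_t)strlen(path));
        lv2_atom_forge_pop(&self->forge, &frame);

        /* Same single-writer channel the UI uses (placeholder call);
           run() then applies it like any other incoming patch message. */
        ring_write(self->to_dsp, buf,
                   lv2_atom_total_size((const LV2_Atom*)buf));
    }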
drobilla | I guess the split might be useful since restore() may be called in a GUI thread, and work() can just take forever? | 18:23 |
drobilla | I dislike the complexity of two methods where one will do, though | 18:24 |
*** _FrnchFrgg_ has left #lv2 | 18:38 | |
*** unclechu has quit IRC | 20:01 | |
*** ricardocrudo has joined #lv2 | 20:21 | |
*** frinknet has joined #lv2 | 20:21 | |
*** rncbc has joined #lv2 | 20:37 | |
*** rncbc has quit IRC | 20:45 | |
*** rncbc has joined #lv2 | 20:45 | |
*** frinknet has quit IRC | 20:48 | |
*** rncbc has quit IRC | 20:52 | |
*** rncbc_ has joined #lv2 | 20:53 | |
*** rncbc_ is now known as rncbc | 20:54 | |
*** rncbc has quit IRC | 21:05 | |
*** rncbc has joined #lv2 | 21:07 | |
*** rncbc has quit IRC | 21:24 | |
*** rncbc has joined #lv2 | 21:25 | |
*** rncbc has quit IRC | 21:27 | |
*** Spark[01] has quit IRC | 21:31 | |
*** unclechu has joined #lv2 | 21:56 | |
*** unclechu has quit IRC | 21:56 | |
*** unclechu has joined #lv2 | 21:56 | |
*** Spark[01] has joined #lv2 | 21:57 | |
*** NickSB2 has quit IRC | 22:02 | |
*** NickSB2 has joined #lv2 | 22:05 | |
*** oofus_ has quit IRC | 22:19 | |
*** oofus has joined #lv2 | 22:20 | |
*** Spark[01] has quit IRC | 22:24 | |
*** Spark[01] has joined #lv2 | 22:33 | |
*** edogawa has joined #lv2 | 22:47 | |
*** ricardocrudo has quit IRC | 23:32 | |
*** edogawa has quit IRC | 23:50 |