Friday, 2016-07-29

16:42 <rgareus> falktx|work, drobilla et al: can a plugin run() while state restore is called?
16:46 <drobilla> rgareus: Nope
16:47 <rgareus> drobilla: OK, so I'll need to fix Ardour.
16:47 <drobilla> An RT or thread-safe restore mechanism would certainly be nice, but we went with the simple/safe thing
16:47 <rgareus> drobilla: worker thread is fine here
16:47 <rgareus> but more work for the plugin
16:47 <drobilla> Yeah, I vaguely recall there being an awful lot of incorrectness in implementations out there (some of it my fault)
16:51 <drobilla> Exposing some kind of state object the host can get in another thread, then apply in the audio one (in RT, probably by providing a garbage collector facility or promising to call a free function later) would have been a nice design.  More onerous in the common case, though
16:52 <rgareus> drobilla: so schedule_work() is safe to call from state restore?
16:53 <drobilla> um
16:53 <drobilla> Is that even possible?
16:54 <drobilla> Pretty sure no.  What would be the point in scheduling non-RT work from a function with no RT constraints and a guarantee that nothing else is running concurrently anyway?
16:54 <drobilla> restore is basically like instantiate
16:57 <rgareus> drobilla: to keep restore fast
16:57 <rgareus> drobilla: if restore() blocks run(), it should return quickly
16:57 <rgareus> and rather schedule work in the background
16:58 <rgareus> if restore() and run() are mutually exclusive, calling schedule_work() should work
16:59 <drobilla> rgareus: This is not currently possible.  We don't have dropout-free state restoration, unfortunately
16:59 <drobilla> (or anything close, if restore takes forever)
17:00 <drobilla> rgareus: I think it would be relatively straightforward to add if you're into that, though
17:01 <falktx|work> afaik state save can happen at any time, but state restore cannot happen while processing
17:02 <drobilla> Not quite "any time", but close enough (basically just not concurrently with anything that defines itself to not be allowed to run concurrently with anything else, e.g. instantiation stuff)
17:04 <drobilla> Hm, we could use the features parameters to get actual click-free restore
17:04 <drobilla> I see 3 feasible options:
17:04 <rgareus> drobilla: in Ardour it just works, since restore writes to a ringbuffer.
17:05 <drobilla> rgareus: You mean the plugin does?  And applies it in run later?
17:05 <rgareus> drobilla: jalv_worker_schedule is also safe -- as long as there's a single writer
17:06 <drobilla> oh, "it" being calling the worker in restore
17:06 <rgareus> yes
17:06 <rgareus> drobilla: restore() { prepare a background instance; schedule work to instantiate it; }
17:06 <drobilla> Well, the single-writer thing is the issue there, but since run() is the writer and they aren't allowed to be run concurrently, I guess that's safe
17:06 <rgareus> run() { continues to do its thing }  while the worker works
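The pattern sketched in this exchange — restore() does only the cheap setup and hands the slow initialization to the worker while run() keeps going — might look roughly like the following C sketch. All types and function names here are illustrative stand-ins, not the real LV2 worker API or convo.lv2's actual code:

```c
/* Sketch of "restore() prepares, worker initializes, run() continues".
 * Illustrative stand-ins only: a real plugin would call
 * LV2_Worker_Schedule's schedule_work() instead of invoking work()
 * directly, and run() would poll for the swapped instance. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char config[64];  /* cheap part: filled in by restore()        */
    int  initialized; /* expensive part: done by the worker thread */
} Instance;

typedef struct {
    Instance* active;  /* the instance run() is using          */
    Instance* pending; /* being prepared in the background     */
} Plugin;

/* restore(): fast -- allocate and configure, then hand off. */
static void restore(Plugin* p, const char* state) {
    p->pending = (Instance*)calloc(1, sizeof(Instance));
    strncpy(p->pending->config, state, sizeof(p->pending->config) - 1);
    /* real code: schedule->schedule_work(handle, ...) here */
}

/* work(): slow, runs in the worker thread. */
static void work(Plugin* p) {
    p->pending->initialized = 1; /* stands in for file I/O, resampling, ... */
}

/* work_response(): run() context -- swap instances, retire the old one. */
static Instance* work_response(Plugin* p) {
    Instance* old = p->active;
    p->active     = p->pending;
    p->pending    = NULL;
    return old; /* real code: schedule another job to free it */
}
```

run() never blocks: it keeps using `active` until `work_response()` swaps in the fully initialized instance.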
17:07 <drobilla> Using the worker for this seems a bit clunky to me, but maybe
17:08 <rgareus> in convo.lv2, restore loaded a file, resampled it, and then switched instances.  Due to a bug in Ardour, restore() is called in the GUI thread -- concurrent w/ run()
17:08 <rgareus> and that resulted in crashes...
17:08 <drobilla> I suppose that does pretty much provide a facility to load your state as a separate object from your instance and get it in run() whenever it's ready
17:09 <drobilla> We would just need a feature or a different state:interface predicate or something to say this is okay
17:10 <rgareus> drobilla: yes, well, in work_response(), which is called in RT context _after_ run.
17:10 <drobilla> Still, making restore() (non-RT anyway) itself fast by shunting the work to another non-RT slow thread is pretty roundabout
17:11 <rgareus> drobilla: the "blocking" part is key
17:11 <rgareus> restore() blocks run()
17:11 <rgareus> if restore can take 10 seconds, that's bad.
17:11 <drobilla> Sure
17:12 <rgareus> and loading huge sampler banks could even take more than 10 sec
17:12 <drobilla> I'm just thinking, using the worker means restore() could be actually RT and called in process().  That's the whole point of the worker.
17:12 <falktx|work> can't you do that yourself in the plugin?
17:12 <drobilla> Something in here is the right thing, and it's incredibly close to strictly click-free restore, but I'm not quite sure what it is
17:12 <falktx|work> if the host called restore, set a flag for later changing state
17:14 <drobilla> One reason I don't like almost-there solutions is that stopping run, aside from materially sucking for users, is annoying to implement in hosts (as evidenced by Ardour not doing it...)
17:14 <drobilla> If the plugin is going to pay the price of using worker mechanisms that make this no longer necessary, might as well go all out
17:14 <rgareus> drobilla: it'd be easy in Ardour: just take the ProcessLock
17:14 <rgareus> drobilla: but that'd stop _all_ processing.
17:15 <drobilla> rgareus: Yeah, that's not... great
17:15 <rgareus> we do that for session load...
17:16 <rgareus> it does not block JACK, only Ardour processing
17:16 <rgareus> still not great
17:17 <drobilla> Well, sure, but any other time... dropouts are really the ultimate thing that makes audio software feel like garbage.  No quality.
17:19 <rgareus> I've worked around this in convo.lv2 now.  It's fine with concurrent run() and restore()
17:19 <drobilla> How?
17:19 <rgareus> it could potentially fall over if restore() and a patch-set message in run() arrive concurrently
17:20 <falktx|work> in zyn-dpf I make it silent while restore happens
17:20 <rgareus> drobilla: schedule work from restore()
17:20 <drobilla> rgareus: This is probably not okay if it's the worker passed to instantiate()
17:21 <rgareus> drobilla: how so?
17:21 <drobilla> rgareus: schedule() is explicitly to be called from run() context things
17:22 <drobilla> So it's okay if the host follows the rules -- which they aren't, that being the problem :)
17:22 <falktx|work> we need an LV2 validation tool, not just for meta-data but for realtime stuff
17:22 <drobilla> yep
17:23 <rgareus> drobilla: in reality, I can't see how I would implement a work queue where schedule_work() would only be valid in one thread.
17:23 <rgareus> drobilla: as long as it's not called concurrently.
17:24 <falktx|work> you can do it on purpose and make it null after run :P
17:24 <rgareus> convo.lv2 is actually fine in that respect as well, since the background instance is a singleton.
17:24 <drobilla> Okay, what if we define a different pred/feature/whatever for restore that says "this function may be run concurrently with any other function, including run(), must be RT-safe, and must use the passed worker feature to schedule any restore work.  In practice this essentially means it may not modify the plugin instance, but only have the effect of scheduling work"
17:25 <drobilla> Then the usual work() and work_response() mechanism will get your result to the run context eventually, at which point you apply it.
17:25 <drobilla> Garbage is the problem.
17:26 <rgareus> falktx|work: not directly; you can't change the pointers that the plugin knows... but sure, you could make the host fail intentionally.
17:26 <rgareus> in any case, for now it's a workaround for b0rked hosts
17:26 <drobilla> I guess you can just schedule yet more work to free things or whatever
17:27 <rgareus> drobilla: in convo.lv2 it's a 3-step process.
17:27 <drobilla> rgareus: Yeah, given the no-concurrency it's probably not really a problem.  Just seems obviously stupid if you try to write down the rules.  Enforced dropouts for no reason.
17:28 <rgareus> work { allocate instance... }  work_response { swap instances; schedule free; }  work { free instance }
17:28 <drobilla> Right
17:29 <drobilla> I thiiiink this is probably a great idea
17:29 <drobilla> In terms of rules I'm leaning towards two options: the current situation (restore() stops the world; time and concurrency and everything else are irrelevant), and an alternate interface (same API) with the above-mentioned very strict rules (don't screw with your instance whatsoever)
17:30 <drobilla> eg-sampler is a simple case to try it out on
17:30 <drobilla> Where the latter is truly dropout-free, no need for the host to interrupt run calls
17:31 <drobilla> (Not that plugins are likely to be able to apply state without clicks, but in terms of the interface's power, anyway)
17:33 <rgareus> in convo.lv2 NOW:  restore { allocate & configure instance (fast); schedule work }  work { initialize instance (slow) }  work_response { swap instances; schedule free; }  work { free instance }
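In this flow, both the slow initialization and the later free go through the same work() callback, so the plugin must tag each scheduled payload with a type it can dispatch on. A minimal C sketch of that dispatch, with a hypothetical Job struct that is not convo.lv2's actual code:

```c
/* Sketch of dispatching "initialize" and "free" jobs through one
 * work() callback via a type field.  Job and JobType are illustrative;
 * a real LV2 plugin receives the payload as (size, data) from the host
 * and returns an LV2_Worker_Status. */
#include <stdlib.h>

typedef enum { JOB_INIT, JOB_FREE } JobType;

typedef struct {
    JobType type;
    void*   instance; /* background instance to initialize or retire */
} Job;

/* work(): non-RT worker thread.  Dispatch on the payload's type. */
static void work(const Job* job) {
    switch (job->type) {
    case JOB_INIT:
        /* stands in for the slow part: load file, resample, ... */
        *(int*)job->instance = 1;
        break;
    case JOB_FREE:
        /* retire the old instance that work_response() swapped out */
        free(job->instance);
        break;
    }
}
```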
17:34 <rgareus> _FrnchFrgg_ here discovered the issue by switching presets in Ardour rapidly.
17:35 <drobilla> Right
17:35 <drobilla> We'll need to fix that bug in Ardour anyway
17:35 <rgareus> ideally we'd only block the given plugin instance
17:35 <rgareus> with parallel processing, other stuff could continue
17:36 <rgareus> and hopefully no x-runs.
17:36 <drobilla> Well, you don't need to literally block
17:36 <rgareus> the bad part: if it's not fast enough, it will x-run and click
17:36 <drobilla> Set a don't-actually-run flag or some such
17:36 <drobilla> Contiguous time guarantees get thrown out then, though
17:36 <drobilla> I am more interested in defining the new unshitty way than that
17:37 <drobilla> ... actually, both
17:37 <drobilla> heh
17:37 <rgareus> drobilla: personally I think the best would be to change the spec and require restore() to be RT-safe, and allow calling the worker from there
17:37 <drobilla> rgareus: I need to run out and do some irritating bureaucratic things, I'll try to amend the spec later
17:37 <drobilla> rgareus: We can't change-change it
17:38 <drobilla> rgareus: I'll probably define a state:realTimeInterface predicate or something for providing the interface with those rules
17:38 <rgareus> drobilla: and deprecate the current state interface?
17:39 <drobilla> rgareus: I dunno.  Probably not actively.  If having only that is the best, host authors can not support the other one and force plugin authors.  Fight amongst yourselves :D
17:39 <drobilla> rgareus: I'll have a better idea of the burden when I implement it
17:39 <drobilla> Using the worker is a bit more annoying in some ways, but less in others, and we have that existing mechanism that does all the things, which is nice
17:40 <rgareus> falktx|work: does Carla currently allow calling schedule_work() from restore()?
17:40 <drobilla> The zero-thread-restrictions rule lets hosts be clever and not actually use all the unnecessary ringbuffers.  You could call restore() in some other thread and just immediately call plugin.work() there
17:41 <falktx|work> rgareus: no idea tbh. undefined behaviour
17:41 <rgareus> falktx|work: do you use a ringbuffer for the work queue (like jalv)?
17:41 <falktx|work> yes
17:42 <rgareus> falktx|work: then it should be fine, I suppose
17:42 <falktx|work> but if offline, it triggers work() right away
17:42 <falktx|work> offline/freewheel
17:42 <rgareus> right
17:42 <drobilla> oh yeah, this gets us sample-accurate / dropout-free state restore
17:42 <falktx|work> rgareus: hmm, actually Carla handles work scheduling from any thread
17:43 <drobilla> People automating a preset load every 2ms in 4, 3...
17:43 <rgareus> which in convo.lv2's case would do what it did before (initialize instance in restore())
17:43 <falktx|work> rgareus: there's a lock around the atom ringbuffer used for those events
17:43 <drobilla> falktx|work: That'd make schedule non-RT though?
17:43 <rgareus> falktx|work: that's even better than jalv/Ardour
17:44 <rgareus> falktx|work: so in your case there'll be no concurrency issues ever if it's called from another thread
17:44 <drobilla> Hardly!
17:44 <rgareus> as long as there's no contention.
17:44 <falktx|work> it's non-RT yes, but for a short time
17:44 <drobilla> Clearly our definitions of "better" differ dramatically :P
17:44 <rgareus> :)
17:44 <falktx|work> the non-RT thread allocates a same-size buffer and copies over the data
17:45 <rgareus> well, a lock-free ringbuffer -- with a lock around it -- is indeed odd
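The single-writer constraint behind this exchange comes from how a lock-free single-producer/single-consumer ringbuffer works: exactly one thread may advance the write index. The following minimal sketch (illustrative only, not jalv's or Carla's implementation) shows the non-atomic index update that two concurrent writers, e.g. run() and restore(), would race on:

```c
/* Minimal SPSC ringbuffer sketch.  Safe only with one writer and one
 * reader; the write index update below is the race a second writer
 * would lose.  Capacity must be a power of two for the index masking. */
#include <stdint.h>

#define RB_SIZE 256u /* power of two */

typedef struct {
    uint8_t  buf[RB_SIZE];
    uint32_t write; /* advanced only by the single producer */
    uint32_t read;  /* advanced only by the single consumer */
} Ring;

/* Returns 0 on success, -1 if there is not enough free space. */
static int rb_write(Ring* rb, const uint8_t* data, uint32_t len) {
    if (RB_SIZE - (rb->write - rb->read) < len)
        return -1;
    for (uint32_t i = 0; i < len; ++i)
        rb->buf[(rb->write + i) & (RB_SIZE - 1)] = data[i];
    rb->write += len; /* a second concurrent writer would race here */
    return 0;
}

/* Returns 0 on success, -1 if fewer than len bytes are available. */
static int rb_read(Ring* rb, uint8_t* out, uint32_t len) {
    if (rb->write - rb->read < len)
        return -1;
    for (uint32_t i = 0; i < len; ++i)
        out[i] = rb->buf[(rb->read + i) & (RB_SIZE - 1)];
    rb->read += len;
    return 0;
}
```

A production version would also need atomic/acquire-release accesses to the indices; wrapping the whole thing in a mutex, as described above for Carla, sidesteps that at the cost of making the writer non-RT.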
17:46 <falktx|work> this ringbuffer was for single-master, single-client use
17:47 <falktx|work> I re-used it for LV2.  Things got complicated with UI events and the worker
17:47 <drobilla> IIRC jalv is incorrect in the same way Ardour is, though
17:47 <drobilla> There's something wrong about state, anyway, and I think that was it
17:47 <falktx|work> hmm, but my worker does the non-RT stuff on the idle thread
17:48 <falktx|work> too many pending changes might be bad
17:48 <rgareus> falktx|work: so one worker for _all_ plugins?
17:48 <falktx|work> no
17:48 <falktx|work> 1 worker per plugin, if needed
17:48 <falktx|work> most plugins don't need it
17:49 <rgareus> falktx|work: in that case you would not need a lock for the work queue (according to the official spec, anyway)
17:49 <falktx|work> I think there was some odd case I was worried about
17:50 <falktx|work> it's been some time since I worked on Carla
17:50 <rgareus> falktx|work: does mod-host support the worker?
17:50 <falktx|work> yes
17:52 <falktx|work> hmm, my LV2 ringbuffer does not expose the lock
17:53 <falktx|work> you can only put new things into the buffer (scoped lock), or copy data to a different object (also scoped lock)
17:53 <falktx|work> so I think that's good enough
17:54 <falktx|work> bbl
18:08 <drobilla> Hmmmmmmm... you need to call the state retrieve function from restore, though...
18:10 <drobilla> Perhaps it is better if restore() still has no RT restrictions and can be slowish and expected to be called from another thread, just concurrently with run() for the new one.
18:11 <drobilla> So you retrieve your state and maybe do a bit of work, and can shunt the result to run() via the worker.  How you split that up is your choice, but it makes no difference (the work part of the worker is pretty much redundant in this case; we just want the response mechanism, really)
18:11 <rgareus> drobilla: how to "shunt the result to run()"?  atomics?
18:12 <drobilla> rgareus: "via the worker"
18:12 <rgareus> good
18:12 <drobilla> Though it might make more sense to just reuse some of the interface but skip the work() part
18:13 <rgareus> drobilla: so the work queue would need a lock -- to prevent corrupting the RB when scheduling from restore() and run()
18:13 <drobilla> Do the work in restore(), "schedule" the result to be delivered verbatim via work_response
18:13 <drobilla> rgareus: No.  Different worker, passed as a feature to restore()
18:13 <rgareus> drobilla: but identical work_response()
18:13 <drobilla> rgareus: Which a sane host would implement without all the unnecessary ring buffers and such anyway.
18:13 <drobilla> rgareus: Yeah
18:14 <drobilla> Not sure how weird using half the worker will be in practice until I try
18:14 <drobilla> But there seems to be no point (just PITA) in adding RT restrictions to restore()
18:15 <drobilla> So queueing things off to yet another thread to be processed, then having that result queued to run(), is just silly
18:15 <rgareus> drobilla: in reality that will make plugins a lot more complex
18:15 <rgareus> drobilla: it's theoretically possible to schedule work in both workers at the same time
18:15 <rgareus> drobilla: so a plugin would need to check for that
18:16 <rgareus> in which case you might as well just use one worker anyway
18:17 <drobilla> rgareus: I think the work() part needs to be skipped here one way or another, which avoids that problem
18:17 <rgareus> and the "work" routine in the plugin will in 99.9% -- possibly even 100% -- of cases be identical for both
18:17 <drobilla> Really we just want an enqueue_result(size, buf) to be called from restore(), and a way to get at that in the run context
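The enqueue_result(size, buf) idea proposed here could be sketched as a single-slot handoff: restore() (non-RT) finishes the slow work itself and queues only the finished result, which run() later picks up much as it would a work_response. Every name below is a hypothetical illustration of the proposal, not real LV2 API:

```c
/* Hypothetical sketch of an enqueue_result()-style handoff from
 * restore() to run().  Single slot, single writer (restore), single
 * reader (run); the "ready" flag would need release/acquire atomics
 * in real code. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    void*  buf;
    size_t size;
} Result;

typedef struct {
    Result pending;
    int    ready;
} ResultQueue;

/* Called from restore() after the slow work is done. */
static int enqueue_result(ResultQueue* q, const void* buf, size_t size) {
    if (q->ready)
        return -1;            /* previous result not yet consumed */
    q->pending.buf  = malloc(size);
    q->pending.size = size;
    memcpy(q->pending.buf, buf, size);
    q->ready = 1;             /* real code: atomic release store */
    return 0;
}

/* Called at the top of run(); delivers like work_response() would.
 * The caller owns out->buf afterwards and must free it off the RT
 * thread (e.g. via another scheduled job). */
static int dequeue_result(ResultQueue* q, Result* out) {
    if (!q->ready)
        return -1;
    *out = q->pending;
    q->ready = 0;
    return 0;
}
```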
18:18 <drobilla> Unless it would actually be nice for hosts to be able to call restore() in the audio thread
18:18 <drobilla> That requires retrieve() to be safe, and the plugin to not allocate or anything in retrieve(), which seems a bit excessive
18:19 <rgareus> indeed
18:19 <drobilla> I don't think "multiple" workers in retrieve is an issue.  If you're doing this, you must have some sort of type field in your payload which you dispatch on anyway
18:20 <drobilla> It might be confusing to use worker API things that work differently, though
18:20 <rgareus> drobilla: an Atom message would be nice.  restore() could act like a plugin GUI and tell run() about the change using a message
18:21 <rgareus> but that would require the DSP to have an atom port
18:21 <drobilla> rgareus: Indeed
18:21 <drobilla> run() has no access to the retrieve stuff though, so that only works for big plugins that really just want a filename or whatever, unless we provided such a thing
18:22 <rgareus> which is probably fine.  If you need fancy restore(), you also need an Atom port.
18:22 <drobilla> anyway, bbl
18:22 <rgareus> which is probably already there in those cases anyway (file I/O)
18:23 <drobilla> I guess the split might be useful since restore() may be called in a GUI thread, and work() can just take forever?
18:24 <drobilla> I dislike the complexity of two methods where one will do, though

Generated by irclog2html.py 2.13.0 by Marius Gedminas - find it at mg.pov.lt!