> From: Paul Davis
> VST3 allows the GUI to run in a different process?
" The design of VST 3 suggests a complete separation of processor and edit
controller by implementing two components. Splitting up an effect into these
two parts requires some extra efforts for an implementation of course.
But this separation enables the host to run each component in a different
context. It can even run them on different computers. Another benefit is
that parameter changes can be separated when it comes to automation. While
for processing these changes need to be transmitted in a sample accurate
way, the GUI part can be updated with a much lower frequency and it can be
shifted by the amount that results from any delay compensation or other [...]"
> > The host needs to see every parameter tweak. It needs to be between the
Yeah. It's important to realise that at any instant 3 entities hold a copy
of the parameter's value:
- The Host.
- The audio processor part of the plugin.
- The GUI part.
A parameter change can come from several sources:
- The GUI.
- The Host's automation playback.
- A MIDI controller.
- Sometimes the Audio processor (e.g. VU Meter).
If several of these are happening at once, some central entity needs to give
one priority. For example, if a parameter/knob is moving due to automation
and you click that control, the automation needs to relinquish control
until you release the mouse. The host is the best place for this logic.
Think of the host as holding the parameter, the GUI and Audio processor as
'listeners'. Or the host's copy of the parameter as the 'model' and the GUI
and audio processor as 'views' (Model-View-Controller pattern).
Linux-audio-dev mailing list