First of all, I want to thank you all for your answers. I will take everything
you wrote into consideration, and I hope I'll manage to fix the app and
release it under the GPL soon.
But there is still room for some clarification, so I will try to make clearer
what I am trying to achieve.
The application is a (wannabe) DJ-style app like Mixxx, VirtualDJ, etc., so it
needs to decode a portion of the song in realtime, apply gain, effects, etc.,
and pass it to JACK with as low latency as possible.
Currently, decoding happens in two reader threads (running at high priority),
which write the samples to two buffers. These two buffers get mixed and
written (in an even higher priority thread) to a third buffer, which is the
one that feeds JACK.
I am sure this design is flawed, but I would like more detail on what exactly
is wrong and what a suitable design would look like.
So, after reading your replies, I am wondering if this is the way to do it:
first, fill the two input buffers (non-realtime?);
then, in the mixer, mix these buffers into a ringbuffer (realtime);
and finally, read the ringbuffer from the JACK process callback (realtime).
Also, it needs an extra thread to decode the whole song, so I can get the
waveform, the BPM, etc., but I think this can be done at a lower priority.
And some side questions: where does the MIDI thread come in (does it need
a separate one)? And at what point should the effects (LADSPA/LV2) be
processed?
Thank you again for your time and your valuable replies. I am relatively new
to programming with realtime audio and threads and am still trying to get the
grasp of it, so your recommendations may well lead me to the desired result.
Linux-audio-dev mailing list