Re: [LAD] Pipes vs. Message Queues

To: <clemens@...>
Cc: <linux-audio-dev@...>
Date: Friday, November 25, 2011 - 2:00 pm

> From: clemens@ladisch.de
>
> The difference between pipes and message queues is that the latter are
> typically used for synchronization, so it's possible that the kernel
> tries to optimize for this by doing some scheduling for the receiving
> process.

Not sure about that. The CPU time (95%) was all in the kernel, not in the
process itself, so any improvement to how the process is scheduled would only
translate into a small percentage difference. Isn't it more likely that the
pipe code is using an inefficient kernel lock on the pipe to ensure it is
thread safe? Please don't misunderstand my 'not sure about': I am relieved to
say I am not a kernel programmer, but understanding these kinds of limitations
is interesting as it bears directly on application implementation (see below).

> > The results with 1M messages had wild variance with SCHED_FIFO,

Dave can comment on what he actually wanted to achieve; I was interested in
whether the results could be shown to be general. I take your points on the
use of SCHED_FIFO, but there is still some weirdness.

> It's no surprise that it doesn't work well

It does work very well, just not with piped messages.

> when you have two threads that both want to grab 100 % of the CPU

My system does have 200% available though: it is dual core, and the question
I raised was why there is a scheduling problem between the two separate
threads with pipes when it can be demonstrated that there is no real need for
such contention.
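
Something like the following is what I have in mind (a rough sketch, not
Dave's actual benchmark code): pin each end of the pipe to its own core
before applying SCHED_FIFO, so on a dual core box the two threads should
never have to compete for the same CPU at all:

/* Rough sketch, not Dave's code: put the sender and receiver threads on
 * separate cores and only then request SCHED_FIFO.  On a dual core box
 * the two ends of the pipe then have no reason to fight over one CPU. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static int pin_rt_thread(pthread_t thread, int cpu, int priority)
{
    cpu_set_t set;
    struct sched_param sp = { .sched_priority = priority };

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (pthread_setaffinity_np(thread, sizeof(set), &set) != 0)
        return -1;
    /* Needs root or a suitable rtprio limit in limits.conf. */
    return pthread_setschedparam(thread, SCHED_FIFO, &sp);
}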

Perhaps I should revisit another project I was working on, a syslog event
correlator: it used multiple threads to scale to >1M syslog messages per
second (big installation). I was testing it with socketpair()s and other
mechanisms. I would be interested to know whether scheduler changes affect it
too.
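
For context, the plumbing there was roughly along these lines (a simplified
sketch from memory, not the real correlator code): each pair of threads talks
over a socketpair() rather than a pipe, which also gives a bidirectional
channel:

/* Simplified sketch (from memory, not the real correlator code): pass a
 * message between the two ends of a socketpair().  In the correlator each
 * worker thread owned one end of such a pair. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    /* One end writes a "syslog" line, the other reads it back. */
    write(sv[0], "test event\n", 11);
    ssize_t n = read(sv[1], buf, sizeof(buf));
    printf("received %zd bytes\n", n);

    close(sv[0]);
    close(sv[1]);
    return 0;
}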

I actually quite like your idea of shared memory - dump a ring buffer over
that and it could give interesting IPC. I am not going to test it here as it
would be a significant change to Dave's code, but on the Intel platform it
could give some very high performance without any recourse to the kernel. The
event correlator would not benefit from shmem since it was threaded, not
multi-process.
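
To make the shared memory idea concrete, it would look roughly like this
(only a sketch to show the shape of it; a real single-producer/single-consumer
ring needs careful memory ordering, and I have left the consumer side out):

/* Sketch only: a single-producer/single-consumer ring buffer living in a
 * POSIX shared memory segment, so two processes can pass messages without
 * entering the kernel for every message. */
#include <fcntl.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_SIZE 4096                  /* must be a power of two */

struct ring {
    _Atomic uint32_t head;              /* advanced by the producer */
    _Atomic uint32_t tail;              /* advanced by the consumer */
    char data[RING_SIZE];
};

static struct ring *ring_map(const char *name)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct ring)) < 0) {
        close(fd);
        return NULL;
    }
    struct ring *r = mmap(NULL, sizeof(struct ring),
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return r == MAP_FAILED ? NULL : r;
}

/* Producer side: returns 0 if the ring is full.  Byte-at-a-time copy just
 * to keep the wrap-around handling obvious. */
static int ring_put(struct ring *r, const char *msg, uint32_t len)
{
    uint32_t head = atomic_load(&r->head);
    uint32_t tail = atomic_load(&r->tail);

    if (RING_SIZE - (head - tail) < len)
        return 0;
    for (uint32_t i = 0; i < len; i++)
        r->data[(head + i) & (RING_SIZE - 1)] = msg[i];
    atomic_store(&r->head, head + len);
    return 1;
}

With head and tail kept as free-running counters, full versus empty falls out
of the subtraction and there is no need for a spare slot.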

Kind regards, nick

Messages in current thread:
[LAD] Pipes vs. Message Queues, David Robillard, (Fri Nov 25, 12:10 am)
Re: [LAD] Pipes vs. Message Queues, Nick Copeland, (Fri Nov 25, 11:07 am)
Re: [LAD] Pipes vs. Message Queues, Clemens Ladisch, (Fri Nov 25, 12:33 pm)
Re: [LAD] Pipes vs. Message Queues, David Robillard, (Fri Nov 25, 8:51 pm)
Re: [LAD] Pipes vs. Message Queues, Nick Copeland, (Fri Nov 25, 2:00 pm)
Re: [LAD] Pipes vs. Message Queues, Nick Copeland, (Fri Nov 25, 2:21 pm)
Re: [LAD] Pipes vs. Message Queues, David Robillard, (Sat Nov 26, 12:00 am)