| | | |
|---|---|---|
| author | Garrett D'Amore <garrett@damore.org> | 2018-05-09 17:21:27 -0700 |
| committer | Garrett D'Amore <garrett@damore.org> | 2018-05-14 17:09:20 -0700 |
| commit | 16b4c4019c7b7904de171c588ed8c72ca732d2cf (patch) | |
| tree | 9e5a8416470631cfb48f5a6ebdd4b16e4b1be3d6 /src/transport/inproc | |
| parent | e0beb13b066d27ce32347a1c18c9d441828dc553 (diff) | |
fixes #352 aio lock is burning hot
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy

This is a major refactoring of two significant parts of the code base,
which are closely interrelated.

First, the aio and taskq frameworks have undergone a number of
simplifications and improvements. We have ditched a few parts of the
internal API that weren't terribly useful but added a lot of complexity
(for example, tasks no longer support cancellation), and aio_schedule
now checks for cancellation and other "premature" completions. The aio
framework is also integrated more tightly with tasks, so that waiting
on an aio can devolve into just nni_task_wait(). We did have to add a
"task_prep()" step to prevent race conditions.
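
The race being closed here is the classic lost-completion one: a waiter
checks a "done" flag before the work has even been dispatched, and so
returns early. Below is a minimal, hypothetical sketch in plain pthreads
(the task struct and the task_prep/task_exec/task_wait names are
illustrative stand-ins, not the actual nng internals) of the "prep
before dispatch" idea:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
	pthread_mutex_t mtx;
	pthread_cond_t  cv;
	bool            busy; /* set by task_prep, cleared after the cb runs */
	void (*cb)(void *);
	void *arg;
} task;

static void
task_init(task *t, void (*cb)(void *), void *arg)
{
	pthread_mutex_init(&t->mtx, NULL);
	pthread_cond_init(&t->cv, NULL);
	t->busy = false;
	t->cb   = cb;
	t->arg  = arg;
}

/* Step 1: reserve the task on the submitting thread, *before* it is
 * handed off.  From this point on, task_wait() will block. */
static void
task_prep(task *t)
{
	pthread_mutex_lock(&t->mtx);
	t->busy = true;
	pthread_mutex_unlock(&t->mtx);
}

/* Step 2: run the callback (in real code, on a taskq thread), then
 * mark the task idle and wake any waiters. */
static void
task_exec(task *t)
{
	t->cb(t->arg);
	pthread_mutex_lock(&t->mtx);
	t->busy = false;
	pthread_cond_broadcast(&t->cv);
	pthread_mutex_unlock(&t->mtx);
}

/* Waiting on an aio can now devolve into just waiting on its task. */
static void
task_wait(task *t)
{
	pthread_mutex_lock(&t->mtx);
	while (t->busy) {
		pthread_cond_wait(&t->cv, &t->mtx);
	}
	pthread_mutex_unlock(&t->mtx);
}

static void *
runner(void *arg)
{
	task_exec((task *) arg);
	return (NULL);
}

static void
hello(void *arg)
{
	(void) arg;
	printf("task ran\n");
}

int
main(void)
{
	task      t;
	pthread_t thr;

	task_init(&t, hello, NULL);
	task_prep(&t); /* before dispatch: closes the wait/complete race */
	pthread_create(&thr, NULL, runner, &t);
	task_wait(&t); /* cannot return before hello() has run */
	pthread_join(&thr, NULL);
	return (0);
}
```

With this split, prep happens on the submitting thread before dispatch,
so a wait that races with execution always observes the task as busy
until the callback has actually completed. If task_exec() instead set
busy itself, a waiter arriving between dispatch and execution would see
busy == false and return early.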

Second, the entire POSIX poller framework has been simplified and made
more robust and more scalable. There were some fairly inherent race
conditions around the shutdown/close code, where we *thought* we were
synchronizing against the other thread but weren't doing so adequately.
With a cleaner design we have been able to tighten up the implementation
to remove these race conditions, while substantially reducing the chance
of lock contention, thereby improving scalability. The illumos poller
also got a performance boost from retrieving multiple events per poll.
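
On illumos, batched polling presumably maps onto the event-ports
interface, which can hand back many events per call. Here is a hedged,
standalone sketch (not the actual nng poller) that drains events with
port_getn() instead of retrieving them one at a time with port_get():

```c
#include <sys/types.h>
#include <port.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define EVBATCH 64

/* Collect up to EVBATCH ready events per wakeup. */
static void
poll_loop(int port)
{
	port_event_t evs[EVBATCH];
	uint_t       n;

	for (;;) {
		n = 1; /* value-result: block until at least one event */
		if (port_getn(port, evs, EVBATCH, &n, NULL) != 0) {
			perror("port_getn");
			return;
		}
		for (uint_t i = 0; i < n; i++) {
			/* Event-port associations are one-shot: a real
			 * handler must re-associate the fd if it is
			 * still interested in further events. */
			printf("fd %d ready\n", (int) evs[i].portev_object);
		}
	}
}

int
main(void)
{
	int port;

	if ((port = port_create()) < 0) {
		perror("port_create");
		return (1);
	}
	/* Example association: watch stdin for readability. */
	if (port_associate(
	        port, PORT_SOURCE_FD, STDIN_FILENO, POLLIN, NULL) != 0) {
		perror("port_associate");
		return (1);
	}
	poll_loop(port);
	(void) close(port);
	return (0);
}
```

The value-result n argument asks port_getn() to block until at least one
event is ready while still allowing up to EVBATCH events back in a single
wakeup, cutting the number of system calls (and poller lock round-trips)
per event on busy systems.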

In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, along with improved
overall reliability.

One area where we can still do better: only a single poller thread runs.
Scaling this out is a task that has to be done differently for each
poller, and carefully, to ensure that close conditions remain safe on
all pollers and that there is no chance of deadlock or livelock while
waiting for pfd finalizers.
Diffstat (limited to 'src/transport/inproc')
| -rw-r--r-- | src/transport/inproc/inproc.c | 14 |
1 file changed, 12 insertions, 2 deletions
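
The inproc portion of the change is small but shows the new contract
described above: nni_aio_schedule can now fail (for example, when the
aio has already been canceled or completed "prematurely"), so the
connect and accept paths below must check its return value and complete
the aio with that error rather than assuming scheduling always succeeds.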

```diff
diff --git a/src/transport/inproc/inproc.c b/src/transport/inproc/inproc.c
index 8bfb097e..0f159d3a 100644
--- a/src/transport/inproc/inproc.c
+++ b/src/transport/inproc/inproc.c
@@ -349,6 +349,7 @@ nni_inproc_ep_connect(void *arg, nni_aio *aio)
 {
 	nni_inproc_ep *ep = arg;
 	nni_inproc_ep *server;
+	int            rv;
 
 	if (nni_aio_begin(aio) != 0) {
 		return;
@@ -375,7 +376,11 @@ nni_inproc_ep_connect(void *arg, nni_aio *aio)
 	// We don't have to worry about the case where a zero timeout
 	// on connect was specified, as there is no option to specify
 	// that in the upper API.
-	nni_aio_schedule(aio, nni_inproc_ep_cancel, ep);
+	if ((rv = nni_aio_schedule(aio, nni_inproc_ep_cancel, ep)) != 0) {
+		nni_mtx_unlock(&nni_inproc.mx);
+		nni_aio_finish_error(aio, rv);
+		return;
+	}
 	nni_list_append(&server->clients, ep);
 	nni_aio_list_append(&ep->aios, aio);
 
@@ -407,6 +412,7 @@ static void
 nni_inproc_ep_accept(void *arg, nni_aio *aio)
 {
 	nni_inproc_ep *ep = arg;
+	int            rv;
 
 	if (nni_aio_begin(aio) != 0) {
 		return;
@@ -416,7 +422,11 @@ nni_inproc_ep_accept(void *arg, nni_aio *aio)
 
 	// We need not worry about the case where a non-blocking
 	// accept was tried -- there is no API to do such a thing.
-	nni_aio_schedule(aio, nni_inproc_ep_cancel, ep);
+	if ((rv = nni_aio_schedule(aio, nni_inproc_ep_cancel, ep)) != 0) {
+		nni_mtx_unlock(&nni_inproc.mx);
+		nni_aio_finish_error(aio, rv);
+		return;
+	}
 
 	// We are already on the master list of servers, thanks to bind.
 	// Insert us into pending server aios, and then run accept list.
```
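
Note that both error paths drop nni_inproc.mx before calling
nni_aio_finish_error; presumably this is because finishing an aio can
run its completion callback synchronously (see #426 above), and doing
so while holding the transport lock would risk deadlock or re-entrancy.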
