| author | Garrett D'Amore <garrett@damore.org> | 2018-05-09 17:21:27 -0700 |
|---|---|---|
| committer | Garrett D'Amore <garrett@damore.org> | 2018-05-14 17:09:20 -0700 |
| commit | 16b4c4019c7b7904de171c588ed8c72ca732d2cf (patch) | |
| tree | 9e5a8416470631cfb48f5a6ebdd4b16e4b1be3d6 /src/core/aio.h | |
| parent | e0beb13b066d27ce32347a1c18c9d441828dc553 (diff) | |
fixes #352 aio lock is burning hot
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy
This is a major refactoring of two significant parts of the code base,
which are closely interrelated.
First, the aio and taskq frameworks have undergone a number of simplifications
and improvements. We have ditched a few parts of the internal API (for
example tasks no longer support cancellation) that weren't terribly useful
but added a lot of complexity, and we've made aio_schedule something that
now checks for cancellation or other "premature" completions. The
aio framework now uses the tasks more tightly, so that aio wait can
devolve into just nni_task_wait(). We did have to add a "task_prep()"
step to prevent race conditions.
Second, the entire POSIX poller framework has been simplified and made
more robust and more scalable. There were some fairly inherent race
conditions around the shutdown/close code, where we *thought* we were
synchronizing against the other thread, but weren't doing so adequately.
With a cleaner design, we've been able to tighten up the implementation
to remove these race conditions, while substantially reducing the chance
for lock contention, thereby improving scalability. The illumos poller
also got a performance boost by polling for multiple events.
In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, in addition to overall
improved reliability.
One area where we can still do better is that only a single poller thread
runs. Scaling this out is a task that has to be done differently for each
poller, and carefully, to ensure that close conditions are safe on all
pollers and that there is no chance of deadlock/livelock waiting for pfd
finalizers.
Diffstat (limited to 'src/core/aio.h')
| -rw-r--r-- | src/core/aio.h | 15 |
1 file changed, 7 insertions, 8 deletions
```diff
diff --git a/src/core/aio.h b/src/core/aio.h
index 9b7ac46f..2ed0fb5b 100644
--- a/src/core/aio.h
+++ b/src/core/aio.h
@@ -146,14 +146,13 @@ extern void nni_aio_bump_count(nni_aio *, size_t);
 
 // nni_aio_schedule indicates that the AIO has begun, and is scheduled for
 // asychronous completion. This also starts the expiration timer. Note that
-// prior to this, the aio is uncancellable.
-extern void nni_aio_schedule(nni_aio *, nni_aio_cancelfn, void *);
-
-// nni_aio_schedule_verify is like nni_aio_schedule, except that if the
-// operation has been run with a zero time (NNG_FLAG_NONBLOCK), then it
-// returns NNG_ETIMEDOUT. This is done to permit bypassing scheduling
-// if the operation could not be immediately completed.
-extern int nni_aio_schedule_verify(nni_aio *, nni_aio_cancelfn, void *);
+// prior to this, the aio is uncancellable. If the operation has a zero
+// timeout (NNG_FLAG_NONBLOCK) then NNG_ETIMEDOUT is returned. If the
+// operation has already been canceled, or should not be run, then an error
+// is returned. (In that case the caller should probably either return an
+// error to its caller, or possibly cause an asynchronous error by calling
+// nni_aio_finish_error on this aio.)
+extern int nni_aio_schedule(nni_aio *, nni_aio_cancelfn, void *);
 
 extern void nni_sleep_aio(nni_duration, nni_aio *);
```
