| field | value | date |
|---|---|---|
| author | Garrett D'Amore <garrett@damore.org> | 2020-01-05 11:16:03 -0800 |
| committer | Garrett D'Amore <garrett@damore.org> | 2020-01-05 13:22:32 -0800 |
| commit | 1eaf9e86a8f54d77d6f392829d1b859c94965329 (patch) | |
| tree | 2efa5ea0befd760b9011989639f9572a58a55f03 /src/platform/posix/posix_pollq_kqueue.c | |
| parent | 36ff88911f8c4a0859457b0fc511333965163c82 (diff) | |
| download | nng-1eaf9e86a8f54d77d6f392829d1b859c94965329.tar.gz nng-1eaf9e86a8f54d77d6f392829d1b859c94965329.tar.bz2 nng-1eaf9e86a8f54d77d6f392829d1b859c94965329.zip | |
fixes #1112 POSIX pollq finalizers could be simpler
We reap the connections when closing, to ensure that the cleanup is
done outside the pollq thread. This also reduces pressure on the
pollq, we think. More importantly, it eliminates some complex code
that was meant to avoid deadlocks but ultimately created other
use-after-free hazards. This work is an enabler for further
simplifications in the aio/task logic. While here, we also converted some
potentially racy locking of the dialers and reference counts to simpler
lock-free reference counting.
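
The "lock-free reference counting" mentioned above is, in general terms, an atomic counter: taking a reference increments it, dropping one decrements it, and the object is freed on the transition to zero, so no mutex is needed around hold/release. The sketch below illustrates that general pattern with C11 atomics; the names (`obj_t`, `obj_hold`, `obj_rele`) are hypothetical and are not nng's actual refcount API.

```c
// Hypothetical sketch of lock-free reference counting with C11 atomics.
// Illustrative only; not the nng implementation.
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
	atomic_int refcnt;
	// ... object state would live here ...
} obj_t;

static obj_t *
obj_alloc(void)
{
	obj_t *o = calloc(1, sizeof(*o));
	if (o == NULL) {
		abort();
	}
	atomic_init(&o->refcnt, 1); // creator holds the initial reference
	return (o);
}

static void
obj_hold(obj_t *o)
{
	// Relaxed ordering suffices for acquiring an additional reference.
	atomic_fetch_add_explicit(&o->refcnt, 1, memory_order_relaxed);
}

static void
obj_rele(obj_t *o)
{
	// Release ordering on the decrement, acquire fence before the free,
	// so prior writes to the object are visible to the freeing thread.
	if (atomic_fetch_sub_explicit(&o->refcnt, 1, memory_order_release) == 1) {
		atomic_thread_fence(memory_order_acquire);
		free(o);
	}
}

int
main(void)
{
	obj_t *o = obj_alloc();
	obj_hold(o); // e.g. handed to another subsystem
	obj_rele(o); // that subsystem is done with it
	obj_rele(o); // last reference frees the object
	printf("done\n");
	return (0);
}
```
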
Diffstat (limited to 'src/platform/posix/posix_pollq_kqueue.c')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | src/platform/posix/posix_pollq_kqueue.c | 34 |

1 file changed, 18 insertions, 16 deletions
```diff
diff --git a/src/platform/posix/posix_pollq_kqueue.c b/src/platform/posix/posix_pollq_kqueue.c
index 72d306c7..299479ab 100644
--- a/src/platform/posix/posix_pollq_kqueue.c
+++ b/src/platform/posix/posix_pollq_kqueue.c
@@ -1,5 +1,5 @@
 //
-// Copyright 2019 Staysail Systems, Inc. <info@staysail.tech>
+// Copyright 2020 Staysail Systems, Inc. <info@staysail.tech>
 // Copyright 2018 Capitar IT Group BV <info@capitar.com>
 // Copyright 2018 Liam Staskawicz <liam@stask.net>
 //
@@ -124,22 +124,24 @@ nni_posix_pfd_fini(nni_posix_pfd *pf)
 
 	nni_posix_pfd_close(pf);
 
-	if (!nni_thr_is_self(&pq->thr)) {
-		struct kevent ev;
-		nni_mtx_lock(&pq->mtx);
-		nni_list_append(&pq->reapq, pf);
-		EV_SET(&ev, 0, EVFILT_USER, EV_ENABLE | EV_CLEAR, NOTE_TRIGGER,
-		    0, NULL);
-
-		// If this fails, the cleanup will stall. That should
-		// only occur in a memory pressure situation, and it
-		// will self-heal when the next event comes in.
-		(void) kevent(pq->kq, &ev, 1, NULL, 0, NULL);
-		while (!pf->closed) {
-			nni_cv_wait(&pf->cv);
-		}
-		nni_mtx_unlock(&pq->mtx);
+	// All consumers take care to move finalization to the reap thread,
+	// unless they are synchronous on user threads.
+	NNI_ASSERT(!nni_thr_is_self(&pq->thr));
+
+	struct kevent ev;
+	nni_mtx_lock(&pq->mtx);
+	nni_list_append(&pq->reapq, pf);
+	EV_SET(
+	    &ev, 0, EVFILT_USER, EV_ENABLE | EV_CLEAR, NOTE_TRIGGER, 0, NULL);
+
+	// If this fails, the cleanup will stall. That should
+	// only occur in a memory pressure situation, and it
+	// will self-heal when the next event comes in.
+	(void) kevent(pq->kq, &ev, 1, NULL, 0, NULL);
+	while (!pf->closed) {
+		nni_cv_wait(&pf->cv);
 	}
+	nni_mtx_unlock(&pq->mtx);
 
 	(void) close(pf->fd);
 	nni_cv_fini(&pf->cv);
```
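
The finalizer path shown above appends the pfd to `pq->reapq`, posts an `EVFILT_USER` event so the poller's `kevent()` wait wakes up, and then blocks on `pf->cv` until `pf->closed` is set, presumably when the pollq thread drains its reap list. The self-wake via `EVFILT_USER` is what makes this work without polling on a timeout. The standalone sketch below (an illustrative example, not nng code) shows that wake-up pattern on a kqueue system.

```c
// Standalone sketch of waking a kqueue wait with EVFILT_USER.
// Illustrative only; not taken from nng.
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int           kq = kqueue();
	struct kevent ev;

	// Register a user-triggered event (ident 0). EV_CLEAR resets the
	// triggered state after delivery so the event can be reused.
	EV_SET(&ev, 0, EVFILT_USER, EV_ADD | EV_CLEAR, 0, 0, NULL);
	if (kevent(kq, &ev, 1, NULL, 0, NULL) != 0) {
		perror("kevent register");
		return (1);
	}

	// Trigger the event. In the patch this is done by the thread that
	// queued an item for reaping, to nudge the poller.
	EV_SET(&ev, 0, EVFILT_USER, EV_ENABLE, NOTE_TRIGGER, 0, NULL);
	(void) kevent(kq, &ev, 1, NULL, 0, NULL);

	// The poller's kevent() wait returns promptly with the user event,
	// giving it a chance to drain whatever work was queued for it.
	struct kevent out;
	int           n = kevent(kq, NULL, 0, &out, 1, NULL);
	printf("woke with %d event(s), filter=%d\n", n, (int) out.filter);

	(void) close(kq);
	return (0);
}
```

One thread registers the user event and waits; another (here the same thread, for brevity) posts `NOTE_TRIGGER` so the wait returns immediately, which is how the finalizer nudges the pollq thread after appending to the reap list.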
