We introduced compat_msg.c, derived from the old msg.c in the nanomsg
repo. While here, we found that the handling for send() was badly
wrong, off by a level of indirection. We simplified the code so that
nn_send() and nn_recv() are simple wrappers around the nn_sendmsg()
and nn_recvmsg() APIs (as in old nanomsg). This may not be quite as
fast, but it is more likely to be correct and reduces complexity.
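For illustration only (this is not the actual code in the tree), the send-side wrapper shape looks roughly like the sketch below, using the public nn_msghdr/nn_iovec structures; nn_recv() can be expressed the same way on top of nn_recvmsg().

    #include <stddef.h>
    #include <nanomsg/nn.h>

    /* Sketch of nn_send() as a thin wrapper over nn_sendmsg(); the real
     * compat code may differ in naming and error handling. */
    static int
    send_wrapper_sketch(int s, const void *buf, size_t len, int flags)
    {
        struct nn_iovec  iov;
        struct nn_msghdr hdr;

        iov.iov_base = (void *) buf;
        iov.iov_len  = len;

        hdr.msg_iov        = &iov;
        hdr.msg_iovlen     = 1;
        hdr.msg_control    = NULL;
        hdr.msg_controllen = 0;

        /* All of the real work happens in nn_sendmsg(). */
        return (nn_sendmsg(s, &hdr, flags));
    }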
|
With the new reapers, we've seen some problems caused by the reapers
running after the taskqs they have to wait on (for aio completion
tasks) have already been destroyed. We need to make sure that we tear
down major subsystems in the correct order.
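A minimal sketch of the intended ordering (the fini names below are placeholders, not necessarily the real subsystem hooks): drain the reaper while the taskq it depends on is still alive, and only then destroy the taskq.

    /* Placeholder names; the point is the dependency-reversed ordering. */
    extern void reap_sys_fini(void);  /* blocks until pending reaps complete */
    extern void taskq_sys_fini(void); /* destroys the system task queues */
    extern void timer_sys_fini(void);

    static void
    subsystems_fini_sketch(void)
    {
        reap_sys_fini();  /* first: nothing may be left waiting on tasks */
        taskq_sys_fini(); /* safe now: no reaper still depends on the taskq */
        timer_sys_fini();
    }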
|
This change mirrors the change we made for pipes yesterday,
moving the endpoint cleanup to its own thread, ensuring that
the blocking operations we need to perform during cleanup
do not gum up the works in the main system taskq.
|
The problem is that reaping these things performs some blocking
operations which can tie up slots in the taskq, preventing other
tasks from running. Ultimately this can lead to a deadlock as
tasks that are blocked wind up waiting for tasks that can't get
scheduled. Blocking tasks really should not run on the system taskq.
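A sketch of the alternative (the names here are illustrative, not the nng internals): queue reap requests onto a dedicated thread, so any blocking they do never occupies a system taskq slot.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct reap_node {
        void (*rn_fn)(void *);      /* destructor; may block */
        void             *rn_arg;
        struct reap_node *rn_next;
    } reap_node;

    static reap_node      *reap_list;
    static pthread_mutex_t reap_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  reap_cv  = PTHREAD_COND_INITIALIZER;

    /* Called from completion paths: hand the object to the reaper and
     * return immediately, so no taskq slot is tied up. */
    static void
    reap_defer(reap_node *node, void (*fn)(void *), void *arg)
    {
        pthread_mutex_lock(&reap_mtx);
        node->rn_fn   = fn;
        node->rn_arg  = arg;
        node->rn_next = reap_list;
        reap_list     = node;
        pthread_cond_signal(&reap_cv);
        pthread_mutex_unlock(&reap_mtx);
    }

    /* Dedicated reaper thread; shutdown handling omitted for brevity. */
    static void *
    reap_worker(void *arg)
    {
        (void) arg;
        pthread_mutex_lock(&reap_mtx);
        for (;;) {
            while (reap_list != NULL) {
                reap_node *node = reap_list;
                reap_list       = node->rn_next;
                /* Drop the lock around the destructor, which may block. */
                pthread_mutex_unlock(&reap_mtx);
                node->rn_fn(node->rn_arg);
                pthread_mutex_lock(&reap_mtx);
            }
            pthread_cond_wait(&reap_cv, &reap_mtx);
        }
        return (NULL); /* not reached */
    }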
|
This passes valgrind 100% clean for both helgrind and deep leak
checks. This represents a complete rethink of how the AIOs work,
with much simpler synchronization; the provider API is a bit simpler
to boot, as a number of failure modes have simply been eliminated.
While here, a few other minor bugs were squashed.
|
This includes async send and recv, driven from the poller. This will
be required to support the underlying UDP and ZeroTier transports in
the future. (ZeroTier is getting done first.)
|
block for any AIO completion.
|
The queue is bound to the task at initialization time, and the
entries are just called tasks, so we don't have to pass a taskq
pointer around across all the calls. Further, nni_task_dispatch is now
guaranteed to succeed.
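A sketch of the idea (the struct layouts below are illustrative, not the actual ones): the task remembers its queue from init time, and dispatch only links a preallocated node under the queue lock, so there is no allocation and hence no failure path.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct task_s  task_s;
    typedef struct taskq_s taskq_s;

    struct task_s {
        void (*t_cb)(void *);
        void    *t_arg;
        taskq_s *t_tq;   /* bound here so callers never pass a taskq */
        task_s  *t_next;
    };

    struct taskq_s {
        pthread_mutex_t tq_mtx;
        pthread_cond_t  tq_cv;
        task_s         *tq_head;
    };

    static void
    task_init_sketch(task_s *t, taskq_s *tq, void (*cb)(void *), void *arg)
    {
        t->t_cb   = cb;
        t->t_arg  = arg;
        t->t_tq   = tq;   /* the queue is fixed at initialization time */
        t->t_next = NULL;
    }

    static void
    task_dispatch_sketch(task_s *t)
    {
        taskq_s *tq = t->t_tq;

        pthread_mutex_lock(&tq->tq_mtx);
        t->t_next   = tq->tq_head; /* link a preallocated node: cannot fail */
        tq->tq_head = t;
        pthread_cond_signal(&tq->tq_cv);
        pthread_mutex_unlock(&tq->tq_mtx);
    }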
|
We need to remember that protocol stops can run synchronously, and
therefore we need to wait for the aio to complete. Further, we need
to break apart shutting down aio activity from deallocation, as we need
to shut down *all* async activity before deallocating *anything*.
Noticed that we had a pipe race in the surveyor pattern too.
|
We have seen yet another weird situation where we had an orphaned
pipe, which was caused by not completing the callback. If we are going
to run nni_aio_fini, we should still run the callback (albeit with a
return value of NNG_ECANCELED or some such) to be sure that we can't
orphan anything.
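A sketch of what the consumer side has to look like under that rule (the pipe helpers here are hypothetical placeholders): the completion callback always runs, so it must handle NNG_ECANCELED and release whatever it would otherwise have adopted.

    /* Forward declarations standing in for the real internals. */
    typedef struct nni_aio nni_aio;
    typedef struct my_pipe my_pipe;                     /* hypothetical state */
    extern int  nni_aio_result(nni_aio *);
    extern nni_aio *my_pipe_aio(my_pipe *);             /* hypothetical */
    extern void my_pipe_release(my_pipe *);             /* hypothetical */
    extern void my_pipe_deliver(my_pipe *, nni_aio *);  /* hypothetical */

    static void
    pipe_cb_sketch(void *arg)
    {
        my_pipe *p   = arg;
        nni_aio *aio = my_pipe_aio(p);

        if (nni_aio_result(aio) != 0) {
            /* NNG_ECANCELED or another error: the operation never
             * handed us anything, so drop our reference and bail. */
            my_pipe_release(p);
            return;
        }
        my_pipe_deliver(p, aio); /* normal completion path */
    }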
|
This one is caused by us deallocating the msg queue before we
stop all asynchronous I/O operations; consequently we can wind
up with a thread trying to access a msg queue after it has been
destroyed.
A lesson here is that nni_aio_fini() needs to be treated much like
nni_thr_fini() - you should do this *before* deallocating anything
that callback functions might be referencing.
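The safe shape, in the same spirit as nni_thr_fini(), is sketched below; the exact signatures in the tree may differ slightly.

    /* Forward declarations standing in for the internals named above. */
    typedef struct nni_aio  nni_aio;
    typedef struct nni_msgq nni_msgq;
    extern void nni_aio_fini(nni_aio *);
    extern void nni_msgq_fini(nni_msgq *);

    static void
    teardown_sketch(nni_aio *aio, nni_msgq *q)
    {
        nni_aio_fini(aio); /* retires any callback that might touch q */
        nni_msgq_fini(q);  /* only now is the msg queue safe to destroy */
    }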
|
This resolves the orphaned pipedesc, which actually could have affected
Windows too. I think we may be race-free now. Lots more testing is
still required, but stress runs seem to be passing now.
|
Apparently there are circumstances where a pipedesc may get orphaned from
the pollq. This triggers an assertion failure when it occurs. I am still
trying to understand how this can happen. Stay tuned.
|
We have seen leaks of pipes causing test failures (e.g. the Windows
IPC test) due to EADDRINUSE. This was caused by a case where we
failed to pass the pipe up because the AIO had already been canceled,
and we didn't realize that we had orphaned the pipe. The fix is to
add a return value to nni_aio_finish, and verify that we did finish
properly; if we did not, then we must free the pipe ourselves. (A
zero return from nni_aio_finish indicates that it accepts ownership
of resources passed via the aio.)
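From the accept path's point of view the contract looks roughly like the sketch below; the parameter details and the helper names other than nni_aio_finish are guesses for illustration.

    typedef struct nni_aio  nni_aio;
    typedef struct nni_pipe nni_pipe;
    extern int  nni_aio_finish(nni_aio *, int err, size_t count);
    extern void pipe_destroy(nni_pipe *); /* placeholder destructor */

    static void
    accept_done_sketch(nni_aio *aio, nni_pipe *pipe)
    {
        /* (stashing the pipe into the aio is omitted here) */
        if (nni_aio_finish(aio, 0, 0) != 0) {
            /* The aio was already canceled, so the completion never
             * ran and ownership of the pipe stays with us. */
            pipe_destroy(pipe);
        }
    }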
|
This fixes a potentially nasty bug associated with the objhash table
resizing, and rewrites the scalability test to use just a single thread
handling some 2000 client sockets. This proves that the framework can
deal with vast numbers of sockets, regardless of the supported number
of operating system threads.
|
This cleans up the pipe creation logic greatly, and eliminates
a nasty potential deadlock (incorrect lock ordering). It also
adds a correct, randomized binary exponential backoff on both
accept and connect.
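A sketch of that backoff scheme (the constants and the random source are placeholders, not the values used in the tree): double the delay ceiling on each consecutive failure, cap it, then pick the actual delay uniformly at random below the ceiling so peers do not retry in lock step.

    #include <stdlib.h>

    /* Returns a retry delay in milliseconds for the given failure count. */
    static int
    backoff_ms_sketch(unsigned failures)
    {
        unsigned maxshift = 12;                  /* cap the ceiling at 4096 ms */
        unsigned shift    = failures < maxshift ? failures : maxshift;
        unsigned ceiling  = 1u << shift;         /* 1, 2, 4, 8, ... (capped) */

        /* Randomize within [0, ceiling) to de-synchronize retries. */
        return (int) ((unsigned) rand() % ceiling);
    }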
|
We closed a few subtle races in the AIO subsystem as well, and were
able to eliminate the separate timer handling in the MQ code.
There appear to be some opportunities to further enhance the code
for MQs as well -- eventually the only access to MQs will probably
be through AIOs.
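A sketch of what purely aio-driven queue access could look like for a consumer; the getter and accessor names below are assumptions about the internal API, not something this commit confirms.

    typedef struct nni_aio  nni_aio;
    typedef struct nni_msg  nni_msg;
    typedef struct nni_msgq nni_msgq;
    extern void     nni_msgq_aio_get(nni_msgq *, nni_aio *); /* assumed */
    extern int      nni_aio_result(nni_aio *);
    extern nni_msg *nni_aio_get_msg(nni_aio *);              /* assumed */
    extern void     consume(nni_msg *);                      /* placeholder */

    struct reader {            /* hypothetical consumer state */
        nni_msgq *r_q;
        nni_aio  *r_aio;
    };

    static void
    reader_cb_sketch(void *arg)
    {
        struct reader *r = arg;

        if (nni_aio_result(r->r_aio) != 0) {
            return;                          /* canceled or queue closed */
        }
        consume(nni_aio_get_msg(r->r_aio));  /* take the delivered message */
        nni_msgq_aio_get(r->r_q, r->r_aio);  /* rearm for the next message */
    }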
|
Most of the races around close were probably here - the cancellation was
not getting through on endpoint close, which meant that we could actually
toss endpoints while they were still in use.
We still need to fix the timeout handling -- especially for reconnects,
etc. -- but we are just about ready for this to be reintegrated into master.