With the new reapers, we've seen some problems caused by the reapers
running after the taskqs they have to wait on (for the completion
tasks of aios) have been destroyed. We need to make sure that we tear
down major subsystems in the correct order.
|
This change mirrors the change we made for pipes yesterday, moving
the endpoint cleanup to its own thread, ensuring that the blocking
operations we need to perform during cleanup do not gum up the works
in the main system taskq.
|
The problem is that reaping these things performs some blocking
operations which can tie up slots in the taskq, preventing other
tasks from running. Ultimately this can lead to a deadlock as
tasks that are blocked wind up waiting for tasks that can't get
scheduled. Blocking tasks really should not run on the system taskq.
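One way to avoid this (and the approach the surrounding commits
describe) is to hand blocking teardown to a thread whose only job is
reaping, so the shared taskq's worker slots stay free for non-blocking
work. A minimal sketch of that pattern, using plain pthreads and
illustrative names (reap_node, reap_enqueue, and reap_worker are not
NNG's actual API):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct reap_node {
	void (*fini)(void *);     /* blocking teardown for this object */
	void  *arg;
	struct reap_node *next;
} reap_node;

static pthread_mutex_t reap_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reap_cv  = PTHREAD_COND_INITIALIZER;
static reap_node      *reap_list;

/* Called from any context; never blocks, so it is safe from a taskq task. */
void
reap_enqueue(reap_node *node)
{
	pthread_mutex_lock(&reap_mtx);
	node->next = reap_list;
	reap_list  = node;
	pthread_cond_signal(&reap_cv);
	pthread_mutex_unlock(&reap_mtx);
}

/* The only place blocking teardown runs; it cannot starve the taskq. */
void *
reap_worker(void *unused)
{
	(void) unused;
	for (;;) {
		pthread_mutex_lock(&reap_mtx);
		while (reap_list == NULL) {
			pthread_cond_wait(&reap_cv, &reap_mtx);
		}
		reap_node *node = reap_list;
		reap_list = node->next;
		pthread_mutex_unlock(&reap_mtx);

		node->fini(node->arg);   /* may block; that is the point */
		free(node);
	}
	return (NULL);
}
```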
|
This passes valgrind 100% clean for both helgrind and deep leak
checks. This represents a complete rethink of how the AIOs work,
with much simpler synchronization; the provider API is a bit simpler
to boot, as a number of failure modes have simply been eliminated.
While here, a few other minor bugs were squashed.
|
This includes async send and recv, driven from the poller. This will
be required to support the underlying UDP and ZeroTier transports in
the future. (ZeroTier is getting done first.)
|
block for any AIO completion.
|
The queue is bound to the task at initialization time, and entries
are now just called tasks, so we don't have to pass a taskq pointer
around across all the calls. Further, nni_task_dispatch is now
guaranteed to succeed.
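A rough sketch of the shape this gives the API. The types and
signatures here are illustrative only (the real nni_task and nni_taskq
structures differ); the point is that the task is bound to its queue
once and carries its own list linkage, so dispatching is just pointer
manipulation and has no failure path:

```c
#include <pthread.h>
#include <stddef.h>

typedef struct task task;

typedef struct taskq {
	pthread_mutex_t mtx;
	task           *head;     /* pending work, intrusively linked */
} taskq;

struct task {
	taskq *tq;                /* bound once, at init time */
	void  (*cb)(void *);      /* work to run */
	void   *arg;
	task   *next;             /* preallocated linkage: dispatch never allocates */
};

void
task_init(task *t, taskq *tq, void (*cb)(void *), void *arg)
{
	t->tq   = tq;
	t->cb   = cb;
	t->arg  = arg;
	t->next = NULL;
}

/* No taskq argument and no return value: the queue was fixed at init,
 * and linking the task in is just pointer manipulation, so this
 * cannot fail.  (A worker thread would then run t->cb(t->arg).) */
void
task_dispatch(task *t)
{
	pthread_mutex_lock(&t->tq->mtx);
	t->next     = t->tq->head;
	t->tq->head = t;
	pthread_mutex_unlock(&t->tq->mtx);
}
```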
|
We need to remember that protocol stops can run synchronously, and
therefore we need to wait for the aio to complete. Further, we need
to break apart shutting down aio activity from deallocation, as we need
to shut down *all* async activity before deallocating *anything*.
Noticed that we had a pipe race in the surveyor pattern too.
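The "shut everything down before freeing anything" rule above boils
down to two separate passes. A tiny sketch, with aio_shutdown and
aio_free as hypothetical stand-ins for the stop and deallocate halves:

```c
struct aio;

void
proto_fini(struct aio *aios[], int n,
    void (*aio_shutdown)(struct aio *), void (*aio_free)(struct aio *))
{
	int i;

	/* Phase 1: stop all asynchronous activity (may wait for callbacks). */
	for (i = 0; i < n; i++) {
		aio_shutdown(aios[i]);
	}

	/* Phase 2: only now is it safe to release anything, because no
	 * callback can still reference a sibling aio's memory. */
	for (i = 0; i < n; i++) {
		aio_free(aios[i]);
	}
}
```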
|
We have seen yet another weird situation where we had an orphaned
pipe, which was caused by not completing the callback. If we are
going to run nni_aio_fini, we should still run the callback (albeit
with a return value of NNG_ECANCELED or some such) to be sure that we
can't orphan anything.
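A sketch of that rule: if an aio is being finalized before its
operation completed, the callback still fires, with NNG_ECANCELED as
the result, so whoever submitted it gets a chance to release whatever
the aio carries. The struct layout and helper below are illustrative,
not NNG's real aio internals:

```c
#include <nng/nng.h>   /* for NNG_ECANCELED */

struct aio {
	void (*cb)(void *);
	void  *arg;
	int    result;
	int    done;       /* has the callback already run? */
};

void
aio_teardown(struct aio *a)
{
	if (!a->done) {
		a->result = NNG_ECANCELED;  /* canceled, not completed */
		a->done   = 1;
		a->cb(a->arg);              /* consumer can free its resources */
	}
	/* ... actual deallocation of the aio would follow ... */
}
```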
|
Apparently there are circumstances when a pipedesc may get orphaned
from the pollq. This triggers an assertion failure when it occurs.
I am still trying to understand how this can occur. Stay tuned.
|
We have seen leaks of pipes causing test failures (e.g. the Windows
IPC test) due to EADDRINUSE. This was caused by a case where we
failed to pass the pipe up because the AIO had already been canceled,
and we didn't realize that we had orphaned the pipe. The fix is to
add a return value to nni_aio_finish and verify that we did finish
properly; if we did not, then we must free the pipe ourselves. (A
zero return from nni_aio_finish indicates that it accepts ownership
of resources passed via the aio.)
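The resulting pattern at the completion site looks roughly like the
following. The exact prototype of nni_aio_finish and the pipe
destructor name are assumptions for illustration; the rule from the
message above is what matters (zero return means the aio now owns the
pipe, anything else means we still do):

```c
#include <stddef.h>

struct pipe;
struct nni_aio;

/* Assumed signatures for illustration; the real prototypes may differ. */
extern int  nni_aio_finish(struct nni_aio *, int rv, size_t count);
extern void pipe_destroy(struct pipe *);   /* hypothetical destructor */

static void
accept_done(struct pipe *p, struct nni_aio *aio)
{
	/* The pipe has already been attached to the aio as its result.
	 * A zero return means the aio accepted ownership of it; any other
	 * return (e.g. the aio was already canceled) means ownership was
	 * not transferred and we must reap the pipe ourselves, or it will
	 * leak and keep its address bound (EADDRINUSE). */
	if (nni_aio_finish(aio, 0, 0) != 0) {
		pipe_destroy(p);
	}
}
```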
|
This fixes a potential nasty bug associated with the objhash table
resizing, and rewrites the scalability test to use just a single thread
handling some 2000 client sockets. This proves that the framework can
deal with vast numbers of sockets, regardless of the supported number
of operating system threads.
|
This cleans up the pipe creation logic greatly, and eliminates a
nasty potential deadlock (incorrect lock ordering). It also adds a
correct binary exponential, randomized backoff on both accept and
connect.
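A sketch of what randomized binary exponential backoff looks like.
The bounds (10 ms floor, 10 s cap) and the use of rand() are
illustrative choices, not the transport code's actual values:

```c
#include <stdint.h>
#include <stdlib.h>

#define BACKOFF_MIN_MS 10
#define BACKOFF_MAX_MS 10000

/* attempt 0, 1, 2, ... maps to a delay in milliseconds */
static uint32_t
backoff_ms(unsigned attempt)
{
	uint64_t limit;

	if (attempt > 20) {
		/* avoid shifting past the cap (or past 64 bits) */
		limit = BACKOFF_MAX_MS;
	} else {
		limit = (uint64_t) BACKOFF_MIN_MS << attempt; /* 10, 20, 40, ... */
		if (limit > BACKOFF_MAX_MS) {
			limit = BACKOFF_MAX_MS;
		}
	}

	/* Randomize within [0, limit) so peers that fail together do not
	 * retry in lock step. */
	return ((uint32_t) (rand() % limit));
}
```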
|
We closed a few subtle races in the AIO subsystem as well, and were
able to eliminate the separate timer handling in the MQ code. There
appear to be opportunities to further enhance the MQ code as well --
eventually the only access to MQs will probably be via AIOs.
|
This is actually broken at the moment, because we don't have good
integration with timeouts, and there are some frustrating races with
timeouts at certain points that can cause apparent hangs.
|
This logic leaves a race condition on the dial side, which will be
fixed by a subsequent change converting it to be fully asynchronous
as well.
|
This means that pipe_start always succeeds, and we can guarantee that
the pipe_start_cb is always executed, and in another context. This may help
when we need to change the way that sockets and endpoints are associated.
|
This is only lightly tested, and I expect that there remain
some race conditions. Endpoint logic in particular needs
work.
|
We apparently still have endpoint-related races; we need to examine
the possibility of handling endpoints much like we do pipes, which
seem to be race-free.
|
Modern Windows (Vista and later) has lightweight Slim Reader/Writer
(SRW) locks, which occupy only 64 bits and don't require any memory
allocation to create.

While here, clean up a few more unreferenced variables found with the
Microsoft compilers.
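For reference, the Win32 primitives in question (available since
Vista): an SRWLOCK is a single machine word, can be statically
initialized, and needs no create or destroy calls. How the NNG
Windows port wraps them is not shown here; this is just the raw API:

```c
#include <windows.h>

static SRWLOCK table_lock = SRWLOCK_INIT;  /* no allocation required */
static int     table_value;

int
table_read(void)
{
	int v;

	AcquireSRWLockShared(&table_lock);     /* many readers may hold this */
	v = table_value;
	ReleaseSRWLockShared(&table_lock);
	return (v);
}

void
table_write(int v)
{
	AcquireSRWLockExclusive(&table_lock);  /* writers get exclusive access */
	table_value = v;
	ReleaseSRWLockExclusive(&table_lock);
}
```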
|
It turns out that I had to fix a number of subtle asynchronous
handling bugs, but now TCP is fully asynchronous. We need to change
the high-level dial and listen interfaces to be async as well.

Some of the transport APIs have changed here, and I've elected to
change what we expose to consumers as endpoints into separate dialers
and listeners. Under the hood they are the same, but it turns out
that it's helpful to know the intended use of the endpoint at
initialization time.

Scalability still occasionally hangs on Linux. Investigation
pending.
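From a consumer's point of view, the dialer/listener split looks
roughly like the sketch below. The names follow the present-day
public API (nng_dialer_create and friends), which may not match the
interfaces as they existed at the time of this commit:

```c
#include <stdbool.h>
#include <nng/nng.h>

int
setup(nng_socket s, const char *url, bool server)
{
	int rv;

	if (server) {
		nng_listener l;
		/* A listener binds and accepts; its role is known at creation. */
		if ((rv = nng_listener_create(&l, s, url)) != 0) {
			return (rv);
		}
		return (nng_listener_start(l, 0));
	} else {
		nng_dialer d;
		/* A dialer connects (and redials with backoff on failure). */
		if ((rv = nng_dialer_create(&d, s, url)) != 0) {
			return (rv);
		}
		return (nng_dialer_start(d, 0));
	}
}
```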