We closed a few subtle races in the AIO subsystem as well, and were
able to eliminate the separate timer handling in the MQ code.
There appear to be opportunities to further enhance the MQ code as
well -- eventually AIOs will probably be the only way to access MQs.
Most of the races around close were probably here: the cancellation was
not getting through on endpoint close, which meant that we could actually
toss endpoints while they were still in use.
We still need to fix the timeout handling -- especially for reconnects
and the like -- but we are just about ready for this work to be
reintegrated into master.
This is actually broken at the moment, because we don't have good
integration with timeouts, and there are some frustrating races
around timeouts that can cause apparent hangs.
This logic leaves a race condition on the dial side, which will be
fixed by a subsequent change that converts dialing to be fully
asynchronous as well.
This means that pipe_start always succeeds, and we can guarantee that
pipe_start_cb is always executed, and always in a separate context.
This may help when we need to change the way that sockets and
endpoints are associated.
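A rough sketch of that guarantee, using a detached thread to stand in
for whatever deferral mechanism the real code uses (the names below are
illustrative only, not the actual implementation):

    /* Completion is always reported via the callback, and always from
     * a separate context, never on the caller's stack.  (Error
     * handling elided for brevity.) */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        void (*cb)(void *, int); /* completion callback */
        void  *arg;              /* callback argument */
        int    rv;               /* result to report */
    } start_task;

    static void *start_runner(void *p)
    {
        start_task *t = p;
        t->cb(t->arg, t->rv);    /* fires in the runner's context */
        free(t);
        return NULL;
    }

    static void pipe_start_sketch(void (*cb)(void *, int), void *arg, int rv)
    {
        start_task *t = malloc(sizeof(*t));
        pthread_t   tid;

        t->cb  = cb;
        t->arg = arg;
        t->rv  = rv;
        pthread_create(&tid, NULL, start_runner, t);
        pthread_detach(tid);
    }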
This is only lightly tested, and I expect that there remain
some race conditions. Endpoint logic in particular needs
work.
We apparently still have endpoint-related races; we need to examine
the possibility of handling endpoints much like we do pipes, which
seem to be race-free.
We are still seeing what look like errors caused by pipes outliving
their associated endpoints, so more work is needed here.
The IOCP code has been refactored to improve reuse, and hopefully
will be easier to use with TCP now.  Windows IPC using Named Pipes
is mostly working -- only mostly, because there is a gnarly close
race.  It seems that we need to take more care to ensure that the
pipe is not released while requests may still be outstanding, so some
deeper synchronization between the IOCP callback logic and the
win_event code is needed.  In short, we need to add a condvar to
the event, note when we have submitted work for async completion,
and make sure we flag the event "idle" after either completion or
cancellation of the event.
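A rough sketch of that scheme, written directly against the Win32
primitives (the structure and function names here are illustrative,
not the actual win_event code):

    #include <windows.h>
    #include <stdbool.h>

    typedef struct {
        SRWLOCK            lk;
        CONDITION_VARIABLE cv;
        bool               busy; /* async work submitted, not yet done */
    } event_sketch;

    static void event_init(event_sketch *ev)
    {
        InitializeSRWLock(&ev->lk);
        InitializeConditionVariable(&ev->cv);
        ev->busy = false;
    }

    /* Call just before submitting an overlapped operation. */
    static void event_submit(event_sketch *ev)
    {
        AcquireSRWLockExclusive(&ev->lk);
        ev->busy = true;
        ReleaseSRWLockExclusive(&ev->lk);
    }

    /* Called from the IOCP callback on completion or cancellation. */
    static void event_finish(event_sketch *ev)
    {
        AcquireSRWLockExclusive(&ev->lk);
        ev->busy = false;                  /* flag the event "idle" */
        WakeAllConditionVariable(&ev->cv);
        ReleaseSRWLockExclusive(&ev->lk);
    }

    /* Close path: wait until idle before the pipe may be released. */
    static void event_drain(event_sketch *ev)
    {
        AcquireSRWLockExclusive(&ev->lk);
        while (ev->busy) {
            SleepConditionVariableSRW(&ev->cv, &ev->lk, INFINITE, 0);
        }
        ReleaseSRWLockExclusive(&ev->lk);
    }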
Modern Windows (Vista and later) has lightweight Slim Reader/Writer
(SRW) locks, which occupy only 64 bits and don't require any memory
allocation to create.
While here, clean up a few more unreferenced variables found with the
Microsoft compilers.
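For reference, a minimal sketch of the usage pattern (the guarded
value here is just a placeholder):

    #include <windows.h>

    static SRWLOCK lk = SRWLOCK_INIT; /* static init, no allocation */
    static int     value;            /* placeholder for protected data */

    static void set_value(int v)
    {
        AcquireSRWLockExclusive(&lk); /* writer: exclusive */
        value = v;
        ReleaseSRWLockExclusive(&lk);
    }

    static int get_value(void)
    {
        int v;
        AcquireSRWLockShared(&lk);    /* readers may proceed in parallel */
        v = value;
        ReleaseSRWLockShared(&lk);
        return v;
    }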
It turns out that I had to fix a number of subtle asynchronous
handling bugs, but now TCP is fully asynchronous.  We still need to
change the high-level dial and listen interfaces to be async as well.
Some of the transport APIs have changed here, and I've elected
to change what we expose to consumers as endpoints into separate
dialers and listeners.  Under the hood they are the same, but
it turns out that it's helpful to know the intended use of the
endpoint at initialization time.
Scalability testing still occasionally hangs on Linux.  Investigation
pending.
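As a purely hypothetical illustration of the split -- none of these
names are the real API -- the intended role is now declared when the
object is created:

    typedef struct dialer   dialer;   /* outgoing: connects, reconnects */
    typedef struct listener listener; /* incoming: binds and accepts */

    extern int dialer_create(dialer **, const char *url);
    extern int listener_create(listener **, const char *url);

    static int setup(const char *url, int serve)
    {
        if (serve) {
            listener *l;
            return listener_create(&l, url);
        } else {
            dialer *d;
            return dialer_create(&d, url);
        }
    }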
The connect & accept logic for IPC is now fully asynchronous.
This will serve as a straightforward template for TCP.  Note that
the upper-level logic still uses a thread to run this "synchronously",
but that can be removed once the last transport (TCP) is made
fully async.
The unified ipcsock is also now separated, and we anticipate being
able to remove the posix_sock.c logic shortly.  Separating the
endpoint logic from the pipe logic helps make things clearer, and
may facilitate a day when endpoints have multiple addresses (for
example, a connect() endpoint that uses a round-robin DNS list and
tries the entire list in parallel, stopping at the first connection
made).
The platform header got a little cleanup while we were here.
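A rough sketch of the idea on the accept side, using plain POSIX
calls (the callback shape and names are illustrative, not the new
ipc code):

    #include <errno.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* Per-connection callback; the endpoint hands the new fd off and
     * immediately goes back to waiting for the next connection. */
    typedef void (*accept_cb)(int fd, void *arg);

    static void ep_accept_loop(int listen_fd, accept_cb cb, void *arg)
    {
        struct pollfd pfd = { .fd = listen_fd, .events = POLLIN };

        /* In the real code the waiting would be owned by a central
         * poller rather than a dedicated loop; this is just the shape. */
        for (;;) {
            if (poll(&pfd, 1, -1) < 0) {
                if (errno == EINTR) {
                    continue;
                }
                return;            /* hard error: give up */
            }
            if (pfd.revents & POLLIN) {
                int fd = accept(listen_fd, NULL, NULL);
                if (fd >= 0) {
                    fcntl(fd, F_SETFL, O_NONBLOCK);
                    cb(fd, arg);   /* negotiation happens asynchronously */
                }
            }
        }
    }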
This prevents a slow partner from blocking new connections from being
established on the server.  Previously, a single slow partner could
cause the server to block while waiting for the negotiation to
complete.