This makes the operations that work on headers start with
nni_msg_header or nng_msg_header. It also renames _trunc to
_chop (same strlen as _trim), and renames prepend to insert.
We add a shorthand for clearing message content, and make
better use of the endian-safe 32-bit accessors too.
This also fixes a bug in inserting large headers into messages.
A test suite for message handling is included.
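A minimal sketch of the renamed header calls; the signatures here are assumptions modeled on nng's later public API, since the interface was still settling at this point:

```c
#include <nng/nng.h>

// Sketch only: names follow the renaming described above; exact
// signatures are assumptions based on nng's later public API.
void
header_demo(void)
{
    nng_msg *msg;

    if (nng_msg_alloc(&msg, 0) != 0) {
        return;
    }
    // Endian-safe 32-bit accessor: stores the value in network order.
    nng_msg_header_append_u32(msg, 0x80000001u);
    // _trim removes from the front of the header, _chop from the back.
    nng_msg_header_chop(msg, 4);
    // The new shorthand for clearing message content.
    nng_msg_clear(msg);
    nng_msg_free(msg);
}
```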
|
This passes valgrind 100% clean for both helgrind and deep leak
checks. This represents a complete rethink of how the AIOs work,
with much simpler synchronization; the provider API is a bit simpler
to boot, as a number of failure modes have simply been eliminated.
While here, a few other minor bugs were squashed.
|
We need to remember that protocol stops can run synchronously, and
therefore we need to wait for the aio to complete. Further, we need
to break apart shutting down aio activity from deallocation, as we need
to shut down *all* async activity before deallocating *anything*.
We also noticed a pipe race in the surveyor pattern.
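The two-phase teardown this implies might look roughly like the sketch below; the struct and the nni_aio_stop/nni_aio_fini names are assumptions for illustration, not the verbatim internals:

```c
#include "core/nng_impl.h" // internal header; sketch only

// Hypothetical teardown, illustrating the ordering above: quiesce
// *all* async activity first, only then deallocate anything.
struct proto_data {
    nni_aio *send_aio;
    nni_aio *recv_aio;
};

static void
proto_data_fini(struct proto_data *p)
{
    // Phase 1: stop the aios; this can block while a synchronous
    // protocol stop runs to completion.
    nni_aio_stop(p->send_aio);
    nni_aio_stop(p->recv_aio);

    // Phase 2: with all async activity quiesced, freeing is safe.
    nni_aio_fini(p->send_aio);
    nni_aio_fini(p->recv_aio);
}
```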
|
This resolves the orphaned pipedesc, which actually could have affected
Windows too. I think we may now be race-free. Lots more testing is
still required, but stress runs seem to be passing now.
|
We have seen leaks of pipes causing test failures (e.g. the Windows
IPC test) due to EADDRINUSE. This was caused by a case where we
failed to pass the pipe up because the AIO had already been canceled,
and we didn't realize that we had orphaned the pipe. The fix is to
add a return value to nni_aio_finish, and verify that we did finish
properly; if we did not, then we must free the pipe ourselves. (A
zero return from nni_aio_finish indicates that it accepts ownership
of resources passed via the aio.)
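In caller terms, the new rule looks something like the following sketch; only the nni_aio_finish return convention comes from this change, and the surrounding names (nni_aio_set_pipe in particular) are hypothetical:

```c
// If nni_aio_finish returns nonzero, the aio was already canceled
// and did NOT take ownership, so we must reap the pipe ourselves;
// otherwise it leaks (e.g. EADDRINUSE on the Windows IPC test).
static void
deliver_pipe(nni_pipe *pipe, nni_aio *aio)
{
    nni_aio_set_pipe(aio, pipe); // hypothetical accessor
    if (nni_aio_finish(aio, 0, 0) != 0) {
        nni_pipe_close(pipe);
    }
}
```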
|
Most of the races around close were probably here: the cancellation was
not getting through on endpoint close, which meant that we could actually
toss endpoints while they were still in use.
We still need to fix timeout handling, especially for reconnects, but
we are just about ready for this work to be reintegrated into master.
|
This is only lightly tested, and I expect that there remain
some race conditions. Endpoint logic in particular needs
work.
|
It turns out that I had to fix a number of subtle asynchronous
handling bugs, but now TCP is fully asynchronous. We need to
change the high-level dial and listen interfaces to be async
as well.
Some of the transport APIs have changed here, and I've elected
to change what we expose to consumers as endpoints into separate
dialers and listeners. Under the hood they are the same, but
it turns out that it's helpful to know the intended use of the
endpoint at initialization time.
The scalability test still occasionally hangs on Linux. Investigation
pending.
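The consumer-facing split might be used like this sketch, with names along the lines of nng's later dialer/listener API (an assumption; the exact interface was in flux here):

```c
#include <stdbool.h>
#include <nng/nng.h>

// Sketch: the same underlying endpoint, but the intended use is
// declared up front by creating either a dialer or a listener.
int
endpoint_setup(nng_socket s, const char *url, bool listen)
{
    int rv;

    if (listen) {
        nng_listener l;
        if ((rv = nng_listener_create(&l, s, url)) == 0) {
            rv = nng_listener_start(l, 0);
        }
    } else {
        nng_dialer d;
        if ((rv = nng_dialer_create(&d, s, url)) == 0) {
            rv = nng_dialer_start(d, NNG_FLAG_NONBLOCK);
        }
    }
    return (rv);
}
```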
|
The connect & accept logic for IPC is now fully asynchronous.
This will serve as a straightforward template for TCP. Note that
the upper logic still uses a thread to run this "synchronously", but
that can be removed once the last transport (TCP) is made
fully async.
The unified ipcsock is also now separated, and we anticipate being
able to remove the posix_sock.c logic shortly. Separating out the
endpoint logic from the pipe logic helps make things clearer, and
may facilitate a day when endpoints have multiple addresses (for
example with a connect() endpoint that uses a round-robin DNS list
and tries to run the entire list in parallel, stopping with the first
connection made).
The platform header got a little cleanup while we were here.
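The callback-driven accept loop that replaces the blocking thread might be shaped like the sketch below; every name in it is illustrative rather than the real internal API:

```c
// Hypothetical shape of the asynchronous accept loop: the completion
// callback hands the new pipe up and immediately rearms the accept,
// so no thread ever parks inside accept().
struct ipc_ep {
    nni_aio *acc_aio; // completes when the OS delivers a connection
};

static void ipc_ep_accept_start(struct ipc_ep *ep);
static void ipc_ep_deliver(struct ipc_ep *ep, nni_pipe *pipe);

static void
ipc_ep_accept_cb(void *arg)
{
    struct ipc_ep *ep = arg;

    if (nni_aio_result(ep->acc_aio) == 0) {
        // Success: the connected transport pipe rides in the aio
        // and is delivered to the socket layer here.
        ipc_ep_deliver(ep, nni_aio_get_pipe(ep->acc_aio)); // hypothetical
    }
    ipc_ep_accept_start(ep); // rearm for the next connection
}
```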
|
This prevents a slow partner from blocking new connections from being
established on the server. Before this, a single slow partner could
block the server while it waited to complete the negotiation.
|
As with TCP, we're still using threads under the hood. But this
completes the send/recv logic conversion for POSIX to our AIO framework,
and hence represents a substantial milestone towards fully asynchronous
operation.
We still need to do accept/connect operations asynchronously, then make
Windows overlapped I/O work properly. After that, poll/epoll/kqueue, etc.
|
Transport-level pipe initialization is now separate and explicit.
The POSIX send/recv logic still uses threads under the hood, but
makes use of the AIO framework for send/recv. This is a key stepping
stone towards enabling poll() or similar async I/O approaches.
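From the caller's side, the AIO framework looks roughly like this sketch, written against nng's later public API (an assumption, since at this point the surface was still internal):

```c
#include <nng/nng.h>

// Send one message through an aio and wait for the result. With a
// NULL callback, nng_aio_wait gives synchronous behavior on top of
// the asynchronous machinery underneath.
int
send_one(nng_socket s, nng_msg *msg)
{
    nng_aio *aio;
    int      rv;

    if ((rv = nng_aio_alloc(&aio, NULL, NULL)) != 0) {
        return (rv);
    }
    nng_aio_set_msg(aio, msg);
    nng_send_aio(s, aio);
    nng_aio_wait(aio);
    rv = nng_aio_result(aio);
    if (rv != 0) {
        // On failure the message was not consumed; reclaim it.
        nng_msg_free(nng_aio_get_msg(aio));
    }
    nng_aio_free(aio);
    return (rv);
}
```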
|
The CMSG handling was completely borked. This is fixed now, and
we stash the SP header size (ugh) in the CMSG contents to match what
nanomsg does. We now pass the cmsg validation test.
We also fixed handling of certain endpoint-related options, so that
endpoints can get options from the socket at initialization time.
This required a minor change to the transport API for endpoints.
Finally, we fixed a critical fault in the REP handling of RAW sockets,
which caused them to return NNG_ESTATE in all cases. They should
now honor the actual socket option.
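Reading the SP header back out of the control data, nanomsg-style, looks roughly like this sketch against the nanomsg-compatible API; the size_t length prefix in the cmsg payload is the detail this commit matched:

```c
#include <stddef.h>
#include <string.h>
#include <nanomsg/nn.h> // nanomsg-compatible API

// Walk the control messages and pull out the SP header; per the
// commit above, the cmsg payload begins with a size_t header length,
// matching nanomsg's convention, with the header bytes following.
static void
inspect_sp_header(struct nn_msghdr *hdr)
{
    struct nn_cmsghdr *cm;

    for (cm = NN_CMSG_FIRSTHDR(hdr); cm != NULL;
        cm = NN_CMSG_NXTHDR(hdr, cm)) {
        if (cm->cmsg_level == PROTO_SP && cm->cmsg_type == SP_HDR) {
            size_t hlen;
            memcpy(&hlen, NN_CMSG_DATA(cm), sizeof(hlen));
            (void) hlen; // sketch: header bytes follow the prefix
        }
    }
}
```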
|
There are lots of changes here, mostly in support of Windows TCP.
However, some bugs were fixed, we added some new error codes, and we
generalized the handling of certain failures during accept. Windows
IPC (Named Pipes) is still missing.