fixes #596 POSIX IPC should move away from pipedesc/epdesc
fixes #598 TLS and TCP listeners could support NNG_OPT_LOCADDR
fixes #594 Windows IPC should use "new style" win_io code.
fixes #597 macOS could support PEER PID
This large change set cleans up the IPC support on Windows and
POSIX. This significantly reduces the complexity of the code,
reduces locking, increases concurrency (multiple dials and accepts
can now be outstanding), and reduces context switches (we now
complete things synchronously). While here we have added some
missing option support, and fixed a few more bugs that we found in
last week's TCP code changes.
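As a rough illustration of the NNG_OPT_LOCADDR support added for TCP
and TLS listeners, an application can now read back the address a
listener actually bound. The sketch below is illustrative only: it
assumes an already-open nng_socket, and uses the option getter as
spelled in this era's public API (later releases also provide
nng_listener_get_addr for the same purpose).

```c
#include <stdio.h>
#include <arpa/inet.h> // ntohs
#include <nng/nng.h>

// Bind to an ephemeral TCP port, then report the port actually chosen,
// using the NNG_OPT_LOCADDR option on the listener.
static void
print_bound_port(nng_socket sock)
{
	nng_listener l;
	nng_sockaddr sa;

	if (nng_listen(sock, "tcp://127.0.0.1:0", &l, 0) != 0) {
		return;
	}
	if ((nng_listener_getopt_sockaddr(l, NNG_OPT_LOCADDR, &sa) == 0) &&
	    (sa.s_family == NNG_AF_INET)) {
		// sa_port is kept in network byte order.
		printf("listening on port %u\n",
		    (unsigned) ntohs(sa.s_in.sa_port));
	}
}
```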
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy
This is a major refactoring of two significant parts of the code base,
which are closely interrelated.
First, the aio and taskq framework have undergone a number of
simplifications and improvements. We have ditched a few parts of the
internal API (for example, tasks no longer support cancellation) that
weren't terribly useful but added a lot of complexity, and we've made
aio_schedule something that now checks for cancellation or other
"premature" completions. The aio framework now uses the tasks more
tightly, so that waiting on an aio can devolve into just
nni_task_wait(). We did have to add a "task_prep()" step to prevent
race conditions.
Second, the entire POSIX poller framework has been simplified and made
more robust and more scalable. There were some fairly inherent race
conditions around the shutdown/close code, where we *thought* we were
synchronizing against the other thread, but weren't doing so
adequately. With a cleaner design, we've been able to tighten up the
implementation to remove these race conditions, while substantially
reducing the chance of lock contention, thereby improving scalability.
The illumos poller also got a performance boost by polling for
multiple events.
In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, in addition to overall
improved reliability.
One area where we can still do better is that only a single poller
thread runs. Scaling this out has to be done differently for each
poller, and carefully, to ensure that close conditions are safe on all
pollers, and that there is no chance of deadlock or livelock waiting
for pfd finalizers.
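As a sketch of the tighter aio/task coupling described above: with the
task embedded in the aio, waiting for an aio can simply delegate to
waiting on its task. The structure and names below are illustrative
only, not the actual nng internals.

```c
// Illustrative only: the aio embeds a task, so waiting for the aio
// reduces to waiting for that task to become idle.
struct nni_aio {
	nni_task a_task;
	// ... other fields elided ...
};

void
nni_aio_wait(nni_aio *aio)
{
	nni_task_wait(&aio->a_task);
}
```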
We offer uid, gid, process id, and even zone id where we have them.
Docs and tests are provided.
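For context, these peer credentials are exposed to applications as
read-only pipe options. The sketch below is illustrative: it uses the
NNG_OPT_IPC_PEER_UID and NNG_OPT_IPC_PEER_PID option names and the
uint64 pipe getter from this era's public API (the option constants
may live in nng/transport/ipc/ipc.h depending on the release).

```c
#include <stdio.h>
#include <nng/nng.h>

// Report the credentials of the peer that sent a message over IPC.
static void
report_peer(nng_msg *msg)
{
	nng_pipe p = nng_msg_get_pipe(msg);
	uint64_t uid, pid;

	if ((nng_pipe_getopt_uint64(p, NNG_OPT_IPC_PEER_UID, &uid) == 0) &&
	    (nng_pipe_getopt_uint64(p, NNG_OPT_IPC_PEER_PID, &pid) == 0)) {
		printf("peer uid %llu, pid %llu\n",
		    (unsigned long long) uid, (unsigned long long) pid);
	}
}
```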
fixes #382 Permissions support for IPC on POSIX
This adds support for permission management on Windows and
POSIX systems. There are two different properties, one per
platform, and they behave very differently.
Tests and documentation are included.
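Roughly speaking, the two properties are set as listener options:
NNG_OPT_IPC_PERMISSIONS takes a POSIX file mode, while
NNG_OPT_IPC_SECURITY_DESCRIPTOR takes a Windows SECURITY_DESCRIPTOR
pointer. The sketch below is illustrative and uses this era's setter
spellings; the sd argument is a hypothetical, previously built
security descriptor.

```c
#include <nng/nng.h>

// Restrict who may connect to an IPC listener before starting it.
static int
listen_ipc_restricted(nng_socket sock, void *sd)
{
	nng_listener l;
	int          rv;

	rv = nng_listener_create(&l, sock, "ipc:///tmp/example.sock");
	if (rv != 0) {
		return (rv);
	}
#ifdef _WIN32
	// Windows: access control via a security descriptor on the pipe.
	rv = nng_listener_setopt_ptr(l, NNG_OPT_IPC_SECURITY_DESCRIPTOR, sd);
#else
	// POSIX: a file mode applied to the IPC socket file.
	(void) sd;
	rv = nng_listener_setopt_int(l, NNG_OPT_IPC_PERMISSIONS, 0600);
#endif
	if (rv == 0) {
		rv = nng_listener_start(l, 0);
	}
	return (rv);
}
```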
This closes a fundamental flaw in the way aio structures were
handled. In particular, aio expiration could race ahead and
fire before the aio was properly registered by the provider.
This ultimately led to the possibility of duplicate completions
on the same aio.
The solution involved breaking up nni_aio_start into two functions.
nni_aio_begin (which can be run outside of external locks) simply
validates that nni_aio_fini() has not been called, and clears certain
fields in the aio to make it ready for use by the provider.
nni_aio_schedule does the work to register the aio with the expiration
thread, and should only be called when the aio is actually scheduled
for asynchronous completion. nni_aio_schedule_verify does the same thing,
but returns NNG_ETIMEDOUT if the aio has a zero length timeout.
This change has a small negative performance impact. We have plans to
rectify that by converting nni_aio_begin to use a lockless flag for
the aio->a_fini bit.
While we were here, we fixed some error paths in the POSIX subsystem,
which would have returned incorrect error codes, and we made some
optimizations in the message queues to reduce conditionals while holding
locks in the hot code path.
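To make the split concrete, a provider's submit path now follows
roughly the pattern sketched below. The provider type and helpers are
hypothetical and the return conventions are simplified; this is an
illustration of the begin/schedule pattern, not code from the change.

```c
// Illustrative submit path under the nni_aio_begin/nni_aio_schedule split.
static void
provider_send(provider *p, nni_aio *aio)
{
	int rv;

	// Safe to call without the provider lock; bails out if the aio is
	// already being finalized.
	if (nni_aio_begin(aio) != 0) {
		return;
	}

	nni_mtx_lock(&p->mtx);
	if (provider_can_send_now(p)) {
		// Synchronous completion: the aio is never registered with
		// the expiration thread at all.
		size_t n = provider_do_send(p, aio);
		nni_mtx_unlock(&p->mtx);
		nni_aio_finish(aio, 0, n);
		return;
	}

	// Only now, with the lock held and the operation truly going
	// asynchronous, register for expiration and cancellation.
	if ((rv = nni_aio_schedule(aio, provider_cancel, p)) != 0) {
		nni_mtx_unlock(&p->mtx);
		nni_aio_finish_error(aio, rv);
		return;
	}
	nni_list_append(&p->sendq, aio);
	nni_mtx_unlock(&p->mtx);
}
```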
fixes #290 sockaddr improvements
We changed the timers to use msec granularity, but we missed this
one. The result is that in certain code flows the IPC connection
times can look quite long -- with weird 10 sec stalls.
ConnectNamedPipe can return ERROR_PIPE_CONNECTED, and does not
enqueue a completion packet if it does. So we need to handle
that specially.
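A minimal sketch of the special case, assuming an overlapped named
pipe handle that has been associated with an I/O completion port; the
two helper functions are hypothetical, while ConnectNamedPipe and the
error codes are the documented Win32 behavior.

```c
#include <windows.h>

// Start an overlapped accept on a named pipe instance.
static void
start_accept(HANDLE pipe_handle, OVERLAPPED *olpd)
{
	if (!ConnectNamedPipe(pipe_handle, olpd)) {
		DWORD err = GetLastError();

		switch (err) {
		case ERROR_IO_PENDING:
			// Normal case: the IOCP delivers a completion packet.
			break;
		case ERROR_PIPE_CONNECTED:
			// A client connected before we called this.  No
			// completion packet will be queued, so finish the
			// accept synchronously right here.
			complete_accept(pipe_handle); // hypothetical helper
			break;
		default:
			fail_accept(pipe_handle, err); // hypothetical helper
			break;
		}
	}
}
```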
We enabled verbose compiler warnings, and found a lot of issues.
Some of these were even real bugs. As a bonus, we actually save
some initialization steps in the compat layer, and avoid passing
some variables we don't need.
It turns out that at least on some systems, CreateNamedPipeW
does not behave as we'd expect. Furthermore, using the Unicode
variants seems to have a negative impact on compatibility with
legacy nanomsg.
This addresses the use of the pipe special field, and eliminates it.
The message APIs (recvmsg, sendmsg) still need to be updated as well,
but I want to handle that as part of a separate issue.
While here we fixed various compiler warnings, etc.
While here, we cleaned up a few other unused variables in the HTTP code.
This introduces enough of the HTTP API to fully support server
applications, including creation of websocket style protocols,
pluggable handlers, and so forth.
We have also introduced scatter/gather I/O (rudimentary) for
aios, and made other enhancements to the AIO framework. The
internals of the AIOs themselves are now fully private, and we
have eliminated the aio->a_addr member, with plans to remove the
pipe and possibly message members as well.
A few other minor issues were found and fixed as well.
The HTTP API includes request, response, and connection objects,
which can be used with both servers and clients. It also defines
the HTTP server and handler objects, which support server applications.
Support for client applications will require a client object to be
exposed, and that should be happening shortly.
None of this is "documented" yet, but again, we will follow up shortly.
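As a rough flavor of the server side of this API, the sketch below
registers a handler and starts a server. The names follow the public
nng HTTP API as it later stabilized (nng_http_server_hold,
nng_http_handler_alloc, and friends), so details may differ from the
code as first introduced here.

```c
#include <nng/nng.h>
#include <nng/supplemental/http/http.h>

// Handler callback: attach a small response to the aio and finish it.
static void
hello_handler(nng_aio *aio)
{
	nng_http_res *res;

	if (nng_http_res_alloc(&res) != 0) {
		nng_aio_finish(aio, NNG_ENOMEM);
		return;
	}
	if (nng_http_res_copy_data(res, "hello\n", 6) != 0) {
		nng_http_res_free(res);
		nng_aio_finish(aio, NNG_ENOMEM);
		return;
	}
	nng_aio_set_output(aio, 0, res);
	nng_aio_finish(aio, 0);
}

// Register the handler at /hello and start serving the given URL.
static int
serve(const char *url_str)
{
	nng_url *         url;
	nng_http_server * srv;
	nng_http_handler *h;
	int               rv;

	if ((rv = nng_url_parse(&url, url_str)) != 0) {
		return (rv);
	}
	if (((rv = nng_http_server_hold(&srv, url)) == 0) &&
	    ((rv = nng_http_handler_alloc(&h, "/hello", hello_handler)) == 0) &&
	    ((rv = nng_http_server_add_handler(srv, h)) == 0)) {
		rv = nng_http_server_start(srv);
	}
	nng_url_free(url);
	return (rv);
}
```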
fixes #155 POSIX TCP & IPC could avoid a lot of context switches
This moves the DNS-related functionality into common code, and also
moves all the URL parsing stuff out of the platform-specific code
and into the transports. Now the transports just take sockaddrs at
initialization. (We may want to move this to a later stage.)
We also add UDP resolution as a separate API.
We only compile files that are appropriate for the platform. (We
still have guards in place, to allow for a future single .c file
to be built from all the sources.) We also remove the subsystem
defines; if a new platform needs to deviate from POSIX in ways beyond
what we intended here, then that platform should just copy those parts
into a new platform directory, rather than cross-including portions
from POSIX.
If the underlying platform fails to initialize a lock or condition
variable (FreeBSD is the only one I'm aware of that does this!), we
use a global lock or condition variable instead.
This means that our lock initializers never ever fail.
Probably we could eliminate most of this for Linux and Darwin, since
on those platforms, mutex and condvar initialization reasonably never
fails. Initial benchmarks show little difference either way -- so we
can revisit (optimize) later.
This removes a lot of otherwise untested code in error cases and so forth,
improving coverage and resilience in the face of allocation failures.
Platforms other than POSIX should follow a similar pattern if they need
this. (VxWorks, I'm thinking of you.) Most sane platforms won't have
an issue here, since normally these initializations do not need to allocate
memory. (Reportedly, even FreeBSD has plans to "fix" this in libthr2.)
While here, some bugs were fixed in initialization & teardown.
The fallback code is properly tested with dedicated test cases.
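The fallback pattern is roughly the one sketched below (illustrative
only, not the actual nng code): a statically initialized global mutex
backstops any object whose own initialization fails, so lock
initialization as seen by callers can never fail.

```c
#include <pthread.h>
#include <stdbool.h>

// Shared fallback lock; static initialization cannot fail.
static pthread_mutex_t fallback_mtx = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
	pthread_mutex_t mtx;
	bool            fallback; // use the shared global lock instead
} my_mutex; // hypothetical type

void
my_mutex_init(my_mutex *m)
{
	// pthread_mutex_init can (rarely, e.g. on FreeBSD) fail with ENOMEM.
	m->fallback = (pthread_mutex_init(&m->mtx, NULL) != 0);
}

void
my_mutex_lock(my_mutex *m)
{
	pthread_mutex_lock(m->fallback ? &fallback_mtx : &m->mtx);
}

void
my_mutex_unlock(my_mutex *m)
{
	pthread_mutex_unlock(m->fallback ? &fallback_mtx : &m->mtx);
}
```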
This passes valgrind 100% clean for both helgrind and deep leak
checks. This represents a complete rethink of how the AIOs work,
and much simpler synchronization; the provider API is a bit simpler
to boot, as a number of failure modes have been simply eliminated.
While here a few other minor bugs were squashed.
Apparently there are circumstances when a pipedesc may get orphaned from the
pollq. This triggers an assertion failure when it occurs. I am still
trying to understand how this can occur. Stay tuned.
We have seen leaks of pipes causing test failures (e.g. the Windows
IPC test) due to EADDRINUSE. This was caused by a case where we
failed to pass the pipe up because the AIO had already been canceled,
and we didn't realize that we had orphaned the pipe. The fix is to
add a return value to nni_aio_finish, and verify that we did finish
properly, or, if we did not, free the pipe ourselves. (The
zero return from nni_aio_finish indicates that it accepts ownership
of resources passed via the aio.)
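The resulting pattern in an accept path looks roughly like the sketch
below. The surrounding names are hypothetical and the way the pipe is
attached to the aio is simplified; the point is the ownership check on
the nni_aio_finish return value.

```c
// Illustrative accept completion: if nni_aio_finish() reports that the
// aio did not accept ownership (e.g. it was already canceled), the pipe
// is still ours and must be reclaimed here to avoid a leak.
static void
accept_done(my_ep *ep, my_pipe *p, nni_aio *aio)
{
	nni_aio_set_output(aio, 0, p); // hand the new pipe up via the aio
	if (nni_aio_finish(aio, 0, 0) != 0) {
		my_pipe_close(p); // hypothetical cleanup helpers
		my_pipe_free(p);
	}
}
```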
This is only lightly tested, and I expect that there remain
some race conditions. Endpoint logic in particular needs
work.
We still have endpoint-related races apparently; we need to examine
the possibility of handling endpoints much like we do pipes, which
seem to be race free.
The IOCP code has been refactored to improve reuse, and hopefully
will be easier to use with TCP now. Windows IPC using Named Pipes
is mostly working -- only mostly, because there is a gnarly close race.
It seems that we need to take some more care to ensure that the
pipe is not released while requests may be outstanding -- so some
deeper synchronization between the IOCP callback logic and the
win_event code is needed. In short, we need to add a condvar to
the event, and notice when we have submitted work for async completion,
and make sure we flag the event "idle" after either completion or
cancellation of the event.
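The synchronization being described would look roughly like the sketch
below. The struct and functions are illustrative (loosely echoing the
internal nni_mtx/nni_cv primitives, assumed to have been initialized
elsewhere), not the actual win_event code.

```c
// Mark the event busy when an overlapped operation is submitted, and
// have teardown wait until the IOCP callback marks it idle again.
typedef struct win_event {
	nni_mtx mtx; // assumed initialized with nni_mtx_init()
	nni_cv  cv;  // assumed initialized with nni_cv_init(&cv, &mtx)
	bool    busy;
} win_event; // illustrative only

static void
win_event_submit(win_event *evt)
{
	nni_mtx_lock(&evt->mtx);
	evt->busy = true;
	nni_mtx_unlock(&evt->mtx);
}

// Called from the IOCP callback on completion or cancellation.
static void
win_event_complete(win_event *evt)
{
	nni_mtx_lock(&evt->mtx);
	evt->busy = false;
	nni_cv_wake(&evt->cv);
	nni_mtx_unlock(&evt->mtx);
}

static void
win_event_fini(win_event *evt)
{
	nni_mtx_lock(&evt->mtx);
	while (evt->busy) {
		nni_cv_wait(&evt->cv);
	}
	nni_mtx_unlock(&evt->mtx);
}
```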
Test code needs to use the static libraries so that it can get access
to the entire set of symbols, including private ones that are not exported.
There are lots of changes here, mostly stuff we did in support of
Windows TCP. However, we also fixed some bugs, added some new error
codes, and generalized the handling of some failures during accept.
Windows IPC (Named Pipes) is still missing.
Windows is getting there. It needs a couple more hours to enable
everything, especially IPC, and most of the work at this point is
probably some combination of debugging and tweaking things like
error handling.