This also introduces a more efficient reference-counting scheme based
on atomics rather than locks.
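For illustration, the general shape of an atomics-based reference
count, as a minimal sketch in C11 (the library uses its own internal
wrappers; obj_t and obj_free here are hypothetical):

    #include <stdatomic.h>

    typedef struct obj {
        atomic_int refcnt;
    } obj_t;

    extern void obj_free(obj_t *); // hypothetical destructor

    static void
    obj_hold(obj_t *o)
    {
        // Taking a reference needs no lock.
        atomic_fetch_add(&o->refcnt, 1);
    }

    static void
    obj_rele(obj_t *o)
    {
        // fetch_sub returns the prior value; 1 means this call
        // dropped the last reference.
        if (atomic_fetch_sub(&o->refcnt, 1) == 1) {
            obj_free(o);
        }
    }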
This is a major change, which moves all transports to a polymorphic
stream API. Related bugs have been fixed along the way, and the man
pages have been updated.
The old non-polymorphic APIs are now removed. This is a breaking
change, but the old APIs were never part of any released public API.
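The polymorphic API referred to here is internal; as a rough
illustration, this is the shape its public equivalent (nng_stream)
later took, assuming a TCP URL and omitting error checks:

    nng_stream_dialer *d;
    nng_aio *aio;

    nng_stream_dialer_alloc(&d, "tcp://127.0.0.1:9000");
    nng_aio_alloc(&aio, NULL, NULL);
    nng_stream_dialer_dial(d, aio);
    nng_aio_wait(aio);
    if (nng_aio_result(aio) == 0) {
        // The same stream operations work regardless of transport:
        // TCP, IPC, TLS, or WebSocket.
        nng_stream *s = nng_aio_get_output(aio, 0);
        nng_stream_free(s);
    }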
This changes much of the internal API for TCP option handling, and
includes hooks for some of this in various consumers. Note that the
consumers still need additional work to complete them, which will be
part of providing public "raw" TLS and WebSocket APIs.
We would also like to finish addressing the call sites of
nni_tcp_listener_start() that assume the sockaddr is modified --
it would be superior to use the NNG_OPT_LOCADDR option. That will be
addressed in a follow-up PR.
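For reference, the preferred pattern in public-API terms (a sketch
using the NNG 1.0-era typed getter; l is a started listener):

    nng_sockaddr sa;

    // Ask the listener for the address it actually bound, rather
    // than relying on the start call mutating a caller's sockaddr.
    nng_listener_getopt_sockaddr(l, NNG_OPT_LOCADDR, &sa);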
This change makes embedding nng + nngpp (or other projects depending on
nng) in cmake easier. The header files are moved to a separate include
directory. This also makes installation of the headers easier, and
allows clearer identification of private vs. public header files.
Some additional cleanups were performed by @gdamore, but the main
credit for this change belongs to @gregorburger.
fixes #776 Configuration of mbedTLS should warn about license
This is a significant refactor of the library configuration.
We use the modern package configuration helper, with a template
script that also does the find_package dance for any of our
dependencies.
We have also restructured the code so that most protocols and
transports have their configuration isolated in their own CMakeLists
file, reducing the size of the global CMakeLists file.
This changes the signature of the aio cancellation routines
to take the cancellation argument directly, so we do not need to
look it up using nni_aio_get_prov_data.
We should probably consider eliminating nni_aio_get_prov_data
and company, and changing prov_extra to reflect prov_data. Later.
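A sketch of what a provider cancel routine looks like under the new
signature; prov_t and its members are illustrative, not taken from
the source:

    static void
    prov_cancel(nni_aio *aio, void *arg, int rv)
    {
        prov_t *p = arg; // the argument arrives directly; no lookup

        nni_mtx_lock(&p->mtx);
        if (nni_list_active(&p->waitq, aio)) {
            nni_list_remove(&p->waitq, aio);
            nni_aio_finish_error(aio, rv);
        }
        nni_mtx_unlock(&p->mtx);
    }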
fixes #179 DNS resolution should be done at connect time
fixes #586 Windows IO completion port work could be better
fixes #339 Windows iocp could use synchronous completions
fixes #280 TCP abstraction improvements
This is a rather monstrous set of changes, which refactors TCP, and
the underlying Windows I/O completion path logic, in order to obtain
a cleaner, simpler API, with support for asynchronous DNS lookups
performed at connect rather than at initialization time, the ability
to have multiple connects or accepts pending, and fewer extraneous
function calls.
The Windows code also benefits from greatly reduced context switching,
fewer lock operations, and a reduced number of system calls on the
hot code path. (We use automatic event resetting instead of manual.)
Some dead code was removed as well, and a few potential edge-case
leaks on failure paths (in the websocket code) were plugged.
Note that all TCP-based transports benefit from this work. The IPC
code on Windows still uses the legacy IOCP for now, as does the UDP
code (used for ZeroTier). We will be converting those soon too.
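In public-API terms, connect-time resolution means a non-blocking
dial can be issued before the peer name is resolvable; a minimal
sketch (the URL is hypothetical):

    nng_dialer d;

    // With NNG_FLAG_NONBLOCK, the DNS lookup happens asynchronously
    // as part of each connect attempt, rather than up front.
    nng_dial(sock, "tcp://peer.example.com:4444", &d, NNG_FLAG_NONBLOCK);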
* fixes #419 want to nni_aio_stop without blocking
This actually introduces an nni_aio_close() API that causes
nni_aio_begin to return NNG_ECLOSED, while scheduling a callback
on the AIO to complete with NNG_ECLOSED as well. This should be called
in non-blocking close() contexts instead of nni_aio_stop(), and
the cases where we call nni_aio_fini() multiple times are updated
to add nni_aio_stop() calls on all "interlinked" aios before
finalizing them.
Furthermore, we call nni_aio_close() as soon as practical in the
close path. This closes an annoying race condition where the
callback from a lower subsystem could wind up rescheduling an
operation that we wanted to abort.
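A sketch of the resulting teardown ordering; ep_t and its aio
members are illustrative:

    static void
    ep_close(ep_t *ep)
    {
        // Non-blocking close: abort pending I/O without waiting, so
        // a lower layer's callback cannot reschedule an aborted
        // operation.
        nni_aio_close(ep->tx_aio);
        nni_aio_close(ep->rx_aio);
    }

    static void
    ep_fini(ep_t *ep)
    {
        // Stop all interlinked aios before finalizing any one of them.
        nni_aio_stop(ep->tx_aio);
        nni_aio_stop(ep->rx_aio);
        nni_aio_fini(ep->tx_aio);
        nni_aio_fini(ep->rx_aio);
    }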
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy
This is a major refactoring of two significant parts of the code base,
which are closely interrelated.
First, the aio and taskq frameworks have undergone a number of
simplifications and improvements. We have ditched a few parts of the
internal API (for example, tasks no longer support cancellation) that
weren't terribly useful but added a lot of complexity, and we've made
aio_schedule something that now checks for cancellation or other
"premature" completions. The aio framework now uses tasks more
tightly, so that aio wait can devolve into just nni_task_wait(). We
did have to add a "task_prep()" step to prevent race conditions.
Second, the entire POSIX poller framework has been simplified, and
made more robust and more scalable. There were some fairly inherent
race conditions around the shutdown/close code, where we *thought* we
were synchronizing against the other thread, but weren't doing so
adequately. With a cleaner design, we've been able to tighten up the
implementation to remove these race conditions, while substantially
reducing the chance of lock contention, thereby improving scalability.
The illumos poller also got a performance boost by polling for
multiple events.
In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, in addition to overall
improved reliability.
One area where we can still do better is that only a single poller
thread is run. Scaling this out has to be done differently for each
poller, and done carefully, to ensure that close conditions are safe
on all pollers, and that no chance of deadlock/livelock waiting for
pfd finalizers can occur.
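The tightened task usage might look roughly like this (a sketch; t is
an initialized nni_task, and the exact calls beyond the task_prep()
and nni_task_wait() named above are assumptions):

    nni_task_prep(t);     // mark the task busy first, so a racing
                          // wait cannot return before dispatch lands
    nni_task_dispatch(t); // queue the task for execution
    nni_task_wait(t);     // aio wait devolves into exactly this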
fixes #106 TCP keepalive tuning
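A minimal sketch, assuming the tuning is exposed through the boolean
NNG_OPT_TCP_KEEPALIVE option as in released NNG (sock is an open
socket):

    bool on = true;

    // Enable TCP keepalive probes on future connections.
    nng_setopt_bool(sock, NNG_OPT_TCP_KEEPALIVE, on);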
This closes a fundamental flaw in the way aio structures were
handled. In particular, aio expiration could race ahead, and
fire before the aio was properly registered by the provider.
This ultimately led to the possibility of duplicate completions
on the same aio.
The solution involved breaking up nni_aio_start into two functions.
nni_aio_begin (which can be run outside of external locks) simply
validates that nni_aio_fini() has not been called, and clears certain
fields in the aio to make it ready for use by the provider.
nni_aio_schedule does the work to register the aio with the expiration
thread, and should only be called when the aio is actually scheduled
for asynchronous completion. nni_aio_schedule_verify does the same thing,
but returns NNG_ETIMEDOUT if the aio has a zero length timeout.
This change has a small negative performance impact. We have plans to
rectify that by converting nni_aio_begin to use a lockless flag for
the aio->a_fini bit.
While we were here, we fixed some error paths in the POSIX subsystem,
which would have returned incorrect error codes, and we made some
optimizations in the message queues to reduce conditionals while holding
locks in the hot code path.
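A sketch of the resulting two-phase provider pattern; prov_t, its
mutex, its wait queue, and prov_cancel are illustrative:

    void
    prov_recv(prov_t *p, nni_aio *aio)
    {
        int rv;

        // Phase 1: may run outside the provider lock; bail out if
        // the aio has been finalized.
        if (nni_aio_begin(aio) != 0) {
            return;
        }
        nni_mtx_lock(&p->mtx);
        // Phase 2: register with the expiration thread only once the
        // operation really goes asynchronous.
        if ((rv = nni_aio_schedule(aio, prov_cancel, p)) != 0) {
            nni_mtx_unlock(&p->mtx);
            nni_aio_finish_error(aio, rv);
            return;
        }
        nni_list_append(&p->waitq, aio);
        nni_mtx_unlock(&p->mtx);
    }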
on a python wrapper (cffi).
Mostly this is fixing inconsistencies between our public API and the
actual implementation.
While here, we also fixed a bug in the --file handling that we noticed
while writing the TLS handling.
We also fixed a warning in the core (msgqueue) for set-but-unused
variables.
We enabled verbose compiler warnings, and found a lot of issues.
Some of these were even real bugs. As a bonus, we actually save
some initialization steps in the compat layer, and avoid passing
some variables we don't need.
This introduces enough of the HTTP API to fully support server
applications, including creation of websocket-style protocols,
pluggable handlers, and so forth.
We have also introduced rudimentary scatter/gather I/O for
aios, and made other enhancements to the AIO framework. The
internals of the AIOs themselves are now fully private, and we
have eliminated the aio->a_addr member, with plans to remove the
pipe and possibly message members as well.
A few other minor issues were found and fixed as well.
The HTTP API includes request, response, and connection objects,
which can be used with both servers and clients. It also defines
the HTTP server and handler objects, which support server applications.
Support for client applications will require a client object to be
exposed, and that should be happening shortly.
None of this is "documented" yet, but again, we will follow up shortly.
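A sketch of the server side in the public form this API took (the URL
and content are hypothetical; error checks omitted):

    nng_url *url;
    nng_http_server *srv;
    nng_http_handler *h;

    nng_url_parse(&url, "http://127.0.0.1:8080");
    nng_http_server_hold(&srv, url);

    // A static handler: GET /hello returns fixed text content.
    nng_http_handler_alloc_static(&h, "/hello", "hello\n", 6, "text/plain");
    nng_http_server_add_handler(srv, h);
    nng_http_server_start(srv);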
fixes #210 Want NNG_OPT_TLS_* options for TLS transport
fixes #212 Eliminate a_endpt member of aio
It is useful to have support for validating that a peer *was*
verified, especially in the presence of optional validation.
We have added a property that does this, NNG_OPT_TLS_VERIFIED.
Further, all the old NNG_OPT_WSS_TLS_* property names have also been
renamed to generic NNG_OPT_TLS property names, which have been
moved to nng.h to facilitate reuse and sharing, with the comments
moved and corrected as well.
Finally, the man pages have been updated, with substantial
improvements to the nng_ws man page in particular.
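Checking the property on an established pipe might look like this; a
sketch using the NNG 1.0-era typed getter, where p is a pipe from an
active connection:

    bool verified = false;

    // True only when the TLS peer presented a certificate that was
    // actually validated for this connection.
    nng_pipe_getopt_bool(p, NNG_OPT_TLS_VERIFIED, &verified);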
This adds support for configuration of TLS websockets using files
for keys, certificates, and CRLs. Significant changes to the websocket,
TLS, and HTTP layers were made here. We now expect TLS configuration to
be tied to the HTTP layer, and the HTTP code creates default configuration
objects based on the URL supplied. (HTTP dialers and listeners are now
created with a URL rather than a sockaddr, giving them access to the
scheme as well.)
We fixed several bugs affecting TLS validation, and added a test suite
that confirms that validation works as it should. We also fixed an
orphaned socket during HTTP negotiation, which was responsible for an
occasional assertion failure when the HTTP handshake did not complete
successfully. Finally, several use-after-free races were closed.
TLS layer changes include reporting of handshake failures using newly
created "standard" error codes for peer authentication and cryptographic
failures.
The use of the '*' wildcard in URLs at bind time is no longer supported,
for websocket at least.
Documentation updates for all of this are in place as well.
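In the public form this configuration took, loading TLS material from
files looks roughly like this (paths are hypothetical; error checks
omitted):

    nng_tls_config *cfg;

    nng_tls_config_alloc(&cfg, NNG_TLS_MODE_SERVER);
    // Certificate and key loaded from a file; NULL means no password.
    nng_tls_config_cert_key_file(cfg, "server.pem", NULL);
    // CA certificates (and optionally CRLs) used to validate peers.
    nng_tls_config_ca_file(cfg, "ca.pem");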
This introduces the wss:// scheme, which is available and works like
the ws:// scheme when TLS is enabled in the library.
The library modularization is refactored somewhat, to make it easier
to use. There is now a single NNG_ENABLE_TLS option that enables TLS
support under the hood.
This also adds a new option for the TLS transport, NNG_OPT_TLS_CONFIG
(and a similar one for WSS, NNG_OPT_TLS_WSS_CONFIG), offering access
to the underlying TLS configuration object, which now has a public API
to go with it as well.
Note that it is also possible to use pure HTTPS using the *private*
API, which will be exposed in a public form soon.
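Attaching a prepared TLS configuration object to a dialer via the new
option might look like this (a sketch using the NNG 1.0-era setter;
the URL is hypothetical and cfg is a previously allocated
nng_tls_config):

    nng_dialer d;

    nng_dialer_create(&d, sock, "wss://example.com:4433/app");
    // Hand the dialer a pre-built TLS configuration object.
    nng_dialer_setopt_ptr(d, NNG_OPT_TLS_CONFIG, cfg);
    nng_dialer_start(d, 0);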