Also, some instances of nni_aio are changed to nng_aio. We want to harmonize
some of these types going forward, as that will reduce the need to include
headers, hopefully letting us get away with just "defs.h" in more places.
|
This introduces a new experimental transport for DTLS, which
provides encryption over UDP. It has a simpler protocol than
the current UDP SP protocol (we intend to address that by making
the UDP transport simpler in a follow-up!)
There are a few other fixes in the TLS layer itself, and in
the build, that were needed to accomplish this work.
Also, an endianness bug in the UDP protocol handling is
fixed here.
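As a quick illustration, a minimal dial sketch; the "dtls://" scheme
and the address/port here are assumptions for illustration only:

    // Hedged sketch: dialing the experimental DTLS transport.
    // Scheme and address are assumed; error checks elided.
    nng_socket s;
    nng_dialer d;
    nng_req0_open(&s);
    nng_dialer_create(&d, s, "dtls://127.0.0.1:5684");
    nng_dialer_start(d, 0);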
|
This allows us to break the assumption that the bottom half is
TCP, or even an nng_stream, since the DTLS layer will use a totally
different layer. Only nng_stream needs to support dial and listen.
Also: UDP: make the sockaddr arguments to open const.
Also: align the IPv6 address in the sockaddr (this allows for
efficient 64-bit or even 128-bit operations on these values).
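A rough sketch of the alignment idea (illustrative types only, not the
actual NNG declarations): keeping the 16 address bytes in a union with
64-bit words lets comparisons run a word at a time.

    // Illustrative only: union-based alignment for IPv6 addresses.
    #include <stdint.h>
    typedef union {
        uint8_t  a8[16];  // byte-wise view
        uint64_t a64[2];  // aligned 64-bit view for fast compares
    } ipv6_addr;          // hypothetical name

    int
    ipv6_equal(const ipv6_addr *x, const ipv6_addr *y)
    {
        return ((x->a64[0] == y->a64[0]) && (x->a64[1] == y->a64[1]));
    }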
|
The only thing using this was the transport lookups, but as
those transports are now fully initialized in nng_init, we
no longer need any locking there at all.
|
This avoids the need to perform multiple allocations for dialing,
eliminating additional potential failure paths. Cancellation is also
made simpler and more robust.
|
This is done for kqueue and poll. Others coming soon.
|
This allows us to explicitly stop streams, dialers, and listeners
before we start tearing things down. This hopefully will be useful
in resolving use-after-free bugs in http, tls, and websockets.
The new functions are not yet documented, but they are
nng_stream_stop, nng_stream_dialer_stop, and nng_stream_listener_stop.
They should be called after close, and before free. The close
functions now close without blocking, but the stop functions are
allowed to block.
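The intended teardown order, as a sketch (the function names are the
new ones listed above):

    // close first (non-blocking), then stop (may block until
    // outstanding operations quiesce), then free last
    nng_stream_close(s);
    nng_stream_stop(s);
    nng_stream_free(s);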
|
While TCP and UDP port numbers are 16 bits, ZT uses a larger (24-bit)
port number.
|
The idea here is to reduce the dynamic allocations used for
URLs, and also the back and forth of parsing between strings
and port numbers. We always resolve to a port number, and
this is easier for everyone.
The real goal in the long term is to eliminate dynamic allocation
of the URL fields altogether, but that requires a little more
work. This is a step in the right direction.
|
Applications must now call nng_init(), and may optionally supply
a set of parameters. The code is now safe for
multiple libraries to do this concurrently, meaning nng_fini
can no longer race against another instance starting up.
The nni_init checks on all public APIs are now removed.
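A minimal startup sketch; passing NULL for the parameters is assumed
to select defaults:

    // Hedged sketch: explicit library initialization.
    if (nng_init(NULL) != 0) {
        // initialization failed; bail out
    }
    // ... use the library ...
    nng_fini();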
|
(#1838)
This exposes the UDP methods as nng_ methods, and adds support for multicast
membership, which is useful in a variety of situations.
No documentation is provided, and applications should consider this API experimental.
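A rough sketch of the experimental API; the function names and
signatures here are assumed from the description, not confirmed
against the headers:

    // Assumed API sketch: open a UDP object and join a group.
    nng_udp      *udp;
    nng_sockaddr  self; // local address to bind, populated elsewhere
    nng_sockaddr  grp;  // multicast group address, populated elsewhere
    nng_udp_open(&udp, &self);
    nng_udp_multicast_membership(udp, &grp, true); // true = join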
|
The realtime clock is not (yet) exposed for user applications, but it
is used for logging timestamps accurately.
|
This transport only listens, and creates connections when
the application calls setopt on the listener with NNG_OPT_SOCKET_FD,
to pass a file descriptor. The FD is turned into an nng_stream,
and utilized for SP. The protocol over the descriptor is identical
to the TCP protocol (not the IPC protocol).
The options for peer information are borrowed from the IPC transport,
as they may be useful for these purposes.
This includes a test suite and full documentation.
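A short sketch of the flow described above, assuming the transport's
URL scheme is "socket://" and that fd is a connected descriptor
obtained elsewhere:

    // Hedged sketch: inject a pre-connected fd into the listener.
    nng_listener l;
    nng_listener_create(&l, sock, "socket://"); // scheme assumed
    nng_listener_start(l, 0);
    nng_listener_set_int(l, NNG_OPT_SOCKET_FD, fd); // creates a pipe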
|
This is initially used for TLS to make loading the engine pointer
faster, eliminating a much more expensive lock operation.
|
This provides the initial implementation, and converts the
transport lookup routines to use it. This is probably of limited
performance benefit, but rwlocks may be useful in future work.
|
This is a sweeping cleanup of the transport logic around options,
and it also harmonizes the names used when setting or getting options.
Additionally, legacy methods are now moved into a separate file and
can be elided via CMake or a preprocessor define.
Fundamentally, the ability to set transport options via the socket
is deprecated; there are numerous problems with this, and my earlier
approaches to dealing with it have been somewhat misguided. Further,
these approaches will not work with future protocol work that is
planned (where some options need to be negotiated with peers at the
time of connection establishment).
Documentation has been updated to reflect this. The test suites still
make rather broad use of the older APIs, and will be converted later.
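For illustration, a hedged sketch of the preferred pattern:
configure transport options on the dialer or listener, not the socket.

    // Hedged sketch: endpoint-scoped transport option.
    nng_dialer d;
    nng_dialer_create(&d, sock, "tcp://example.com:5555");
    nng_dialer_set_bool(d, NNG_OPT_TCP_NODELAY, true); // before start
    nng_dialer_start(d, 0);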
|
fixes #1317 IPv6 listener get port is incorrect
fixes #1319 Want symbolic service names
This is phase 1 of reducing the memory footprint of aios, and
also of pipes. This removes the largest consumer, the socket
address information, from the aio; it was only used by a few
consumers.
|
This also exposes an nng_thread_set_name() function for
applications to use. All NNG thread names start with "nng:".
Note that support is highly dependent on the operating system.
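A small usage sketch (the worker function here is hypothetical):

    // Hedged sketch: naming an application thread.
    static void
    worker(void *arg)
    {
        // ... thread body ...
    }

    nng_thread *t;
    if (nng_thread_create(&t, worker, NULL) == 0) {
        nng_thread_set_name(t, "app:worker"); // NNG's own use "nng:"
    }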
|
fixes #1224 wss fails on IPV6 address
This fixes bugs and inconsistencies in the way addresses are
handled for HTTP (and consequently websocket). The Host:
header line needs to look at numeric IPs and treat wildcards
as if they were not specified, and needs to understand the
bracketed IPv6 address format (e.g. [::1]:80).
|
The TTL in these cases should have been atomic. To facilitate
this, we introduce an atomic int for convenience. We
also introduce the conveniences nni_msg_must_append_u32() and
nni_msg_header_must_append_u32(), so that we can eliminate some
checks for failures that can never happen. Combined with a new test
for xreq, we have 100% coverage for xreq and more coverage for
the other REQ/REP protocols.
|
This also introduces an nni_atomic_cas64 to help with lock-free designs.
Some mechanical renaming was done in some of the protocols to fix spelling.
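For example, a hedged sketch of the kind of lock-free update this
enables (the internal type and function signatures are assumed):

    // Assumed internal API: lock-free 64-bit increment via CAS.
    nni_atomic_u64 val; // assumed type, initialized elsewhere
    uint64_t       old;
    do {
        old = nni_atomic_get64(&val);
    } while (!nni_atomic_cas64(&val, old, old + 1));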
|
This also introduces a new atomic boolean type, which we use
to track whether we've added the HTTP handler or not.
|
This also introduces more efficient reference counting based
on atomics, rather than locks.
|
This permits the stats dump to avoid some extra buffering,
and resolves a complaint about possible format buffer overruns.
|
This is a major change, and includes converting all transports to
use a polymorphic stream API. Related bugs have been fixed
along the way. Additionally, the man pages have changed.
The old non-polymorphic APIs are removed now. This is a breaking
change, but the old APIs were never part of any released public API.
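A brief sketch of the polymorphic usage: the same calls apply
whatever transport the URL names (error checks elided).

    // Hedged sketch: transport-agnostic stream dialing.
    nng_stream_dialer *d;
    nng_aio           *aio;
    nng_aio_alloc(&aio, NULL, NULL);
    nng_stream_dialer_alloc(&d, "tcp://127.0.0.1:4433");
    nng_stream_dialer_dial(d, aio);
    nng_aio_wait(aio);
    if (nng_aio_result(aio) == 0) {
        nng_stream *s = nng_aio_get_output(aio, 0);
        // s now supports nng_stream_send / nng_stream_recv
    }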
|
This changes much of the internal API for TCP option handling, and
includes hooks for some of this in various consumers. Note that the
consumers still need to have additional work done to complete them,
which will be part of providing public "raw" TLS and WebSocket APIs.
We would also like to finish addressing the call sites of
nni_tcp_listener_start() that assume the sockaddr is modified --
it would be superior to use the NNG_OPT_LOCADDR option. That will be
addressed in a follow-up PR.
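That is, instead of reading back a modified sockaddr, a caller would
do something like this sketch:

    // Hedged sketch: query the bound local address via the option.
    nng_sockaddr sa;
    nng_listener_get_addr(l, NNG_OPT_LOCADDR, &sa);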
|
This also makes some smaller related changes to use the new
nni_type instead of nni_opt_type.
|
This introduces a basic IPC API, modeled on the TCP API, for direct access.
Only connection options are exposed at present -- we need to add options
for dialers and listeners (and particularly listener settings for
permissions and security attributes). Documentation is still outstanding,
but a very limited test suite exists.
|
This also fixes a leaked TCP connection on a failure path, which we
noticed while working on this change.
|
fixes #596 POSIX IPC should move away from pipedesc/epdesc
fixes #598 TLS and TCP listeners could support NNG_OPT_LOCADDR
fixes #594 Windows IPC should use "new style" win_io code.
fixes #597 macOS could support PEER PID
This large change set cleans up the IPC support on Windows and
POSIX. This has the beneficial impact of significantly reducing
the complexity of the code, reducing locking, increasing
concurrency (multiple dials and accepts can be outstanding now),
and reducing context switches (we complete things synchronously now).
While here, we have added some missing option support, and fixed a
few more bugs that we found in the TCP code changes from last week.
|
fixes #179 DNS resolution should be done at connect time
fixes #586 Windows IO completion port work could be better
fixes #339 Windows iocp could use synchronous completions
fixes #280 TCP abstraction improvements
This is a rather monstrous set of changes, which refactors TCP, and
the underlying Windows I/O completion path logic, in order to obtain
a cleaner, simpler API, with support for asynchronous DNS lookups performed
at connect time rather than at initialization time, the ability to have multiple
connects or accepts pending, as well as fewer extraneous function calls.
The Windows code also benefits from greatly reduced context switching,
fewer lock operations performed, and a reduced number of system calls
on the hot code path. (We use automatic event resetting instead of manual.)
Some dead code was removed as well, and a few potential edge case leaks
on failure paths (in the websocket code) were plugged.
Note that all TCP based transports benefit from this work. The IPC code
on Windows still uses the legacy IOCP for now, as does the UDP code (used
for ZeroTier). We will be converting those soon too.
|
fixes #573 atomic flags could help
This introduces a new atomic flag, and reduces some of the global
locking. The lock refactoring work is not yet complete, but this is
a positive step forward, and should help with certain things.
While here we also fixed a compile warning due to incorrect types.
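A sketch of the pattern this enables; the flag type and function
names here are assumed internal spellings:

    // Assumed internal API: one-shot guard without a global lock.
    static nni_atomic_flag once;
    if (!nni_atomic_flag_test_and_set(&once)) {
        // first caller wins and performs setup exactly once
    }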
|
This should work on both Windows and the most common POSIX
variants. We will create at least two threads for running
completions, but there are numerous other threads in the code.
|
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy
This is a major refactoring of two significant parts of the code base,
which are closely interrelated.
First, the aio and taskq frameworks have undergone a number of simplifications
and improvements. We have ditched a few parts of the internal API (for
example tasks no longer support cancellation) that weren't terribly useful
but added a lot of complexity, and we've made aio_schedule something that
now checks for cancellation or other "premature" completions. The
aio framework now uses the tasks more tightly, so that aio wait can
devolve into just nni_task_wait(). We did have to add a "task_prep()"
step to prevent race conditions.
Second, the entire POSIX poller framework has been simplified, made
more robust, and made more scalable. There were some fairly inherent race
conditions around the shutdown/close code, where we *thought* we were
synchronizing against the other thread, but weren't doing so adequately.
With a cleaner design, we've been able to tighten up the implementation
to remove these race conditions, while substantially reducing the chance
for lock contention, thereby improving scalability. The illumos poller
also got a performance boost by polling for multiple events.
In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, in addition to overall
improved reliability.
One area where we can still do better is that only a single poller
thread is run. Scaling this out has to be done differently for each
poller, and carefully, to ensure that close conditions are safe on all
pollers and that no chance of deadlock/livelock waiting
for pfd finalizers can occur.
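As a sketch of what "devolving" means here (the member name is
assumed, not taken from the actual source):

    // Hedged sketch: aio wait delegates to the embedded task.
    void
    nni_aio_wait(nni_aio *aio)
    {
        nni_task_wait(&aio->a_task); // member name assumed
    }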