This is a significant refactor of the library configuration.
We use the modern package configuration helper, with a template
script that also does the find_package dance for any of our
dependencies.
We have also restructured the code so that most protocols and
transports have their configuration isolated in their own CMakeLists
file, reducing the size of the global CMakeLists file.
|
These options are undocumented -- for now. We need to get some
experience with them to ensure that they are worth keeping.
|
While the fragment size itself fits within 16 bits, when we do math
on it (such as multiplying by a fragment count), we may need to
calculate values that exceed 16 bits (e.g. 1MB). Therefore we should
treat this as a size_t, and cast it down to 16 bits only when we
write it out to the packet field.
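
A minimal sketch of the idea in isolation; the names here
(write_frag_size, frag_size, frag_count) are invented for
illustration and are not taken from the actual code:

    #include <stddef.h>
    #include <stdint.h>

    static void
    write_frag_size(uint8_t *packet, size_t frag_size, size_t frag_count)
    {
        /* Intermediate math can exceed 16 bits (e.g. 16KB * 64 = 1MB),
           so it is carried out in size_t... */
        size_t total = frag_size * frag_count;
        (void) total; /* e.g. used to size a reassembly buffer */

        /* ...and narrowed to 16 bits only at the wire boundary. */
        uint16_t wire = (uint16_t) frag_size;
        packet[0] = (uint8_t) (wire >> 8);
        packet[1] = (uint8_t) (wire & 0xff);
    }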
|
This changes the code to make use of a different project we have
created (libzerotiercore) that is "CMake clean". This should make
using and configuring this code *much* better. It may also have the
benefit of making this configuration work better for Windows systems.
|
This reverts 707, and provides a correct fix for busted
port discrimination. It also has a few minor cleanups
and reduces the listen timeout to 10 seconds.
|
This introduces new public APIs for obtaining statistics,
and adds some generic stats for dialers, listeners, pipes, and
sockets. Also added are stats for the inproc transport and the
pairv1 protocol. The other protocols and transports will have
stats added incrementally as time goes on.
A simple test program and man pages are provided for this.
Start by looking at nng_stat(5).
Statistics do have some impact, and they can be disabled by
setting the advanced NNG_ENABLE_STATS option to OFF (it is ON
by default) if you need to build a minimized configuration.
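
A sketch of walking the snapshot that the new API returns, assuming
the nng_stats_get() family described in nng_stat(5); error handling
is abbreviated:

    #include <stdio.h>
    #include <nng/nng.h>

    /* Recursively print a statistics subtree. */
    static void
    dump_stats(nng_stat *stat, int depth)
    {
        for (nng_stat *s = stat; s != NULL; s = nng_stat_next(s)) {
            printf("%*s%s: %llu\n", depth * 2, "", nng_stat_name(s),
                (unsigned long long) nng_stat_value(s));
            dump_stats(nng_stat_child(s), depth + 1);
        }
    }

    int
    main(void)
    {
        nng_stat *stats;
        if (nng_stats_get(&stats) != 0) { /* take a snapshot */
            return (1);
        }
        dump_stats(stats, 0);
        nng_stats_free(stats); /* the snapshot must be freed */
        return (0);
    }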
|
While here, we separate out the dialer and listener options, so
that options for tuning connections are only available on listeners.
|
This both makes new functions available to the core,
and addresses a bug which would have prevented building the
ZeroTier transport on Windows.
|
This also fixes a leaked TCP connection on a failure path, which we
noticed while working on this change.
|
This changes the signature of the aio cancellation routines
to take the argument for cancellation directly, so we do not
need to look up the argument using nni_aio_get_prov_data.
We should probably consider eliminating nni_aio_get_prov_data
and company, and changing prov_extra to reflect prov_data. Later.
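
A toy model of the shape of the change; the stand-in types and names
below are invented for illustration and are not the internal
definitions:

    #include <stdio.h>

    typedef struct toy_aio toy_aio;
    struct toy_aio {
        /* New style: the provider's argument is a parameter, so
           the routine no longer fishes it back out of the aio. */
        void (*cancel_fn)(toy_aio *, void *, int);
        void  *cancel_arg;
    };

    static void
    my_cancel(toy_aio *aio, void *arg, int rv)
    {
        (void) aio;
        printf("canceling %s, error %d\n", (const char *) arg, rv);
    }

    int
    main(void)
    {
        toy_aio a = { my_cancel, "my-endpoint" };
        a.cancel_fn(&a, a.cancel_arg, 1);
        return (0);
    }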
|
While here, perform a more aggressive close of the pipe on
reaping (IPC).
|
fixes #615 IPC close on Windows leaves handle open
This reintroduces the code to eradicate the separate transport
start function for IPC, as an incremental step towards the full
fix for #599 and #208. It also addresses #615 by including revised
logic for the handling of close.
|
This changeset needs work. We are seeing errors described by
This reverts commit d7f7c896c0ede24249ef63b1e45b1878bf4bd473.
|
fixes #208 pipe start should occur before connect / accept
fixes #616 Race condition closing between header & body
This refactors the transports to handle their own connection
handshaking before passing the pipe to the socket, which changes
and simplifies the setup. It also fixes a rather challenging race
condition described by #616.
|
fixes #596 POSIX IPC should move away from pipedesc/epdesc
fixes #598 TLS and TCP listeners could support NNG_OPT_LOCADDR
fixes #594 Windows IPC should use "new style" win_io code.
fixes #597 macOS could support PEER PID
This large change set cleans up the IPC support on Windows and
POSIX. This has the beneficial impact of significantly reducing
the complexity of the code, reducing locking, increasing
concurrency (multiple dials and accepts can be outstanding now),
and reducing context switches (we complete things synchronously
now).
While here we have added some missing option support, and fixed a
few more bugs that we found in the TCP code changes from last week.
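
As an illustration of the #598 part, a bound listener can report the
address it actually chose. This sketch assumes the era's typed
option call (nng_listener_getopt_sockaddr) and uses the rep protocol
purely for illustration; error handling is omitted:

    #include <stdio.h>
    #include <arpa/inet.h> /* ntohs, POSIX */
    #include <nng/nng.h>
    #include <nng/protocol/reqrep0/rep.h>

    int
    main(void)
    {
        nng_socket   s;
        nng_listener l;
        nng_sockaddr sa;

        nng_rep0_open(&s);
        nng_listen(s, "tcp://127.0.0.1:0", &l, 0); /* port 0: pick one */

        /* NNG_OPT_LOCADDR reports the address we actually bound. */
        nng_listener_getopt_sockaddr(l, NNG_OPT_LOCADDR, &sa);

        /* sa_port is in network byte order. */
        printf("bound to port %u\n", (unsigned) ntohs(sa.s_in.sa_port));

        nng_close(s);
        return (0);
    }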
|
fixes #179 DNS resolution should be done at connect time
fixes #586 Windows IO completion port work could be better
fixes #339 Windows iocp could use synchronous completions
fixes #280 TCP abstraction improvements
This is a rather monstrous set of changes, which refactors TCP, and
the underlying Windows I/O completion path logic, in order to obtain
a cleaner, simpler API, with support for asynchronous DNS lookups
performed at connect time rather than at initialization time, the
ability to have multiple connects or accepts pending, as well as
fewer extraneous function calls.
The Windows code also benefits from greatly reduced context
switching, fewer lock operations, and a reduced number of system
calls on the hot code path. (We use automatic event resetting
instead of manual.)
Some dead code was removed as well, and a few potential edge case
leaks on failure paths (in the websocket code) were plugged.
Note that all TCP based transports benefit from this work. The IPC
code on Windows still uses the legacy IOCP for now, as does the UDP
code (used for ZeroTier). We will be converting those soon too.
|
fixes #573 atomic flags could help
This introduces a new atomic flag, and reduces some of the global
locking. The lock refactoring work is not yet complete, but this is
a positive step forward, and should help with certain things.
While here we also fixed a compile warning due to incorrect types.
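
The flag type itself is internal; as a plain C11 sketch of the
general technique (a lock-free test-and-set standing in for a
mutex-protected boolean):

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag busy = ATOMIC_FLAG_INIT;

    /* Returns 1 if we claimed the flag, 0 if it was already set.
       No global lock is needed for the check-and-set. */
    static int
    try_claim(void)
    {
        return (!atomic_flag_test_and_set(&busy));
    }

    int
    main(void)
    {
        printf("first claim: %d\n", try_claim());  /* 1: we set it   */
        printf("second claim: %d\n", try_claim()); /* 0: already set */
        atomic_flag_clear(&busy);
        return (0);
    }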
|
This separates the plumbing for endpoints into distinct
dialers and listeners. Some of the transports could benefit
from further separation, but we've already done a rather larger
separation, e.g. for the websocket transport.
IPC would be a good one to update later, when we start looking
at exposing a more natural underlying API.
|
fixes #538 setopt should have an explicit chkopt routine
fixes #537 Internal TCP API needs better name separation
fixes #524 Option types should be "typed"
This is a rework of the option management code, to make it both
clearer and to prepare for further work to break up endpoints. This
reduces a certain amount of dead or redundant code, and actually
saves cycles when setting options, as some loops that should have
terminated early were not doing so.
|
This special-cases the URL parser for inproc and IPC URLs,
so that it no longer parses the portion after the :// as
anything special. This allows IPC URLs to be relative.
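
For instance, an IPC path that does not begin with a slash should
now be usable as-is. A sketch (the pair protocol and the path are
chosen only for illustration; error handling omitted):

    #include <nng/nng.h>
    #include <nng/protocol/pair0/pair.h>

    int
    main(void)
    {
        nng_socket s;

        nng_pair0_open(&s);

        /* Everything after "ipc://" is now treated as an opaque
           path, so a relative name like this is legal. */
        nng_listen(s, "ipc://run/mysock", NULL, 0);

        nng_close(s);
        return (0);
    }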
|
fixes #490 posix_epdesc use-after-free bug
fixes #489 Sanitizer based testing would help
fixes #492 Numerous memory leaks found with sanitizer
This introduces support for compiler-based sanitizers when using
clang or gcc (and not on Windows). See NNG_SANITIZER for possible
settings such as "thread" or "address".
Furthermore, we have fixed the issues we found with both the
thread and address sanitizers. We believe that the thread issues
pointed to a low frequency use-after-free responsible for rare
crashes in some of the tests.
The tests generally have their timeouts doubled when running under
a sanitizer, to account for the extra time the sanitizer can cause
them to take.
While here, we also changed the compat_ws test to avoid a
particularly painful and time consuming DNS lookup, and we made the
nngcat_unlimited test a bit more robust by waiting before sending
traffic.
|
fixes #464 Support NN_WS_MSG_TYPE option (compat)
fixes #415 websocket does not honor recv maxsize
This fixes a significant issue (with security implications) in
websocket, where the code did not honor a maximum receive size.
We've exposed a new (internal) API to set the limit on the frame
size, and we've changed the default to *unlimited* for that internal
API. (But the default for SP sockets, which are the only consumers
at present, is still 1MB, just like all other SP transports.)
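
At the SP socket level the limit remains tunable through the
ordinary receive-size option. A sketch using the era's typed setopt
call; the 8MB figure is purely illustrative:

    #include <nng/nng.h>
    #include <nng/protocol/pair0/pair.h>

    int
    main(void)
    {
        nng_socket s;

        nng_pair0_open(&s);

        /* Raise the 1MB default; incoming messages larger than
           this are rejected rather than buffered without bound. */
        nng_setopt_size(s, NNG_OPT_RECVMAXSZ, 8 * 1024 * 1024);

        nng_close(s);
        return (0);
    }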
|
fixes #429 async websocket reap leads to crash
This tightens up the code for shutdown, ensuring that transport
callbacks are completely stopped before advancing to the next step
of teardown of transport pipes or endpoints.
It also fixes a problem where task_wait would sometimes get "stuck"
as tasks transitioned between asynchronous and synchronous
completions.
Finally, it saves a few cycles by only calling a cancellation
callback once during cancellation of an aio.
|
* fixes #419 want to nni_aio_stop without blocking
This actually introduces an nni_aio_close() API that causes
nni_aio_begin to return NNG_ECLOSED, while scheduling a callback
on the aio to complete with NNG_ECLOSED as well. This should be
called in non-blocking close() contexts instead of nni_aio_stop(),
and the cases where we call nni_aio_fini() multiple times are
updated to add nni_aio_stop() calls on all "interlinked" aios
before finalizing them.
Furthermore, we call nni_aio_close() as soon as practical in the
close path. This closes an annoying race condition where the
callback from a lower subsystem could wind up rescheduling an
operation that we wanted to abort.
|
fixes #326 consider nni_taskq_exec_synch()
fixes #410 kqueue implementation could be smarter
fixes #411 epoll_implementation could be smarter
fixes #426 synchronous completion can lead to panic
fixes #421 pipe close race condition/duplicate destroy
This is a major refactoring of two significant parts of the code
base, which are closely interrelated.
First, the aio and taskq frameworks have undergone a number of
simplifications and improvements. We have ditched a few parts of the
internal API (for example, tasks no longer support cancellation)
that weren't terribly useful but added a lot of complexity, and
we've made aio_schedule something that now checks for cancellation
or other "premature" completions. The aio framework now uses the
tasks more tightly, so that aio wait can devolve into just
nni_task_wait(). We did have to add a "task_prep()" step to prevent
race conditions.
Second, the entire POSIX poller framework has been simplified, and
made more robust and more scalable. There were some fairly inherent
race conditions around the shutdown/close code, where we *thought*
we were synchronizing against the other thread, but weren't doing so
adequately. With a cleaner design, we've been able to tighten up the
implementation to remove these race conditions, while substantially
reducing the chance for lock contention, thereby improving
scalability. The illumos poller also got a performance boost by
polling for multiple events.
In highly "busy" systems, we expect to see vast reductions in lock
contention, and therefore greater scalability, in addition to
overall improved reliability.
One area where we can still do better is that there is only a single
poller thread running. Scaling this out is a task that has to be
done differently for each poller, and carefully, to ensure that
close conditions are safe on all pollers, and that no chance for
deadlock/livelock waiting for pfd finalizers can occur.
|
fixes #249 nngcat needs test cases
fixes #416 transports do not permit unlimited message size with 0
fixes #417 nngcat truncates input files to 4k
fixes #348 nngcat should have switch to adjust maximum receive size
|
We offer uid, gid, process id, and even zone id where we have them.
Docs and tests are provided.
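
A sketch of reading these from a received message, assuming the
IPC-specific option names (NNG_OPT_IPC_PEER_UID and friends) and the
typed pipe getopt of this era:

    #include <stdio.h>
    #include <nng/nng.h>
    #include <nng/transport/ipc/ipc.h>

    /* Given a message that arrived over IPC, report who sent it. */
    void
    report_peer(nng_msg *msg)
    {
        nng_pipe p = nng_msg_get_pipe(msg);
        uint64_t uid, pid;

        if (nng_pipe_getopt_uint64(p, NNG_OPT_IPC_PEER_UID, &uid) == 0) {
            printf("peer uid: %llu\n", (unsigned long long) uid);
        }
        if (nng_pipe_getopt_uint64(p, NNG_OPT_IPC_PEER_PID, &pid) == 0) {
            printf("peer pid: %llu\n", (unsigned long long) pid);
        }
    }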
|
fixes #382 Permissions support for IPC on POSIX
This adds support for permission management on Windows and
POSIX systems. There are two different properties (one for POSIX,
one for Windows), and they are very different from each other.
Tests and documentation are included.
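
On the POSIX side the property is a file mode applied to the
listener before it starts. A sketch, assuming NNG_OPT_IPC_PERMISSIONS
as the option name (the Windows property takes a security
descriptor, a very different value):

    #include <nng/nng.h>
    #include <nng/protocol/pair0/pair.h>
    #include <nng/transport/ipc/ipc.h>

    int
    main(void)
    {
        nng_socket   s;
        nng_listener l;

        nng_pair0_open(&s);
        nng_listener_create(&l, s, "ipc:///tmp/secured.sock");

        /* POSIX only: restrict the socket file to its owner. */
        nng_listener_setopt_int(l, NNG_OPT_IPC_PERMISSIONS, 0600);

        nng_listener_start(l, 0);
        nng_close(s);
        return (0);
    }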
|
fixes #106 TCP keepalive tuning
|
This closes a fundamental flaw in the way aio structures were
handled. In particular, aio expiration could race ahead, and
fire before the aio was properly registered by the provider.
This ultimately led to the possibility of duplicate completions
on the same aio.
The solution involved breaking up nni_aio_start into two functions.
nni_aio_begin (which can be run outside of external locks) simply
validates that nni_aio_fini() has not been called, and clears
certain fields in the aio to make it ready for use by the provider.
nni_aio_schedule does the work to register the aio with the
expiration thread, and should only be called when the aio is
actually scheduled for asynchronous completion.
nni_aio_schedule_verify does the same thing, but returns
NNG_ETIMEDOUT if the aio has a zero length timeout.
This change has a small negative performance impact. We have plans
to rectify that by converting nni_aio_begin to use a lockless flag
for the aio->a_fini bit.
While we were here, we fixed some error paths in the POSIX
subsystem, which would have returned incorrect error codes, and we
made some optimizations in the message queues to reduce conditionals
while holding locks in the hot code path.
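
A toy model of the two-phase protocol described above; the types and
names are invented stand-ins, not the internal definitions:

    #include <stdio.h>

    typedef struct {
        int fini;      /* set once the owner begins finalizing    */
        int scheduled; /* set only when async completion is armed */
    } toy_aio;

    static int
    toy_aio_begin(toy_aio *a)
    {
        /* Safe outside external locks: just validate and reset. */
        if (a->fini) {
            return (-1); /* owner is finalizing; reject the op */
        }
        a->scheduled = 0;
        return (0);
    }

    static void
    toy_aio_schedule(toy_aio *a)
    {
        /* Expiration may act only on a scheduled aio, so it can no
           longer fire before the provider registers the op. */
        a->scheduled = 1;
    }

    int
    main(void)
    {
        toy_aio a = { 0, 0 };
        if (toy_aio_begin(&a) == 0) {
            toy_aio_schedule(&a); /* only if completing async */
        }
        printf("scheduled: %d\n", a.scheduled);
        return (0);
    }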
|
Ultimately, this just removes the support for lingering altogether.
Based on prior experience, lingering has always been unreliable, and
was removed in legacy libnanomsg ages ago.
The problem is that operating system support for lingering is very
inconsistent at best, and for some transports the very concept is somewhat
meaningless.
Making things worse, we were never able to adequately capture an exit()
event from another thread -- so lingering was always a false promise.
Applications that need to be sure that messages are delivered should
either include an ack in their protocol, use req/rep (which has an ack),
or inject a suitable delay of their own.
For things going over local networks, an extra delay of 100 msec should
be sufficient *most of the time*.
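
A sketch of the last suggestion; the 100 msec figure comes straight
from the note above, and nng_msleep is the public sleep call:

    #include <nng/nng.h>

    /* After the final send, give in-flight messages a moment to
       drain before teardown; there is no linger to do it for us. */
    void
    close_with_delay(nng_socket s)
    {
        nng_msleep(100); /* ~100 msec usually suffices locally */
        nng_close(s);
    }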