This compiles correctly, but doesn't actually deliver events yet.
As part of this, I've made most of the initializable objects in nng
safe to tear down even if they were never initialized (or were merely
zeroed, e.g. via calloc). This makes it much easier to write the
error-path teardown code, since I can deinit everything without
worrying about which things have been initialized and which have not.
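A minimal sketch of that pattern, with hypothetical names rather than the real nng structures: the fini routine only releases what is non-NULL, so calling it on a calloc()'d or partially initialized object is harmless.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical object owning a single resource. */
typedef struct example {
	char *buf;    /* NULL until example_init() allocates it */
} example;

static int
example_init(example *e)
{
	memset(e, 0, sizeof (*e));
	if ((e->buf = malloc(128)) == NULL) {
		return (-1);
	}
	return (0);
}

/* Safe on a zeroed or partially initialized object: free(NULL) is a
 * no-op, so nothing bad happens if example_init() never ran or failed. */
static void
example_fini(example *e)
{
	free(e->buf);
	e->buf = NULL;
}

int
main(void)
{
	example *e = calloc(1, sizeof (*e));

	if (e == NULL) {
		return (1);
	}
	example_fini(e);            /* fine: object is still all zeros */
	if (example_init(e) == 0) {
		example_fini(e);    /* normal teardown */
	}
	free(e);
	return (0);
}
```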
As part of this, we've added a way to unblock callers waiting on a
message queue with an error, even without a signal channel. This was
necessary to interrupt blocked callers when a survey times out: they
get NNG_ETIMEDOUT, while callers arriving afterwards get NNG_ESTATE.
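A rough sketch of that mechanism using pthreads and made-up names (the real nng message queue differs, and presumably hands the original error to every caller that was already blocked; here the stored error is simply handed out and then degraded to a state error):

```c
#include <pthread.h>

/* Stand-ins for NNG_ETIMEDOUT and NNG_ESTATE. */
#define MY_ETIMEDOUT 1
#define MY_ESTATE    2

typedef struct myq {
	pthread_mutex_t mtx;
	pthread_cond_t  cv;
	int             err;    /* 0 while the queue is healthy */
	int             count;  /* messages available (payload elided) */
} myq;

/* Fail blocked (and future) callers with 'err'; no signal channel is
 * needed, we just record the error and wake everyone up. */
void
myq_set_error(myq *q, int err)
{
	pthread_mutex_lock(&q->mtx);
	q->err = err;
	pthread_cond_broadcast(&q->cv);
	pthread_mutex_unlock(&q->mtx);
}

/* Returns 0 and consumes a message, or returns the queue error. */
int
myq_get(myq *q)
{
	int rv = 0;

	pthread_mutex_lock(&q->mtx);
	while ((q->count == 0) && (q->err == 0)) {
		pthread_cond_wait(&q->cv, &q->mtx);
	}
	if (q->err != 0) {
		rv = q->err;
		q->err = MY_ESTATE;   /* later callers see a state error */
	} else {
		q->count--;
	}
	pthread_mutex_unlock(&q->mtx);
	return (rv);
}
```

On survey timeout the protocol would call something like myq_set_error(q, MY_ETIMEDOUT).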
Platforms must seed the PRNGs by providing an nni_plat_seed_prng()
routine. POSIX implementations using various options (including the
/dev/urandom device) are supplied.
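For illustration, a minimal POSIX seeding helper along those lines, reading from /dev/urandom; the actual nni_plat_seed_prng() signature isn't shown here, so a generic buffer-filling form is used.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Fill 'buf' with 'len' bytes of seed material from /dev/urandom.
 * Returns 0 on success, -1 on failure. */
int
seed_from_urandom(void *buf, size_t len)
{
	int    fd;
	size_t got = 0;

	if ((fd = open("/dev/urandom", O_RDONLY)) < 0) {
		return (-1);
	}
	while (got < len) {
		ssize_t n = read(fd, (char *) buf + got, len - got);

		if (n <= 0) {
			(void) close(fd);
			return (-1);
		}
		got += (size_t) n;
	}
	(void) close(fd);
	return (0);
}
```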
This adds the surveyor protocol and updates the respondent somewhat.
I've switched to using generic names for the per-pipe and per-socket
protocol data, which should make cut-and-paste from other protocol
implementations easier.
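Roughly the naming convention meant, shown with invented fields rather than the real surveyor state:

```c
#include <stdint.h>

typedef struct surv_sock surv_sock;   /* per-socket protocol data */
typedef struct surv_pipe surv_pipe;   /* per-pipe protocol data */

struct surv_sock {
	int      raw;       /* raw-mode flag */
	uint32_t nextid;    /* next survey id */
};

struct surv_pipe {
	surv_sock *sock;    /* back pointer to the owning socket data */
	void      *pipe;    /* underlying transport pipe (opaque here) */
};
```

With every protocol using the same sock/pipe structure names and layout, a new protocol can start life as a copy of an existing one.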
This should eliminate any need for protocols to do their own thread
management.
|
In an attempt to simplify the protocol implementations, and hopefully
track down a close-related race, most protocols now need not worry
about locks at all, and can use the socket lock if they do need one.
They also let the socket manage their workers, for the most part.
(The req protocol is special, since it needs a top-level work
distributor *and* a resender.)
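A bare-bones illustration of the locking idea, with hypothetical types standing in for the real socket and protocol structures: the protocol declares no mutex of its own and simply takes the socket's.

```c
#include <pthread.h>

typedef struct sock {
	pthread_mutex_t mtx;    /* the per-socket lock */
} sock;

typedef struct proto_data {
	sock *sock;             /* back pointer; no private mutex needed */
	int   pending;          /* some protocol state */
} proto_data;

/* Protocol state updates are guarded by the socket lock. */
void
proto_note_pending(proto_data *pd)
{
	pthread_mutex_lock(&pd->sock->mtx);
	pd->pending++;
	pthread_mutex_unlock(&pd->sock->mtx);
}
```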
PUSH attempts to do round-robin distribution. However, I noticed that
there is a bug in REQ, because REQ sockets will continue to pull down
work until the first one no longer has room. In theory this can lead
to scheduling imbalances when the load is very light. (Under heavy
load, the backpressure dominates.) Mangos suffers from the same
problem: it makes no attempt to deliver work equally, so each pipe
winds up pulling messages until its own buffers are full. This is bad.
We can borrow the logic here for both REQ and mangos.
None of this is tested yet.
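The round-robin send path looks roughly like the sketch below (the types and the trysend hook are invented): each send starts with the pipe after the one that last accepted a message, so no single pipe can soak up the whole stream under light load.

```c
#include <stddef.h>

/* Pipe entry on a circular doubly linked list. */
typedef struct pipe_ent {
	struct pipe_ent *next;
	struct pipe_ent *prev;
	int (*trysend)(struct pipe_ent *, void *);  /* 0 if msg accepted */
} pipe_ent;

typedef struct pusher {
	pipe_ent *current;   /* pipe that took the previous message */
} pusher;

int
push_send(pusher *p, void *msg)
{
	pipe_ent *start, *pe;

	if ((pe = p->current) == NULL) {
		return (-1);                  /* no pipes attached */
	}
	start = pe;
	do {
		pe = pe->next;                /* begin with the *next* pipe */
		if (pe->trysend(pe, msg) == 0) {
			p->current = pe;      /* remember who got this one */
			return (0);
		}
	} while (pe != start);
	return (-1);                          /* everyone is backpressured */
}
```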
This fixes several issues and brings PUB/SUB to operational
correctness. Test code to verify that is included.
|
Using a single function to get both the size and the length turned
out to be awkward; it is better to have a separate function for each.
While here, disable some of the initialization/fork checks, since it
turns out they aren't needed.
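The API-shape point, illustrated on a made-up buffer type rather than the real message accessors:

```c
#include <stddef.h>

typedef struct buf {
	size_t len;    /* bytes currently used */
	size_t cap;    /* bytes allocated */
} buf;

/* Awkward: callers must pass two out-parameters even when they only
 * care about one of the values. */
void
buf_dims(const buf *b, size_t *lenp, size_t *capp)
{
	*lenp = b->len;
	*capp = b->cap;
}

/* Friendlier: one small accessor per value. */
size_t
buf_len(const buf *b)
{
	return (b->len);
}

size_t
buf_cap(const buf *b)
{
	return (b->cap);
}
```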
On retry we were pushing the message back onto the queue. The problem
with this is that we could wind up pushing back many copies of the
message if no reader was present. The new code ensures that at most
one retry is outstanding.
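A sketch of the "single outstanding retry" idea with made-up types (the real resend path is timer-driven inside the protocol):

```c
#include <stddef.h>

typedef struct resender {
	void *retry;    /* the one outstanding retry, or NULL */
} resender;

/* Called when the retry timer fires for 'msg'.  If a retry is already
 * queued, do nothing, so duplicates cannot pile up while no reader is
 * draining them. */
void
resend_timeout(resender *r, void *msg)
{
	if (r->retry == NULL) {
		r->retry = msg;
	}
}

/* Called by the sender when it is ready to transmit; frees the slot. */
void *
resend_next(resender *r)
{
	void *msg = r->retry;

	r->retry = NULL;
	return (msg);
}
```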
This also adds checks in the protocols to verify that pipe peers
are of the proper protocol.
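Illustratively (the protocol numbers and the peer_proto field are invented here), such a check amounts to something like:

```c
#define MY_PROTO_REP 2    /* the only acceptable peer for REQ */

typedef struct mypipe {
	int peer_proto;    /* peer protocol number from the handshake */
} mypipe;

/* Pipe-attach hook: refuse pipes whose peer speaks the wrong protocol. */
int
req_add_pipe(mypipe *p)
{
	if (p->peer_proto != MY_PROTO_REP) {
		return (-1);    /* caller should close the pipe */
	}
	return (0);
}
```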
This uncovered a few problems: inproc was not moving the headers to
the body on transmit, and the message chunk allocator had a serious
bug leading to memory corruption. I've also added a message dumper,
which turns out to be incredibly useful during debugging.
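A toy dumper in the same spirit, working on plain buffers rather than the real message type:

```c
#include <stddef.h>
#include <stdio.h>

/* Print a labeled hex dump of a buffer, 16 bytes per row. */
void
dump_buf(const char *label, const unsigned char *buf, size_t len)
{
	printf("%s (%zu bytes):", label, len);
	for (size_t i = 0; i < len; i++) {
		if ((i % 16) == 0) {
			printf("\n  %04zx:", i);
		}
		printf(" %02x", buf[i]);
	}
	printf("\n");
}

int
main(void)
{
	unsigned char hdr[]  = { 0x00, 0x00, 0x00, 0x01 };
	unsigned char body[] = "hello";

	dump_buf("header", hdr, sizeof (hdr));
	dump_buf("body", body, sizeof (body) - 1);
	return (0);
}
```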
This may also address a race in closing down pipes. Pipes are now
always registered with the socket, and they always have both a sender
and a receiver thread. If the protocol doesn't need one or the other,
the stock thread just exits early.
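A rough sketch of the "stock thread exits early" arrangement, with invented types (in reality the socket manages the per-pipe workers):

```c
#include <pthread.h>
#include <stddef.h>

typedef struct worker_args {
	void (*handler)(void *);   /* protocol's send or recv loop, or NULL */
	void  *arg;
} worker_args;

/* Every pipe gets both a sender and a receiver thread; if the protocol
 * has no handler for one direction, that thread just returns at once. */
static void *
stock_worker(void *varg)
{
	worker_args *wa = varg;

	if (wa->handler != NULL) {
		wa->handler(wa->arg);
	}
	return (NULL);
}

/* Both threads are always created, which keeps pipe teardown uniform. */
int
start_pipe_workers(pthread_t *tx, pthread_t *rx,
    worker_args *txargs, worker_args *rxargs)
{
	if (pthread_create(tx, NULL, stock_worker, txargs) != 0) {
		return (-1);
	}
	if (pthread_create(rx, NULL, stock_worker, rxargs) != 0) {
		return (-1);
	}
	return (0);
}
```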
This is an untested implementation of REQ. It is based heavily on the
mangos implementation, so there is an excellent chance it will work.
|
At this point listening and dialing operations appear to function
properly. As part of this I had to break up the close logic, since
otherwise we ended up with a thread trying to reap itself. So there is
now a separate per-socket reaper thread for pipes. I also made the
lists a bit more rigid, and allocations now zero memory initially. (We
had bugs due to uninitialized memory, and rather than hunt them all
down, let's just initialize everything to sane zero values.)
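A simplified sketch of such a per-socket reaper, using pthreads and made-up structures: a pipe's worker thread cannot join itself, so a dying pipe hands itself to the reaper, which does the join and the final free.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct dead_pipe {
	struct dead_pipe *next;
	pthread_t         thr;    /* the pipe's worker thread to join */
} dead_pipe;

typedef struct reaper {
	pthread_mutex_t  mtx;
	pthread_cond_t   cv;
	dead_pipe       *list;
	int              closing;
} reaper;

/* Called from the pipe's own thread as it shuts down. */
void
reaper_enqueue(reaper *r, dead_pipe *dp)
{
	pthread_mutex_lock(&r->mtx);
	dp->next = r->list;
	r->list = dp;
	pthread_cond_signal(&r->cv);
	pthread_mutex_unlock(&r->mtx);
}

/* Body of the per-socket reaper thread; 'closing' is set (and the cv
 * signaled) when the socket itself is being torn down. */
void *
reaper_loop(void *arg)
{
	reaper *r = arg;

	pthread_mutex_lock(&r->mtx);
	while (!r->closing || r->list != NULL) {
		while (r->list != NULL) {
			dead_pipe *dp = r->list;

			r->list = dp->next;
			pthread_mutex_unlock(&r->mtx);
			pthread_join(dp->thr, NULL);  /* never our own thread */
			free(dp);
			pthread_mutex_lock(&r->mtx);
		}
		if (!r->closing) {
			pthread_cond_wait(&r->cv, &r->mtx);
		}
	}
	pthread_mutex_unlock(&r->mtx);
	return (NULL);
}
```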