Commit log
* As part of this, we've added a way to unblock callers in a message
  queue with an error, even without a signal channel. This was necessary
  to interrupt blockers upon survey timeout: blocked callers get
  NNG_ETIMEDOUT, and subsequent callers get NNG_ESTATE.
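The error-reporting state machine described above can be sketched in miniature. This is not the actual nng message-queue code; the `demo_*` names and error values are hypothetical stand-ins for the real internals:

```c
#include <assert.h>

/* Illustrative error codes standing in for NNG_ETIMEDOUT / NNG_ESTATE. */
#define DEMO_ETIMEDOUT 5
#define DEMO_ESTATE    10

/* A toy queue that can be "unblocked" with an error even without a
   signal channel: current waiters receive the unblock error once,
   and any later callers get an invalid-state error instead. */
typedef struct {
	int err;     /* error used to unblock waiters, 0 if none */
	int drained; /* set once the unblock error has been delivered */
} demo_queue;

static void
demo_queue_unblock(demo_queue *q, int err)
{
	q->err = err;
}

/* A receive attempt.  In the real (threaded) code this would block;
   here we only model which error each caller observes. */
static int
demo_queue_recv(demo_queue *q)
{
	if (q->err == 0) {
		return (0); /* would block or receive normally */
	}
	if (!q->drained) {
		q->drained = 1;
		return (q->err);    /* the interrupted waiter gets the error */
	}
	return (DEMO_ESTATE);       /* later callers get a "state" error */
}
```

A survey timeout would call `demo_queue_unblock(q, DEMO_ETIMEDOUT)`; the blocked receiver then returns `DEMO_ETIMEDOUT`, and any subsequent receive returns `DEMO_ESTATE`.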
* Platforms must seed the PRNGs by offering an nni_plat_seed_prng()
  routine. Implementations for POSIX using various options (including
  the /dev/urandom device) are supplied.
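A seeding routine of this shape might look like the following. This is a sketch in the spirit of the change, not the actual nni_plat_seed_prng() implementation; `demo_seed_prng` and its fallback behavior are assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Fill the caller's buffer with entropy from /dev/urandom, falling
   back to a weak time-based pattern only if the device is unavailable.
   (The fallback is NOT cryptographically strong.) */
static int
demo_seed_prng(void *buf, size_t len)
{
	FILE *f = fopen("/dev/urandom", "rb");

	if (f != NULL) {
		size_t n = fread(buf, 1, len, f);
		fclose(f);
		if (n == len) {
			return (0);
		}
	}

	/* Last-resort fallback: derive bytes from the current time. */
	uint8_t  *p = buf;
	uintptr_t t = (uintptr_t) time(NULL);
	for (size_t i = 0; i < len; i++) {
		p[i] = (uint8_t)(t >> ((i % sizeof (t)) * 8));
	}
	return (0);
}
```

The real code would presumably use this once at startup to seed a per-process PRNG rather than reading the device on every random draw.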
* There are multiple versions of uncrustify, and they do not always
  generate the same output. (Arguably this is due to defects in
  uncrustify.) So for now we punt and don't throw an error, but we do
  still generate the output. Pay attention to this going forward.
* This adds the surveyor protocol, and updates the respondent somewhat.
  I've switched to using generic names for per-pipe and per-socket
  protocol data. Hopefully this will make cut-and-paste from other
  protocol implementations easier.
* This should eliminate all need for protocols to do their own
  thread management tasks.
* Don't drop the lock in sock_close while holding the pipe reference.
  I'm pretty sure this is responsible for the use-after-free race.
* In an attempt to simplify the protocol implementation, and hopefully
  track down a close-related race, we've made it so that most protocols
  need not worry about locks, and can use the socket lock if they do
  need one. They also let the socket manage their workers, for the
  most part. (The req protocol is special, since it needs a top-level
  work distributor *and* a resender.)
* Pub pipes might not be connected yet. Do the dial from the pub side
  synchronously, so we can be sure no data will be lost.
* PUSH attempts to do a round-robin distribution. However, I noticed
  that there is a bug in REQ, because REQ sockets will continue to pull
  down work until the first one no longer has room. This can in theory
  lead to scheduling imbalances when the load is very light. (Under
  heavy load, the backpressure dominates.) Also, I note that mangos
  suffers from the same problem: it makes no attempt to deliver work
  equally; each pipe basically winds up pulling messages until its own
  buffers are full. This is bad. We can borrow the logic here for both
  REQ and mangos. None of this is tested yet.
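The round-robin scheme alluded to above can be sketched as follows. This is an illustration of the technique only; the `demo_*` types and the fixed pipe count are hypothetical, not the actual PUSH implementation:

```c
#include <assert.h>

#define NPIPES 3

/* Per-pipe state: how much buffer room remains, and (for illustration)
   how many messages this pipe has received. */
typedef struct {
	int room;
	int sent;
} demo_pipe;

/* A PUSH-like socket: a set of pipes plus a round-robin cursor, so a
   send does not keep favoring the first pipe until its buffer fills. */
typedef struct {
	demo_pipe pipes[NPIPES];
	int       next; /* index to try first on the next send */
} demo_push;

/* Deliver one message; returns the chosen pipe index, or -1 when every
   pipe is full (backpressure). */
static int
demo_push_send(demo_push *s)
{
	for (int i = 0; i < NPIPES; i++) {
		int idx = (s->next + i) % NPIPES;
		if (s->pipes[idx].room > 0) {
			s->pipes[idx].room--;
			s->pipes[idx].sent++;
			/* Start with the NEXT pipe on the following send. */
			s->next = (idx + 1) % NPIPES;
			return (idx);
		}
	}
	return (-1);
}
```

With equal buffer room on each pipe, successive sends land on pipes 0, 1, 2, 0, 1, 2, ... — the light-load imbalance described above disappears, while a full pipe is simply skipped.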
* This fixes several issues, and brings PUB/SUB to operational
  correctness. Included is test code to verify that.
* The use of a single function to get both size and length actually
  turned out to be awkward; better to have separate functions to get
  each. While here, disable some of the initialization/fork checks,
  because it turns out they aren't needed.
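The API shape being described might look like this. The `demo_msg` type and accessor names are hypothetical, chosen only to contrast the combined accessor with the separated ones:

```c
#include <assert.h>
#include <stddef.h>

/* A message-like object with a valid-data length and an allocated
   capacity ("size"). */
typedef struct {
	char  *buf;
	size_t len; /* bytes of valid data */
	size_t cap; /* bytes allocated */
} demo_msg;

/* The awkward combined form: callers must pass out-params even when
   they want only one of the two values. */
static void
demo_msg_dims(const demo_msg *m, size_t *lenp, size_t *capp)
{
	if (lenp != NULL) {
		*lenp = m->len;
	}
	if (capp != NULL) {
		*capp = m->cap;
	}
}

/* The separated form: one accessor per quantity, trivially usable
   inline at call sites. */
static size_t
demo_msg_len(const demo_msg *m)
{
	return (m->len);
}

static size_t
demo_msg_cap(const demo_msg *m)
{
	return (m->cap);
}
```

The separated accessors compose better in expressions (loop bounds, comparisons) without scratch variables, which is presumably the awkwardness the commit refers to.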
* On retry we were pushing the message back to the queue. The problem
  is that we could wind up pushing back many copies of the message if
  no reader was present. The new code ensures at most one retry is
  outstanding.
* Also, we added a two-phase shutdown for threads.
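A two-phase shutdown typically splits "ask the thread to stop" from "reap it", so a caller can signal many workers before blocking on any join. A minimal POSIX-threads sketch, with hypothetical `demo_*` names rather than the library's actual helpers:

```c
#include <assert.h>
#include <pthread.h>

typedef struct {
	pthread_t       tid;
	pthread_mutex_t lk;
	pthread_cond_t  cv;
	int             stop;
} demo_thr;

static void *
demo_worker(void *arg)
{
	demo_thr *t = arg;

	pthread_mutex_lock(&t->lk);
	while (!t->stop) {
		/* Real code would do work here, then wait for more. */
		pthread_cond_wait(&t->cv, &t->lk);
	}
	pthread_mutex_unlock(&t->lk);
	return (NULL);
}

static void
demo_thr_start(demo_thr *t)
{
	pthread_mutex_init(&t->lk, NULL);
	pthread_cond_init(&t->cv, NULL);
	t->stop = 0;
	pthread_create(&t->tid, NULL, demo_worker, t);
}

/* Phase 1: request shutdown and wake the worker.  Non-blocking. */
static void
demo_thr_stop(demo_thr *t)
{
	pthread_mutex_lock(&t->lk);
	t->stop = 1;
	pthread_cond_signal(&t->cv);
	pthread_mutex_unlock(&t->lk);
}

/* Phase 2: reap the worker.  Blocks until the thread exits. */
static void
demo_thr_wait(demo_thr *t)
{
	pthread_join(t->tid, NULL);
}
```

Because `stop` is checked under the mutex, the wakeup cannot be lost regardless of whether the worker reaches `pthread_cond_wait` before or after the stop request.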
* The throughput performance tests "try" to avoid hitting the
  allocator, but I think this actually causes other cache-related
  performance problems, and the receive thread still has to perform a
  message allocation, leading to really rotten performance. It's
  probably time to think about a message pool.
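A message pool of the kind being mused about can be as simple as a free list in front of malloc. This is a sketch of the idea only (the `pool_*` names are invented, and a real pool would need locking and a size cap):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct pool_msg {
	struct pool_msg *next; /* free-list link, valid only while pooled */
	char             body[128];
} pool_msg;

typedef struct {
	pool_msg *free_list;
	int       hits;   /* allocations served from the pool */
	int       misses; /* allocations that fell through to malloc */
} pool;

/* Allocate a message, reusing a previously freed one when possible so
   the hot path avoids the allocator (and its cache effects). */
static pool_msg *
pool_msg_alloc(pool *p)
{
	if (p->free_list != NULL) {
		pool_msg *m = p->free_list;
		p->free_list = m->next;
		p->hits++;
		return (m);
	}
	p->misses++;
	return (malloc(sizeof (pool_msg)));
}

/* Return a message to the pool instead of calling free(). */
static void
pool_msg_free(pool *p, pool_msg *m)
{
	m->next      = p->free_list;
	p->free_list = m;
}
```

In steady state the receive path would hit the free list almost every time, sidestepping the per-message allocation called out above.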