path: root/perf
* Eliminate legacy option settings, provide easier option IDs. (Garrett D'Amore, 2017-08-24)
  This eliminates all the old #defines and enum values, making all option
  IDs fully dynamic, and provides well-known string values for
  well-behaved applications. We have added tests of some of these
  options, including lookups and so forth. We have also fixed a few
  problems, including at least one crasher bug when the reconnect
  timeouts were zero.

  Protocol-specific options are now handled in the protocol. We will
  later be moving the initialization for a few of those well-known
  entities to the protocol startup code, following the PAIRv1 pattern.
  Applications must therefore not depend on the value of the integer
  IDs, at least until the application has opened a socket of the
  appropriate type.
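  By way of illustration, a string-keyed option can be set without the
  application ever naming an integer ID. This is a minimal sketch; the
  nng_setopt_ms() call and the "recv-timeout" key are assumptions
  modeled on later nng releases, not necessarily the API at this commit.

      #include <nng/nng.h>

      /* Sketch: set a timeout by its well-known string name. The
       * integer ID behind the name is assigned dynamically at runtime,
       * so the application never hard-codes a numeric constant. */
      int
      set_recv_timeout(nng_socket s, nng_duration ms)
      {
          return (nng_setopt_ms(s, "recv-timeout", ms));
      }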
* fixes #63 NNG_FLAG_SYNCH should be the default (Garrett D'Amore, 2017-08-14)
  Also enables creating endpoints that are idle (first part of the
  endpoint options API) and shutting down endpoints.
* Thundering herd kills performance. (Garrett D'Amore, 2017-08-10)
  A little benchmarking showed that we were encountering far too many
  wakeups, leading to severe performance degradation; we had a bunch of
  threads all sleeping on the same condition variable (taskqs), and
  waking them all up resulted in heavy mutex contention. Since we only
  need one of the threads to wake, and we don't care which one, let's
  just wake one.

  This reduced RTT latency from about 240 us down to about 30 us (1/8
  of the former cost). There's still a bunch of tuning to do;
  performance remains worse than we would like.
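  In generic pthreads terms (a sketch of the idea, not the project's
  actual taskq code), the change is to signal one waiter rather than
  broadcast to all of them:

      #include <pthread.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
      static int             work;

      void
      submit_one(void)
      {
          pthread_mutex_lock(&lock);
          work++;
          /* pthread_cond_broadcast() would wake every sleeping worker;
           * all but one would immediately contend on the mutex and go
           * back to sleep. One queued item needs only one worker. */
          pthread_cond_signal(&cv);
          pthread_mutex_unlock(&lock);
      }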
* fixes #44 open protocol by "name" (symbol) instead of number (Garrett D'Amore, 2017-08-09)
  fixes #38 Make protocols "pluggable", or at least optional

  This is a breaking change, as we've done away with the central
  registered list of protocols, and instead require the user to call
  nng_xxx_open(), where xxx is a protocol name. (We did keep a table
  around in the compat framework, though.)

  There is a nice way for protocols to plug in via nni_proto_open(), a
  generic constructor from which each protocol builds its own
  protocol-specific constructor (passing its ops vector in).
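  The plug-in pattern looks roughly like the sketch below; the nni_proto
  structure layout shown, and the exact nni_proto_open() signature, are
  assumptions for illustration, not the tree's actual definitions.

      #include <nng/nng.h>

      /* Hypothetical ops vector; the real structure has more fields. */
      typedef struct nni_proto {
          const char *proto_name; /* e.g. "pair" */
          /* ... protocol operations ... */
      } nni_proto;

      extern int nni_proto_open(nng_socket *, const nni_proto *);

      static const nni_proto pair_proto = {
          .proto_name = "pair",
      };

      /* The public, protocol-specific constructor just wraps the
       * generic one, passing its own ops vector in. */
      int
      nng_pair_open(nng_socket *sp)
      {
          return (nni_proto_open(sp, &pair_proto));
      }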
* Event notification via pollable FDs verified working. (Garrett D'Amore, 2017-01-22)
* Fix bus and socket leaks, tighten up close-side refcounting. (Garrett D'Amore, 2017-01-21)
  This does a few things. First, it closes some preexisting leaks.
  Second, it tightens the overall close logic so that we automatically
  discard idhash resources (while keeping numeric values such as the
  next ID around) when the last socket is closed. This eliminates the
  need for applications to ever explicitly terminate resources.

  It turns out platform-specific resources established at nni_init()
  time might still be leaked, but it's also the case that we no longer
  dynamically allocate anything at platform initialization time. (This
  presumes that the platform doesn't do so under the hood when creating
  critical sections or mutexes, for example.)
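  The close-side accounting amounts to a last-one-out teardown. The
  sketch below is hypothetical; glock, sock_hash, and idhash_destroy()
  are stand-in names, not code from the tree.

      #include <pthread.h>
      #include <stdint.h>

      static pthread_mutex_t glock = PTHREAD_MUTEX_INITIALIZER;
      static int             open_socks;
      static uint64_t        next_sock_id; /* retained across teardown */
      static void           *sock_hash;    /* stand-in for the idhash */

      extern void idhash_destroy(void *);  /* hypothetical */

      void
      sock_closed(void)
      {
          pthread_mutex_lock(&glock);
          if (--open_socks == 0 && sock_hash != NULL) {
              idhash_destroy(sock_hash); /* free the storage ... */
              sock_hash = NULL;          /* ... keep next_sock_id */
          }
          pthread_mutex_unlock(&glock);
      }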
* Dangling compiler warnings from sock handle change. (Garrett D'Amore, 2017-01-20)
* Various complaints found in AppVeyor build. (Garrett D'Amore, 2017-01-16)
* Message API was awkward. (Garrett D'Amore, 2017-01-06)
  Using a single function to get both the size and the length turned out
  to be awkward; it is better to have separate functions to get each.
  While here, disable some of the initialization/fork checks, because it
  turns out they aren't needed.
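  The shape of the change is the familiar split of a combined getter
  into single-purpose accessors. The names below are hypothetical; the
  public API later settled on nng_msg_len() and friends.

      #include <stddef.h>

      typedef struct nni_msg nni_msg;

      /* Hypothetical "before": one call reports both properties, so
       * every caller pays for an out-parameter it may not want. */
      extern size_t nni_msg_size_and_len(nni_msg *, size_t *lenp);

      /* "After": separate single-purpose accessors. */
      extern size_t nni_msg_size(nni_msg *);
      extern size_t nni_msg_len(nni_msg *);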
* Add inproc performance tests. (Garrett D'Amore, 2017-01-05)
* Fix early termination, and buffer for throughput. (Garrett D'Amore, 2017-01-05)
* Naive attempt to avoid allocator was misguided. This helps performance. (Garrett D'Amore, 2017-01-05)
* Implement throughput performance tests. (Garrett D'Amore, 2017-01-05)
  The throughput performance tests "try" to avoid hitting the allocator,
  but I think this actually causes other cache-related performance
  problems, and the receive thread still has to perform a message
  allocation, leading to really rotten performance. It's probably time
  to think about a message pool.
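  A message pool of the kind contemplated here could be as simple as the
  free-list sketch below. This is a hypothetical illustration, not code
  from the tree.

      #include <pthread.h>
      #include <stdlib.h>

      /* Recycle fixed-size message buffers instead of hitting the
       * allocator on every receive. */
      typedef struct pool_msg {
          struct pool_msg *next;
          char             body[1024];
      } pool_msg;

      static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
      static pool_msg       *pool_head;

      pool_msg *
      msg_get(void)
      {
          pthread_mutex_lock(&pool_lock);
          pool_msg *m = pool_head;
          if (m != NULL) {
              pool_head = m->next;
          }
          pthread_mutex_unlock(&pool_lock);
          /* Fall back to the allocator only when the pool is empty. */
          return (m != NULL ? m : malloc(sizeof(*m)));
      }

      void
      msg_put(pool_msg *m)
      {
          pthread_mutex_lock(&pool_lock);
          m->next   = pool_head;
          pool_head = m;
          pthread_mutex_unlock(&pool_lock);
      }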
* Add initial performance tests. (Garrett D'Amore, 2017-01-05)