| | Commit message | Author | Age |
|---|---|---|---|
| * | Refactor stop again, closing numerous races (thanks valgrind!) | Garrett D'Amore | 2017-06-28 |
| | | |||
| * | Convert to POSIX polled I/O for async; start of cancelable aio. | Garrett D'Amore | 2017-06-27 |
| | This eliminates the two threads per pipe that were being used to provide basic I/O handling, replacing them for now with a single global thread that uses poll() and non-blocking I/O. This should lead to greater scalability. The infrastructure is in place to expand easily to multiple polling worker threads, though some thought still needs to be given to how to scale this across multiple CPUs. Horizontal scaling may also shorten the poll() lists, easing the C10K problem. We should look into better alternatives to poll() on platforms that have them (epoll on Linux, kqueue on BSD, and event ports on illumos). Note that the file descriptors start out in blocking mode and are only later placed into non-blocking mode, because the negotiation phase is not yet callback driven and so must be synchronous. (A minimal sketch of such a poll loop appears after the table.) | | |
| * | TCP (POSIX) async send/recv working. Other changes. | Garrett D'Amore | 2017-03-29 |
| | Transport-level pipe initialization is now separate and explicit. The POSIX send/recv logic still uses threads under the hood, but makes use of the AIO framework for send/recv. This is a key stepping stone towards enabling poll() or similar async I/O approaches. (A sketch of a callback-driven send in this style appears after the table.) | | |
| * | Pair protocol now callback driven. | Garrett D'Amore | 2017-03-06 |
| | | |||
| * | Pipeline protocol now entirely callback driven. | Garrett D'Amore | 2017-03-04 |
| | | |||
| * | Timer implementation. Operations can time out now. | Garrett D'Amore | 2017-03-03 |
| | | |||
| * | Start of msgq aio. | Garrett D'Amore | 2017-03-01 |
| | | |||
| * | Rename ioev to aio. Eliminate generic cancel handling (not needed). | Garrett D'Amore | 2017-02-23 |
| | We will still need some specific handling of cancellation for message queues, but it will be simpler to implement that just for the queues rather than worry about cancellation in the general case around poll() and the like. (The low-level poll and I/O routines will be notified when their underlying transport pipes/descriptors close.) | | |
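
The single-poller design described in the 2017-06-27 entry can be illustrated with a minimal sketch. The names below (`poller_pipe`, `set_nonblock`, `poller_loop`) and the fixed-size pipe table are assumptions made purely for illustration; they do not reflect nng's actual internals.

```c
/* Minimal sketch of one global worker thread driving non-blocking
 * descriptors with poll().  The pipe table, callbacks, and (absent)
 * locking are illustrative placeholders only. */
#include <fcntl.h>
#include <poll.h>

#define MAX_PIPES 64

typedef struct {
    int   fd;                  /* non-blocking descriptor            */
    void (*readable)(void *);  /* invoked when the fd is readable    */
    void (*writable)(void *);  /* invoked when the fd is writable    */
    void *arg;                 /* per-pipe state passed to callbacks */
} poller_pipe;

static poller_pipe pipes[MAX_PIPES];
static int         npipes;

/* Put a descriptor into non-blocking mode before handing it to the
 * poller (per the commit note, fds stay blocking during negotiation). */
static int
set_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0) {
        return (-1);
    }
    return (fcntl(fd, F_SETFL, flags | O_NONBLOCK));
}

/* One global worker: rebuild the pollfd list from the registered
 * pipes, block in poll(), then dispatch readable/writable callbacks.
 * A real poller would only ask for POLLOUT when a send is pending. */
static void
poller_loop(void)
{
    struct pollfd pfds[MAX_PIPES];

    for (;;) {
        for (int i = 0; i < npipes; i++) {
            pfds[i].fd      = pipes[i].fd;
            pfds[i].events  = POLLIN | POLLOUT;
            pfds[i].revents = 0;
        }
        if (poll(pfds, npipes, -1) < 0) {
            continue; /* e.g. EINTR; a real loop would inspect errno */
        }
        for (int i = 0; i < npipes; i++) {
            if ((pfds[i].revents & POLLIN) && pipes[i].readable) {
                pipes[i].readable(pipes[i].arg);
            }
            if ((pfds[i].revents & POLLOUT) && pipes[i].writable) {
                pipes[i].writable(pipes[i].arg);
            }
        }
    }
}
```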

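Similarly, the callback-driven AIO send path mentioned in the 2017-03-29 entry might look roughly like the following when driven by such a poller. The `aio_t` structure, `aio_send_ready`, and the completion-callback shape are hypothetical stand-ins, not the library's real API.

```c
/* Sketch of a callback-driven asynchronous send on a non-blocking
 * socket.  The aio_t type and helper below are hypothetical, chosen
 * only to illustrate the completion-callback pattern. */
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

typedef struct aio {
    const void *buf;                /* data still to be sent   */
    size_t      resid;              /* bytes remaining         */
    void (*cb)(struct aio *, int);  /* completion callback     */
} aio_t;

/* Called by the poller when the descriptor becomes writable: send as
 * much as the socket accepts without blocking, and fire the completion
 * callback once everything has gone out (or an error occurs). */
static void
aio_send_ready(aio_t *aio, int fd)
{
    while (aio->resid > 0) {
        ssize_t n = send(fd, aio->buf, aio->resid, 0);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                return; /* wait for the next POLLOUT event */
            }
            aio->cb(aio, errno); /* complete with an error */
            return;
        }
        aio->buf    = (const char *) aio->buf + n;
        aio->resid -= (size_t) n;
    }
    aio->cb(aio, 0); /* complete successfully */
}
```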