Scalable, asynchronous IO handling using coroutines (aka MIO COroutines). Using mioco you can handle mio-based IO with a set of synchronous-IO handling functions. Based on asynchronous mio events, mioco will cooperatively schedule your handlers. You can think of mioco as Node.js for Rust, or as green threads on top of mio.
mioco is a young project, but we think it's already very useful. See projects using mioco. If you're considering or already using mioco, please drop us a line on #mioco gitter.im.

Read the documentation for details.

If you need help, try asking on #mioco gitter.im. If you still have no luck, try the Rust user forum. To report a bug or request a feature, use GitHub issues.
Note: You must be using a nightly Rust release. If you're using multirust, which is highly recommended, switch with the `multirust default nightly` command.
To start the test echo server:

```
cargo run --release --example echo
```
For daily work:

```
make all
```

- colerr - colorizes stderr

To contribute, send a PR.
Beware: This is a very naive comparison! I tried to run it fairly, but I might have missed something. Also, no effort was spent on optimizing either mioco or the other tested TCP echo implementations.
In thousands of requests per second:

|       | bench1 | bench2 |
|-------|--------|--------|
| libev | 183    | 225    |
| node  | 37     | 42     |
| mio   | 156    | 190    |
| mioco | 157    | 177    |
Server implementations tested:

- libev - https://github.com/dpc/benchmark-echo/blob/master/server_libev.c; Note: this implementation "cheats" by waiting only for read events, which works in this particular scenario.
- node - https://github.com/dpc/node-tcp-echo-server
- mio - https://github.com/dpc/mioecho; TODO: this implementation could use some help.
- mioco - https://github.com/dpc/mioco/blob/master/examples/echo.rs
Benchmarks used:

- bench1 - https://github.com/dpc/benchmark-echo with `PARAMS='-t64 -c10 -e10000 -fdata.json'`
- bench2 - https://gist.github.com/dpc/8cacd3b6fa5273ffdcce with `GOMAXPROCS=64 ./tcp_bench -c=128 -t=30 -a=""`
Machine used:

- i7-3770K CPU @ 3.50GHz, 32GB DDR3 1800MHz, some basic overclocking, Fedora 21