Switch to actor model
The Stream_file interface is no longer capable of managing the complexity.
Switch to Lwt
Things to work on
- Blocking queue, implemented using Lwt's mailbox, OCaml's queue, and a mutex lock (see the sketch after this list)
```
----------------------------------------------------
| MailBox | -> Queue (lock protected) -> | MailBox |
----------------------------------------------------
```
- Concurrent progress text printing using Lwt
- Adhere to a zero-copy model in more places, maybe
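A minimal sketch of such a blocking queue, assuming an OCaml `Queue.t` protected by an `Lwt_mutex.t`, and using `Lwt_condition` for the blocking at the mailbox ends of the diagram (all names here are hypothetical, not actual osbx code):

```ocaml
open Lwt.Infix

type 'a t = {
  queue     : 'a Queue.t;           (* the lock-protected buffer *)
  capacity  : int;
  lock      : Lwt_mutex.t;
  not_full  : unit Lwt_condition.t; (* producers wait here when full *)
  not_empty : unit Lwt_condition.t; (* consumers wait here when empty *)
}

let create capacity = {
  queue     = Queue.create ();
  capacity;
  lock      = Lwt_mutex.create ();
  not_full  = Lwt_condition.create ();
  not_empty = Lwt_condition.create ();
}

let put t x =
  Lwt_mutex.with_lock t.lock (fun () ->
    let rec wait () =
      if Queue.length t.queue >= t.capacity then
        (* wait releases the mutex and re-acquires it on wake-up *)
        Lwt_condition.wait ~mutex:t.lock t.not_full >>= wait
      else Lwt.return_unit
    in
    wait () >>= fun () ->
    Queue.push x t.queue;
    Lwt_condition.signal t.not_empty ();
    Lwt.return_unit)

let take t =
  Lwt_mutex.with_lock t.lock (fun () ->
    let rec wait () =
      if Queue.is_empty t.queue then
        Lwt_condition.wait ~mutex:t.lock t.not_empty >>= wait
      else Lwt.return_unit
    in
    wait () >>= fun () ->
    let x = Queue.pop t.queue in
    Lwt_condition.signal t.not_full ();
    Lwt.return x)
```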
The Encode module now uses actor-based concurrency via Lwt, and is ~50% slower...
Maybe I should move to Rust, or wait a few years for multicore OCaml.
@darrenldl did you use profiling to discover where the bottleneck is? OCaml can be reasonably fast if tweaked... (sorry, I haven't read through your seqbox code)
@hannesm All good, the code's really rough atm anyway (still looks much better than the old code tho, albeit slower).
I did some testing with different data structures (flipping between Lwt_mvar and Lwt_queue).
(Lwt_queue is just a ring buffer implemented using an array and a condition variable.)
But I didn't do proper profiling with tools; I only relied on the progress text printing, which reports the rate.
The current arrangement of the encoder internals:
```
---------------               -----------               ---------------
| File reader | --Lwt_mvar--> | Encoder | --Lwt_mvar--> | File writer |
|             |      (1)      |         |      (2)      |             |
---------------               -----------               ---------------
```
The Lwt_queue variant is the same setup, with each Lwt_mvar swapped for a Lwt_queue.
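For illustration, one way such a stage can be wired with Lwt_mvar (a hedged sketch, not the actual osbx code; `f` stands for the stage's work, and `None` marks end of input):

```ocaml
open Lwt.Infix

(* A generic pipeline stage: repeatedly take from the input mailbox,
   apply the stage's work [f], and forward to the output mailbox.
   [None] propagates end-of-input downstream and stops the loop. *)
let rec pump ~f ~in_box ~out_box =
  Lwt_mvar.take in_box >>= function
  | None ->
    Lwt_mvar.put out_box None           (* propagate EOF and stop *)
  | Some x ->
    f x >>= fun y ->
    Lwt_mvar.put out_box (Some y) >>= fun () ->
    pump ~f ~in_box ~out_box
```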
Statistics (using defaults, i.e. data size = 496 bytes, hash = SHA256)
| Implementation | Choice for (1) | Choice for (2) | Avg. encode rate (blocks/sec) |
|---|---|---|---|
| Sequential | N/A | N/A | 105K |
| Lwt | Lwt_queue, size=100 | Lwt_queue, size=100 | 63K |
| Lwt | Lwt_mvar | Lwt_queue, size=100 | 63K |
| Lwt | Lwt_mvar | Lwt_queue, size=1000 | 67K |
| Lwt | Lwt_mvar | Lwt_mvar | 65K |
Progress reporting text didn't seem to have any impact on the performance, so I just placed a call next to where the block is created in the Lwt implementations.
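The progress call is roughly of this shape (a hypothetical sketch, not the actual reporting code):

```ocaml
(* Hypothetical progress reporter: counts blocks and periodically
   rewrites one status line; cheap enough that calling it once per
   block has no visible impact on throughput. *)
let report_progress =
  let count = ref 0 in
  fun () ->
    incr count;
    if !count mod 10_000 = 0 then
      Lwt_io.printf "\rencoded %d blocks" !count
    else
      Lwt.return_unit
```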
Description
Sequential
The sequential implementation is just a single chain of tail-position function calls.
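Schematically, the shape is something like this (a sketch with stand-in functions, not the real code; the point is that every call is in tail position, so the whole encode loop runs in constant stack space):

```ocaml
(* [read_chunk], [encode], [write] stand in for the real IO and
   packing steps. Each function below ends in a tail call to the
   next stage, forming a loop. *)
let encode_all ~read_chunk ~encode ~write =
  let rec read_block () =
    match read_chunk () with
    | None       -> ()                      (* end of input *)
    | Some chunk -> encode_block chunk      (* tail call *)
  and encode_block chunk =
    write_block (encode chunk)              (* tail call *)
  and write_block block =
    write block;
    read_block ()                           (* tail call: next block *)
  in
  read_block ()
```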
Lwt
For the Lwt implementation, each internal component is made from a generating function that returns a function of type `unit -> something Lwt.t` (e.g. `gen_encoder` in encode.ml, and `gen_file_reader`, `gen_file_writer` in actors.ml).
The generated threads first wait on a waiter thread; the waiter thread is then woken up to kick-start the encoding process.
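The kick-start looks roughly like this (a sketch, with stand-in loop functions rather than the real generated actors):

```ocaml
open Lwt.Infix

(* [reader], [encoder], [writer] stand in for the generated actor
   loops of type unit -> unit Lwt.t. All three block on [waiter];
   waking it up starts them together. *)
let start reader encoder writer =
  let waiter, wakener = Lwt.wait () in
  let guard loop = waiter >>= loop in   (* block until woken *)
  let threads = [ guard reader; guard encoder; guard writer ] in
  Lwt.wakeup wakener ();                (* kick-start the pipeline *)
  Lwt.join threads
```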
My original intuition was that using Lwt would speed things up, since the encoder wouldn't have to wait for file IO, but that wasn't entirely right, I suppose.
My guess is that the sequential implementation is better optimized because it's just a statically written chain of tail-position function calls, while the Lwt implementation uses a lot of anonymous functions.
But I have almost zero background in compilers, languages, and runtimes, so I'm probably very wrong.
Currently I'm still very attached to OCaml rather than Rust, mainly due to Coq, and also because Rust doesn't support tail call optimization. If only Ada had a much larger FOSS community, hmm...
@hannesm I did more tests today with more useful results, namely by disabling parts of the encoder's code.
The setup is still the same, with connections (1) and (2) both being Lwt_mvar (I may switch back to Lwt_queue later for consistency, but it doesn't seem to matter).
Statistics
| Code disabled in encoder | Avg. encode rate (blocks/sec) | Slowdown compared to all disabled |
|---|---|---|
| Hash & data packing | 257K | 0 |
| Hash | 76K | -181K (caused by data packing) |
| Data packing | 112K | -145K (caused by hash) |
I noticed the data packing function (which converts 496 bytes of data into an sbx block) can be optimized; I'll post the results after optimizing both the sequential and Lwt implementations.
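For context, the packing step is essentially header construction plus a copy of the 496 data bytes into a 512-byte block. A hedged sketch of that shape, blitting into one pre-allocated buffer rather than concatenating fresh strings per block (the field offsets follow the SeqBox v1 layout, but this is an assumption-laden illustration, not osbx code, and the CRC computation is omitted):

```ocaml
(* Hypothetical packing sketch: write header fields and data into a
   single pre-allocated 512-byte buffer. The CRC-16-CCITT field
   (bytes 4-5) is left zeroed here for brevity. *)
let pack_block ~uid ~seq_num ~data =
  let block = Bytes.make 512 '\x00' in
  Bytes.blit_string "SBx" 0 block 0 3;                 (* signature       *)
  Bytes.set block 3 '\x01';                            (* version byte    *)
  Bytes.blit_string uid 0 block 6 6;                   (* 6-byte file uid *)
  Bytes.set_int32_be block 12 (Int32.of_int seq_num);  (* big-endian seq  *)
  Bytes.blit data 0 block 16 (Bytes.length data);      (* 496 data bytes  *)
  block
```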
The delay caused by hashing and data packing may still persist, since there's no way to run the code in parallel yet. The hashing was previously done in another actor, but that gave the same level of performance, probably for the above reason.
After a lot of optimizations, the rate sits at around 90K blocks/sec.
I'm gonna leave Reed-Solomon support to a Rust rewrite of osbx, if I ever get around to doing that.