This is an experimental concurrency runtime for Nim.
We think it's easier to move continuations from thread to thread than it is to move arbitrary data, so we offer an API that lets you imperatively move your continuation to other thread pools without any locking or copies.
This experiment has expanded to support detached threads, channels with backpressure, lock-free queues, and a "local" event queue for I/O and signal handling.
- extremely high efficiency, but favor generality over performance
- zero-copy migration among threads
- lock-free without busy loops
- detached threads for robustness
- idiomatic; minimal boilerplate
- standard `Continuation` passing
- arbitrary `ref` and `ptr` passing
- expose thread affinity, attributes
- event queue included for I/O
- enable incremental CPS adoption
- be the concurrency-lib's toolkit
Adequate.
Inside a single thread, concurrency is cooperative and lock-free, so if you don't yield to the dispatcher, your continuation may only be interrupted by a signal from another thread. At present, the only occasions for interruption are when you call `halt()` on a thread, or use the pause/resume functions named `freeze()` and `thaw()`.
Empty continuations are 40 bytes each, and queue overhead is 10 bytes per object, so one billion queued continuations occupy 50 GB of memory.
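A back-of-envelope derivation of that figure, using only the sizes quoted above:

```nim
# Memory cost of queued continuations, from the numbers above.
const
  continuationBytes = 40            # one empty continuation
  queueBytes        = 10            # queue overhead per object
  queued            = 1_000_000_000 # one billion continuations

# 50 bytes apiece * 1e9 = 50e9 bytes = 50 GB
echo (continuationBytes + queueBytes) * queued
```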
Toys are starting to run a little more slowly due to overhead from thread cancellation, signal handling, and, more generally, the event queue. That said, the tests demonstrate that even the richest API runs millions of continuations per second on modern desktop hardware.
insideout supports `define:useMalloc`, `mm:arc`, `backend:c`, and POSIX threads. insideout does not support `mm:orc`.
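Under those constraints, a typical build invocation might look like the following; `program.nim` is a placeholder for your own module:

```shell
# Build with the supported configuration: ARC memory management,
# malloc-backed allocation, the C backend (the default for `nim c`),
# and POSIX threads enabled.
nim c --mm:arc --define:useMalloc --threads:on program.nim
```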
insideout is primarily developed using Nimskull and may not work with mainline Nim.
insideout is tested with compiler sanitizers to ensure that it demonstrates no memory leaks or data races.
Nim's documentation generator breaks when attempting to read insideout.
Define `insideoutValgrind=on` to enable Valgrind-specific annotations.
MIT