nodejs/worker

High level architecture

refack opened this issue · 22 comments

Follow up to #1

If we consider the use cases brought up in "Step 1) Figure out use cases", what should the high-level architecture be?
Let's try to consider pros and cons and not just bikeshed 😉

Some options that were brought up in nodejs/node#13143:

  1. Multithreading - nodejs/node#2133 and node-webworker-threads
  2. Multi process with mutable shared memory - nodejs/node#13143 (comment)
  3. Multi process with immutable shared memory and serialized communication - nodejs/node#13143 (comment)
  4. Multi process with only serialized communication (see the sketch after this list)
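For context, option (4) is roughly the model Node already offers today through child_process.fork: separate processes, with every message serialized over the IPC channel (the per-message IPC cost mentioned later in this thread). A minimal sketch, with hypothetical file names parent.js and child.js:

```js
// parent.js (hypothetical file name) — option (4) as it exists today:
// separate processes, every message serialized over the IPC channel.
const { fork } = require('child_process');

const child = fork('./child.js');
child.on('message', (result) => {
  console.log('sum from child:', result.sum);
  child.disconnect();                    // close the IPC channel so both processes can exit
});
child.send({ numbers: [1, 2, 3, 4] });   // serialized (JSON by default) on every send

// child.js (hypothetical file name)
process.on('message', ({ numbers }) => {
  const sum = numbers.reduce((a, b) => a + b, 0);
  process.send({ sum });                 // serialized again on the way back
});
```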

Considering @bnoordhuis's comments on the complexity of multithreading, and based on personal experience, personally I'm pro multi-process. IMHO it seems like the more natural solution for JS in general and Node.js in particular.
Intuitively I would vote for option (3), but I'm not sure how well it serves the "parallelize heavy computation" use case over option (2)...

Pros for (3):

  • IMHO simplest to implement
  • Covers the "utilize multi core" requirement
  • Could fulfill the "prioritize main thread over Workers" requirement
  • Immutable shared memory fulfills the "efficiently share large amount of data" requirement (although only one way)

Cons for (3):

  • Depends on OS for IPC and shared memory (platform-specific code fragmentation)
  • Necessitates a mechanism for loading/sharing code
  • Multithreading could be considered simpler to grok and use
  • Necessitates implementing immutability of shared memory, and implementing an efficient communication protocol.
  • Probably more memory-intensive.

I'll be happy to add to this list based on future comments.

Some biased refs off the top of my head:

  • Python's GIL
  • Chromium's multi-process architecture

@addaleax too soon?

I think a more reasonable next step would be to figure out whether what we're going for is exposing a full Node API, or just a reasonable minimum that doesn't include things like I/O (which is part of the reason why I was starting by asking for use cases).

If we want a full Node API in Workers, yes, multi-process probably makes the most sense. But I'm not sure whether that's a good idea; I could very well imagine that using parallel workers to do more I/O would be considered an anti-pattern. It would also make life easier for those who want to use Workers for script isolation.

Ack. That's why I didn't call this "Step 2".
But it was on my mind, and was discussed in nodejs/node#13143...

I think a more reasonable next step would be to figure out whether what we're going for is exposing a full Node API, or just a reasonable minimum that doesn't include things like I/O (which is part of the reason why I was starting by asking for use cases).

I think we need both modes.

Your comments made me realize I had a hidden assumption that we are planning to comply with the Web Worker API. But I realize that's TBD as well. Thank you 👍

@refack I think it would be great if we could conform to the Web Worker API; it would be less of a learning curve, and they have already solved most of the things we would be doing. Like you mentioned before, we already have non-standard things like cluster and fork.

The big advantage of Workers would be a standardized performant way of utilizing all cores and interfacing with large ArrayBuffers.
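For anyone not familiar with it, the browser-side Web Worker API that conformance would target looks roughly like this (standard DOM semantics, nothing Node-specific; crunch.js is a hypothetical worker script):

```js
// main thread: spawn a worker from a script URL and exchange messages
const worker = new Worker('crunch.js');
worker.onmessage = (event) => {
  console.log('result:', event.data);
  worker.terminate();
};
worker.postMessage({ cmd: 'sum', payload: [1, 2, 3, 4] });

// crunch.js (worker global scope): no DOM access, just message passing
onmessage = (event) => {
  const { payload } = event.data;
  postMessage(payload.reduce((a, b) => a + b, 0));
};
```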

Note that providing the WebWorker API and a full Node API aren't necessarily mutually exclusive; I agree, having the former is very likely a good idea.

IMO implementing WebWorker API should be a non-goal. It could be easily implemented in userland on top.

FWIW the language spec these days has a built-in model of "agents" and "agent clusters" which are meant to represent threads and processes. SharedArrayBuffers can be shared between agents, but not between agent clusters.
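A minimal sketch of that model, assuming the worker_threads API that eventually shipped in Node.js 10+ (see the closing comment): two threads in the same process belong to the same agent cluster, so a SharedArrayBuffer is genuinely shared between them and can be coordinated with Atomics. File names are hypothetical.

```js
// main.js — share memory with a worker in the same agent cluster
const { Worker } = require('worker_threads');

const sab = new SharedArrayBuffer(4);       // one Int32 slot
const counter = new Int32Array(sab);

// The SharedArrayBuffer is shared with the worker, not copied.
const worker = new Worker('./worker.js', { workerData: sab });
worker.on('exit', () => {
  console.log('value written by the worker:', Atomics.load(counter, 0));   // 42
});

// worker.js — a different agent, same memory
const { workerData } = require('worker_threads');
Atomics.add(new Int32Array(workerData), 0, 42);
```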

I believe we should allow for an architecture that allows for horizontal, peer-to-peer messaging. Basically, let's have the ability to postMessage to any event loop. Today, we can send a message from Master to worker, but workers should be able to communicate directly with other workers.

If we can do this, and add shared memory between event loops using the shared array buffer (available in 6.0), I think that's a big win for the kind of software architectures we'll enable others to create. It'd be a big deal
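A sketch of how that could look, again assuming the worker_threads API that ultimately landed (peer.js is a hypothetical script): the parent creates a MessageChannel and transfers one port to each worker, after which the two workers can postMessage to each other directly, without going through the parent.

```js
// main.js — wire two workers together with a MessageChannel
const { Worker, MessageChannel } = require('worker_threads');

const workerA = new Worker('./peer.js');
const workerB = new Worker('./peer.js');

const { port1, port2 } = new MessageChannel();
// MessagePorts are transferable: each worker takes ownership of its end.
workerA.postMessage({ peer: port1 }, [port1]);
workerB.postMessage({ peer: port2 }, [port2]);

// peer.js — every worker ends up holding a direct line to its peer
const { parentPort } = require('worker_threads');
parentPort.once('message', ({ peer }) => {
  peer.on('message', (msg) => console.log('from my peer:', msg));
  peer.postMessage('hello, directly from another worker');
});
```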

It'd be a big deal

It's definitely worth thinking about. I'm just wondering how the workers will identify each other 🤔 (know if the other side is there/responsive/alive)...
@NawarA Do you have a use case in mind? (Re #1)

Anecdotally, I'm using child_process.fork for the elm-test CLI, and avoiding the per-message IPC cost would be a big deal to me!

For my use case, any design that allows computation across multiple cores with less message-passing cost than IPC is 😻, and anything with the same (or more) message-passing cost than IPC means I'd just keep using child_process.fork. 😄

Not having to spawn separate processes (which only (1) would avoid) would be nice, as spawning processes contributes to overall execution time whenever someone runs elm-test, but it's not a huge deal.

I think this is a big opportunity to do the work needed to switch to a multithreaded model, rather than trying to get around that. Implementing all the fun of mutable shared memory and atomics and whatnot would be a huge step forward with what is possible to do in node, and I think it would totally be worth the work of converting the codebase to play well with multithreading.

@devsnek Atomics and shared memory are possible in both multi-process and multi-thread mode.

that wasn't my point but ok

Then what is your point?

I'd like to see this built around a very conventional UNIX model, if at all possible. Sending file descriptors over (unix domain, &c) sockets would be ideal; that would mean something like usocket. I don't know much about how memory is allocated to back something like a SharedArrayBuffer, but if we can use memfd to allocate that memory, we can pass the file descriptor to other processes.

There's a non-JS example of this kind of thing here. This author also seals the memfd before sending, making it immutable, but I'm 95% confident that's optional.

This is more or less the standard way of passing data among processes on Unix, & it would be baller if WebWorkers were backed by this common, well-known architecture for data-sharing in a multi-process environment. Creating a SharedArrayBuffer backed by memory allocated via memfd would be the major core demonstrator, proving out the viability.

I am using Unix sockets with Redis; it is much faster than TCP, by about 50% :)

I'm closing the existing issues here. If you have feedback about the existing Workers implementation in Node.js 10+, please use #6 for that!