privacy-scaling-explorations/p0tion

pipelining of user contributions

ctrlc03 opened this issue

Problem

Currently it is not possible to contribute to large circuits (> 1M constraints) in a browser due to memory restrictions. For those, we instead use a CLI tool, where we can use the full computing power of the machine.

Possible solution

Pipelining the contribution of a user.

Flow:

  • Download chunk n -> process chunk n -> (upload chunk n)

By processing one chunk at a time, we can effectively reduce memory consumption, as we no longer need to hold the whole file in memory. This could even make larger contributions possible from a mobile phone's browser.
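As a rough sketch, the loop below illustrates the chunked flow, holding only one chunk in memory at a time. All helper names (`downloadChunk`, `processChunk`, `uploadChunk`, `contributeInChunks`) and the use of HTTP Range requests plus per-part pre-signed upload URLs are assumptions for illustration; snarkjs does not currently expose a chunk-level contribution API.

```typescript
// Minimal sketch of the chunked pipeline. All helpers are hypothetical;
// the per-chunk contribution computation would have to exist in snarkjs.

/** Download one chunk of the zKey via an HTTP Range request. */
async function downloadChunk(url: string, start: number, end: number): Promise<Uint8Array> {
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (!res.ok) throw new Error(`download failed: ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}

/** Placeholder for the per-chunk contribution computation (hypothetical). */
async function processChunk(chunk: Uint8Array, entropy: Uint8Array): Promise<Uint8Array> {
  // ...apply the contribution to the points contained in this chunk...
  return chunk;
}

/** Upload one processed chunk, e.g. as one part of an S3 multipart upload. */
async function uploadChunk(presignedUrl: string, part: Uint8Array): Promise<void> {
  const res = await fetch(presignedUrl, { method: "PUT", body: part });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
}

/** Run the pipeline: memory use is bounded by the chunk size, not the zKey size. */
async function contributeInChunks(
  sourceUrl: string,
  totalBytes: number,
  chunkBytes: number,
  entropy: Uint8Array,
  getUploadUrl: (partIndex: number) => Promise<string>
): Promise<void> {
  for (let start = 0, part = 0; start < totalBytes; start += chunkBytes, part++) {
    const end = Math.min(start + chunkBytes, totalBytes) - 1;
    const raw = await downloadChunk(sourceUrl, start, end);
    const processed = await processChunk(raw, entropy);
    await uploadChunk(await getUploadUrl(part), processed);
  }
}
```

The sequential version above is the simplest memory-bounded variant; the stages could also overlap (download chunk n+1 while processing and uploading chunk n) for a true pipeline.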

What we would need to change

We would first need to validate that the computation can be performed on chunks rather than on the whole zKey.
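One encouraging data point (an assumption about feasibility, not a confirmed property of snarkjs internals): some of the whole-file primitives involved, such as transcript hashing, are streamable, so chunk-wise processing composes to the same result as whole-file processing. A tiny Node.js illustration (blake2b512 availability depends on the OpenSSL build Node is linked against):

```typescript
import { createHash } from "node:crypto";

/** Hash the whole buffer in one shot. */
function hashWhole(data: Uint8Array): string {
  return createHash("blake2b512").update(data).digest("hex");
}

/** Hash the same buffer one chunk at a time. */
function hashInChunks(data: Uint8Array, chunkBytes: number): string {
  const h = createHash("blake2b512");
  for (let off = 0; off < data.length; off += chunkBytes) {
    h.update(data.subarray(off, off + chunkBytes)); // only one chunk in flight
  }
  return h.digest("hex");
}

// Both produce identical digests, so memory use is bounded by the chunk
// size rather than the file size.
const file = new Uint8Array(10 * 1024 * 1024); // stand-in for a zKey
console.assert(hashWhole(file) === hashInChunks(file, 1 << 20));
```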

  1. Amend the current backend step system. It is quite strict: certain actions can only be performed at a certain step (for instance, an upload pre-signed URL cannot be obtained outside the upload phase, etc.). (medium complexity) See the sketch after this list.
  2. The approach described above also requires some changes to snarkjs' code.
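For point 1, a minimal sketch of how the step check could be relaxed, assuming hypothetical names (`ParticipantStep`, `canRequestUploadUrl`) rather than p0tion's actual enums and guards:

```typescript
// Hypothetical participant steps; names are assumptions for illustration.
enum ParticipantStep {
  DOWNLOADING = "DOWNLOADING",
  COMPUTING = "COMPUTING",
  UPLOADING = "UPLOADING",
  VERIFYING = "VERIFYING",
}

// Before: pre-signed URLs are issued only during UPLOADING. After: also
// during COMPUTING, so the pipeline can upload chunk n while chunk n+1
// is still being downloaded and processed.
function canRequestUploadUrl(step: ParticipantStep): boolean {
  return step === ParticipantStep.UPLOADING || step === ParticipantStep.COMPUTING;
}
```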