Graypaper: The JAM Specification

The description and formal specification of the Jam protocol, a potential successor to the Polkadot Relay chain.

Build with xelatex.

https://graypaper.com/

Remaining near-term

Finesse

  • Make all subscript names capitalized
  • Ensure all definitions are referenced
  • Link and integrate to Bandersnatch RingVRF references (Davide/Syed) IN-PROGRESS
  • Remove any "TODOs" in text
  • Macrofy everything

Simple Networking

  • Specify required connectivity (QUIC, endpoints, ports, encryption keys for validators and nodes)
  • Specify protocol-specific handshake
  • Specify basic message format, reply format and all variants
    • How are long messages handled?
  • Messages for all nodes:
    • Block propagation (all nodes)
    • ImportDA query & response
  • Messages for validator nodes only:
    • Assurance publication
    • Guarantee publication
    • Audit-announcement
    • Judgement publication
    • Ticket submission
    • AuditDA query & response

Final PVM

  • 64-bit PVM
  • Gas pricing (see the sketch after this list)
    • Merkle reads in terms of nodes traversed.
    • Non-linear gas for export/import host calls
  • No pages mappable in first 64 KB
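
As a minimal sketch of what the two gas-pricing items above might mean in practice, the following charges Merkle reads per node traversed and export/import host-calls super-linearly in segment count; all constant names and values here are hypothetical assumptions, not specified figures.

```rust
// Illustrative only: constants, names and the exact cost curves are assumptions.
const GAS_READ_BASE: u64 = 100; // flat cost per read host-call (hypothetical)
const GAS_PER_NODE: u64 = 10;   // cost per trie node traversed (hypothetical)

/// Merkle-read gas charged in terms of the number of nodes traversed.
fn merkle_read_gas(nodes_traversed: u64) -> u64 {
    GAS_READ_BASE + GAS_PER_NODE * nodes_traversed
}

/// A non-linear charge for export/import host-calls: a flat component plus a
/// term growing super-linearly in the number of segments touched.
fn export_gas(segments: u64) -> u64 {
    const GAS_EXPORT_BASE: u64 = 500;   // hypothetical
    const GAS_PER_SEGMENT_SQ: u64 = 5;  // hypothetical
    GAS_EXPORT_BASE + GAS_PER_SEGMENT_SQ * segments * segments
}
```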

Final DA

  • Formalize as much as possible.
  • Migrate formalization & explanation:
    • guaranteeing-specific stuff into relevant section
    • assurance-specific stuff into relevant section
    • auditing-specific stuff into relevant section
  • Include an epochal on-chain lookup from Work Package hash to segments root.
  • Define what constitutes an erasure-coding proof (a rough sketch follows this list):
    • Define the binary Merkle proof-generation function, which collects the neighbour (sibling) nodes along the path from the root down to the leaf.
    • Define the binary Merkle proof-verification function, which demonstrates that there exists a sequence of values containing our value which Merklises to the given root.
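
A rough sketch of the two functions above, assuming a binary tree whose leaves are padded with zero hashes to a power-of-two count and an externally supplied 32-byte hash function; node encoding, padding and domain separation in the actual specification may differ.

```rust
// Sketch only: the hash function H (e.g. Keccak or Blake2b), zero-hash padding
// and the absence of domain separation are assumptions, not the spec.
type Hash = [u8; 32];

/// Hash the concatenation of two nodes.
fn merge<H: Fn(&[u8]) -> Hash>(l: &Hash, r: &Hash, h: &H) -> Hash {
    let cat = [l.as_slice(), r.as_slice()].concat();
    h(cat.as_slice())
}

/// Pad the leaf sequence with zero hashes so every node has a sibling.
fn pad(mut leaves: Vec<Hash>) -> Vec<Hash> {
    let n = leaves.len().max(1).next_power_of_two();
    leaves.resize(n, [0u8; 32]);
    leaves
}

fn next_layer<H: Fn(&[u8]) -> Hash>(layer: &[Hash], h: &H) -> Vec<Hash> {
    layer.chunks(2).map(|p| merge(&p[0], &p[1], h)).collect()
}

fn root<H: Fn(&[u8]) -> Hash>(leaves: Vec<Hash>, h: &H) -> Hash {
    let mut layer = pad(leaves);
    while layer.len() > 1 {
        layer = next_layer(&layer, h);
    }
    layer[0]
}

/// Proof generation: collect the neighbour (sibling) node at each level on the
/// path from the leaf to the root.
fn proof<H: Fn(&[u8]) -> Hash>(leaves: Vec<Hash>, mut i: usize, h: &H) -> Vec<Hash> {
    let mut layer = pad(leaves);
    let mut out = Vec::new();
    while layer.len() > 1 {
        out.push(layer[i ^ 1]);
        layer = next_layer(&layer, h);
        i /= 2;
    }
    out
}

/// Proof verification: folding the neighbours back up to the root shows there
/// exists a sequence containing our value which Merklises to that root.
fn verify<H: Fn(&[u8]) -> Hash>(leaf: Hash, mut i: usize, neighbours: &[Hash], root: Hash, h: &H) -> bool {
    let mut acc = leaf;
    for sib in neighbours {
        acc = if i % 2 == 0 { merge(&acc, sib, h) } else { merge(sib, &acc, h) };
        i /= 2;
    }
    acc == root
}
```

With this shape, a verifier needs only the value, its index and the ⌈log₂ n⌉ neighbour hashes to check membership against a root.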

Auditing

  • Specify announcement signatures
  • Specify how to build a perspective on other validators from their announcements

Discussion and Conclusions/Further Work

  • Security assumptions: redirect to ELVES paper
  • Creating a parachains service: further work (RFC for upgrade perhaps)
    • Key differences
      • limited size of Work Output vs unlimited candidate receipt
      • Laissez-faire on Work Items vs requirement for valid transition
      • Hermit relay (staking &c is on system chains)
    • Supporting liveness
    • Supporting *MP
    • No need for UMP/DMP
  • Compare with danksharding v1
  • Deeper discussion: cost & latency comparison with RISC0-VM and the latest ZK work.
  • Include full calculations for bandwidth requirements.

Stuff before 1.0

Final networking protocol

  • Consider a simple networking protocol for M1/M2 and a production protocol for M3+
  • Block distribution via EC and proactive-chunk-redistribution
  • Guarantor-guarantor handover
  • Star-shaped point-to-point extrinsic distribution
  • Mixnet for ticket submission

Bring together sub-protocols

  • Better integration to Grandpa paper
  • Better description of Beefy
  • Better integration to Bandersnatch RingVRF.

Ideas to consider

Statistics/Bookkeeping

  • Consider integrating the subjective extrinsic and state:
    • If so, have three items to allow for a whole epoch of opinion submission
    • In which case, allow guaranteeing validator keys from the last epoch to gain points

General

  • Think about timing and the relationship between the lookup-anchor block and the import/export period.
    • Lookup anchor: maybe it should be 48 hours since lookup anchor can already be up to 24 hours after reporting and we want something available up to 24 hours after that?
  • Refine arguments:
    • Currently we pass in the WP hash, some WP fields and all manifest preimages; consider passing in the whole work-package and a work-item index.
  • Consider removal of the arrow-above notation in favour of subscript and ellipsis (this only works for the right-arrow).
  • Optional on_report entry point
  • Remove assignments from state - no need for them to be there, as they are derivable from $\eta_2$ alone (see the sketch after this list).
  • Make memo bounded, rather than fixed.
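
As a toy illustration of the assignments point above, the following derives a validator-to-core assignment as a pure function of $\eta_2$ via a Fisher-Yates shuffle; the seed expansion, hash choice and mapping onto cores are assumptions for illustration, not the paper's construction.

```rust
// Illustration only: seed expansion, hash choice and the core mapping are
// assumptions; the protocol's actual assignment rule may differ.
type Hash = [u8; 32];

/// Derive the i-th pseudo-random u32 from the seed by hashing seed || i.
fn rand_u32<H: Fn(&[u8]) -> Hash>(seed: &Hash, i: u32, h: &H) -> u32 {
    let mut buf = seed.to_vec();
    buf.extend_from_slice(&i.to_le_bytes());
    let out = h(buf.as_slice());
    u32::from_le_bytes([out[0], out[1], out[2], out[3]])
}

/// out[v] is the core assigned to validator v, computed from eta_2 alone.
fn assignments<H: Fn(&[u8]) -> Hash>(eta_2: &Hash, validators: u32, cores: u32, h: &H) -> Vec<u32> {
    // Shuffle the validator indices deterministically from eta_2.
    let mut order: Vec<u32> = (0..validators).collect();
    for i in (1..order.len()).rev() {
        let j = (rand_u32(eta_2, i as u32, h) as usize) % (i + 1);
        order.swap(i, j);
    }
    // Spread the shuffled validators evenly across the cores.
    let mut out = vec![0u32; validators as usize];
    for (pos, v) in order.into_iter().enumerate() {
        out[v as usize] = pos as u32 * cores / validators;
    }
    out
}
```

Because the result is a pure function of $\eta_2$, any party holding the entropy can recompute it, so nothing beyond $\eta_2$ need be kept in state.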

Stuff to replicate to PolkaJam

  • Beefy root and accumulate-result hash.
  • Judgements
  • Using posterior assignments.

Done

  • Statistics/Bookkeeping
    • Integrate into intro and definitions.
  • All "where" and "let" lines are unnumbered/integrated
  • DA2
    • Update chunks/segments to new size of 12 bytes / 4KB in the availability sections, especially the work packages and work reports section and appendix H.
    • export is in multiples of 4096 bytes.
    • Manifest specifies WI (maximum) export count.
    • import is provided as concatenated segments of 4096 bytes, as per manifest.
    • Constant-depth Merkle root
    • (Partial) Merkle proof generation function
    • New erasure root (4 items per validator; 2 hashes + 2 roots).
    • Specification of import hash (to include concatenated import data and proof).
      • Proof spec.
      • Specification of segment root.
    • Additional two segment-roots in WR.
      • Specification of segment tree.
    • Specification of segment proofs.
    • Specification of final segments for DA and ER.
    • Re-erasure-code imports.
    • Fetching imports and verification.
  • Independent definition of PVM.
  • Need to translate the basic work result into an "L"; do it in the appendix to ease layout
    • service - easy
    • service code hash - easy
    • payload hash - easy
    • gas prioritization - just from WP?
  • Edit Previous Work.
  • Edit Discussion.
  • Document guide at beginning.
  • Move constants to appendix and define at first use.
  • Context strings for all signatures.
    • List of all context strings in definitions.
  • Remove header items from ST dependency graph where possible.
  • Update serialization
    • For $\beta$ component $b$ - implement MMR encode.
    • Additional field: $\rho_g$
  • Link and integrate to RISCV references (Jan) HAVE SPEC
  • Link and integrate to Beefy signing spec (Syed)
  • Link and integrate to Erasure-Coding references (work with Al)
  • Grandpa/best-block: Disregard blocks which we believe are equivocated unless finalized.
  • Other PVM work
    • Define sbrk properly
    • Update host functions to agreed API.
    • Figure out what to do with the jump table.
  • Define inner PVM host-calls
    • Spec below
    • Figure out what the $c_i$/$c_b$ are
    • Avoid entry point
    • Ensure code and jump-table is amalgamated down to VM-spec
    • Move host calls to use register index
  • Update serialization for Judgement extrinsic and judgements state.
  • Define Beefy process
    • Accumulate: should return Beefy service hash
    • Define Keccak hash $\mathbb{H}_K$
    • Remove Beefy root from header
    • Put the Beefy root into recent blocks after computation
    • Recent blocks should store MMR of roots of tree of accumulated service hashes
    • Define an MMR (see the append sketch at the end of this list)
    • Add \textsc{bls} public key to keyset (48 octets).
    • Specify requirement of validators to sign.
  • Define audit process.
    • Erasure coding root must be correct
    • This means we cannot assume that the WP hash can be inverted.
    • Instead, we assume that we can collect 1/3 of the chunks and combine them to produce some data
    • Then we check:
      • if that hashes to the WP hash.
      • if the erasure-coded chunks merklise into a tree of the given root.
    • If so we continue.
    • NOTE: The above should be done in guarantor stage also.
  • Auditing: Always finish once announced.
  • Judgements: Should cancel work-report from Rho prior to accumulation.
  • Signed judgements should not include guarantor keys;
    • The judgement extrinsic should take the relevant keys from rho.
  • Check "Which History" section and ensure it mentions possibility for reversion via judgement.
    • No reversion beyond finalized
    • Within the unfinalized extension, exclude any block containing work-reports which appear in the banned-set of any other (valid) block.
  • Prior work and refine/remove the zk argumentation (work with Al)
  • Disputes state transitioning and extrinsic (work with Al)
  • Finish Merklization description
  • Bibliography
  • Updated PVM
  • Remove extrinsic segment root. Rename "* segment-root" to just "segment-root".
  • Combine chunk-root for WP, concatenated extrinsics and concatenated imports.
  • Imports are host-call
  • Make work report field r bold.
  • Segmented DA v2
    • Underlying EC doesn't change, need to make clear segments are just a double-EC
  • Consider introducing a host-call for reading manifest data rather than always passing it in.
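
For reference against the MMR items above ("implement MMR encode", "Define an MMR"), a compact sketch of an MMR held as one optional peak per power-of-two subtree size; the merging hash and the final commitment are illustrative assumptions rather than the paper's definitions.

```rust
// Sketch only: hashing and encoding details are assumptions, not the spec.
type Hash = [u8; 32];

/// An MMR kept as one optional peak per power-of-two subtree size.
#[derive(Default, Clone)]
struct Mmr {
    peaks: Vec<Option<Hash>>,
}

impl Mmr {
    /// Append a leaf: like binary addition, merge equal-sized peaks and carry up.
    fn append<H: Fn(&[u8]) -> Hash>(&mut self, leaf: Hash, h: &H) {
        let mut carry = leaf;
        for slot in self.peaks.iter_mut() {
            match slot.take() {
                None => {
                    *slot = Some(carry);
                    return;
                }
                Some(peak) => {
                    let cat = [peak.as_slice(), carry.as_slice()].concat();
                    carry = h(cat.as_slice()); // merge and keep carrying upwards
                }
            }
        }
        self.peaks.push(Some(carry));
    }

    /// Commit to the whole range by folding the present peaks together.
    fn commitment<H: Fn(&[u8]) -> Hash>(&self, h: &H) -> Option<Hash> {
        self.peaks.iter().flatten().copied().reduce(|acc, peak| {
            let cat = [acc.as_slice(), peak.as_slice()].concat();
            h(cat.as_slice())
        })
    }
}
```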

% A set of independent, sequential, asynchronously interacting 32-octet state machines, each of whose transitions lasts around 2 seconds of WebAssembly computation of a predetermined and fixed program and whose transition arguments are 5 MB. While well-suited to the verification of Substrate blockchains, it is otherwise quite limiting.