Taiko's multi-prover for Taiko and Ethereum blocks currently supports Risc0, SP1, and SGX.
To download all dependencies for all provers, you can run:
$ make install
You can also download the required dependencies for each prover separately; for example, to install SP1:
$ TARGET=sp1 make install
After installing the dependencies of the selected prover, the following command internally calls cargo to build the prover's guest target with the --release profile by default. For example:
$ TARGET=sp1 make build
If you set DEBUG=1, the target will be compiled without optimizations (not recommended for zkVM ELFs).
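For example, a sketch of an unoptimized guest build (DEBUG and TARGET are the only knobs involved; the SP1 target is used here as in the earlier examples):

```shell
# Debug (unoptimized) guest build -- not recommended for production zkVM ELFs
export DEBUG=1
export TARGET=sp1
# then run: make build
```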
Note that you have to run make build first before running zkVM provers; otherwise the guest ELF may not be up to date, which can result in proof failures.
$ TARGET=sp1 make run
For development with the native prover, which runs through the block execution without producing any ZK/SGX proof:
$ cargo run
The run command will start the host service that listens for proof requests. Then, in another terminal, you can make requests like the following, which proves the 10th block with the native prover on the Taiko A7 testnet:
./script/prove-block.sh taiko_a7 native 10
Look into prove-block.sh for the available options, or run the script without arguments for hints. You can also automatically sync with the tip of the chain and prove all new blocks:
./script/prove-block.sh taiko_a7 native sync
For all host programs, you can enable CPU optimization by exporting CPU_OPT=1.
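For example (a sketch; CPU_OPT simply needs to be set in the environment before building or running):

```shell
# Enable CPU-tailored optimizations for the host build (opt-in via CPU_OPT)
export CPU_OPT=1
# then e.g.: TARGET=sp1 make run
```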
To install, build, and run in one step:
$ export TARGET=risc0
$ make install && make build && make run
To build and run tests on the Risc0 zkVM:
$ TARGET=risc0 make test
If you are using the Bonsai service, edit script/setup-bonsai.sh to set up your API key, endpoint, and on-chain verifier address.
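The settings the script needs presumably look like this (the API key and endpoint variable names come from the Risc0 Bonsai SDK; the verifier-address variable is a hypothetical placeholder -- check the script itself for the exact names):

```shell
# Assumed Bonsai configuration; BONSAI_API_KEY / BONSAI_API_URL are the Risc0 Bonsai SDK's variables
export BONSAI_API_KEY="YOUR_API_KEY"            # your Bonsai API key
export BONSAI_API_URL="https://api.bonsai.xyz"  # Bonsai endpoint
# Hypothetical variable name for the on-chain verifier address -- verify against setup-bonsai.sh
export VERIFIER_ADDRESS="YOUR_VERIFIER_ADDRESS"
```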
$ ./script/setup-bonsai.sh
$ ./script/prove-block.sh taiko_a7 risc0-bonsai 10
If you have an NVIDIA GPU with CUDA or an Apple GPU (Metal) to accelerate Risc0 proving, you can run:
# cuda
$ cargo run -F cuda --release --features risc0
# metal
$ cargo run -F metal --release --features risc0
Note that CUDA needs to be installed when using the cuda feature: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
To install, build, and run in one step:
$ export TARGET=sp1
$ make install && make build && make run
To build and run tests on the SP1 zkVM:
$ TARGET=sp1 make test
Some optimized configuration tailored to the host machine can be found here.
To install, build, and run in one step:
$ export TARGET=sgx
$ make install && make build && make run
To build and run tests related to the SGX prover:
$ TARGET=sgx make test
If your CPU doesn't support SGX, you can still run the SGX code through Gramine as if it were on an SGX machine:
$ MOCK=1 TARGET=sgx make run
You can generate an execution trace for the block being proven by enabling the tracer feature:
$ cargo run --features tracer
A traces folder will be created inside the root directory. It will contain JSON files with the trace of each valid transaction in the block.
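A minimal sketch for inspecting the output after a tracer run (the host creates the traces folder during the run; it is created here only so the sketch runs standalone):

```shell
# Hypothetical: list the per-transaction trace files produced by a tracer run
mkdir -p traces   # normally created by the host itself
ls traces
```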
When running any of the provers, OpenAPI UIs are available in both Swagger and Scalar flavors at /swagger-ui and /scalar, respectively.