The following links, repos, companies and projects have been important in the development of this repo; we have learned a lot from them and want to thank and acknowledge them.
If we forgot to include anyone, please file an issue so we can add you. We always strive to reference the inspirations and code we use, but as an organization with multiple people, mistakes can happen, and someone might forget to include a reference.
Many long-established clients accumulate bloat over time. This often occurs due to the need to support legacy features for existing users or through attempts to implement overly ambitious software. The result is often complex, difficult-to-maintain, and error-prone systems.
In contrast, our philosophy is rooted in simplicity. We strive to write minimal code, prioritize clarity, and embrace simplicity in design. We believe this approach is the best way to build a client that is both fast and resilient. By adhering to these principles, we will be able to iterate fast and explore next-generation features early, either from the Ethereum roadmap or from innovations from the L2s.
Read more about our engineering philosophy here
- Ensure effortless setup and execution across all target environments.
- Be vertically integrated. Have a minimal number of dependencies.
- Be structured in a way that makes it easy to build on top of it, i.e. rollups, VMs, etc.
- Have a simple type system. Avoid generics leaking all over the codebase.
- Have few abstractions. Do not generalize until you absolutely need it. Repeating code two or three times can be fine.
- Prioritize code readability and maintainability over premature optimizations.
- Avoid concurrency spread all over the codebase. Concurrency adds complexity; only use it where strictly necessary.
This client supports running in two different modes:
- As a regular Ethereum execution client, like `geth`.
- As a ZK-Rollup, where block execution is proven and the proof sent to an L1 network for verification, thus inheriting the L1's security.
We call the first one Lambda Ethereum Rust L1 and the second one Lambda Ethereum Rust L2.
The main differences between the L2 mode and regular Ethereum Rust are:
- There is no consensus, only one sequencer proposes blocks for the network.
- Block execution is proven using a RISC-V zkVM and its proofs are sent to L1 for verification.
- A set of Solidity contracts to be deployed to the L1 are included as part of network initialization.
- Two new types of transactions are included: deposits (native token mints) and withdrawals.
Demo video: `deposit_demo.mov`
An Ethereum execution client consists roughly of the following parts:
- A storage component, in charge of persisting the chain's data. This requires, at the very least, storing it in a Merkle Patricia Tree data structure to calculate state roots. It also requires some on-disk database; we currently use libmdbx but intend to change that in the future.
- A JSON RPC API. A set of HTTP endpoints meant to provide access to the data above and also interact with the network by sending transactions. Also included here is the Engine API, used for communication between the execution and consensus layers.
- A Networking layer implementing the peer to peer protocols used by the Ethereum Network. The most important ones are:
  - The `disc` protocol for peer discovery, using a Kademlia DHT for efficient searches.
  - The `RLPx` transport protocol used for communication between nodes; used by other protocols that build on top to exchange information, sync state, etc. These protocols built on top are usually called capabilities.
  - The Ethereum Wire Protocol (`ETH`), used for state synchronization and block/transaction propagation, among other things. This runs on top of `RLPx`.
  - The `SNAP` protocol, used for exchanging state snapshots. Mainly needed for snap sync, a more optimized way of doing state sync than the old fast sync (you can read more about it here).
- Block building and fork choice management (i.e. logic to both build blocks so a validator can propose them and set where the head of the chain is currently at, according to what the consensus layer determines). This is essentially what our `blockchain` crate contains.
- The block execution logic itself, i.e., an EVM implementation. We are finishing an implementation of our own called levm (Lambda EVM).
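To make that layering concrete, here is a rough sketch of how the pieces could fit together. The trait, struct and method names below are hypothetical, not the actual API of our crates:

```rust
// Illustrative only: hypothetical trait and type names, not the actual API
// of our crates.

/// Storage component: persists chain data and answers state queries, with
/// state roots computed from a Merkle Patricia Tree kept in an on-disk
/// database.
trait Store {
    fn head_hash(&self) -> [u8; 32];
    fn balance_at(&self, block_hash: [u8; 32], address: [u8; 20]) -> Option<u128>;
}

/// Block execution logic: an EVM implementation that applies a block's
/// transactions to the current state.
trait Evm {
    fn execute_block(&mut self, rlp_encoded_block: &[u8]) -> Result<(), String>;
}

/// Fork choice / block building layer (roughly our `blockchain` crate).
/// The JSON RPC API, the Engine API and the p2p layer all call into it.
struct Blockchain {
    store: Box<dyn Store>,
    evm: Box<dyn Evm>,
}

impl Blockchain {
    /// What an RPC endpoint like `eth_getBalance` ultimately resolves to.
    fn balance(&self, address: [u8; 20]) -> Option<u128> {
        let head = self.store.head_hash();
        self.store.balance_at(head, address)
    }
}

fn main() {}
```

The point is that the JSON RPC API, the Engine API and the p2p layer all end up calling into the same storage and execution core.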
Because most of the milestones below do not overlap much, we are currently working on them in parallel.
Implement the bare minimum required to:
- Execute incoming blocks and store the resulting state on an on-disk database (`libmdbx`). No support for reorgs/forks; every block has to be the child of the current head.
- Serve state through a JSON RPC API. No other networking yet (i.e. no p2p).
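A rough sketch of that minimal flow, using hypothetical in-memory types in place of the real libmdbx-backed store:

```rust
// Hypothetical in-memory types standing in for the real libmdbx-backed store.
use std::collections::HashMap;

struct Block {
    hash: [u8; 32],
    parent_hash: [u8; 32],
    // header fields, transactions, etc. omitted
}

struct Store {
    head: [u8; 32],
    blocks: HashMap<[u8; 32], Block>,
}

impl Store {
    /// Milestone 1 rule: no forks or reorgs, so an incoming block must
    /// extend the current head or it is rejected.
    fn import_block(&mut self, block: Block) -> Result<(), String> {
        if block.parent_hash != self.head {
            return Err("block does not extend the current head".into());
        }
        // Here the real client executes the block with the EVM and persists
        // the resulting state to the on-disk database.
        self.head = block.hash;
        self.blocks.insert(block.hash, block);
        Ok(())
    }
}

fn main() {
    let genesis = [0u8; 32];
    let mut store = Store { head: genesis, blocks: HashMap::new() };

    let child = Block { hash: [1u8; 32], parent_hash: genesis };
    assert!(store.import_block(child).is_ok());

    let orphan = Block { hash: [2u8; 32], parent_hash: [9u8; 32] };
    assert!(store.import_block(orphan).is_err());
}
```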
In a bit more detail:
| Task Description | Status |
| --- | --- |
| Add libmdbx bindings and basic API, create tables for state (blocks, transactions, etc) | ✅ |
| EVM wrapper for block execution | ✅ |
| JSON RPC API server setup | ✅ |
| RPC state-serving endpoints | 🏗️ (almost done, a few endpoints are left) |
| Basic Engine API implementation. Set new chain head (`forkchoiceUpdated`) and new block (`newPayload`). | ✅ |
See detailed issues and progress for this milestone here.
Implement support for block reorganizations and historical state queries. This milestone involves persisting the state trie to enable efficient access to historical states and implementing a tree structure for the blockchain to manage multiple chain branches. It also involves a real implementation of the `engine_forkchoiceUpdated` Engine API method for the case where we do not have to build the block ourselves (i.e. when `payloadAttributes` is null).
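For reference, a minimal sketch of the data this call carries; the field names mirror the Engine API's `forkchoiceUpdated` parameters, but the Rust types are illustrative rather than our actual definitions:

```rust
// Illustrative types only, not the crate's actual definitions; the field
// names mirror the Engine API's forkchoiceUpdated parameters.

type H256 = [u8; 32];
type Address = [u8; 20];

struct ForkchoiceState {
    head_block_hash: H256,
    safe_block_hash: H256,
    finalized_block_hash: H256,
}

struct PayloadAttributes {
    timestamp: u64,
    prev_randao: H256,
    suggested_fee_recipient: Address,
}

/// When `payload_attributes` is None (this milestone), the call only moves
/// the chain head; when it is Some, the node must also start building a
/// payload (milestone 3).
fn forkchoice_updated(
    state: ForkchoiceState,
    payload_attributes: Option<PayloadAttributes>,
) -> Result<(), String> {
    // 1. Check that `state.head_block_hash` refers to a known, valid block.
    // 2. Mark it as the new canonical head; record the safe/finalized hashes.
    // 3. If attributes are present, kick off payload building.
    let _ = (state, payload_attributes);
    Ok(())
}

fn main() {
    let zero = [0u8; 32];
    let state = ForkchoiceState {
        head_block_hash: zero,
        safe_block_hash: zero,
        finalized_block_hash: zero,
    };
    assert!(forkchoice_updated(state, None).is_ok());
}
```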
| Task Description | Status |
| --- | --- |
| Persist data on an on-disk Merkle Patricia Tree using libmdbx | ✅ |
| Engine API `forkchoiceUpdated` implementation (without `payloadAttributes`) | 🏗️ |
| Support for RPC historical queries, i.e. queries (`eth_call`, `eth_getBalance`, etc) at any block | ✅ |
Detailed issues and progress here.
Add the ability to build new payloads (blocks), so the consensus client can propose new blocks based on transactions received from the RPC endpoints.
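As a rough illustration of what payload building involves, here is a simplified sketch with hypothetical types; the real builder also executes each transaction and computes the state, receipts and transactions roots:

```rust
// Simplified, hypothetical types: pick the highest-paying pending
// transactions from the mempool that fit under the block gas limit.

struct Transaction {
    gas_limit: u64,
    max_fee_per_gas: u64,
    // signature, calldata, etc. omitted
}

fn build_payload(mut mempool: Vec<Transaction>, block_gas_limit: u64) -> Vec<Transaction> {
    // Highest-paying transactions first.
    mempool.sort_by(|a, b| b.max_fee_per_gas.cmp(&a.max_fee_per_gas));

    let mut included = Vec::new();
    let mut gas_used = 0;
    for tx in mempool {
        if gas_used + tx.gas_limit <= block_gas_limit {
            gas_used += tx.gas_limit;
            included.push(tx);
        }
    }
    included
}

fn main() {
    let mempool = vec![
        Transaction { gas_limit: 21_000, max_fee_per_gas: 10 },
        Transaction { gas_limit: 21_000, max_fee_per_gas: 30 },
    ];
    let payload = build_payload(mempool, 30_000_000);
    assert_eq!(payload[0].max_fee_per_gas, 30);
}
```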
| Task Description | Status |
| --- | --- |
| `engine_forkchoiceUpdated` implementation with a non-null `payloadAttributes` | 🏗️ |
| `engine_getPayload` endpoint implementation that builds blocks | 🏗️ |
| Implement a mempool and the `eth_sendRawTransaction` endpoint where users can send transactions | ✅ |
Detailed issues and progress here.
Implement the peer to peer networking stack, i.e. the DevP2P protocol. This includes `discv4`, `RLPx` and the `eth` capability. This will let us retrieve blocks and transactions from other nodes. We'll add the transactions we receive to the mempool. We'll also download blocks from other nodes when we get payloads whose parent isn't in our local chain.
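To give an idea of the discovery side: `discv4` keeps peers in a Kademlia routing table keyed by the XOR distance between node IDs (in discv4, the keccak256 hashes of the nodes' public keys). Below is a minimal, illustrative sketch of that metric, not our actual implementation:

```rust
// Illustrative sketch of the Kademlia distance metric used by discv4,
// assuming 32-byte node IDs (the keccak256 hash of a node's public key).

type NodeId = [u8; 32];

/// log2 of the XOR distance, i.e. which of the 256 k-buckets `other`
/// falls into relative to `local`. Returns None when the IDs are equal.
fn bucket_index(local: &NodeId, other: &NodeId) -> Option<usize> {
    for (i, (a, b)) in local.iter().zip(other.iter()).enumerate() {
        let xor = a ^ b;
        if xor != 0 {
            // 255 for a difference in the very first bit, 0 for the last.
            return Some(255 - (i * 8 + xor.leading_zeros() as usize));
        }
    }
    None
}

fn main() {
    let local = [0u8; 32];
    let mut other = [0u8; 32];
    other[31] = 0b0000_0001; // differs only in the lowest bit
    assert_eq!(bucket_index(&local, &other), Some(0));
    other[0] = 0b1000_0000; // now also differs in the highest bit
    assert_eq!(bucket_index(&local, &other), Some(255));
}
```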
| Task Description | Status |
| --- | --- |
| Implement `discv4` for peer discovery | ✅ |
| Implement the `RLPx` transport protocol | 🏗️ |
| Implement the `eth` capability | 🏗️ |
Detailed issues and progress here.
Add support for the `SNAP` protocol, which lets us get a recent copy of the blockchain state instead of going through all blocks from genesis. This is used for snap sync. Since we don't support older versions of the spec by design, this is a prerequisite to being able to sync the node with public networks, including mainnet.
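As a rough, simplified sketch of what the protocol exchanges (request IDs, response size limits and the response messages are omitted; the types are illustrative, not our implementation):

```rust
// Simplified, illustrative sketch of the snap capability's request messages:
// a syncing node asks peers for contiguous ranges of accounts and storage
// slots under a given state root, plus missing bytecodes and trie nodes,
// instead of re-executing every block from genesis.

type Hash = [u8; 32];

enum SnapRequest {
    /// Accounts whose hashed keys fall in [starting_hash, limit_hash]
    /// under the given state root.
    GetAccountRange { root: Hash, starting_hash: Hash, limit_hash: Hash },
    /// Storage slots for a set of accounts, again bounded by a key range.
    GetStorageRanges { root: Hash, accounts: Vec<Hash>, starting_hash: Hash, limit_hash: Hash },
    /// Contract bytecodes by code hash.
    GetByteCodes { hashes: Vec<Hash> },
    /// Trie nodes by path, used to heal gaps once the state has moved on.
    GetTrieNodes { root: Hash, paths: Vec<Vec<u8>> },
}

fn main() {
    let _request = SnapRequest::GetByteCodes { hashes: vec![[0u8; 32]] };
}
```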
| Task Description | Status |
| --- | --- |
| Implement SNAP protocol for snap syncing | ❌ |
Detailed issues and progress here.
make localnet
This make target will:
- Build our node inside a docker image.
- Fetch our fork of the ethereum package, a private testnet on which multiple Ethereum clients can interact.
- Start the localnet with kurtosis.
If everything went well, you should see our client's logs (ctrl-c to exit).
To stop everything, simply run:
make stop-localnet
To build the node, you will need the Rust toolchain:
- First, install asdf:
- Add the rust plugin:
asdf plugin-add rust https://github.com/asdf-community/asdf-rust.git
- cd into the project and run:
asdf install
You should now be able to build the client:
make build
Currently, the database is `libmdbx`. It will be set up when you start the client. The location of the db's files will depend on your OS:
- Mac: `~/Library/Application Support/ethereum_rust`
- Linux: `~/.config/ethereum_rust`
You can delete the db with:
cargo run --bin ethereum_rust -- removedb
For testing, we're using three kinds of tests.
These are the official execution spec tests; you can execute them with:
make test
This will download the test cases from the official execution spec tests repo and run them with our glue code under `cmd/ef_tests/tests`.
The second kind are each crate's tests; you can run them like this:
make test CRATE=<crate>
For example:
make test CRATE="ethereum_rust-blockchain"
Finally, we have end-to-end tests with hive. Hive is a system which simply sends RPC commands to our node and expects a certain response. You can read more about it here. Hive tests are categorized by "simulations", and test instances can be filtered with a regex:
make run-hive-debug SIMULATION=<simulation> TEST_PATTERN=<test-regex>
This is an example of a Hive simulation called `ethereum/rpc-compat`, which will specifically run the chain id and transaction-by-hash RPC tests:
make run-hive SIMULATION=ethereum/rpc-compat TEST_PATTERN="/eth_chainId|eth_getTransactionByHash"
If you want debug output from hive, use the `run-hive-debug` target instead:
make run-hive-debug SIMULATION=ethereum/rpc-compat TEST_PATTERN="*"
This example runs every test under rpc, with debug output.
Example run:
cargo run --bin ethereum_rust -- --network test_data/genesis-kurtosis.json
The `network` argument is mandatory, as it defines the parameters of the chain. For more information about the different CLI arguments, check out the next section.
Ethereum Rust supports the following command line arguments:
- `--network <FILE>`: Receives a `Genesis` struct in json format. This is the only argument which is required. You can look at some example genesis files at `test_data/genesis*`.
- `--datadir <DIRECTORY>`: Receives the name of the directory where the database is located.
- `--import <FILE>`: Receives an rlp encoded `Chain` object (aka a list of `Block`s). You can look at the example chain file at `test_data/chain.rlp`.
- `--http.addr <ADDRESS>`: Listening address for the http rpc server. Default value: localhost.
- `--http.port <PORT>`: Listening port for the http rpc server. Default value: 8545.
- `--authrpc.addr <ADDRESS>`: Listening address for the authenticated rpc server. Default value: localhost.
- `--authrpc.port <PORT>`: Listening port for the authenticated rpc server. Default value: 8551.
- `--authrpc.jwtsecret <FILE>`: Receives the jwt secret used for authenticated rpc requests. Default value: jwt.hex.
- `--p2p.addr <ADDRESS>`: Default value: 0.0.0.0.
- `--p2p.port <PORT>`: Default value: 30303.
- `--discovery.addr <ADDRESS>`: UDP address for P2P discovery. Default value: 0.0.0.0.
- `--discovery.port <PORT>`: UDP port for P2P discovery. Default value: 30303.
- `--bootnodes <BOOTNODE_LIST>`: Comma separated enode URLs for P2P discovery bootstrap.
- `--log.level <LOG_LEVEL>`: The verbosity level used for logs. Default value: info. Possible values: info, debug, trace, warn, error.
Documentation for each crate can be found on the following links (still a work in progress, we will be adding more documentation as we go).