# microcosm

`microcosm` allows you to spin up a single mining Ethereum node that you can use to test:

- Smart-contract-based decentralized applications
- Code which interacts with an Ethereum blockchain, even if it doesn't live on the blockchain
- Modifications to Ethereum node implementations
In the first two capacities, it is a complement to Ganache, which diverges from Ethereum clients like `geth` and `parity` in its implementation of the JSON-RPC specification. With `microcosm`, you currently get a `geth` node to test with.
The only requirement is Docker.

Pull the latest `microcosm` image from DockerHub:

```sh
docker pull fuzzyfrog/microcosm
```
Create a `microcosm` container, bind-mounting a volume onto `/root`:

```sh
MICROCOSM_DIR=$(mktemp -d)
docker run \
    -e NUM_ACCOUNTS=<number of accounts to provision> \
    -v $MICROCOSM_DIR:/root \
    fuzzyfrog/microcosm \
    <geth arguments>
```
If you look in `$MICROCOSM_DIR`, you will see the `microcosm` directory. This directory contains the `geth` data directory as a subdirectory -- `$MICROCOSM_DIR/.ethereum`. It also contains the following files:

- `genesis.json` - Genesis file used to initialize the `microcosm` network being run
- `init` - File denoting that the network initialization was successful
- `accounts.txt` - File listing the addresses of accounts created by `microcosm`
- `passwords.txt` - File listing the passwords corresponding to each account in `accounts.txt`
The items in `$MICROCOSM_DIR` are owned by `root`. To take ownership of them from outside the container, run:

```sh
sudo chown -R $USER:$USER $MICROCOSM_DIR
```
Now you will be able to use the IPC socket `$MICROCOSM_DIR/geth.ipc` as a `web3` provider.
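As a quick sanity check of the socket, you can send a raw JSON-RPC request over it; this sketch assumes `socat` is installed on the host, and a `web3` IPC provider pointed at the same path speaks this same protocol:

```sh
# Send a raw eth_blockNumber request over the geth IPC socket.
# Assumes socat is installed and the microcosm container is running.
REQUEST='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
echo "$REQUEST" | socat - UNIX-CONNECT:$MICROCOSM_DIR/geth.ipc
```

A successful response is a JSON object whose `result` field holds the current block number as a hex string.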
For a side-by-side view of the `microcosm`-generated accounts and passwords, you can run:

```sh
pr -w 100 -m -t $MICROCOSM_DIR/accounts.txt $MICROCOSM_DIR/passwords.txt
```
As indicated above, you can pass arguments for `geth` directly when you run the `microcosm` Docker container. For example, if you want to expose the management APIs over the JSON-RPC interface, you can run:

```sh
docker run -p 8545:8545 -e NUM_ACCOUNTS=1 -v $MICROCOSM_DIR:/root \
    fuzzyfrog/microcosm --rpc --rpcaddr 0.0.0.0 --rpcapi eth,web3
```
Note: It is important to use `--rpcaddr 0.0.0.0` because of how Docker handles loopbacks within containers -- using the default of `127.0.0.1` means you will be unable to connect to the RPC API from outside the container.
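With the container started as above, you can exercise the published endpoint from the host with a plain JSON-RPC call over HTTP; `web3_clientVersion` is used here only because it takes no parameters:

```sh
# Query the JSON-RPC endpoint published on host port 8545.
# Assumes the container from the previous command is running.
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
  http://127.0.0.1:8545
```

The response's `result` field should identify the `geth` version running inside the container.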
This repository also provides a Helm chart that you can use to deploy `microcosm` to a Kubernetes cluster. This creates a `StatefulSet` resource provisioned with a 100 GB persistent disk in the standard storage class.

If you are already set up with `helm`, getting `microcosm` running is as simple as:

```sh
helm install ./helm/
```

(from this repository's root directory).
To get up and running with `helm`, follow the instructions here. You can deploy a custom storage class to your Kubernetes cluster following these instructions. You can modify the size of your `microcosm` volume in your custom `values.yaml` file.
- If you do not set `--networkid` in your values file, state will not persist between pod restarts. This is done via the `microcosm.networkId` parameter. See `helm/values.yaml` for an example.
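For example, a custom values file pinning the network id might look like the following sketch. Only the `microcosm.networkId` parameter is confirmed by the note above; any other keys should be checked against `helm/values.yaml`:

```yaml
# Hypothetical custom values.yaml -- verify key names against helm/values.yaml.
microcosm:
  networkId: 1337   # fixed network id so chain state persists across pod restarts
```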