Centralized cache for blockchain data
- scrape any data available on blockchain (events, tx data, traces)
- easily provide semantic layers on top
- ensure data consistency
npm install spock-etl
Spock exposes a CLI interface:
spock-etl etl|migrate|sync|api yourconfig.js|ts
- migrate — runs database migrations (both core and those defined in the config)
- sync — synchronizes state after config changes; it is highly recommended to always run sync before etl
- etl — launches the ETL process (a long-running process)
- api — runs a general GraphQL API exposing the database schema
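The CLI takes a JS/TS config file as its argument. A minimal sketch of such a config might look like the following; every field name here is an illustrative assumption, so check the actual schema in the spock-etl repository before using it:

```typescript
// Hypothetical spock-etl config sketch. All field names below are
// assumptions for illustration; consult the real schema in the repo.
export const config = {
  // block number to start scraping from
  startingBlock: 0,
  // Ethereum node to pull data from (see the node notes below)
  chain: {
    host: 'https://eth-mainnet.example/your-api-key', // assumed shape
    name: 'mainnet',
    retries: 15,
  },
  // extractors scrape raw on-chain data (events, tx data, traces)
  extractors: [],
  // transformers build semantic layers on top of the extracted data
  transformers: [],
}
```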
Spock pulls all its data from an Ethereum node. Nodes can differ greatly from one another, and some are simply not reliable or consistent. Based on our tests:
- Alchemy works
- Infura DOESN'T WORK — it can randomly return empty sets for getLogs calls
- Self-hosted nodes should work (not tested yet), but keep in mind that vulcan can generate quite a lot of network calls (around 500k daily)
yarn build — build everything
yarn build:watch — build and watch
yarn test:fix — run tests and auto-fix all errors
Tip: use yarn link to link packages locally.
docker-compose up -d — start services in the background
docker-compose stop — stop services
docker-compose down — stop and remove containers
We use consola for logging. By default it logs everything. To adjust the logging level, set the VL_LOGGING_LEVEL environment variable, e.g. VL_LOGGING_LEVEL=4 to omit detailed db logs (the most verbose level).
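Consola uses numeric levels where a message is emitted only if its level is at or below the configured threshold, which is why a threshold of 4 suppresses the most verbose (level 5) db logs. A minimal sketch of that filtering rule, with the function name being a hypothetical stand-in rather than an actual spock or consola API:

```typescript
// Sketch of consola-style numeric log-level filtering.
// shouldLog is a hypothetical helper, not a real spock/consola function:
// a message is emitted only if its level is <= the configured threshold.
function shouldLog(messageLevel: number, threshold: number): boolean {
  return messageLevel <= threshold
}

// With a threshold of 4 (as in VL_LOGGING_LEVEL=4),
// level-5 messages (the most verbose) are suppressed:
shouldLog(5, 4) // false — verbose db logs omitted
shouldLog(3, 4) // true  — normal info logs still shown
```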