```shell
yarn install
yarn db:migrate up
yarn lint:check
yarn test
```
With defaults:

```shell
yarn start
```

Starting at an arbitrary block height (only works immediately after the initial DB migration):

```shell
START_HEIGHT=800000 yarn start
```
You can run the ar.io gateway as a standalone Docker container:

```shell
docker build . -t ar-io-core:latest
docker run -p 4000:4000 -v ar-io-data:/app/data ar-io-core:latest
```

To run with a specified start height (sets the height on the first run only):

```shell
docker run -e START_HEIGHT=800000 -v $PWD/data/:/app/data ar-io-core:latest
```
You can also run Envoy alongside an ar.io node via Docker Compose. Envoy will proxy routes to arweave.net that are not yet implemented in the ar.io node.

```shell
docker compose up --build
```

or:

```shell
docker-compose up --build
```

Once running, requests can be directed to the Envoy server at localhost:3000.
When running via docker-compose, it will read a `.env` file in the project root directory and use the environment variables set there.

Add the following to your `.env` file to proxy GraphQL to another server while using the ar.io gateway to serve data (using arweave.net GraphQL as an example):

```shell
GRAPHQL_HOST=arweave.net
GRAPHQL_PORT=443
```
The ar.io gateway supports unbundling and indexing ANS-104 bundle data. To enable this, add the following environment variables to your `.env` file:

```shell
ANS104_UNBUNDLE_FILTER="<filter string>"
ANS104_INDEX_FILTER="<filter string>"
```

`ANS104_UNBUNDLE_FILTER` determines which TXs and data items (in the case of nested bundles) are unbundled, and `ANS104_INDEX_FILTER` determines which data items within a bundle get indexed.

The following types of filters are supported:

```
{ "never": true } # the default
{ "always": true }
{ "attributes": { "owner": <owner key>, ... }}
{ "tags": [{ "name": <utf8 tag name>, "value": <utf8 tag value> }, ...]}
{ "and": [ <nested filter>, ... ]}
{ "or": [ <nested filter>, ... ]}
{ "not": [ <nested filter>, ... ]}
```
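Filters can be nested to express compound conditions. As an illustrative sketch (the tag value and owner key below are placeholders, not real values), the snippet builds a filter matching items tagged `App-Name=MyApp` that are *not* from a particular owner, and sanity-checks that it parses as JSON before you paste it into `.env`:

```shell
# Hypothetical combined filter (placeholder values): unbundle items tagged
# App-Name=MyApp that are NOT owned by PLACEHOLDER_OWNER_KEY.
FILTER='{ "and": [ { "tags": [{ "name": "App-Name", "value": "MyApp" }] }, { "not": [ { "attributes": { "owner": "PLACEHOLDER_OWNER_KEY" } } ] } ] }'

# Sanity-check that the filter is valid JSON before adding it to .env:
echo "$FILTER" | python3 -m json.tool > /dev/null && echo "filter OK"
```

Validating the JSON up front avoids restarting the gateway only to discover a malformed filter string.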
Place an ANS-104 bundle at the start of the queue for unbundling and indexing on your gateway:

```shell
curl -X PUT -H "Authorization: Bearer <ADMIN_KEY>" \
  -H "Content-Type: application/json" \
  "http://<HOST>:<PORT>/ar-io/admin/queue-tx" \
  -d '{ "id": "<ID>" }'
```
Note: ANS-104 indexing support is currently experimental. It has been tested successfully with small sets of bundles (using filters), but you may still encounter problems with it when indexing larger sets of transactions.
The ar.io gateway includes a feature to emit webhooks to specified servers when a transaction or data item is indexed and matches a predefined filter. This allows for real-time notifications and integrations based on transaction and data item indexing.
To use this feature, you need to set up two environment variables in your `.env` file:

- `WEBHOOK_TARGET_SERVERS`: a comma-separated list of servers to which the webhooks will be sent.

  Format: `WEBHOOK_TARGET_SERVERS="<server1>,<server2>,..."`

- `WEBHOOK_INDEX_FILTER`: determines which transactions or data items trigger webhook emission. The filter syntax is identical to `ANS104_INDEX_FILTER`. Supported filter types include:

  ```
  { "never": true } # the default
  { "always": true }
  { "attributes": { "owner": <owner key>, ... }}
  { "tags": [{ "name": <utf8 tag name>, "value": <utf8 tag value> }, ...]}
  { "and": [ <nested filter>, ... ]}
  { "or": [ <nested filter>, ... ]}
  ```

  Example: `WEBHOOK_INDEX_FILTER='{ "tags": [{ "name": "App-Name", "value": "MyApp" }]}'`
After setting up the environment variables, the ar.io gateway will monitor for transactions or data items that match the `WEBHOOK_INDEX_FILTER`. Once a match is found, a webhook will be emitted to all the servers listed in `WEBHOOK_TARGET_SERVERS`.
Ensure that the target servers are configured to receive and process these webhooks appropriately.
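Putting the two variables together, a complete `.env` webhook configuration might look like the following sketch (the server URLs and tag value are placeholders, not real endpoints):

```shell
# .env — webhook configuration (placeholder values)
WEBHOOK_TARGET_SERVERS="http://localhost:3001/webhook,https://hooks.example.com/ar-io"
WEBHOOK_INDEX_FILTER='{ "tags": [{ "name": "App-Name", "value": "MyApp" }]}'
```

Note the single quotes around the filter value, which keep the inner double quotes intact.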
Add the following to your `.env` file to enable ArNS resolution:

```shell
ARNS_ROOT_HOST=<gateway-hostname>
```

For example, if your gateway's hostname were my-gateway.net, your `.env` would contain the following:

```shell
ARNS_ROOT_HOST=my-gateway.net
```

This would allow you to resolve names like my-arns-name.my-gateway.net, provided you correctly configured a wildcard DNS entry for your gateway.
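The wildcard entry itself is created with your DNS provider. In BIND-style zone file notation it might look like this (the IP address is a documentation placeholder, and your provider's interface may differ):

```
; Example zone entries for ArNS wildcard resolution (placeholder IP)
my-gateway.net.    300  IN  A  203.0.113.10
*.my-gateway.net.  300  IN  A  203.0.113.10
```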
Note: ArNS data ID resolution is currently delegated to arweave.dev. Routing is handled locally, but ArNS state is not yet computed locally. Local ArNS state computation will be added in a future release. Also be aware that ArNS is still using a test contract, so resolved names should be considered temporary.
In order to participate in the ar.io network, gateways need to associate themselves with a wallet. This can be configured by setting the `AR_IO_WALLET` environment variable. Once set, the associated wallet address is visible via the `/ar-io/info` endpoint.
Similarly, network participants must make observations of other gateways and submit them. The wallet for this is configured using the `OBSERVER_WALLET` environment variable. An associated key file is also required to upload observation reports. The key file must be placed in `./wallets/<OBSERVER_WALLET>.json` (`<OBSERVER_WALLET>` should be replaced with the address of the wallet you are using).
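For reference, the relevant `.env` entries and key file placement might look like the following sketch (the address shown is a placeholder, not a real wallet):

```shell
# .env — network participation wallets (placeholder address)
AR_IO_WALLET=<your-wallet-address>
OBSERVER_WALLET=<your-observer-wallet-address>

# The observer key file must then live at:
#   ./wallets/<your-observer-wallet-address>.json
```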
HTTP endpoints under `/ar-io/admin` are protected by an admin API key. On startup, the admin key is read from the `ADMIN_API_KEY` environment variable. If no key is set, a random key is generated and logged. To make a request to an admin endpoint, add an `Authorization: Bearer <ADMIN_API_KEY>` header to your request.
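If you prefer a stable key over the randomly generated one, you can generate your own and persist it in `.env`; a minimal sketch, assuming `openssl` is available:

```shell
# Generate a 64-character hex admin key and append it to .env:
ADMIN_API_KEY="$(openssl rand -hex 32)"
echo "ADMIN_API_KEY=${ADMIN_API_KEY}" >> .env
```

A pinned key survives restarts, so scripts and monitoring tools do not need to re-read the logs for a new key each time.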
Block a specific TX/data item ID on your gateway:

```shell
curl -X PUT -H "Authorization: Bearer <ADMIN_KEY>" \
  -H "Content-Type: application/json" \
  "http://<HOST>:<PORT>/ar-io/admin/block-data" \
  -d '{ "id": "<ID>", "notes": "Example notes", "source": "Example source" }'
```
`notes` and `source` are for documentation only. `source` is intended to be an identifier of a particular source of IDs to block (e.g., the name of a blocklist). `notes` is a text field that can be used to further describe why a particular ID is blocked.
- Code to interfaces.
- Separate IO from application logic.
- Make processes idempotent whenever possible.
- Separate mutable from immutable data.
- Avoid trusting data when the cost to validate it is low.
- To support rapid development iteration, all system components must be runnable in a single process.
- Keep the compile and test loop blazingly fast.
- In general, prefer in-memory implementations over mocks and stubs.
- In general, prefer sociable over solitary tests.
- Commit messages should describe both what is being changed and why it is being changed.
- Make liberal use of Prometheus metrics to aid in monitoring and debugging.
- Follow the Prometheus metrics naming recommendations.