```
cp .env.example .env
```

Adjust the `.env` values according to your needs, then source the file so the variables are available in your current environment:

```
source .env
```

If you are using zsh (for example, on macOS), you may need to mark the variables for export before calling any make target:

```
set -a; source .env; set +a
```

Build and start the containers:

```
docker compose build
docker compose up
```
- Install Python 3.11 or greater
- Install pip
- Install virtualenv
- Install sqlite-devel / libsqlite3-dev
- A Docker daemon must be available for development
Create a virtual environment:

```
make venv
```

Activate it:

```
source venv/bin/activate
```

Execute `deactivate` to exit the virtual environment. Then use the `build` make target to fetch and compile all requirements:

```
make build
```
Make sure that:

- Docker is running
- the environment is loaded

Start the data stores used by this project.

PostgreSQL:

```
make start-postgres
```

Redis:

```
make start-redis
```

Or both together:

```
make start-databases
```

To stop the data stores, use Docker Compose:

```
docker-compose down postgres
docker-compose down redis
```
Start the development environment:

```
make start-dev
```
Some make targets can be used with or without Docker for the app container: `start-app`, `start-dev`, `test`, `run-migration`.

- When using Docker, the environment variable `DATABASE_HOST` has to point to the name of the postgres container: `DATABASE_HOST=postgres`.

For example:

```
make test
make test use-docker=true
```
If your blockchain is running and configured in `.env` under `BLOCKCHAIN_URL`, you can start the event listener by running:

```
make start-listener
```

It will sync the database with the chain and try to fetch a new block every `BLOCK_CREATION_INTERVAL` seconds.
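For intuition, here is a minimal sketch of that polling pattern. `fetch_latest_block` and `sync_block` are hypothetical stand-ins for the service's actual listener code; only the two environment variables come from this document.

```python
import os
import time


def fetch_latest_block(blockchain_url: str):
    """Hypothetical placeholder: fetch the newest block from the chain."""
    ...


def sync_block(block) -> None:
    """Hypothetical placeholder: write the block's events to the database."""
    ...


def run_listener() -> None:
    url = os.environ["BLOCKCHAIN_URL"]
    interval = int(os.environ.get("BLOCK_CREATION_INTERVAL", "6"))
    last_seen = None
    while True:
        block = fetch_latest_block(url)
        if block is not None and block != last_seen:
            sync_block(block)
            last_seen = block
        # Wait at least BLOCK_CREATION_INTERVAL seconds between fetch attempts.
        time.sleep(interval)
```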
The API documentation is available at `/redoc/`:

- default: http://127.0.0.1:8000/redoc/
The service is configured via the following environment variables:

- `APPLICATION_STAGE`
  - type: str
  - default: `development`
  - use `production` for prod
- `BASE_PORT`
  - type: int
  - default: `8000`
- `BASE_URL`
  - type: str
  - default: `http://127.0.0.1:8000`
- `DATABASE_HOST`
  - type: str
  - default: `0.0.0.0`
  - use `postgres` when working with Docker
- `DATABASE_NAME`
  - type: str
  - default: `core`
- `DATABASE_PORT`
  - type: int
  - default: `5432`
- `DATABASE_USER`
  - type: str
  - default: `postgres`
- `DATABASE_PASSWORD`
  - type: str
  - default: `postgres`
- `SLACK_DEFAULT_URL`
  - type: str
  - optional
  - incoming Slack webhook URL used by the `alerts.slack` logger
- `CONFIG_SECRET`
  - type: str
  - only required for the use of the `/update-config/` endpoint; used as authorization for that endpoint, provided as the request header `Config-Secret` (a request sketch follows after this list)
- `BLOCKCHAIN_URL`
  - type: str
- `BLOCK_CREATION_INTERVAL`
  - type: int
  - default: `6` seconds; minimum time the event listener waits before trying to fetch the newest block from the chain
- `RETRY_DELAYS`
  - type: str
  - default: `5,10,30,60,120` seconds; comma-separated list
  - increasing retry delays for blockchain actions
  - the last value will be used for all further retries (a parsing sketch follows after this list)
- `CORE_CONTRACT_ADDRESS`
  - type: str
  - address of the core contract, used as an initial trusted contract to fetch events for
- `VOTES_CONTRACT_ADDRESS`
  - type: str
  - address of the votes contract, used as an initial trusted contract to fetch events for
- `ASSETS_WASM_HASH`
  - type: str
  - assets contract WASM hash; not used by the service aside from providing it to the frontend via the `/config/` endpoint
- `NETWORK_PASSPHRASE`
  - type: str
  - network passphrase (https://soroban.stellar.org/docs/reference/futurenet); not used by the service aside from providing it to the frontend via the `/config/` endpoint
- `FILE_UPLOAD_CLASS`
  - type: str
  - default: `"core.file_handling.aws.s3_client"`
  - class used to upload metadata; requires a method with this signature (an example implementation follows after this list):

    ```python
    def upload_file(self, file, storage_destination=None) -> Optional[str]:
        """
        Args:
            file: file to upload (file-like obj, readable)
            storage_destination: path str / folder name, e.g.: "folder_1/folder_2/my_file.jpeg"

        Returns:
            url of uploaded file
        """
    ```
- `ENCRYPTION_ALGORITHM`
  - type: str
  - default: `"sha3_256"`
  - hashlib algorithm used to hash the uploaded metadata; uses `hexdigest()` (a short hashing sketch follows after this list)
- `MAX_LOGO_SIZE`
  - type: int
  - default: `2_000_000` (2 MB); maximum allowed logo size
- these are only required when using the default `FILE_UPLOAD_CLASS` `core.file_handling.aws.s3_client` (a boto3 sketch follows after this list):
  - `AWS_STORAGE_BUCKET_NAME`
    - type: str
    - name of the AWS bucket to store metadata in
  - `AWS_REGION`
    - type: str
    - AWS region of said bucket
  - `AWS_S3_ACCESS_KEY_ID`
    - type: str
    - AWS access key to access said bucket
  - `AWS_S3_SECRET_ACCESS_KEY`
    - type: str
    - AWS secret access key to access said bucket
  - or a similar AWS authentication method supported by boto3
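As referenced above, a hedged sketch of calling the `/update-config/` endpoint with the `Config-Secret` header. Only the path and header name come from this document; the use of `requests`, the POST method, and the empty body are assumptions for illustration.

```python
import os

import requests  # third-party HTTP client: pip install requests

BASE_URL = os.environ.get("BASE_URL", "http://127.0.0.1:8000")

# POST is an assumption -- the endpoint may expect a different method or body.
response = requests.post(
    f"{BASE_URL}/update-config/",
    headers={"Config-Secret": os.environ["CONFIG_SECRET"]},
)
response.raise_for_status()
print(response.status_code)
```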
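A sketch of how `RETRY_DELAYS` could be parsed and consumed, based only on the documented format; the service's actual implementation may differ.

```python
import os
from itertools import islice
from typing import Iterator


def retry_delays() -> Iterator[int]:
    """Yield retry delays forever; the last configured value repeats."""
    raw = os.environ.get("RETRY_DELAYS", "5,10,30,60,120")
    delays = [int(value) for value in raw.split(",")]
    yield from delays
    while True:
        # The last value is used for all further retries.
        yield delays[-1]


# Example: the first seven delays -> 5, 10, 30, 60, 120, 120, 120
print(list(islice(retry_delays(), 7)))
```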
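A minimal sketch of a custom upload class satisfying the `upload_file` interface required by `FILE_UPLOAD_CLASS`, writing to local disk instead of S3. The class name and its behavior are hypothetical; the shipped default is `core.file_handling.aws.s3_client`.

```python
import os
from typing import Optional


class LocalFileUpload:
    """Hypothetical FILE_UPLOAD_CLASS that stores files on local disk."""

    def __init__(self, base_dir: str = "/tmp/metadata") -> None:
        self.base_dir = base_dir

    def upload_file(self, file, storage_destination=None) -> Optional[str]:
        destination = os.path.join(self.base_dir, storage_destination or "upload.bin")
        os.makedirs(os.path.dirname(destination), exist_ok=True)
        with open(destination, "wb") as target:
            target.write(file.read())
        # The interface expects the URL of the uploaded file;
        # a file:// URL stands in for an S3 URL here.
        return f"file://{destination}"
```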
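The `ENCRYPTION_ALGORITHM` hashing step presumably amounts to something like the following; `hashlib.new()` accepts any algorithm name the local hashlib supports, e.g. the default `"sha3_256"`, and the sample metadata bytes are a made-up stand-in.

```python
import hashlib
import os

algorithm = os.environ.get("ENCRYPTION_ALGORITHM", "sha3_256")
metadata = b'{"name": "example"}'  # stand-in for uploaded metadata bytes
print(hashlib.new(algorithm, metadata).hexdigest())
```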
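Finally, a sketch of how the AWS settings plug into boto3. The real wiring lives in `core.file_handling.aws`; the client construction and upload call below are standard boto3 usage, while the file name and object key are made-up examples.

```python
import os

import boto3  # third-party AWS SDK: pip install boto3

s3 = boto3.client(
    "s3",
    region_name=os.environ["AWS_REGION"],
    aws_access_key_id=os.environ["AWS_S3_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_S3_SECRET_ACCESS_KEY"],
)
bucket = os.environ["AWS_STORAGE_BUCKET_NAME"]

# upload_fileobj streams a readable file-like object into the bucket.
with open("logo.png", "rb") as fh:
    s3.upload_fileobj(fh, bucket, "folder_1/folder_2/logo.png")
```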