This is the backend for the deno.land/x service.
There are a few guidelines / rules that you should follow when publishing a module:
- Please only register module names that you will actually use.
- Do not squat names. If you do, we might transfer the name to someone that makes better use of it.
- Do not register names which contain trademarks that you do not own.
- Do not publish modules containing illegal content.
In addition to these guidelines there are also hard limits:
- You cannot publish more than 3 different modules from a single repository source.
- You cannot publish more than 15 modules from a single GitHub account or organization.
If you need an increase to these quotas, please reach out to ry@tinyclouds.org.
To set up and deploy this service you will need:
- An AWS account
- A MongoDB Atlas account
- Create a cluster on MongoDB Atlas. An M2 cluster is enough in most cases.
- Create a database user on Atlas. The user should have the `readWrite` database permission.
- Get the database connection string and insert the username and password for the user you just created. It should look something like this: `mongodb+srv://user:password@zyxwvu.fedcba.mongodb.net/?retryWrites=true&w=majority`
- Save this connection string in AWS Secrets Manager with the name `mongodb/atlas/deno_registry2` and the value key `MongoURI` (see the command-line sketch after this list).
- Create a database called `production` in your cluster.
- In this database create a collection called `modules`.
- In this collection create a new Atlas Search index with the name `default` and the mapping defined in `indexes/atlas_search_index_mapping.json`.
- In this collection create a new index with the name `by_owner_and_repo` as defined in `indexes/modules_by_owner_and_repo.json`.
- In this collection create a new index with the name `by_is_unlisted_and_star_count` as defined in `indexes/modules_by_is_unlisted_and_star_count.json`.
- In this database create a collection called `builds`.
- In this collection create a new unique index with the name `by_name_and_version` as defined in `indexes/builds_by_name_and_version.json`.
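The index definitions referenced above live in the `indexes/` directory of this repository. If you prefer the command line over the Atlas UI, a minimal sketch of the secret and collection setup could look like this. Note that the key specs below are assumptions inferred from the index names (use the exact definitions from the `indexes/*.json` files), and that the Atlas Search index itself has to be created through the Atlas UI or API:

```bash
# Store the connection string in AWS Secrets Manager under the value key `MongoURI`.
aws secretsmanager create-secret \
  --name mongodb/atlas/deno_registry2 \
  --secret-string '{"MongoURI":"mongodb+srv://user:password@zyxwvu.fedcba.mongodb.net/?retryWrites=true&w=majority"}'

# Create the collections and regular indexes with mongosh.
mongosh "mongodb+srv://user:password@zyxwvu.fedcba.mongodb.net/production" --eval '
  db.createCollection("modules");
  // Key specs are assumptions based on the index names; see indexes/*.json.
  db.modules.createIndex({ owner: 1, repo: 1 }, { name: "by_owner_and_repo" });
  db.modules.createIndex({ is_unlisted: 1, star_count: 1 }, { name: "by_is_unlisted_and_star_count" });
  db.createCollection("builds");
  db.builds.createIndex({ name: 1, version: 1 }, { name: "by_name_and_version", unique: true });
'
```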
Make sure to follow the official instructions to log in to ECR via the Docker CLI - this is needed to push the images used by the Lambda deployment to ECR.

```bash
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
```
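For example, with a hypothetical account ID and region filled in:

```bash
# Hypothetical values: AWS account 123456789012, region us-east-1.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```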
1. Install the `aws` CLI.
2. Sign in to `aws` by running `aws configure`.
3. Install Terraform version 0.13 or higher.
4. Copy `terraform/terraform.tfvars.example` to `terraform/terraform.tfvars`.
5. Move to the `terraform/` directory and comment out the `backend` section in the `meta.tf` file (important for a first-time apply; a sketch of this block is shown after the commands below).
6. Run the following steps:
```bash
terraform init
terraform plan -var-file terraform.tfvars -out plan.tfplan
terraform apply plan.tfplan
aws s3 ls | grep 'terraform-state' # take note of your tf state bucket name
# before the final step, go back and remove the comments from step 5
terraform init -backend-config "bucket=<your-bucket-name>" -backend-config "region=<aws-region>"
```
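For step 5, the `backend` section in `meta.tf` is the block that tells Terraform to store its state in the S3 bucket created during the apply. A hypothetical sketch (the actual file may differ) of what it looks like while commented out:

```hcl
# Sketch only - check meta.tf for the real block. Comment it out for the
# first apply, then restore it before the final `terraform init` above.
# terraform {
#   backend "s3" {
#     key = "terraform.tfstate"
#   }
# }
```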
Before destroying your staging environment, make sure to:
- run `terraform state pull` to make a local copy of your state file
- comment out the `backend` section of the `meta.tf` file
- re-initialize your terraform workspace by running `terraform init -backend-config "region=<aws-region>"`
- make sure you empty your S3 buckets, otherwise the destroy will fail
You can then run `terraform destroy` to completely remove your staging environment.
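Put together, a teardown session might look like the following sketch (the bucket name placeholder is whatever `aws s3 ls` reported during deployment):

```bash
# Make a local copy of the state file.
terraform state pull > terraform.tfstate.backup

# After commenting out the backend section in meta.tf, re-initialize locally.
terraform init -backend-config "region=<aws-region>"

# Empty the S3 buckets, otherwise the destroy will fail.
aws s3 rm s3://<your-bucket-name> --recursive

# Completely remove the staging environment.
terraform destroy
```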
To run tests locally, make sure you have Docker and docker-compose installed. Then run:
```bash
make test
```