CODM is a Serverless framework configuration for running OpenDroneMap on AWS.
CODM uses Docker, AWS Batch, S3, Lambda, and SNS to provide cloud infrastructure for running OpenDroneMap. A key benefit of this architecture is that ODM can be configured and executed simply by copying data to S3 to signal activity.
Like many things AWS, installation requires an active account with permissions to create things. CODM creates Batch configuration, SNS topics, Lambda functions, and a number of supporting roles to do its work. CODM has only been tested with a superuser account with permissions to do all of that.
Installation assumes you're going to use Conda to provide the required software. We also need Docker to extend and build the ODM image with CODM's ENTRYPOINT and execution script.
- Docker
- Conda (Conda Forge)
- NodeJS
- Serverless
- AWS CLI
```
conda create -n codm -c conda-forge nodejs
conda activate codm
pip install awscli
npm install -g serverless
npm install -g serverless-python-requirements
```
The installation requires several AWS environment variables that define the user and region where CODM is installed. It is easiest to set these directly in the conda environment so they are not forgotten on subsequent runs. After setting the variables, cycle the environment so they take effect:
```
conda env config vars set AWS_ACCESS_KEY_ID=
conda env config vars set AWS_SECRET_ACCESS_KEY=
conda env config vars set AWS_DEFAULT_REGION=
conda env deactivate
conda activate codm
```
- Print the environment variables for your AWS region. This gathers the subnets and the GPU AMI ID into `subnets.yaml` and `ami.yaml`:

  ```
  ./print-variables.sh
  ```

- Execute the Serverless deployment, using the GPU AMI from the `./print-variables.sh` call and a service name (in our case it is `codm`):

  ```
  sls deploy --service codm --stage dev
  ```

- Push the Docker image with the service name:

  ```
  ./deploy-docker.sh codm
  ```
Create a Slack Incoming Webhook and store the URL in `config.json` under the `slackhook` key.

Store the `sesdomain` and `sesregion` key/value pairs in `config.json`:

```json
{
  "sesregion": "us-east-1",
  "sesdomain": "rsgiscx.net"
}
```
Add a `notifications` list to the `settings.yaml` that is used by the collection:

```yaml
notifications:
  - 'howard@hobu.co'
  - 'hobu.inc+codm@gmail.com'
```
- The bucket must be empty before it can be removed. For the service described in Deployment, the bucket name would be `s3://codm-dev-codm`:

  ```
  aws s3 rm --recursive s3://codm-dev-codm
  ```

- Clean up the ECR repository:

  ```
  ./cleanup-docker codm
  ```

- Remove the deployment:

  ```
  sls remove --service codm --stage dev
  ```
- User copies data to `s3://bucket/prefix/*.jpg`
- User copies an empty `process` file to `s3://bucket/prefix/process` to signal ODM to start the execution.
- An S3 event trigger sees the `process` file and fires the dispatch Lambda function for the files in `s3://bucket/prefix/`
- The dispatch function creates a new Batch job for the data in `s3://bucket/prefix/`
- Batch runs the ODM job and uploads results to `s3://bucket/prefix/output`
- Notifications of success or failure are sent to the SNS topic.
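The dispatch step is driven by a simple naming convention: the prefix of the `process` marker key determines both the input and output locations. A minimal sketch of that derivation in shell (the key `flight-01/process` and the literal bucket name are hypothetical examples, not part of CODM):

```shell
#!/bin/sh
# Derive the job's input prefix and output location from the key of the
# uploaded "process" marker file, mirroring the convention described above.
key="flight-01/process"     # hypothetical marker key under the bucket
prefix="${key%process}"     # strip the trailing "process" -> "flight-01/"
echo "input:  s3://bucket/${prefix}"
echo "output: s3://bucket/${prefix}output"
```

The actual dispatch Lambda receives the key from the S3 event payload rather than a hard-coded string.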
CODM uses the `settings.yaml` file that OpenDroneMap provides for configuration. It works at multiple levels:

- The administrator can copy a default `settings.yaml` to `s3://bucket/settings.yaml`, and it will be copied into the ODM execution and used. It is suggested that the default settings use a simple configuration with low-resolution parameters.
- A user can copy a `settings.yaml` to `s3://bucket/prefix/settings.yaml` as part of their invocation to override any default settings provided by #1.
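For example, a per-collection override might look like the following (the option names are standard ODM settings, but the values are illustrative, not recommendations):

```yaml
# s3://bucket/prefix/settings.yaml -- overrides the bucket-wide defaults
feature-quality: high        # the default settings.yaml might use a lower value
orthophoto-resolution: 2     # cm/pixel
```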
If your imagery doesn't have embedded geospatial information, you might need to copy a `geo.txt` that maps the coordinates for each image to `s3://bucket/prefix/geo.txt`. This is likely to be needed for big collections, which otherwise might not match or converge.
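In ODM's `geo.txt` format, the first line gives the projection and each subsequent line maps an image name to its coordinates as `image_name x y z` (the filenames and coordinates below are illustrative):

```
EPSG:4326
DJI_0028.JPG -91.9942096 46.8425252 198.609
DJI_0029.JPG -91.9938085 46.8424584 198.609
```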
Azavea has excellent CloudFormation templates for GPU Batch at https://github.com/azavea/raster-vision-aws/blob/master/cloudformation/template.yml