Create an auto-scaling build cluster on AWS/VPC in under 10 minutes. Designed to support multiple different projects sharing a single stack and isolated builds of third-party pull-requests.
The easiest way is to launch the latest built version via this button:
If you'd like to use the CLI, copy `config.json.example` to `config.json`, fill in your values, and then run the following command to create a new stack.
```bash
aws cloudformation create-stack \
  --output text \
  --stack-name buildkite \
  --template-url "https://s3.amazonaws.com/buildkite-aws-stack/aws-stack.json" \
  --capabilities CAPABILITY_IAM \
  --parameters "$(cat config.json)"
```
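Stack creation takes a few minutes. If you want to block until it finishes and then confirm the result, the standard CloudFormation wait and describe commands work (stack name `buildkite` assumed from the command above):

```bash
# block until the stack reaches CREATE_COMPLETE (exits non-zero on failure/rollback)
aws cloudformation wait stack-create-complete --stack-name buildkite

# then inspect the stack's final status
aws cloudformation describe-stacks --stack-name buildkite \
  --query 'Stacks[0].StackStatus' --output text
```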
Alternately, if you prefer to use this repo, clone it and run the following command to set up things locally and create a remote stack.
```bash
# To set up your local environment and build a template based on public AMIs
make setup download-mappings build

# Or, to set things up locally and create the stack on AWS
make create-stack

# You can use any of the AWS_* environment variables that the aws-cli supports
AWS_PROFILE="SOMETHING" make create-stack
```
| Parameter | Description | Default |
|---|---|---|
| KeyName | The AWS EC2 keypair to use | default |
| BuildkiteOrgSlug | Your Buildkite organization slug (e.g. 99designs) | |
| BuildkiteAgentToken | Your Buildkite agent token | |
| BuildkiteApiAccessToken | A Buildkite API token used for collecting metrics | |
| BuildkiteQueue | The Buildkite queue to assign the agents to | elastic |
| SecretsBucket | An existing S3 bucket (and optional prefix) that contains secrets | |
| ArtifactsBucket | An existing S3 bucket (and optional prefix) that contains build artifacts | |
| InstanceType | The EC2 instance type to launch | t2.nano |
| MinSize | The minimum number of instances to launch | 0 |
| MaxSize | The maximum number of instances to launch | 10 |
| SpotPrice | An optional price to bid for spot instances (0 means on-demand) | 0 |
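When creating the stack via the CLI, these parameters go in `config.json` using the standard CloudFormation parameter format. A minimal sketch (all values below are placeholders):

```json
[
  { "ParameterKey": "KeyName", "ParameterValue": "default" },
  { "ParameterKey": "BuildkiteOrgSlug", "ParameterValue": "my-org" },
  { "ParameterKey": "BuildkiteAgentToken", "ParameterValue": "xxxxxxxx" },
  { "ParameterKey": "SecretsBucket", "ParameterValue": "my-provision-bucket" },
  { "ParameterKey": "MaxSize", "ParameterValue": "10" }
]
```

Parameters you omit fall back to the defaults in the table above.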
Check out `buildkite-elastic.yml` for more details.

Set your Agent Query Rules to `queue=elastic`, or to whatever `BuildkiteQueue` you provided to your stack.
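For example, a pipeline step can target these agents by setting the queue in its agent query rules (the command here is illustrative):

```yaml
steps:
  - command: "make test"
    agents:
      queue: "elastic"
```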
Your stack has access to the `SecretsBucket` parameter you passed in. This should be used in combination with server-side object encryption to ensure that your CI secrets (such as GitHub credentials) are reasonably secure. See the Security section for more details.

Two files are specifically looked for: `id_rsa_github`, for checking out your git code, and optionally `env`, which contains environment variables to expose to the job command.
By default, builds will look for `s3://{SecretsBucket}/{PipelineSlug}/filename`. You can override the `{PipelineSlug}` part with the `BUILDKITE_SECRETS_PREFIX` environment variable.

You should encrypt your objects with a project-specific key and provide it in `BUILDKITE_SECRETS_KEY`, which will be used to decrypt all the files found in the secrets bucket.
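Putting those lookup rules together: for a pipeline slugged `myproject`, the agent would check paths like the following (the bucket name is illustrative):

```
s3://my-provision-bucket/myproject/id_rsa_github
s3://my-provision-bucket/myproject/env
```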
```bash
# generate a deploy key for your project
ssh-keygen -t rsa -b 4096 -f id_rsa_github
pbcopy < id_rsa_github.pub # paste this into your github deploy key

# upload the private key, encrypted
PASSPHRASE=$(head -c 24 /dev/urandom | base64)
aws s3 cp --acl private --sse-c --sse-c-key "$PASSPHRASE" id_rsa_github "s3://my-provision-bucket/myproject/id_rsa_github"
pbcopy <<< "$PASSPHRASE" # paste passphrase into buildkite env as BUILDKITE_SECRETS_KEY

# cleanup
unset PASSPHRASE
rm id_rsa_github*
```
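If you want to sanity-check the upload before pointing a build at it, the same customer-provided key decrypts the object on download (run this before the `unset PASSPHRASE` cleanup step; bucket and path reused from the example above):

```bash
# fetch and decrypt the uploaded key to stdout using the same SSE-C key
aws s3 cp --sse-c --sse-c-key "$PASSPHRASE" \
  "s3://my-provision-bucket/myproject/id_rsa_github" -
```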
For Docker Hub credentials, you can use `DOCKER_HUB_USER`, `DOCKER_HUB_PASSWORD` and `DOCKER_HUB_EMAIL` in your `env` file.
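As a sketch, assuming the `env` file is sourced as a shell script by the job (all values below are placeholders), it might look like:

```bash
export DOCKER_HUB_USER="myuser"
export DOCKER_HUB_PASSWORD="xxxxxxxx"
export DOCKER_HUB_EMAIL="myuser@example.com"
```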
If you provided a `BuildkiteApiAccessToken` (a Buildkite API token with `read_pipelines`, `read_builds` and `read_agents` permissions across your organization), then build and job metrics will be collected for your queue and used to scale your cluster of agents. Autoscaling is designed to scale up quickly and then gradually scale down. See the `autoscale.yml` template for more details, or the Buildkite Metrics Publisher project for how metrics are collected.
When scaling down, instances wait until any running jobs on them have completed (thanks to lifecycled).
This repository hasn't been reviewed by security researchers, so exercise caution and careful thought with what credentials you make available to your builds. At present anyone with access to your CI machines or commit access to your codebase (including third-party pull-requests) will theoretically have access to your encrypted secrets. Anyone with access to your Buildkite Project Configuration will be able to retrieve the encryption key used to decrypt these. In combination, the attacker would have access to your decrypted secrets.
Presently the EC2 instance metadata endpoint is reachable via HTTP from builds, which means that builds have the same IAM access as the underlying build host.
This is experimental and still being actively developed, but under heavy use at 99designs.
Feel free to drop me an email at lachlan@ljd.cc with questions, or check out the `#aws` channel in Buildkite Slack.