This action was forked from https://github.com/jakejarvis/s3-sync-action. At the time of forking, a plain `s3 sync` would identify the content type of `.br` and `.gz` files as `application/octet-stream`. To serve compressed files correctly, the files need to keep their original content types, which would otherwise require multiple calls to the upstream action with various includes and excludes. This action calls the upstream action with the appropriate includes and excludes for you.
It calls `s3 sync` multiple times. The initial call ignores all files ending in `.br` and `.gz`. Additional calls then upload the `.br` and `.gz` files with the correct content type and content encoding. The content types currently supported are `js`, `css`, and `html`.
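The multi-call sequence can be sketched with plain AWS CLI commands. This is an illustrative outline, not the action's exact implementation: the bucket name and source directory are placeholders, and the commands are printed with `echo` rather than executed so the sketch is safe to run without AWS credentials.

```shell
#!/bin/sh
# Placeholders -- substitute your own bucket and source directory.
BUCKET="s3://my-bucket"
SRC="public"

# Map a file extension to the original content type of its compressed variant.
content_type() {
  case "$1" in
    js)   echo "application/javascript" ;;
    css)  echo "text/css" ;;
    html) echo "text/html" ;;
  esac
}

# 1) Initial sync: everything except pre-compressed files.
echo aws s3 sync "$SRC" "$BUCKET" --exclude '*.br' --exclude '*.gz'

# 2) Upload the compressed variants with the right metadata.
for ext in js css html; do
  ctype=$(content_type "$ext")
  echo aws s3 sync "$SRC" "$BUCKET" --exclude '*' --include "*.${ext}.br" \
    --content-type "$ctype" --content-encoding br
  echo aws s3 sync "$SRC" "$BUCKET" --exclude '*' --include "*.${ext}.gz" \
    --content-type "$ctype" --content-encoding gzip
done
```

The `--content-type` and `--content-encoding` flags are standard `aws s3 sync` options; the `--exclude '*' --include '*.ext'` pattern is the usual way to restrict a sync to one file type.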
This simple action uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
Place a `.yml` file such as this one in your `.github/workflows` folder. Refer to the documentation on workflow YAML syntax here.
As of v0.3.0, all `aws s3 sync` flags are optional to allow for maximum customizability (that's a word, I promise) and must be provided by you via `args:`.
- `--acl public-read` makes your files publicly readable (make sure your bucket settings are also set to public).
- `--follow-symlinks` won't hurt and fixes some weird symbolic link problems that may come up.
- Most importantly, `--delete` permanently deletes files in the S3 bucket that are not present in the latest version of your repository/build.
- Optional tip: If you're uploading the root of your repository, adding `--exclude '.git/*'` prevents your `.git` folder from syncing, which would expose your source code history if your project is closed-source. (To exclude more than one pattern, you must have one `--exclude` flag per exclusion. The single quotes are also important!)
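For instance, an `args:` value that excludes both the `.git` folder and a build-artifact directory would repeat the flag once per pattern (the `node_modules` path here is just an illustration):

```yaml
with:
  args: --acl public-read --delete --exclude '.git/*' --exclude 'node_modules/*'
```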
```yaml
name: Upload Website

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
          SOURCE_DIR: 'public'      # optional: defaults to entire repository
```
The following settings must be passed as environment variables as shown in the example. Sensitive information, especially `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, should be set as encrypted secrets; otherwise, they'll be public to anyone browsing your repository's source code and CI logs.
| Key | Value | Suggested Type | Required | Default |
| --- | --- | --- | --- | --- |
| `AWS_ACCESS_KEY_ID` | Your AWS Access Key. More info here. | secret env | Yes | N/A |
| `AWS_SECRET_ACCESS_KEY` | Your AWS Secret Access Key. More info here. | secret env | Yes | N/A |
| `AWS_S3_BUCKET` | The name of the bucket you're syncing to. For example, `jarv.is` or `my-app-releases`. | secret env | Yes | N/A |
| `AWS_REGION` | The region where you created your bucket. Full list of regions here. | env | No | `us-east-1` |
| `AWS_S3_ENDPOINT` | The endpoint URL of the bucket you're syncing to. Can be used for VPC scenarios or for non-AWS services using the S3 API, like DigitalOcean Spaces. | env | No | Automatic (`s3.amazonaws.com` or AWS's region-specific equivalent) |
| `SOURCE_DIR` | The local directory (or file) you wish to sync/upload to S3. For example, `public`. Defaults to your entire repository. | env | No | `./` (root of cloned repository) |
| `DEST_DIR` | The directory inside of the S3 bucket you wish to sync/upload to. For example, `my_project/assets`. Defaults to the root of the bucket. | env | No | `/` (root of bucket) |
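As an illustration of `AWS_S3_ENDPOINT`, a step targeting a DigitalOcean Space might look like the following. The Space name, region, and secret names here are hypothetical:

```yaml
- uses: jakejarvis/s3-sync-action@master
  with:
    args: --acl public-read
  env:
    AWS_S3_BUCKET: 'my-space'   # hypothetical Space name
    AWS_S3_ENDPOINT: 'https://nyc3.digitaloceanspaces.com'
    AWS_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
    SOURCE_DIR: 'public'
```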
This project is distributed under the MIT license.