- Composite Actions: for simple combinations of multiple steps, with inputs and outputs for dynamic responses
- JavaScript Actions: for custom actions that run scripts via a .js file
- Docker Actions: for custom actions that need to run scripts in a non-JavaScript language, with more control over the environment
- AWS S3: for static website hosting
The first action I built is a composite action. It checks whether caching should be performed, then uses the official "actions/cache@v3" action to download, install, and cache dependencies. The install step only runs if we miss the cache or an input explicitly disables caching:
```yaml
if: steps.cache.outputs.cache-hit != 'true' || inputs.caching != 'true'
```
```shell
echo "cache='${{ inputs.caching }}'" >> $GITHUB_OUTPUT
```
This line adds an output that is used later in our "deploy.yml":
```yaml
- name: Load & cache dependencies
  id: cache-deps
  uses: ./.github/actions/cached-deps
  with:
    caching: "false" # disables cache for this step
- name: Output information
  run: echo "Cache used? ${{ steps.cache-deps.outputs.used-cache }}"
```
Dependency installation is only attempted if there was no cache hit previously OR the step has set the "caching" input to "false".
By default, if not specified, we use cached dependencies when they are available. We check an output from the "Install dependencies" step by referencing its "install" id, and log whether the cache was used. That output is assigned by the echo line in the install step shown above.
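Putting these pieces together, the composite action's "action.yml" might look roughly like this (a sketch only; the file layout, step ids, cache key, and output wiring are assumptions based on the snippets above):

```yaml
# .github/actions/cached-deps/action.yml (sketch; names assumed)
name: 'Get & Cache Dependencies'
description: 'Get the dependencies (via npm) and cache them.'
inputs:
  caching:
    description: 'Whether to cache dependencies or not.'
    required: false
    default: 'true'
outputs:
  used-cache:
    description: 'Whether the cache was used.'
    value: ${{ steps.install.outputs.cache }}
runs:
  using: 'composite'
  steps:
    - name: Cache dependencies
      if: inputs.caching == 'true'
      id: cache
      uses: actions/cache@v3
      with:
        path: node_modules
        key: deps-node-modules-${{ hashFiles('**/package-lock.json') }}
    - name: Install dependencies
      if: steps.cache.outputs.cache-hit != 'true' || inputs.caching != 'true'
      id: install
      run: |
        npm ci
        echo "cache='${{ inputs.caching }}'" >> $GITHUB_OUTPUT
      shell: bash
```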
This is where this custom action is used in the main workflow:
Next we run a test step. It gathers the required files and then uses the project's npm test script to run a test.jsx file, which checks for the presence of a button as well as the button's popup component.
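In the workflow, such a test step can be quite small (a sketch; the step names are assumptions):

```yaml
- name: Load & cache dependencies
  uses: ./.github/actions/cached-deps
- name: Test code
  run: npm test
```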
For our website deployment, I use a JavaScript custom action. I define input parameters: "bucket" and "dist-folder" are required, while "bucket-region" defaults to 'us-east-1'. Any input marked "required: true" must be passed in from "deploy.yml". These values are used during the execution of the "main.js" file.
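Based on that description, the action's "action.yml" might define its inputs like this (a sketch; the descriptions and Node runner version are assumptions):

```yaml
name: 'Deploy to AWS S3'
inputs:
  bucket:
    description: 'The S3 bucket name.'
    required: true
  bucket-region:
    description: 'The region of the S3 bucket.'
    required: false
    default: 'us-east-1'
  dist-folder:
    description: 'The folder containing the deployable files.'
    required: true
runs:
  using: 'node16'
  main: 'main.js'
```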
"main.js" loads the official GitHub Actions toolkit packages "@actions/core" and "@actions/exec" via "require()". We use the "core" object to read our inputs, and "exec" to run the AWS CLI and upload our static webpage to AWS.
We add an "output" to "action.yml":
```yaml
outputs:
  website-url:
    description: 'The URL of the deployed website.'
```
and then set that output in "main.js":
```javascript
const websiteUrl = `http://${bucket}.s3-website-${bucketRegion}.amazonaws.com`
core.setOutput('website-url', websiteUrl)
```
and that results in our URL being added to our action's logs:
The key difference is that we now use a Docker image built from a Dockerfile. This lets us execute in a language of our choice; I chose Python.
All our Dockerfile does is copy requirements.txt and deployment.py, install the dependencies, and then run the Python script, which does exactly what main.js does, just in Python.
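Such a Dockerfile could look roughly like this (a sketch; the base image and file paths are assumptions):

```dockerfile
# Sketch of a Dockerfile for a Docker-based GitHub Action
FROM python:3

# Install the Python dependencies
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

# Copy and run the deployment script
COPY deployment.py /deployment.py
CMD ["python", "/deployment.py"]
```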
And here's the Python file:
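A minimal sketch of such a deployment script, assuming GitHub passes the action's inputs to the container as INPUT_* environment variables and that the AWS CLI is available in the image (the function names here are assumptions, mirroring "main.js"):

```python
# Sketch of deployment.py (structure assumed; mirrors main.js)
import os
import subprocess

def build_website_url(bucket, bucket_region):
    # Same URL format main.js produces
    return f'http://{bucket}.s3-website-{bucket_region}.amazonaws.com'

def set_output(name, value):
    # Docker actions write outputs to the file GITHUB_OUTPUT points at
    with open(os.environ['GITHUB_OUTPUT'], 'a') as f:
        f.write(f'{name}={value}\n')

def run():
    # Inputs arrive as INPUT_* environment variables
    bucket = os.environ['INPUT_BUCKET']
    bucket_region = os.environ['INPUT_BUCKET-REGION']
    dist_folder = os.environ['INPUT_DIST-FOLDER']

    # Upload the built site with the AWS CLI (must exist in the image)
    subprocess.run(
        ['aws', 's3', 'sync', dist_folder, f's3://{bucket}',
         '--region', bucket_region],
        check=True,
    )
    set_output('website-url', build_website_url(bucket, bucket_region))

if __name__ == '__main__' and 'INPUT_BUCKET' in os.environ:
    run()  # only runs inside the action container, where inputs are set
```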
We change the deploy step to use our new docker action:
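The updated step in "deploy.yml" might look like this (a sketch; the action path, bucket name, and credential secrets are assumptions):

```yaml
- name: Deploy site
  id: deploy
  uses: ./.github/actions/deploy-s3-docker
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  with:
    bucket: my-example-bucket   # placeholder name
    dist-folder: ./dist
```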
Finally, we can see the result of our change: the job is now running in a Docker container.
And we get the same result as before: