S3 auto deploy should be done before CloudFormation deployment
giorgio-zamparelli opened this issue · 13 comments
The auto deploy of this plugin uploads the assets to the S3 bucket AFTER the CloudFormation stack has been deployed.
In my use case (and possibly everyone's), the upload of the assets should happen BEFORE the CloudFormation stack is deployed.
SOLUTION
Change the hook event used from after:deploy:finalize
to another hook event that fires before the deployment, for example before:deploy:deploy.
MY USE CASE (probably everyone's)
I'm using serverless-s3-deploy to upload some bundles for a SPA.
In my case I'm uploading 30+ bundles and it takes some time.
The auto deploy of this plugin happens after the deployment stage of the CloudFormation stack. At that point API Gateway and AWS Lambda are already answering with an index.html file that points to a bundle that is still being uploaded by serverless-s3-deploy.
LIKELY FIXES #19 AS WELL
This would likely also fix issue #19, since the order of the bundles wouldn't matter anymore: they would all be uploaded before the CloudFormation stack is deployed. What do you think @jrencz?
I don't feel like it would solve the case I had in #19, because I wasn't using CloudFormation at all. I was only trying to upload files to a part of the stack that lives continuously.
After #19 was filed I dropped this plugin and sls altogether and wrote a shell script that uses the aws CLI (sync) for all assets and then, once that is done, does the same for the index.html file. I got the sequence I needed with simpler tools.
The Serverless Framework does use CloudFormation under the hood to create all the AWS Lambda and API Gateway resources, but that is transparent to the end user.
I'd also be interested in this, but there seems to be an easy workaround: run sls s3deploy
before the normal deploy command. This would (of course) only work on updates and not on the first deploy.
But there needs to be a better option to remove old assets after the deploy.
My first idea was to set empty: ${opt:s3cleanup, false} with a deployment like this:
sls s3deploy
sls deploy --s3cleanup
But I noticed that empty: true actually empties the bucket and then re-uploads.
For a zero-downtime deployment it would need to delete only the files that aren't synced afterwards.
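A cleanup along those lines could look roughly like the sketch below: after the new assets are synced, list the bucket and delete only the keys that were not part of the fresh upload. This is a minimal sketch using the AWS SDK for JavaScript (v2), not something serverless-s3-deploy does today; the bucket name, prefix and the freshKeys list of just-uploaded keys are placeholders.

// Minimal sketch: delete only objects that were NOT part of the latest upload.
// bucket, prefix and freshKeys are assumptions supplied by the caller.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function deleteStaleObjects(bucket, prefix, freshKeys) {
  const fresh = new Set(freshKeys);
  let token;
  do {
    // List the bucket page by page (up to 1000 keys per page)
    const page = await s3.listObjectsV2({
      Bucket: bucket,
      Prefix: prefix,
      ContinuationToken: token,
    }).promise();
    const stale = (page.Contents || [])
      .map(obj => obj.Key)
      .filter(key => !fresh.has(key));
    if (stale.length > 0) {
      // Remove only the keys that the latest deployment did not touch
      await s3.deleteObjects({
        Bucket: bucket,
        Delete: { Objects: stale.map(Key => ({ Key })) },
      }).promise();
    }
    token = page.NextContinuationToken;
  } while (token);
}

// e.g. deleteStaleObjects('my-assets-bucket', 'build/', keysJustUploaded);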
My original thinking (as far as I recall) was that the CF config would potentially be creating the bucket, so you couldn't upload to it before the deploy.
I'd certainly welcome an option to let you control if it happened before or after, to remove this assumption.
In the meantime I forked this repo and made a version with a fix for this specific problem, which I use in production:
https://github.com/giorgio-zamparelli/serverless-upload-assets-to-s3
The main difference is that the serverless hook used changes from after:deploy:finalize to before:deploy:deploy:
// ORIGINAL
this.hooks = {
  's3deploy:deploy': () => Promise.resolve().then(this.deployS3.bind(this)),
  'after:deploy:finalize': () => Promise.resolve().then(this.afterDeploy.bind(this))
};

// FORK
this.hooks = {
  'upload-assets-to-s3:deploy': () => Promise.resolve().then(this.uploadAssetsToS3.bind(this)),
  'before:deploy:deploy': () => Promise.resolve().then(this.autoUpload.bind(this))
};
This way all assets are uploaded to the S3 bucket before Serverless starts deploying the serverless zip file.
@giorgio-zamparelli have you considered contributing to this repo?
I'll try.
@funkybob would you merge a PR with a solution where the parameter auto can be a boolean OR a string specifying the hook?
It would be backward compatible this way:
# it uses the after:deploy:finalize hook as it's done now
custom:
  assets:
    auto: true

# it uses the after:deploy:finalize hook as it's done now
custom:
  assets:
    auto: after:deploy:finalize

# it uses the before:deploy:deploy hook (requires the bucket to already exist)
custom:
  assets:
    auto: before:deploy:deploy
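A possible shape for that change, sketched below: a small helper maps the auto value to a hook name, defaulting to the current after:deploy:finalize behaviour. This is only an illustration of the proposal, not the plugin's actual code; resolveAutoHook is a hypothetical name.

// Sketch: map the proposed `auto` option to a serverless hook name.
function resolveAutoHook(auto) {
  if (auto === true) return 'after:deploy:finalize'; // current behaviour
  if (typeof auto === 'string') return auto;         // e.g. 'before:deploy:deploy'
  return null;                                       // auto deploy disabled
}

// In the plugin constructor the hook could then be registered like this,
// reusing the afterDeploy method shown earlier:
// const hook = resolveAutoHook(config.auto);
// if (hook) this.hooks[hook] = () => Promise.resolve().then(this.afterDeploy.bind(this));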
For documentation purposes, my deployment now looks like this:
sls s3deploy --stage=prod || ASSET_DEPLOYMENT_FAILED=true
sls deploy --stage=prod --conceal
if [ "$ASSET_DEPLOYMENT_FAILED" ]; then sls s3deploy --stage=prod; fi
This now works on first deploy and on any consecutive deploy as well as on changes.
In the best case I'd like s3deployment to do basically that.
@giorgio-zamparelli moving this to the before:deploy:deploy hook causes the following problem...
When executing serverless deploy for the first time, when the CloudFormation stack doesn't exist yet (before Serverless runs its create-stack step), the command fails with this error:
.........
Serverless: Invoke deploy
Serverless: Invoke package
Serverless: Invoke aws:common:validate
Serverless: Invoke aws:common:cleanupTempDir
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Invoke aws:package:finalize
Serverless: Invoke aws:common:moveArtifactsToPackage
Serverless: Invoke aws:common:validate
Serverless: [AWS apigatewayv2 200 0.372s 0 retries] getDomainName({ DomainName: 'app.test.in' })
Serverless: [AWS cloudformation 400 0.169s 0 retries] listStackResources({ StackName: 'app-test-in-production', NextToken: undefined })
Serverless Error ---------------------------------------
ServerlessError: Stack with id app-test-in-production does not exist
at /Users/tom/.nvm/versions/node/v12.14.1/lib/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:623:27
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
There's no way to bypass this except by removing the plugin from the serverless.yml file.
Adding a check here to see whether the CloudFormation stack exists, or a clearer error message, would be a nice addition (for anyone else who runs into this):
https://github.com/giorgio-zamparelli/serverless-upload-assets-to-s3/blob/a891c7fd9c22bfeeac082b527d1fea99def716c7/index.js#L73-L85
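A guard like that could look roughly like the sketch below, using the AWS provider's request() helper and naming utilities from the Serverless Framework. This is only an illustration; stackExists is a hypothetical helper, and a real implementation should inspect the error instead of swallowing it.

// Sketch: skip the automatic upload when the CloudFormation stack
// hasn't been created yet (i.e. on the very first deploy).
async function stackExists(provider) {
  const stackName = provider.naming.getStackName();
  try {
    await provider.request('CloudFormation', 'describeStacks', { StackName: stackName });
    return true;
  } catch (err) {
    // On the first deploy describeStacks fails because the stack doesn't exist yet.
    return false;
  }
}

// Inside the before:deploy:deploy hook:
// if (!(await stackExists(this.provider))) {
//   this.serverless.cli.log('Stack not created yet, skipping asset upload');
//   return;
// }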
I stopped using Serverless Framework in favor of AWS CDK.
With AWS CDK it's possible to write JavaScript code that is then converted into a CloudFormation template. This way you are not forced to write YAML/JSON and use tons of third-party plugins hoping they suit your use case.
@Nemo64 This is nice, but s3deploy doesn't seem to exit with any code other than 0, so your logic might not work. Did you manage to test it?
Mhh, now that you mention it, it could be that I never had an initial deploy with it.
But my preferred deployment strategy actually changed again, because s3deploy became way too slow with a growing number of files, and I also couldn't delete outdated files without deleting all of them first.
It now involves this shell script, which reads the AssetsBucket export from serverless and then uses aws s3 sync, which acts more like rsync.
#!/usr/bin/env bash
set -e
STAGE="${1}"
BUCKET="$(sls info --stage=$STAGE -v|grep AssetsBucket:|cut -d ' ' -f2)"
OPTIONS="${@:2}"
echo stage: $STAGE
echo bucket: $BUCKET
echo options: $OPTIONS
if [ -z "$BUCKET" ]
then
  echo "no bucket path found"
  exit 1
fi
aws s3 sync public/build "s3://${BUCKET}/build" --exclude="*" --include="????????????????????*.*" --cache-control="public, max-age=31536000, immutable" $OPTIONS
# more lines here
And my deployment now does this:
bin/asset_deploy.sh "${DEPLOYMENT_ENVIRONMENT}" || true # deploy new files and update existing ones ~ fails on first deploy
sls deploy --stage=$DEPLOYMENT_ENVIRONMENT --conceal
bin/asset_deploy.sh "${DEPLOYMENT_ENVIRONMENT}" --delete # delete old assets or deploy for the first time
Thanks a lot for sharing that.
I've ended up moving away from this tool, as I was facing way too many challenges.
I ended up using serverless-finch. It is way less flexible, but it works for static asset serving and it takes care of the CloudFormation side regardless of the serverless stack, so I can just run it BEFORE the regular serverless deployment and the files will be there when the API change kicks in.