Is s3_website invalidating all files on CloudFront on each update?
ccamacho opened this issue · 2 comments
Hello,
I'd like to report a weird behavior when pushing new files to my S3 bucket. In my debug view I see, every time:
[debg] Invalidated /*
But I'm only updating one file in the S3 bucket, so either the invalidation is not working properly or the log message is wrong.
Can you clarify this, please?
Thanks!
To be more specific,
[succ] Updated en/content/articles/2017/11/08/initiation-drones/index.html (max-age=31536000 | text/html; charset=utf-8 | gzip | 6.3 kB | 23.0 kB/s)
[succ] Updated sitemap.xml (max-age=31536000 | application/xml | 378 B | 1.0 kB/s)
[succ] Updated en/store/products/merchandise/index.html (max-age=31536000 | text/html; charset=utf-8 | gzip | 4.0 kB | 14.0 kB/s)
[succ] Updated assets/images/bg-top.png (max-age=31536000 | image/png | 262.6 kB | 456.0 kB/s)
[debg] Invalidated /*
[succ] Invalidated 1 item on CloudFront
[info] Summary: Updated 4 files. Applied 1 redirect. Transferred 273.4 kB, 760.0 kB/s.
And I had to manually invalidate assets/images/bg-top.png in the AWS console to be able to refresh it.
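For reference, the same manual invalidation can also be issued with the AWS CLI instead of the console; this is just a sketch, and the distribution ID below is a placeholder:
aws cloudfront create-invalidation \
  --distribution-id EXXXXXXXXXXXXX \
  --paths "/assets/images/bg-top.png"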
This is part of my config:
cloudfront_distribution_id: xxxxxxx
cloudfront_distribution_config:
  default_cache_behavior:
    min_ttl: <%= 31536000 %>
cloudfront_invalidate_root: true
cloudfront_wildcard_invalidation: true
The cloudfront_wildcard_invalidation: true config setting invalidates all objects in your CF distribution by design, because it invalidates the single path /*, which counts as only one invalidation against your quota of 1,000 free invalidation paths per month.
If it is not set, or is set to false, then s3_website will individually invalidate your updated/changed objects, which is what you appear to be expecting. While per-object invalidation may seem like the desired behaviour, if you run a decent-sized site and push multiple large rebuilds each month, you would easily exceed the 1,000 free paths and start paying US$0.005 for each file path invalidated beyond that.
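So if you want per-object invalidation and are comfortable with the potential cost, a minimal s3_website.yml sketch would be to disable (or simply omit) the wildcard setting:
cloudfront_distribution_id: xxxxxxx
cloudfront_invalidate_root: true
cloudfront_wildcard_invalidation: false
With that, each push should produce one invalidation path per updated object instead of a single /*.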