rehanvdm/serverless-website-analytics

Janitor for pruning data

rehanvdm opened this issue · 1 comment

Create a cron-triggered Lambda that runs a CTAS query to group the records by page_id, keeping the highest time_on_page, thereby combining the initial page track with the final page track. This cuts down on the data stored. The system has been designed to cater for this. It is essentially the equivalent of a VACUUM command in Postgres terms, but can be done without locking.
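A minimal sketch of how such a rollup could be kicked off with the AWS SDK for JavaScript. The table, column, bucket, and database names (analytics.page_views, page_opened_at, my-analytics-bucket) are hypothetical placeholders; the actual schema and query live in the project source.

```ts
import { AthenaClient, StartQueryExecutionCommand } from '@aws-sdk/client-athena';

const athena = new AthenaClient({});

// Hypothetical table, column, and bucket names; the real schema is defined in the project.
const rollupQuery = `
  CREATE TABLE analytics.page_views_rolled_up_2023_01_01
  WITH (
    format = 'PARQUET',
    external_location = 's3://my-analytics-bucket/rolled-up/2023-01-01/'
  ) AS
  SELECT page_id,
         MAX(time_on_page) AS time_on_page
  FROM analytics.page_views
  WHERE page_opened_at >= TIMESTAMP '2023-01-01 00:00:00'
    AND page_opened_at <  TIMESTAMP '2023-01-02 00:00:00'
  GROUP BY page_id
`;

export async function startRollup(): Promise<string | undefined> {
  // Start the CTAS query; the caller still has to poll Athena for completion.
  const result = await athena.send(new StartQueryExecutionCommand({
    QueryString: rollupQuery,
    QueryExecutionContext: { Database: 'analytics' },
    ResultConfiguration: { OutputLocation: 's3://my-analytics-bucket/athena-results/' },
  }));
  return result.QueryExecutionId;
}
```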

Released in v1.2.0 #45

The PR contains more details about the implementation. Also see docs/CONTRIBUTING.md.

The Lambda function runs at 01:00 UTC, one hour past midnight. By that point, Firehose has finished writing the previous day's data and no new data will be written for that day. The function is idempotent and deletes the raw files after the rollup only if it completed successfully.
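A minimal sketch of the delete-only-on-success part of that flow, assuming the CTAS query has already been started (as in the sketch above) and using hypothetical bucket and prefix names; the real logic lives in the project's cron Lambda.

```ts
import { AthenaClient, GetQueryExecutionCommand } from '@aws-sdk/client-athena';
import { S3Client, ListObjectsV2Command, DeleteObjectsCommand } from '@aws-sdk/client-s3';

const athena = new AthenaClient({});
const s3 = new S3Client({});

// Hypothetical bucket/prefix; the real values come from the stack configuration.
const RAW_BUCKET = 'my-analytics-bucket';
const RAW_PREFIX = 'firehose/page_views/2023/01/01/';

async function waitForQuery(queryExecutionId: string): Promise<boolean> {
  // Poll Athena until the rollup query reaches a terminal state.
  while (true) {
    const { QueryExecution } = await athena.send(
      new GetQueryExecutionCommand({ QueryExecutionId: queryExecutionId })
    );
    const state = QueryExecution?.Status?.State;
    if (state === 'SUCCEEDED') return true;
    if (state === 'FAILED' || state === 'CANCELLED') return false;
    await new Promise((resolve) => setTimeout(resolve, 5_000));
  }
}

async function deleteRawObjects(): Promise<void> {
  // Remove the previous day's raw Firehose objects, paging through the prefix.
  let continuationToken: string | undefined;
  do {
    const listed = await s3.send(new ListObjectsV2Command({
      Bucket: RAW_BUCKET,
      Prefix: RAW_PREFIX,
      ContinuationToken: continuationToken,
    }));
    const keys = (listed.Contents ?? []).map((obj) => ({ Key: obj.Key! }));
    if (keys.length > 0) {
      await s3.send(new DeleteObjectsCommand({ Bucket: RAW_BUCKET, Delete: { Objects: keys } }));
    }
    continuationToken = listed.NextContinuationToken;
  } while (continuationToken);
}

export async function rollupDay(queryExecutionId: string): Promise<void> {
  // Only delete the raw files once the rollup has succeeded.
  if (await waitForQuery(queryExecutionId)) {
    await deleteRawObjects();
  }
}
```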

During testing, the rollup reduced the data as follows:

| Metric | Before | After |
| --- | --- | --- |
| S3 objects | 388 | 4 |
| Input rows | 650 | 148 |
| Input bytes | 181.34 KB | 33.73 KB |
| Output rows | 148 | 148 |
| Output bytes | 37.37 KB | 37.26 KB |

The important part is the almost 100x reduction in S3 object count (from 388 to 4). This decreases the number of files Athena has to scan by orders of magnitude, which translates into significant savings from this rollup.