karloscodes/serverless-es-logger

Design Decisions / Discussion

simlu opened this issue · 1 comment

simlu commented

The problem is that Lambda execution time is expensive. Sending logs directly from Lambda is costly because the asynchronous sends delay termination and extend the billed execution time.

We prefer to send logs to CloudWatch and then listen to the CloudWatch log streams with a Lambda function that takes different actions depending on the content. The benefit is that when a lot of logs are generated you can send them in parallel, and the processing logic stays completely separate from your function's code. A sketch of such a subscription handler is shown below.
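
For context, here is a minimal sketch of what such a subscription-filter Lambda can look like. The base64/gzip decoding follows the standard CloudWatch Logs subscription payload format; the branching logic and the `forwardToElasticsearch` helper are hypothetical placeholders, not code from lambda-monitor:

```typescript
import { gunzipSync } from "zlib";

interface LogEvent {
  id: string;
  timestamp: number;
  message: string;
}

interface DecodedPayload {
  logGroup: string;
  logStream: string;
  logEvents: LogEvent[];
}

export const handler = async (event: { awslogs: { data: string } }) => {
  // CloudWatch delivers each log batch as base64-encoded, gzip-compressed JSON.
  const payload: DecodedPayload = JSON.parse(
    gunzipSync(Buffer.from(event.awslogs.data, "base64")).toString("utf8")
  );

  for (const logEvent of payload.logEvents) {
    // Branch on the message content, e.g. forward errors to Elasticsearch,
    // push metrics elsewhere, etc. (placeholder logic).
    if (logEvent.message.includes("ERROR")) {
      await forwardToElasticsearch(payload.logGroup, logEvent);
    }
  }
};

// Hypothetical helper: a real setup would bulk-index documents into an
// Elasticsearch endpoint instead of just logging them.
async function forwardToElasticsearch(logGroup: string, logEvent: LogEvent) {
  console.log(`would index ${logEvent.id} from ${logGroup}`);
}
```

Because each batch arrives in its own invocation, heavy log volume simply fans out across parallel executions of this function, independent of the application Lambdas.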

Here is our fully functional open-source package: https://github.com/blackflux/lambda-monitor
(Would need some adaptation to support your exact use case.)

I love what you are doing here, and it's how we started originally, but it feels like the approach described above is just better. Would love to hear your thoughts.

karloscodes commented

First, I want to thank you for taking the time to provide your feedback, @simlu.

The base idea of the package is that skipping CloudWatch and sending logs directly to Elasticsearch might actually be cheaper in some cases, despite the extra execution seconds you will have to pay in the worst-case scenarios. In my experience, CloudWatch is expensive for log ingestion and storage. After seeing this in several projects, I started asking myself: if I want to send logs to Elasticsearch, why should I pay for CloudWatch as well?
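
To make the trade-off concrete, here is a minimal sketch of the direct approach: buffer log lines in memory and flush them to Elasticsearch before the handler returns. The endpoint, index name, and single bulk-flush strategy are illustrative assumptions, not the actual serverless-es-logger implementation:

```typescript
// In-memory buffer of log documents for the current invocation.
const buffer: object[] = [];

function log(level: string, message: string): void {
  buffer.push({ level, message, "@timestamp": new Date().toISOString() });
}

async function flush(): Promise<void> {
  if (buffer.length === 0) return;
  // Elasticsearch bulk API: one action line plus one document line per entry,
  // newline-delimited. Index name and endpoint are placeholders.
  const body =
    buffer
      .flatMap((doc) => [
        JSON.stringify({ index: { _index: "lambda-logs" } }),
        JSON.stringify(doc),
      ])
      .join("\n") + "\n";
  buffer.length = 0;
  // Global fetch assumes Node 18+; swap in your HTTP client of choice otherwise.
  await fetch("https://my-es-endpoint.example.com/_bulk", {
    method: "POST",
    headers: { "Content-Type": "application/x-ndjson" },
    body,
  });
}

export const handler = async (event: unknown) => {
  log("info", "handler invoked");
  try {
    // ... business logic ...
    return { statusCode: 200 };
  } finally {
    // This flush is the extra billed time mentioned above: the bet is that a
    // few milliseconds of HTTP cost less than CloudWatch ingestion and storage.
    await flush();
  }
};
```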

As for the approach you adopted, I think it won't fit every use case because of the CloudWatch costs. Also, adding extra Lambda functions to process logs does not help reduce costs either, which is the point of this package.

Anyway, in the end it always depends on your particular case, so I just wanted to put another option out there and see if it helps someone.