DataDog/datadog-lambda-js

DataDog logs flooded with "datadog:handler not initialized"

ahernandez2-rbi opened this issue · 4 comments

Expected Behavior

DataDog/Lambda integration to run smoothly.

Actual Behavior

We are getting a large number of "datadog:handler not initialized" messages in DataDog.

Steps to Reproduce the Problem

We are using the serverless integration with serverless-plugin-datadog@5.4.0.

Things I've tried:

  • I already went through this issue and verified the deployed Lambda code does not include datadog-lambda-js or dd-trace.
  • I added DD_LOG_LEVEL=debug and DD_TRACE_DEBUG=true, but neither produced anything useful.

Do you have any other tips to troubleshoot this issue?

Specifications

Datadog Lambda Layer version:
arn:aws:lambda:eu-central-1:464622532012:layer:Datadog-Node14-x:81
arn:aws:lambda:eu-central-1:464622532012:layer:Datadog-Extension-ARM:25

Node version: 14.x

[screenshot attached]

Hi @ahernandez2-rbi - thanks for reaching out!

These logs are only printed when using the optional sendDistributionMetric or sendDistributionMetricWithDate methods. The message is logged when one of those optional methods is called but the metricListener isn't running yet.

Can you share your handler code, and the code which calls these methods?

Without seeing your code, my best guess is that this can occur if you try to emit a metric outside of your handler.
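For reference, a metric call made from inside the wrapped handler doesn't hit this path. A minimal sketch, assuming the library is resolved from the Lambda layer ("my.metric" and the tag are placeholder names):

const { sendDistributionMetric } = require("datadog-lambda-js");

module.exports.handler = async (event, context) => {
  // By the time the handler body runs, the wrapper has started the metric
  // listener, so this call does not log "datadog:handler not initialized".
  sendDistributionMetric("my.metric", 1, "team:example"); // placeholder metric name and tag
  return { statusCode: 200 };
};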

Thanks!

Hi @astuyve, yes, we have a few places where we call sendDistributionMetric; I'll review the code around them. The app is pretty big, so it's hard to share a meaningful piece of it, but I understand there is not much you can do without seeing it.

"if you try to emit a metric outside of your handler"

Do you mean like a promise that was not awaited and completed after the handler exited? Any other scenarios I should look into?

Ah - in Lambda it's common to declare a variable at module scope, for example to instantiate a database connection that can be re-used on subsequent invocations. You can read more about this pattern here.
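For illustration, a minimal sketch of that module-scope reuse pattern (the database client, createClient, and "some-db-client" are hypothetical stand-ins):

// Module scope: runs once per cold start and is re-used by warm invocations.
const { createClient } = require("some-db-client"); // hypothetical client library

let dbClient;

module.exports.handler = async (event, context) => {
  if (!dbClient) {
    dbClient = createClient(process.env.DB_URL);
  }
  return dbClient.query("SELECT 1");
};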

Today this library works by wrapping your handler, in essence:

module.exports.yourHandler = async (event, context) => {
  // your business logic
}

becomes:

module.exports.datadogHandler = async (event, context) => {
  await datadog.tracer.trace(yourHandler(event, context))
}

This means that if you have code that isn't reachable via the handler call path, or that otherwise runs before your handler is invoked or after it returns, the Datadog instrumentation isn't initialized and metrics can't be sent.

Example w/ annotations:

// Module scope: runs at cold start, before the wrapped handler is invoked
let myVar = 0;
myVar += 1;

// "my.metric" is a placeholder metric name
datadog.sendDistributionMetricWithDate("my.metric", myVar, new Date()); // This logs "datadog:handler not initialized"

module.exports.datadogHandler = async (event, context) => {
  await datadog.tracer.trace(yourHandler(event, context));
};
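One way to avoid that in the sketch above is to move the metric call inside the wrapped handler, where the listener is already running (again, "my.metric" is a placeholder):

module.exports.datadogHandler = async (event, context) => {
  // Inside the handler the metric listener has been started by the wrapper.
  datadog.sendDistributionMetricWithDate("my.metric", myVar, new Date());
  await datadog.tracer.trace(yourHandler(event, context));
};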

@astuyve we found our problem! Turns out our Lambda handler was not being wrapped by DD.

Here is why (with some context):

  1. The app deployed to us-east-1 uses the DD US site, and it works fine.
  2. The app deployed to eu-central-1 uses the DD EU site, and this is where it fails.

Same code, same infrastructure. Here is the relevant serverless.yml config:

datadog:
  addExtension: true
  apiKey: ${self:custom.awsAccountSecrets.datadog.apiKey}
  site: ${self:custom.awsAccountSecrets.datadog.site}

Both apiKey and site are correctly injected and valid; they have different values for us-east-1 and eu-central-1. However, our CI server also injects the env vars DATADOG_API_KEY and DATADOG_API_KEY_EU (used by other repos, but injected for all builds). The problem is that serverless-plugin-datadog picks the env var over the value in serverless.yml.

[screenshot attached]

We know how to fix it, but wanted to suggest a couple of things:

  1. Consider whether the serverless.yml config should take precedence over the env var. An explicit value in the config feels like it should override ambient environment variables.
  2. serverless-plugin-datadog should fail and stop the build instead of silently using the wrong value, or at least make this behavior configurable.