DataDog/dd-trace-py

Third Party Python Modules Warnings are Logged as Errors in Datadog


Hey datadog team,

I use Datadog to monitor my Python application, and warnings from third-party Python modules appear as errors in the Datadog Log Explorer. Even warnings or debug output from the ddtrace module itself can show up as errors in the Datadog UI. For example, I got a lot of error logs like

Error getting lock acquire/release call location and variable name: 'Model' object has no attribute '_has_params'

But I believe these are warnings in the dd-trace-py code. Also, in my error log explorer I can see error logs like "sent 958B in 1s to <datadog trace host>", which is debug info.

I know I could resolve this by setting the logger level to ERROR, but I would like to understand why Datadog cannot pick up the right log level.
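
For reference, this is the sort of workaround I mean, just a minimal sketch that raises the level of ddtrace's internal logger (assuming ddtrace logs under the "ddtrace" logger name):

import logging

# Only let ERROR and above through from ddtrace's internal logger.
# This hides the noisy warnings/debug output rather than fixing the
# level that Datadog assigns to it.
logging.getLogger("ddtrace").setLevel(logging.ERROR)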

I've managed to get all app-related logs to have proper levels in DD by using a custom log config, a custom formatter class, and pythonjsonlogger, like this:

log config in gunicorn_logging.conf:

[loggers]
keys=root, gunicorn.error, gunicorn.access

[handlers]
keys=console

[formatters]
keys=json

[logger_root]
level=INFO
handlers=console

[logger_gunicorn.error]
level=INFO
handlers=console
propagate=0
qualname=gunicorn.error

[logger_gunicorn.access]
level=INFO
handlers=console
propagate=0
qualname=gunicorn.access

[handler_console]
class=StreamHandler
formatter=json
args=(sys.stdout, )

[formatter_json]
format = %(message)s, %(asctime)s, %(levelname)s, %(name)s, %(module)s, %(filename)s, %(funcName)s, %(lineno)d
class=my_app.logging.CustomJsonFormatter

And the my_app.logging.CustomJsonFormatter class:

import datetime

from pythonjsonlogger import jsonlogger


class CustomJsonFormatter(jsonlogger.JsonFormatter):
    def add_fields(self, log_record, record, message_dict):
        super(CustomJsonFormatter, self).add_fields(log_record, record, message_dict)
        if not log_record.get('timestamp'):
            # Python logs contain levelname and asctime that are not parsed by default in Datadog.
            # Additionally, asctime is not ISO8601 formatted.
            # I'm adding proper timestamp and level fields to the log record for Datadog to parse the logs correctly.
            log_record['timestamp'] = datetime.datetime.fromtimestamp(record.created, tz=datetime.UTC).isoformat()
            log_record['level'] = record.levelname
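
For completeness, the config above is wired in through gunicorn's logconfig setting; the file name and layout here are just how my project happens to be set up:

# gunicorn.conf.py -- point gunicorn at the logging config shown above
logconfig = "gunicorn_logging.conf"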

But what I cannot get to work is proper logging from ddtrace itself. I'm running gunicorn as a web server on GCP Cloud Run and I get entries like these logged as ERROR all the time:

[screenshot of the ddtrace log entries shown as ERROR in the log explorer]

These look to me like actual warnings. My guess is that this happens because ddtrace starts before the actual Python app and does not "know" about JSON or any custom logging. Additionally, ddtrace is probably sending all those warnings to stderr, which serverless-init treats as errors by default. I'm running it like this (see the sketch after the Dockerfile snippet for the only workaround I can think of):


COPY --from=gcr.io/datadoghq/serverless-init:1.5.1 /datadog-init /datadog/datadog-init

CMD [ \
    "/datadog/datadog-init", \
    "/dd_tracer/python/bin/ddtrace-run", \
    "gunicorn", "--config=gunicorn.conf.py", "my_app.app:flask_app" \
]
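
The only workaround I can think of (just a sketch, not sure it is the intended approach, and the "ddtrace" logger name is my assumption) is to re-point ddtrace's logger at the same JSON handler from a gunicorn server hook in gunicorn.conf.py:

import logging
import sys

from my_app.logging import CustomJsonFormatter


def post_fork(server, worker):
    # Route ddtrace's own logger through the JSON formatter so its
    # warnings keep their real level instead of landing on stderr.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(CustomJsonFormatter("%(message)s %(name)s %(levelname)s"))
    dd_logger = logging.getLogger("ddtrace")
    dd_logger.handlers = [handler]  # replace ddtrace's default stderr handler
    dd_logger.propagate = False

Even then, anything ddtrace logs before the workers fork would presumably still go to stderr and get flagged as an error by serverless-init, so this does not feel like a real fix.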

Does anyone know of a way to make those show up as warnings?