Dragon2fly/logger_tt

What is the difference between logger_tt.logger and logger_tt.getLogger(name)?


First, let me express my appreciation for this wonderful logging library. It's my pleasure to be the author of its first issue.

logger_tt helps me a lot, but I've run into a problem.

I used to think that logger_tt.getLogger(name) would return a logger with the same behavior as logger_tt.logger, since both are configured by the single global setup_logging.

However, I find that they behave differently. For example, in the multiprocessing case, logger_tt.logger adds the process name to the output, while logger_tt.getLogger(name) does not.

This behavior seems a little strange to me, since there is only one global configuration.

Also, when I try to set a different formatter on the logger obtained from logger_tt.getLogger(name), the formatter seems to have no effect. The code is as follows:

    import logger_tt

    def get_colored_logger(name):  # wrapper function added here for context
        logger = logger_tt.getLogger(name)

        # formatter_message and ColoredFormatter are custom helpers defined elsewhere
        FORMAT = "%(asctime)s [$BOLD%(name)-20s$RESET][%(levelname)-18s] %(message)s ($BOLD%(filename)s$RESET:%(lineno)d)"
        COLOR_FORMAT = formatter_message(FORMAT, True)

        color_formatter = ColoredFormatter(COLOR_FORMAT)

        # replace the formatter on every handler attached to this logger
        for handler in logger.handlers:
            handler.setFormatter(color_formatter)

        return logger
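
One thing I am not sure about: if the logger returned by getLogger has no handlers of its own (by default, records propagate up to the root logger's handlers), then the loop above does nothing. Would the formatter need to be set on the root logger's handlers instead? A sketch of what I mean, assuming logger_tt attaches its handlers to the root logger:

    import logging

    # Hypothetical alternative (continuing from the snippet above):
    # target the root logger's handlers, in case that is where
    # logger_tt attaches them.
    for handler in logging.getLogger().handlers:
        handler.setFormatter(color_formatter)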

Could you please help me look into this problem?

You need to add %(processName)s to your format string.
The logger imported from logger_tt is just a pre-configured logger, but it can switch the format string based on the name of the process or thread that calls it.
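
For illustration, here is a minimal stdlib-only sketch showing %(processName)s in a format string (plain logging, not logger_tt's own configuration):

    import logging

    # Minimal stdlib sketch: include the process name in every record
    # via the standard %(processName)s attribute.
    logging.basicConfig(
        format="[%(asctime)s] %(processName)s %(levelname)s: %(message)s",
        level=logging.INFO,
    )

    logging.getLogger("machine").info("batch finished")
    # e.g. -> [2021-05-01 03:46:54,123] MainProcess INFO: batch finished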

For the second problem with ColoredFormatter, could you give me a minimal reproducible example that I can run? Also, a picture or a description of what the result should be and what actually happens.

Thank you for your response!

I'm still encountering problems in a multiprocessing environment.

In a single process, the log output looks like:

    [2021-05-01 03:46:54] INFO: The 0-th batch finished training the machine. Historical average loss = 0.7541559338569641.

However, in multiprocessing it outputs:

    WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
    WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
    WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.
    WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.

I only called the setup once in both cases. Do I need to call the setup in each process?
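
For what it's worth, the WARNING:machine: prefix matches Python's default %(levelname)s:%(name)s:%(message)s format, as if the child processes fell back to the stdlib defaults rather than the logger_tt configuration. Here is a sketch of what I mean by calling the setup in each process (assuming setup_logging can be called with no arguments):

    import multiprocessing
    from logger_tt import setup_logging, getLogger

    def worker(batch):
        # Sketch only: on spawn-based platforms (e.g. Windows), a child
        # process does not inherit the parent's logging configuration,
        # so setup_logging is called again inside the worker.
        setup_logging()
        logger = getLogger("machine")
        logger.info("The %s-th batch finished training the machine.", batch)

    if __name__ == "__main__":
        setup_logging()  # configure logging once in the main process
        with multiprocessing.Pool(2) as pool:
            pool.map(worker, [0, 1000])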

Could you provide a minimal example of your code?
Also, which platform are you using: Linux or Windows?