pycromanager setting the root logger on import
bruno-digitbio opened this issue · 8 comments
Code for reproduction / Actual Outcome
The latest version of `pycromanager` adds a handler that forwards all logging messages from the root logger to stdout, resulting in a wall of DEBUG messages when loading any module (e.g. `matplotlib`, `numba`, etc.) that uses the `logging` module extensively.
>>> import pycromanager
>>> import logging
>>> logging.root.handlers[0].level
0
Expected outcome
Only deployed binaries should set the root logger. Libraries that are meant to be imported by others should leave it untouched.
>>> import pycromanager
>>> import logging
>>> logging.root.handlers
[]
The offending code is here.
If we really want to always print debug messages for `pycromanager` specifically, you can of course set this configuration on `pycromanager`'s internal logger instead:

main_logger.setLevel(...)
Suggestions on a PR for a fix? Any thoughts on this @jacopoabramo ?
Depends on where the break came from, but it looks like this commit to PyJavaZ accidentally adds an `import pycromanager.logging`. For one, this introduces a circular import dependency, so I would be a bit careful here, but the most likely fix is to just remove that line.

(On the other hand, it looks like that commit also introduced a lot of usage of `pycromanager.logging.main_logger`. If we want to keep this usage, then the best solution is probably to set the level of the `logging.getLogger("pycromanager")` logger instead of the root logger...)
I have a lot of experience with Python logging, if you have a list of requirements (importing module X should trigger logging at level Y, but by default logging should be at level Z, but only for modules ABC, etc.), happy to send a PR, but it's a bit hard to infer from the current state what the original intention was!
Yeah the circular dependency is definitely not supposed to be there and should be removed. I don't think that any special logging is required in PyJavaZ, so that could all be removed. All this logging was added in #719
Hey, sorry for the late reply. I'm the author of PR #719.

Background on this PR: I wanted to add a logger instance for debugging to be compatible with another project I've been working on (integrating pycromanager in ImSwitch). I have to admit I'm not an expert on logging, and my initial thought was the following: create a logger instance that would be the baseline for debugging when `debug=True`, while also providing access to an external logger instance from another application (in this case, the ImSwitch logger).
@bruno-digitbio I would be happy to provide a fix PR if you have some suggestions on how to approach this
Hi all -
This is a high priority issue for us as well, as the changes are creating chaos with logging and the console. Is there anything we can do to help push forward fixing the problem?
Thanks.
> compatible with another project I've been working on (integrating pycromanager in ImSwitch).
Are you trying to use `coloredlogs`? If so, then `pycromanager` itself does not need any custom logging code. Your application (when it wants to see colored pycromanager logs) can simply say (in `your_file.py`):
import logging

import coloredlogs

coloredlogs.install(
    level='DEBUG',
    logger=logging.getLogger("pycromanager"),
    fmt='%(asctime)s %(levelname)s %(message)s',
)
After this point, any code that logs to the `"pycromanager"` logger will have this configuration applied.
> I have to admit I'm not an expert on logging and my initial thought was the following: creating a logger instance that would be the baseline for debugging
I think I see what you were trying to do here; it's definitely a reasonable thought, but (if I'm understanding correctly) it comes from the wrong mental model for how Python's built-in `logging` module is meant to be used.
`logging` maintains a global "database" of loggers. In the simplest case, the idea is that:
- Each module creates its own logger: e.g. `pycromanager.acquisition.acquisition_superclass` would call `logging.getLogger(__name__)` to create a logger called `"pycromanager.acquisition.acquisition_superclass"`.
- The global logger "database" organizes loggers into a tree based on the "module structure" implied by the dot-separated parts of each logger's name. So if in any module you create a logger (let's call it `logger1`) named `"pycromanager"`, and in any other module (before or after!) you create another logger (`logger2`) named `"pycromanager.acquisition"`, then `logger2.parent == logger1`.
- Whenever you apply config to a logger, it is used by that logger's children.
- Each module logs to its own logger, and doesn't touch the config at all.
- The user who is running the final binary/script/etc. and wants to configure logging simply grabs the logger(s) they want to configure and sets configuration specifically for those loggers.
- IMPORTANT: there is an implicit "root" logger (see `logging.getLogger(anything).root`), and because a logger's config applies to its children, all configuration performed on the root logger affects all modules. Libraries should never touch the root logger, only application code.
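The tree behaviour described above can be checked directly in an interpreter (the logger names here are just examples):

```python
import logging

# Creation order doesn't matter: the dotted names imply the tree.
logger2 = logging.getLogger("pycromanager.acquisition")
logger1 = logging.getLogger("pycromanager")
assert logger2.parent is logger1

# Config on a parent is seen by children that leave their own level unset:
logger1.setLevel(logging.DEBUG)
assert logger2.getEffectiveLevel() == logging.DEBUG

# Everything hangs off the implicit root logger.
assert logger1.parent is logging.getLogger()
```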
> when `debug=True` while also providing access to an external logger instance from another application (in this case, the ImSwitch logger).
Walking through the implications of the system I (try to) outline above: if you want to apply the same logger configuration used by (e.g.) ImSwitch to pycromanager, you should not actually monkeypatch ImSwitch's logger to "trick" pycromanager into using it. Instead, you should simply apply that same configuration to the `pycromanager` logger (as shown in the `coloredlogs` example above).
If you want to "reset" the settings back to normal, you should again definitely not replace the logger object itself; instead, you should modify its (global) properties to have the settings that you want. For example, `coloredlogs` just installs a so-called "handler" on the logger you give it:
In [2]: import logging
In [3]: pycro_logger = logging.getLogger("pycromanager")
In [4]: import coloredlogs
In [5]: coloredlogs.install(logger=pycro_logger, level=logging.DEBUG)
In [6]: pycro_logger.handlers
Out[6]: [<StandardErrorHandler <stderr> (INFO)>]
So to undo that, you would just remove it and reset the level:
In [11]: pycro_logger.setLevel(logging.WARNING)
In [12]: pycro_logger.removeHandler(pycro_logger.handlers[0])
In [13]: pycro_logger.handlers
Out[13]: []
> @bruno-digitbio I would be happy to provide a fix PR if you have some suggestions on how to approach this
There are a few things to fix here, across a couple of repositories:
- In PyJavaZ, remove the usage of the `"pycromanager"` logger altogether (EDIT: e.g. PyJavaZ/PyJavaZ#5). It's possible that actually using the pycromanager logger is what you want, but probably what you really want is for `PyJavaZ` to log to loggers like `"pyjavaz.*"`; then you can always call `coloredlogs.install(logger=logging.getLogger("pyjavaz"), level=logging.DEBUG)` to set a handler on that top-level "parent" logger and get the debug messages if you want. If you really want to log to `"pycromanager"`, do not bother importing the module as you have; instead, simply call `logging.getLogger("pycromanager")`.
- In #768, I simply remove the usage of the root logger.
- As a follow-up (e.g. #769), I would recommend we remove the usage of the existing `pycromanager.logging` module altogether and instead use the (standard) Python convention that each module logs to its own personal logger if needed (allowing for flexible configuration by application writers).
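The per-module convention from the last point is a one-liner at the top of each library module. A sketch of what a submodule would contain (the `connect` function is hypothetical, just to show where log calls go):

```python
import logging

# One logger per module, named after the module itself, e.g.
# "pycromanager.acquisition.acquisition_superclass".
logger = logging.getLogger(__name__)

def connect():  # hypothetical library function, for illustration only
    # The library only *emits* records; handlers, levels, and formatting
    # are left to the application that imports it.
    logger.debug("connecting to the Java backend")
```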
> This is a high priority issue for us as well, as the changes are creating chaos with logging and the console. Is there anything we can do to help push forward fixing the problem?
@dpshepherd, in the meantime, at work we are patching this problem by making sure that (something like) the following lines are executed before any other imports:
import pycromanager.logging # trigger the "bad" `logging.basicConfig` call
import logging
logging.getLogger().handlers = [] # delete the erroneously-created handler
After this, stuff like `import numba` no longer creates walls of output.