How to disable logging?
Closed this issue · 28 comments
I want to use logging in my script that runs caiman, but I do not want the excessive logging output from CNMF.fit(). According to:

print(f"Name: {__name__}")
logger = logging.getLogger(__name__)

in cluster.py, the logger name is caiman.cluster, so I set the following at the start of my script:
logging.getLogger("caiman.cluster").setLevel(logging.ERROR)
...but I still get the full logging output from CNMF.fit()
I've also tried:
logging.disable(logging.CRITICAL + 1)
...but I still get the full logging output.
How does one disable the verbose output from CNMF.fit()?
Setup
- Operating System: Ubuntu 22.04.4
- Hardware type (x86, ARM..) and RAM: x86, 1 TB
- Python Version (e.g. 3.9): 3.11.9
- Caiman version (e.g. 1.9.12): 1.11.1
- Which demo exhibits the problem (if applicable): N/A
- How you installed Caiman (pure conda, conda + compile, colab, ..): mamba
I'll look into this; the codebase used to have snippets all over the place that reset the loglevel; I've removed a lot of them but I may have missed some.
I just read your question more carefully; I think you're asking for the ability to set different logging levels for different parts of the code. Our support for this is pretty halfhearted right now (caiman.cluster will only affect things in that part of the code, I think).
Right now only caiman.utils.labelling and caiman.cluster provide that kind of fine-grained control.
For the rest of the code, you'll just need to set up the general logger to a certain level and get exactly that level.
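For example (a sketch using only standard logging calls; the trade-off is that the root-logger level also applies to your own script's messages):

import logging

# Fine-grained control currently exists only for a couple of submodules:
logging.getLogger("caiman.cluster").setLevel(logging.ERROR)

# Everything else follows the general (root) logger:
logging.getLogger().setLevel(logging.WARNING)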
I'll file an issue to take a look at this in the future (and in general to review the level of noise that CNMF.fit() produces)
To be clear, I just want to turn off logging (at least disable INFO messages) for caiman, but I still want logging for the rest of my script, which does more than just run caiman code (e.g., CNMF.fit()).
Right now the best way to do that is a bit unfortunate; you'll want to save the state of the global logger, change the global log level just before calling caiman code, and then change it back afterwards.
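A minimal sketch of that save/change/restore pattern (cnm and images stand in for whatever caiman objects your script already has):

import logging

root = logging.getLogger()
saved_level = root.level            # remember the current global level
root.setLevel(logging.ERROR)        # silence INFO/DEBUG while caiman runs
try:
    cnm.fit(images)                 # any chatty caiman call
finally:
    root.setLevel(saved_level)      # restore your script's logging afterwards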
This is a good use case for me to think about in the future though; if we put all caiman code under one logger that's not the global logger, we can document that almost as easily as changing the global logger settings, and it would give you the flexibility you want.
Thoughts?
@nick-youngblut If you could give me your thoughts on the idea above...
@pgunn I've tried that, but changing the global logger still results in the verbose logging when running CNMF.fit()
@nick-youngblut Oh, sorry, what I meant was: I'd like to know what you think of my revising the code so caiman consistently uses a single "caiman" logger, which would let you configure how caiman logs separately from how your calling code logs.
Would that address your concerns?
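From the calling side, the intended usage would then be something like this (sketch; assumes the single "caiman" logger lands as described):

import logging

logging.basicConfig(level=logging.INFO)              # your script logs at INFO
logging.getLogger("caiman").setLevel(logging.ERROR)  # caiman only reports errors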
That would be great! It should solve my situation.
@nick-youngblut Cool; I'll get that done early next week and after testing, I'll cut the next release. Thanks for the idea.
@nick-youngblut The branch "dev-logging_cleanup" now has the changes; will do a bit more testing tomorrow and then land/release if things look good.
Merged and in today's release (will be on conda-forge soon)
@pgunn I've installed caiman via git+https://github.com/flatironinstitute/CaImAn.git@dev and am using logging.getLogger("caiman").setLevel(logging.ERROR); however, I'm still getting verbose logging output when running cnmf.CNMF.fit. A bit of the verbose output:
2024-07-29 08:32:54,235 - 0 neurons have been initialized
2024-07-29 08:32:54,237 - In total, 1 neurons were initialized.
2024-07-29 08:32:54,237 - Merging components
2024-07-29 08:32:54,238 - No more components merged!
2024-07-29 08:32:54,238 - Updating spatial components
2024-07-29 08:32:54,239 - Initializing update of Spatial Components
2024-07-29 08:32:54,239 - Computing support of spatial components
2024-07-29 08:32:54,244 - Memory mapping
2024-07-29 08:32:54,244 - Updating Spatial Components using lasso lars
2024-07-29 08:32:54,337 - thresholding components
2024-07-29 08:32:54,338 - removing 1 empty spatial component(s)
2024-07-29 08:32:54,338 - Updating done in 0s
2024-07-29 08:32:54,338 - Removing created tempfiles
2024-07-29 08:32:54,338 - Updating temporal components
2024-07-29 08:32:54,338 - Generating residuals
2024-07-29 08:32:54,339 - entering the deconvolution
2024-07-29 08:32:54,339 - stopping: overall temporal component not changing significantly
2024-07-29 08:32:54,339 - Recomputing background
2024-07-29 08:32:54,374 - Merging components
2024-07-29 08:32:54,375 - No more components merged!
2024-07-29 08:32:54,375 - Updating spatial components
2024-07-29 08:32:54,375 - Initializing update of Spatial Components
2024-07-29 08:32:54,375 - Computing support of spatial components
2024-07-29 08:32:54,378 - Merging components
2024-07-29 08:32:54,379 - No more components merged!
2024-07-29 08:32:54,379 - Updating spatial components
2024-07-29 08:32:54,379 - Initializing update of Spatial Components
2024-07-29 08:32:54,379 - Computing support of spatial components
2024-07-29 08:32:54,380 - Memory mapping
2024-07-29 08:32:54,380 - Updating Spatial Components using lasso lars
2024-07-29 08:32:54,385 - Memory mapping
2024-07-29 08:32:54,385 - Updating Spatial Components using lasso lars
2024-07-29 08:32:54,395 - thresholding components
2024-07-29 08:32:54,396 - Updating done in 0s
2024-07-29 08:32:54,396 - Removing created tempfiles
2024-07-29 08:32:54,396 - Updating temporal components
2024-07-29 08:32:54,396 - Generating residuals
2024-07-29 08:32:54,399 - entering the deconvolution
2024-07-29 08:32:54,399 - stopping: overall temporal component not changing significantly
2024-07-29 08:32:54,400 - thresholding components
2024-07-29 08:32:54,400 - Returning full background
2024-07-29 08:32:54,400 - Updating done in 0s
2024-07-29 08:32:54,400 - Removing created tempfiles
2024-07-29 08:32:54,400 - Updating temporal components
2024-07-29 08:32:54,400 - Generating residuals
2024-07-29 08:32:54,403 - entering the deconvolution
2024-07-29 08:32:54,403 - stopping: overall temporal component not changing significantly
2024-07-29 08:32:54,403 - Returning full background
2024-07-29 08:32:54,491 - Merging components
2024-07-29 08:32:54,492 - No more components merged!
2024-07-29 08:32:54,492 - Updating spatial components
2024-07-29 08:32:54,492 - Initializing update of Spatial Components
2024-07-29 08:32:54,492 - Computing support of spatial components
The logger is now named "caiman", correct?
It is, and that's originally being logged at the INFO priority.
I tested the logging levels while developing for the last release and saw the desired effects; I'm surprised that you're still getting this.
I wonder if you're somehow getting an old/cached build from dev (I don't know if that method of pip install receives things fresh every time or not).
I'll also have to see if I can somehow find a hole in the level setting tomorrow. I may ask for more info then.
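In the meantime, a quick way to inspect the logger's state in the calling process (standard library only):

import logging

lg = logging.getLogger("caiman")
print(lg.level, lg.getEffectiveLevel(), lg.propagate, lg.handlers)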
I just rebuilt my conda env, and I'm still getting verbose output when running cnmf.CNMF.fit
Do you mind:
a) Editing caiman/source_extraction/cnmf/temporal.py, sticking a print(f"{logger=}") right before one of those messages you see (a sketch follows below), rebuilding, and running; look for the message and tell me what you see - I'm wondering if something is resetting the loglevel.
b) Sending me the script or notebook you're running that exhibits this - I updated the demos and notebooks in caiman_data and maybe your (edited?) versions didn't pick up the changes. If you removed ~/caiman_data, redid a caimanmanager install, and were running the refreshed demos, there's no need for this; I'm just chasing a suspicion.
(If you need to send me a file, you can either use pastebin or email it to pgunn@flatironinstitute.org.)
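For (a), the edit would look roughly like this (hypothetical placement; `logger` is the module-level logger already defined in temporal.py):

# Immediately before one of the existing logger.info(...) calls in
# caiman/source_extraction/cnmf/temporal.py:
print(f"{logger=}, effective level={logger.getEffectiveLevel()}")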
Here are all of the loggers, according to logging.Logger.manager.loggerDict:
PIL.Image
PIL
PIL.PngImagePlugin
matplotlib.ticker
matplotlib
matplotlib.artist
matplotlib.lines
matplotlib._afm
matplotlib.font_manager
matplotlib.dviread
matplotlib.mathtext
matplotlib.texmanager
matplotlib.textpath
matplotlib.text
matplotlib._layoutgrid
matplotlib._constrained_layout
matplotlib.backend_bases
matplotlib.colorbar
matplotlib.image
matplotlib.style.core
matplotlib.style
matplotlib.axis
matplotlib.gridspec
matplotlib.axes._base
matplotlib.axes
matplotlib.category
matplotlib.dates
matplotlib.axes._axes
matplotlib.figure
matplotlib.pyplot
concurrent.futures
concurrent
packaging.tags
packaging
h5py._conv
h5py
stack_data.serializing
stack_data
parso
asyncio
prompt_toolkit.buffer
prompt_toolkit
parso.python.diff
parso.python
parso.cache
Comm
tqdm.cli
tqdm
imageio
sklearn
numcodecs
tornado.access
tornado
tornado.application
tornado.general
ipykernel.comm
ipykernel
matplotlib.animation
distutils._vendor.packaging.tags
distutils._vendor.packaging
distutils._vendor
distutils
setuptools.config._apply_pyprojecttoml
setuptools.config
setuptools
setuptools.config.pyprojecttoml
absl
tensorflow
urllib3.util.retry
urllib3.util
urllib3
urllib3.connection
urllib3.response
urllib3.connectionpool
urllib3.poolmanager
charset_normalizer
socks
requests
bokeh
bokeh.sampledata
bokeh.util.logconfig
bokeh.util
bokeh.settings
bokeh.util.deprecation
bokeh.util.warnings
bokeh.util.paths
bokeh.models
bokeh.model
bokeh.model.data_model
bokeh.core.has_props
bokeh.core
bokeh.util.strings
bokeh.core.property
bokeh.core.property.descriptor_factory
bokeh.core.property.descriptors
bokeh.core.property.singletons
bokeh.core.property.wrappers
bokeh.core.property.override
bokeh.core.serialization
bokeh.util.dataclasses
bokeh.util.dependencies
bokeh.util.serialization
bokeh.core.types
bokeh.model.model
bokeh.core.properties
bokeh.core.property.alias
bokeh.core.property.bases
bokeh.core.property._sphinx
bokeh.core.property.aliases
bokeh.core.property.datetime
bokeh.core.property.primitive
bokeh.core.property.either
bokeh.core.property.factors
bokeh.core.property.container
bokeh.core.property.enum
bokeh.core.enums
bokeh.colors
bokeh.colors.groups
bokeh.colors.util
bokeh.colors.color
bokeh.colors.named
bokeh.palettes
bokeh.core.property.numeric
bokeh.core.property.any
bokeh.core.property.auto
bokeh.core.property.color
bokeh.core.property.string
bokeh.core.property.constraints
bokeh.core.property.dataspec
bokeh.core.property.instance
bokeh.core.property.nothing
bokeh.core.property.nullable
bokeh.core.property.required
bokeh.core.property.serialized
bokeh.core.property.struct
bokeh.core.property.vectorization
bokeh.core.property.visual
bokeh.core.property.include
bokeh.core.property.json
bokeh.core.property.pd
bokeh.core.property.readonly
bokeh.core.property.text_like
bokeh.core.property.validation
bokeh.events
bokeh.themes
bokeh.themes.theme
bokeh.util.callback_manager
bokeh.util.functions
bokeh.model.docs
bokeh.model.util
bokeh.models.axes
bokeh.core.property_mixins
bokeh.models.formatters
bokeh.core.validation
bokeh.core.validation.errors
bokeh.core.validation.issue
bokeh.core.validation.warnings
bokeh.core.validation.check
bokeh.core.validation.decorators
bokeh.models.tickers
bokeh.models.mappers
bokeh.models.transforms
bokeh.models.sources
bokeh.models.callbacks
bokeh.models.filters
bokeh.models.selections
bokeh.models.glyphs
bokeh.core.property_aliases
bokeh.models.glyph
bokeh.models.graphics
bokeh.models.labeling
bokeh.models.renderers
bokeh.models.renderers.contour_renderer
bokeh.models.renderers.glyph_renderer
bokeh.models.renderers.renderer
bokeh.models.coordinates
bokeh.models.ranges
bokeh.models.scales
bokeh.models.renderers.graph_renderer
bokeh.models.graphs
bokeh.models.expressions
bokeh.models.renderers.tile_renderer
bokeh.models.tiles
bokeh.models.canvas
bokeh.models.ui
bokeh.models.ui.dialogs
bokeh.models.dom
bokeh.models.css
bokeh.models.ui.ui_element
bokeh.models.nodes
bokeh.models.ui.examiner
bokeh.models.ui.icons
bokeh.models.ui.menus
bokeh.models.ui.panels
bokeh.models.ui.panes
bokeh.models.ui.tooltips
bokeh.models.selectors
bokeh.models.grids
bokeh.models.layouts
bokeh.models.map_plots
bokeh.models.plots
bokeh.core.query
bokeh.models.annotations
bokeh.models.annotations.annotation
bokeh.models.annotations.arrows
bokeh.models.annotations.dimensional
bokeh.models.annotations.geometry
bokeh.models.common.properties
bokeh.models.common
bokeh.models.annotations.html
bokeh.models.annotations.html.html_annotation
bokeh.models.annotations.html.labels
bokeh.models.annotations.html.toolbars
bokeh.models.annotations.labels
bokeh.models.annotations.legends
bokeh.models.tools
bokeh.models.text
bokeh.models.textures
bokeh.models.widgets
bokeh.models.widgets.buttons
bokeh.models.widgets.widget
bokeh.models.widgets.groups
bokeh.models.widgets.inputs
bokeh.models.widgets.markups
bokeh.models.widgets.pickers
bokeh.models.widgets.sliders
bokeh.models.widgets.tables
bokeh.document
bokeh.document.document
bokeh.core.templates
bokeh.util.version
bokeh.document.callbacks
bokeh.document.events
bokeh.document.json
bokeh.document.locking
bokeh.document.models
bokeh.util.datatypes
bokeh.document.modules
bokeh.io
bokeh.io.doc
bokeh.io.state
bokeh.resources
bokeh.util.token
bokeh.io.export
bokeh.embed
bokeh.embed.server
bokeh.embed.bundle
bokeh.util.compiler
bokeh.embed.util
bokeh.embed.elements
bokeh.core.json_encoder
bokeh.embed.wrappers
bokeh.embed.standalone
bokeh.io.util
bokeh.io.notebook
bokeh.io.output
bokeh.io.saving
bokeh.io.showing
bokeh.util.browser
panel.util
panel
panel.state
panel.callbacks
panel.io.callbacks.PeriodicCallback
panel.io.callbacks
panel.io
bokeh.application
bokeh.application.application
bokeh.protocol
bokeh.protocol.exceptions
bokeh.protocol.message
bokeh.protocol.messages
bokeh.protocol.messages.ack
bokeh.protocol.messages.error
bokeh.protocol.messages.ok
bokeh.protocol.messages.patch_doc
bokeh.protocol.messages.pull_doc_reply
bokeh.protocol.messages.pull_doc_req
bokeh.protocol.messages.push_doc
bokeh.protocol.messages.server_info_reply
bokeh.protocol.messages.server_info_req
panel.io.document
bokeh.embed.notebook
panel.io.resources
bokeh.application.handlers
bokeh.application.handlers.code
bokeh.application.handlers.code_runner
bokeh.application.handlers.handler
bokeh.application.handlers.directory
bokeh.application.handlers.notebook
bokeh.application.handlers.script
bokeh.application.handlers.server_lifecycle
bokeh.application.handlers.lifecycle
bokeh.application.handlers.server_request_handler
bokeh.application.handlers.request_handler
bokeh.application.handlers.function
bokeh.server.server
bokeh.server
bokeh.util.options
bokeh.server.tornado
bokeh.util.tornado
bokeh.server.auth_provider
bokeh.server.connection
bokeh.server.contexts
bokeh.server.session
bokeh.server.callbacks
bokeh.server.urls
bokeh.server.views.autoload_js_handler
bokeh.server.views
bokeh.server.views.session_handler
bokeh.server.views.auth_request_handler
bokeh.server.views.doc_handler
bokeh.server.views.metadata_handler
bokeh.server.views.multi_root_static_handler
bokeh.server.views.root_handler
bokeh.server.views.static_handler
bokeh.server.views.ws
bokeh.protocol.receiver
bokeh.server.protocol_handler
bokeh.server.views.ico_handler
bokeh.server.util
bokeh.command.util
bokeh.command
bokeh.application.handlers.document_lifecycle
MARKDOWN
panel.io.reload
panel.io.handlers
panel.io.application
panel.io.session
panel.io.server
panel.reactive
bokeh.plotting
bokeh.plotting._figure
bokeh.transform
bokeh.plotting._graph
bokeh.plotting._renderer
bokeh.plotting._legends
bokeh.plotting._plot
bokeh.plotting._stack
bokeh.plotting._tools
bokeh.plotting.contour
bokeh.plotting.glyph_api
bokeh.plotting._decorators
bokeh.plotting._docstring
bokeh.plotting.gmap
bokeh.plotting.graph
bokeh.layouts
panel.viewable.LoadingSpinner
panel.viewable
panel.viewable.TemplateActions
panel.viewable.BootstrapTemplateActions
panel.viewable.MaterialTemplateActions
caiman
PIL.PcxImagePlugin
PIL.TiffImagePlugin
I tried setting the level of all loggers to ERROR via:
import logging

def set_all_logger_levels(level=logging.ERROR):
    # Walk every logger registered so far and force it to the given level
    loggers = logging.Logger.manager.loggerDict
    for logger_name, logger in loggers.items():
        if isinstance(logger, logging.Logger):  # skip PlaceHolder entries
            logger.setLevel(level)

set_all_logger_levels(logging.ERROR)
Still, I'm getting:
[...]
2024-08-12 08:37:06,955 - Computing support of spatial components
2024-08-12 08:37:06,980 - Updating done in 0s
2024-08-12 08:37:06,980 - Removing created tempfiles
2024-08-12 08:37:06,980 - Updating temporal components
2024-08-12 08:37:06,981 - Generating residuals
2024-08-12 08:37:06,982 - entering the deconvolution
2024-08-12 08:37:06,982 - stopping: overall temporal component not changing significantly
2024-08-12 08:37:06,982 - Searching for more neurons in the residual
2024-08-12 08:37:06,989 - Memory mapping
2024-08-12 08:37:06,996 - Updating Spatial Components using lasso lars
2024-08-12 08:37:07,008 - Updating spatial components
2024-08-12 08:37:07,009 - Initializing update of Spatial Components
2024-08-12 08:37:07,009 - Computing support of spatial components
2024-08-12 08:37:07,013 - Memory mapping
2024-08-12 08:37:07,014 - Updating Spatial Components using lasso lars
2024-08-12 08:37:07,020 - thresholding components
2024-08-12 08:37:07,021 - Updating done in 0s
2024-08-12 08:37:07,021 - Removing created tempfiles
2024-08-12 08:37:07,021 - Updating temporal components
2024-08-12 08:37:07,022 - Generating residuals
2024-08-12 08:37:07,023 - entering the deconvolution
2024-08-12 08:37:07,023 - stopping: overall temporal component not changing significantly
2024-08-12 08:37:07,023 - Searching for more neurons in the residual
2024-08-12 08:37:07,045 - thresholding components
2024-08-12 08:37:07,055 - Updating done in 0s
2024-08-12 08:37:07,056 - Removing created tempfiles
2024-08-12 08:37:07,056 - Updating temporal components
2024-08-12 08:37:07,057 - Generating residuals
2024-08-12 08:37:07,064 - entering the deconvolution
2024-08-12 08:37:07,071 - stopping: overall temporal component not changing significantly
2024-08-12 08:37:07,071 - Returning full background
2024-08-12 08:37:07,075 - In total, 0 neurons were initialized.
2024-08-12 08:37:07,075 - Merging components
2024-08-12 08:37:07,077 - No more components merged!
2024-08-12 08:37:07,077 - Updating spatial components
2024-08-12 08:37:07,077 - Initializing update of Spatial Components
2024-08-12 08:37:07,077 - Computing support of spatial components
[...]
when running:
cnm = cnmf.CNMF(
    n_processes=n_processes,
    method_init="corr_pnr",  # use this for 1 photon
    k=K,
    gSig=(gSig, gSig),
    gSiz=(gSiz, gSiz),
    merge_thresh=merge_thresh,
    p=p,
    dview=cluster,
    tsub=tsub,
    ssub=ssub,
    Ain=Ain,
    rf=rf,
    stride=stride_cnmf,
    only_init_patch=True,  # set it to True to run CNMF-E
    gnb=gnb,
    nb_patch=nb_patch,
    method_deconvolution="oasis",  # could use 'cvxpy' alternatively
    low_rank_background=low_rank_background,
    update_background_components=True,  # sometimes setting to False improves the results
    min_corr=min_corr,
    min_pnr=min_pnr,
    normalize_init=False,  # just leave as is
    center_psf=True,  # leave as is for 1 photon
    ssub_B=ssub_B,
    ring_size_factor=ring_size_factor,
    del_duplicates=True,  # whether to remove duplicates from initialization
    border_pix=0  # number of pixels to not consider in the borders
)
# Fit the model to the data
cnm.fit(im)
Note: I'm using the latest commit of the dev branch.
I'm guessing that the issue is due to using a cluster:
_, cluster, n_processes = cm.cluster.setup_cluster(
    backend="multiprocessing",
    n_processes=processes,
    ignore_preexisting=True
)
...and not being able to disable the logging for all of the child processes.
I'm guessing that you are going to have to include an option to set the logging level in the functions executed by each worker in the cluster.
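A hypothetical sketch of what such an option could look like - not existing CaImAn API, just the standard multiprocessing pattern for pushing a log level into workers:

import logging
import multiprocessing

def _init_worker(level):
    # Runs once in each worker process before it executes any tasks
    logging.getLogger("caiman").setLevel(level)

pool = multiprocessing.Pool(
    processes=4,
    initializer=_init_worker,
    initargs=(logging.ERROR,),
)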
Actually, the following appears to work:
logging.info("Setting up a new cluster...")
n_processes = 0 # Initialize n_processes
try:
# Set up a new cluster with the specified number of processes
logging.disable(logging.ERROR)
_, cluster, n_processes = cm.cluster.setup_cluster(
backend="multiprocessing",
n_processes=processes,
ignore_preexisting=True
)
logging.disable(logging.INFO)
logging.info(f" Successfully set up a new cluster with {n_processes} processes")
except Exception as e:
logging.warning(f" Error during cluster setup: {str(e)}")
Each worker must inherit the logging level set prior to cm.cluster.setup_cluster
Ah, that makes sense. So if people don't set caiman's logging level first, it might not get set in all the processes.
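In other words, the ordering that should work is roughly this (sketch; parameter values mirror the snippets above):

import logging
import caiman as cm

# Quiet the caiman logger *before* the cluster is created, so that the
# multiprocessing workers inherit the setting.
logging.getLogger("caiman").setLevel(logging.ERROR)

_, cluster, n_processes = cm.cluster.setup_cluster(
    backend="multiprocessing",
    n_processes=processes,      # as defined earlier in your script
    ignore_preexisting=True,
)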
I think we understand what the problem was and how to solve it, after the patch. Closing the issue.
Thanks for all of your help with this issue!
@pgunn I'm still getting a ton of logging output when running cnmf.CNMF (using caiman 1.11.3). Can I just use logging.disable(logging.CRITICAL), or must I set the logging level specifically for the caiman logger(s)?
Caiman's logging now goes through the caiman logger, so you'd need to adjust or disable any logging through that logger.
If you're still seeing surprises with how all this works, maybe we should do a quick VC; there's a chance that either I made a mistake when trying to adjust this, or there may be some misconceptions on how logging works that we could clear up quickly.
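Concretely, either of these should work against the caiman logger (sketch, standard logging only):

import logging

caiman_logger = logging.getLogger("caiman")
caiman_logger.setLevel(logging.ERROR)         # keep only ERROR and above from caiman
# or, to silence caiman entirely while leaving other loggers alone:
caiman_logger.setLevel(logging.CRITICAL + 1)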
logging.disable(logging.CRITICAL) should disable all logging at or below CRITICAL. I'm surprised that I'm getting so much output. For example:
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1457: RuntimeWarning: invalid value encountered in divide
pnr = np.divide(data_max, noise_pixel)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1769: RuntimeWarning: divide by zero encountered in divide
pnr_box = np.divide(max_box, noise_box)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1769: RuntimeWarning: invalid value encountered in divide
pnr_box = np.divide(max_box, noise_box)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1785: RuntimeWarning: invalid value encountered in multiply
v_search[r2_min:r2_max, c2_min:c2_max] = cn_box * pnr_box
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1457: RuntimeWarning: invalid value encountered in divide
pnr = np.divide(data_max, noise_pixel)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1769: RuntimeWarning: invalid value encountered in divide
pnr_box = np.divide(max_box, noise_box)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1769: RuntimeWarning: divide by zero encountered in divide
pnr_box = np.divide(max_box, noise_box)
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1785: RuntimeWarning: invalid value encountered in multiply
v_search[r2_min:r2_max, c2_min:c2_max] = cn_box * pnr_box
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/scipy/signal/_spectral_py.py:2014: UserWarning: nperseg = 256 is greater than input length = 205, using nperseg = 205
warnings.warn('nperseg = {0:d} is greater than input length '
/scratch/multiomics/nickyoungblut/nextflow-work/lizard-wizard/conda/caiman-96ca22373214b18b4791b98fc8b70e20/lib/python3.11/site-packages/caiman/source_extraction/cnmf/initialization.py:1457: RuntimeWarning: invalid value encountered in divide
pnr = np.divide(data_max, noise_pixel)
[...and many hundreds more lines]
@nick-youngblut
Ah, none of those are messages that Caiman explicitly logs - they're Python warnings (RuntimeWarning/UserWarning raised by numpy and scipy), so they don't go through caiman's logger at all.
I'll need to look more at this; some of this is stuff we may be able to correct without inspecting data, and the rest we'd probably want to quiet one way or the other.
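One standard-library option worth noting here (not caiman-specific): warnings can be routed into the logging system, after which they arrive on the "py.warnings" logger and can be filtered like any other log records, at least in the calling process:

import logging

logging.captureWarnings(True)                             # send warnings.warn(...) through logging
logging.getLogger("py.warnings").setLevel(logging.ERROR)  # then drop them below ERROR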
I was able to suppress the warnings via essentially the following:
import warnings

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=RuntimeWarning)
    warnings.filterwarnings("ignore", category=UserWarning)
    n_processes = setup_cluster(args.processes)
    cnm = cnmf.CNMF( <params> )
    cnm.fit()
Thanks for the quick response!
@nick-youngblut Thanks for reminding us of these though; some of this may be improved by code modernisation (I know we have some other bits of code that need a fresh look that were written in the Python 2.x era)
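As an illustration of the kind of fix that needs no data inspection (hypothetical, not the actual caiman code): the divide warnings disappear if the division is guarded, e.g.

import numpy as np

data_max = np.array([1.0, 2.0, 0.0])
noise_pixel = np.array([0.5, 0.0, 0.0])

# Only divide where the denominator is positive; elsewhere keep 0.0.
pnr = np.divide(data_max, noise_pixel,
                out=np.zeros_like(data_max),
                where=noise_pixel > 0)
print(pnr)  # [2. 0. 0.] and no RuntimeWarning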