numericalalgorithmsgroup/pybobyqa

Clean up logging

lindonroberts opened this issue · 6 comments

Use nicer logging (more like fmincon) with one line printed per iteration

If I may, in this context I would suggest downgrading the per-evaluation logging.info call at

```python
logging.info("Function eval %i at point %i has f = %.15g at x = " % (eval_num, pt_num, f) + str(x))
```

to a logging.debug, so that the INFO logging level prints more manageable periodic progress info.
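For illustration, a minimal sketch of that split, with dummy values and a placeholder progress message (neither is Py-BOBYQA's actual output):

```python
import logging

# The user opts in to periodic progress info only (INFO level).
logging.basicConfig(level=logging.INFO)

# Dummy values standing in for the solver's per-evaluation quantities.
eval_num, pt_num, f, x = 1, 1, 0.5, [0.0, 1.0]

# Per-evaluation detail, downgraded from logging.info to logging.debug:
# hidden at the INFO level configured above, shown only if the user asks for DEBUG.
logging.debug("Function eval %i at point %i has f = %.15g at x = " % (eval_num, pt_num, f) + str(x))

# A sparser progress message stays at INFO and is still displayed.
logging.info("Periodic progress: %i evaluations so far, best f = %.3g" % (eval_num, f))
```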

Also, thanks for developing this implementation! @cmbant and I are interfacing with it in our sampler, and it appears to work well for our problems (though any progress on #3 would be very welcome!)

Hi Jesus - I agree with this (currently too much output is displayed). I'm glad it works well for you, pending improved handling of correlations (I'm finishing my PhD thesis at the moment, but hope to look into this later in the year).

I'm planning on doing something similar to Matlab's optimisation routines (e.g. fmincon), where at each iteration they print out one line with a couple of key variables (current best value, # evals, optimality measure, ...).
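For illustration, a hypothetical sketch of that kind of one-line-per-iteration output; the column names and values are assumptions, not the actual Py-BOBYQA format:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Hypothetical per-iteration summary, roughly in the style of fmincon's iterative display.
logging.info("%6s %8s %14s %10s" % ("iter", "nf", "f_best", "rho"))
for k, (nf, fbest, rho) in enumerate([(10, 1.2e-1, 1e-1), (15, 3.4e-3, 1e-2)]):
    logging.info("%6i %8i %14.6e %10.2e" % (k, nf, fbest, rho))
```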

> Hi Jesus - I agree with this (currently too much output is displayed). I'm glad it works well for you, pending improved handling of correlations (I'm finishing my PhD thesis at the moment, but hope to look into this later in the year).

Good luck with the thesis!

> I'm planning on doing something similar to Matlab's optimisation routines (e.g. fmincon), where at each iteration they print out one line with a couple of key variables (current best value, # evals, optimality measure, ...).

I'm not sure what "iteration" means in that sentence, but in general, differentiating between one logging level (say debug) that prints something per target evaluation and a less verbose one (say info) that prints progress info only once in a while would be great, IMO.

I would also recommend, if I may, declaring a logger and calling its logging methods, as described here, as opposed to calling info etc. directly from the logging module. This way the user can grab your logger with e.g. from pybobyqa import my_logger and change its verbosity level, name, etc. In your case, you may want to define a single one in __init__.py or solver.py and then import it from the rest of the modules. I can write a very simple PR for that if you'd like.
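A minimal sketch of that pattern (the logger name and module layout here are only illustrative):

```python
# e.g. in pybobyqa/__init__.py or solver.py -- declare one package-level logger
import logging

logger = logging.getLogger("pybobyqa")
logger.addHandler(logging.NullHandler())  # library default: silent unless the user configures handlers

# other modules would import this logger and call its methods
logger.info("solver progress message")

# a downstream user can then grab the same logger by name and adjust it
logging.getLogger("pybobyqa").setLevel(logging.DEBUG)
```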

Hi @lindonroberts

I can see that you are now allowing for more logging control in 1.20. That's great and really useful! I would suggest, as described above, splitting the logging levels a bit more finely: info for sparser progress info (e.g. every few rho updates and at soft restarts), and debug for printing at each evaluation. But that's up to you, of course.

Also, you may want to consider creating a module-level logger with logger = logging.getLogger("name") and then using it like logger.info() etc., as explained here: https://docs.python.org/3/library/logging.html#logger-objects. This way, people like me who use your code as a library within theirs can identify where the messages come from, e.g. using a logging format like "[logger_name] msg".
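For example, a downstream user could then tag the library's messages with the logger name like this (plain standard-library logging configuration, nothing specific to Py-BOBYQA):

```python
import logging

# Prefix each message with the name of the logger that produced it.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(name)s] %(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("pybobyqa").info("example message")  # -> "[pybobyqa] example message"
```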

Thanks for the great work!

@JesusTorrado - I have (finally!) added module-level logging, thanks for this suggestion. At this stage I'm reluctant to change the levels for each message, but you should be able to deactivate the INFO messages for each evaluation (which are the only log messages produced by pybobyqa.util).
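For example, assuming the module-level loggers follow the module names (as logging.getLogger(__name__) would give), a user could silence just the per-evaluation messages while keeping the rest of the INFO output:

```python
import logging

logging.basicConfig(level=logging.INFO)

# Silence only the per-evaluation INFO messages, which come from pybobyqa.util,
# while leaving the rest of Py-BOBYQA's INFO logging visible.
logging.getLogger("pybobyqa.util").setLevel(logging.WARNING)
```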

That's brilliant, thanks a lot! I'll update the pybobyqa version in our next release.

Mind that my suggestion was not necessarily about changing the info logging level, but about having two separate levels, whichever they are: one that prints once per target evaluation, and another that prints progress information only sparsely (e.g. when specific values of the target metric are reached).