QUERY: Are the benchmarks run sequentially?
officialasishkumar opened this issue · 6 comments
I have a class like this. I want to run the benchmarks sequentially, i.e. first `time_FormalIntegrator_check`, then `time_FormalIntegrator_calculate_spectrum`, and so on. Is there any way to do it?
```python
class BenchmarkMontecarloMontecarloNumbaNumbaFormalIntegral(BenchmarkBase):
    def __init__(self):
        super().__init__()
        self.config = None

    def setup(self):
        filename = "data/tardis_configv1_benchmark.yml"
        path = self.get_relative_path(filename)
        self.config = Configuration.from_yaml(path)
        self.Simulation = run_tardis(
            self.config, log_level="ERROR", show_progress_bars=False
        )
        self.FormalIntegrator = formal_integral.FormalIntegrator(
            self.Simulation.simulation_state,
            self.Simulation.plasma,
            self.Simulation.transport,
        )

    def time_FormalIntegrator_check(self) -> None:
        self.FormalIntegrator.check()

    def time_FormalIntegrator_calculate_spectrum(self) -> None:
        self.FormalIntegrator.calculate_spectrum(
            self.Simulation.transport.transport_state.spectrum.frequency
        )

    def time_FormalIntegrator_make_source_function(self) -> None:
        self.att_S_ul, self.Jredlu, self.Jbluelu, self.e_dot_u = self.FormalIntegrator.make_source_function()

    def time_FormalIntegrator_generate_numba_objects(self) -> None:
        self.FormalIntegrator.generate_numba_objects()
```
Is there a reason for wanting them to be done in order?
@HaoZeke I have some functions that need to be run before other functions. The later ones basically depend on them.
Isn't that the use-case for the `setup` functions?
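For illustration, here is a minimal sketch of that idea using ASV's `setup` hook (ASV runs `setup` before each benchmark method, and time spent in it is not counted). The class layout and attribute names are copied from the snippet above; the exact set of prerequisite calls is an assumption based on the question, not verified against TARDIS:

```python
class BenchmarkFormalIntegralCalculateSpectrum(BenchmarkBase):
    def setup(self):
        # Rebuild the simulation and the FormalIntegrator as in the original class.
        self.config = Configuration.from_yaml(
            self.get_relative_path("data/tardis_configv1_benchmark.yml")
        )
        self.Simulation = run_tardis(
            self.config, log_level="ERROR", show_progress_bars=False
        )
        self.FormalIntegrator = formal_integral.FormalIntegrator(
            self.Simulation.simulation_state,
            self.Simulation.plasma,
            self.Simulation.transport,
        )
        # Run the prerequisite steps here (untimed), assuming calculate_spectrum
        # depends on them as the question suggests.
        self.FormalIntegrator.check()
        self.FormalIntegrator.make_source_function()
        self.FormalIntegrator.generate_numba_objects()

    def time_FormalIntegrator_calculate_spectrum(self):
        # Only this call is measured.
        self.FormalIntegrator.calculate_spectrum(
            self.Simulation.transport.transport_state.spectrum.frequency
        )
```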
Those functions also need to be benchmarked.
I think this might be a design issue. If you're benchmarking the sequence of functions, then a simple lambda enclosing them should suffice? Or even a little utility function, e.g.:
```python
# pseudocode
def a():
    b()  # setup
    c()  # payload

# profile this with ASV
def tester():
    a()
```
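Applied to the class in the question, that pseudocode could look roughly like the sketch below (assuming the four `FormalIntegrator` calls really do form one dependent chain and that a single combined timing is acceptable):

```python
def time_formal_integral_sequence(self):
    # One ASV benchmark covering the whole dependent sequence;
    # ASV reports a single number for all four steps together.
    self.FormalIntegrator.check()
    self.FormalIntegrator.make_source_function()
    self.FormalIntegrator.generate_numba_objects()
    self.FormalIntegrator.calculate_spectrum(
        self.Simulation.transport.transport_state.spectrum.frequency
    )
```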
Either that, or I'd assume you could profile the setup separately for optimizing it, and keep that separate from the `c()` function above, since they're conceptually separate tasks, i.e. how quickly `c()` works is independent, or should be independent, of how quickly `b()` is run.
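Still in pseudocode, and keeping the `b()`/`c()` names, that separation might look like:

```python
class BenchSetupStep:
    def time_b(self):
        # benchmark the setup step on its own, for optimizing it
        b()


class BenchPayload:
    def setup(self):
        # b() runs here, so its cost is excluded from the measurement
        b()

    def time_c(self):
        # only the payload is timed
        c()
```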