Ekumen-OS/beluga

Error when trying to compare the benchmark results

agalbachicar opened this issue · 2 comments

Bug description

When trying to compare the results obtained by running the instructions here, the comparison fails because the data contains no positive values and the code tries to log-scale it.

Please note that I could only get as far as the comparison step by applying the patch suggested in #262 (thanks @glpuga for the suggestion).

Platform (please complete the following information):

  • OS: docker, humble, jammy
  • Beluga version: 3144527 (main branch)

How to reproduce

List steps to reproduce the issue:

  1. Follow https://github.com/Ekumen-OS/beluga/blob/main/GETTING_STARTED.md to build the workspace using the humble docker environment.
  2. Run a parametrized benchmark:
$ ros2 run beluga_benchmark parameterized_run 10 20 50
  3. Fix matplotlib by pinning the version:
$ sudo pip3 install matplotlib==3.7.3
  4. Run compare_results:
$ ros2 run beluga_benchmark compare_results -s ./benchmark_10_particles_output/ -l 10_part -s ./benchmark_20_particles_output/ -l 20_part -s ./benchmark_50_particles_output/ -l 50_part
/usr/local/lib/python3.10/dist-packages/pandas/core/arrays/masked.py:62: UserWarning: Pandas requires version '1.3.4' or newer of 'bottleneck' (version '1.3.2' currently installed).
  from pandas.core import (
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/usr/lib/python3.10/tkinter/__init__.py", line 839, in callit
    func(*args)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/backends/_backend_tk.py", line 271, in idle_draw
    self.draw()
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/backends/backend_tkagg.py", line 10, in draw
    super().draw()
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/backends/backend_agg.py", line 400, in draw
    self.figure.draw(self.renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/artist.py", line 95, in draw_wrapper
    result = draw(artist, renderer, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/artist.py", line 72, in draw_wrapper
    return draw(artist, renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/figure.py", line 3175, in draw
    mimage._draw_list_compositing_images(
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
    a.draw(renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/artist.py", line 72, in draw_wrapper
    return draw(artist, renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/axes/_base.py", line 3028, in draw
    self._update_title_position(renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/axes/_base.py", line 2963, in _update_title_position
    bb = ax.xaxis.get_tightbbox(renderer)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/axis.py", line 1335, in get_tightbbox
    ticks_to_draw = self._update_ticks()
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/axis.py", line 1274, in _update_ticks
    major_locs = self.get_majorticklocs()
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/axis.py", line 1496, in get_majorticklocs
    return self.major.locator()
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/ticker.py", line 2341, in __call__
    return self.tick_values(vmin, vmax)
  File "/usr/local/lib/python3.10/dist-packages/matplotlib/ticker.py", line 2358, in tick_values
    raise ValueError(
ValueError: Data has no positive values, and therefore can not be log-scaled.

Just an empty window is created.
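For reference, this ValueError is matplotlib's generic complaint when a log-scaled axis ends up with no positive values to place ticks on. A minimal, self-contained sketch (illustrative only, not the Beluga code) that should trigger the same error at draw time:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; the error surfaces when the figure is drawn
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([-3.0, -2.0, -1.0], [1.0, 2.0, 3.0])  # x data with no positive values
ax.set_xlim(-3.0, -1.0)   # explicit limits while the axis is still linear
ax.set_xscale("log")      # a log axis can only place ticks on positive values
fig.canvas.draw()         # expected: ValueError: Data has no positive values, ...
```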

Expected behavior

The comparisons of all the metrics should appear as the benchmarking guide indicates.

Actual behavior

It fails because the data apparently contains no positive values at all. This should probably be handled in compare_results; otherwise, if the data needs to be adjusted beforehand, the guide should say so.
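One possible way to handle it, sketched below with illustrative names (set_safe_yscale is hypothetical, not part of the current compare_results script): only apply a log scale when the series actually contains positive values, and fall back to a scale that tolerates zero and negative data otherwise.

```python
import pandas as pd
from matplotlib.axes import Axes


def set_safe_yscale(ax: Axes, series: pd.Series) -> None:
    """Log-scale the axis only if the data can support it (hypothetical helper)."""
    if (series > 0).any():
        ax.set_yscale("log")
    else:
        # "symlog" tolerates zero and negative values; plain "linear" would also work.
        ax.set_yscale("symlog")
```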

Additional context

N/A

More info: I think the problem occurs in this line.

@glpuga is this still a problem or did it get fixed back when you were collecting data for ERF?