Optimizing Trace load and parsing functionality
briancoutinho opened this issue · 3 comments
Motivation and context
As we analyze larger traces, and more of them at scale, the time to parse trace files lands on the critical path.
In this workstream, we plan to identify performance bottlenecks in trace loading and parsing and fix them.
Description
Details
To investigate this we will need (1) a benchmarking setup, (2) test trace data, and (3) a profiling methodology. These are described below.
Benchmark Setup and Test Trace data
We can leverage pyperf for a reliable benchmarking setup; a minimal sketch follows the list below.
- pyperf provides the means to get measurement statistics, histograms, etc.
- It can also track overall memory usage and trace memory-allocation calls.
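Here is a minimal sketch of such a benchmark. It assumes HTA's TraceAnalysis constructor as the trace-loading entry point, and the trace directory path is illustrative.

```python
# Minimal pyperf benchmark sketch. Assumes TraceAnalysis is the
# trace-loading entry point; the trace directory is illustrative.
import pyperf

from hta.trace_analysis import TraceAnalysis

def load_trace():
    # Load and parse the traces in the test-data directory.
    TraceAnalysis(trace_dir="hta/tests/data")

if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_func("trace_load", load_trace)
```

Running the script with pyperf's --track-memory option reports peak memory alongside timings, and python -m pyperf stats / python -m pyperf hist summarize the collected samples.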
For trace data we will use the hta/tests/data/ directory, and optionally include any test traces a user may want to run the benchmarks against.
Profiling Methodology
In addition to the benchmark measurements, we can leverage py-spy to analyze the CPU-time breakdown across functions.
To install py-spy simply run:
pip install py-spy
And profile the benchmark using:
sudo /opt/miniconda3/envs/trace-analyzer/bin/py-spy record -p <pid of benchmark>
Initial Analysis
Looking at the py-spy results, a large fraction of trace_load() time was being spent computing the memory footprint of the loaded JSON.
We now look to optimize the load and parsing together. This could be done by merging the JSON load and the parsing into a single step in pandas; this is still WIP.
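To see why that step is costly, consider an illustrative deep-footprint measurement (a sketch, not HTA's actual code): it has to recursively visit every nested dict, list, and string the JSON parser allocated.

```python
# Illustrative sketch of a deep memory-footprint measurement.
# Not HTA's actual code; shown only to explain the cost pattern.
import sys

def deep_sizeof(obj, seen=None):
    """Recursively sum sys.getsizeof over a nested JSON-like object."""
    seen = set() if seen is None else seen
    if id(obj) in seen:  # avoid double-counting shared objects
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, seen) + deep_sizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set)):
        size += sum(deep_sizeof(x, seen) for x in obj)
    return size
```

For a trace with millions of events, this traversal touches every object the parser allocated, so it can rival the cost of the parse itself.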
Currently the rank parsing is pretty fast with the use of re.search. The trace file is loaded once and converted into a pandas df. What exactly are we trying to optimize here?
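For context, the rank parsing in question amounts to a single regex match per file name, along these lines (an illustrative sketch; the exact pattern in HTA may differ):

```python
import re

def parse_rank(filename: str):
    # Extract the rank from a trace file name such as "rank-3.json.gz".
    # Hypothetical pattern for illustration only.
    match = re.search(r"rank[-_]?(\d+)", filename)
    return int(match.group(1)) if match else None

parse_rank("rank-3.json.gz")  # -> 3
```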
We are currently loading the trace as a JSON object and then constructing the dataframe; the intermediate step consumes a lot of memory and time (it's a dynamic object with a lot of memory allocations). It may be possible to incrementally parse the JSON and fill the dataframe; that is how pandas read_json works.
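As a sketch of that incremental approach, a streaming parser such as ijson (assumed here; not necessarily what HTA will adopt) can iterate over the traceEvents array of a Chrome-format trace and build the dataframe in chunks, so the full event list never exists as one Python object:

```python
# Hedged sketch of incremental JSON-to-DataFrame parsing using ijson.
# "traceEvents" follows the Chrome trace format emitted by the PyTorch
# profiler; the chunk size is arbitrary.
import gzip

import ijson
import pandas as pd

def load_events_incrementally(path, chunk_size=100_000):
    frames = []
    chunk = []
    with gzip.open(path, "rb") as f:
        # Stream one event dict at a time instead of json.load()-ing
        # the whole file into memory first.
        for event in ijson.items(f, "traceEvents.item"):
            chunk.append(event)
            if len(chunk) >= chunk_size:
                frames.append(pd.DataFrame(chunk))
                chunk = []
    if chunk:
        frames.append(pd.DataFrame(chunk))
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```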
The load time is low for the example traces, but larger traces take 120s or more to load. Also, your optimization sped things up quite a bit; that was low-hanging fruit.