Benchmark script improvements needed
Opened this issue · 0 comments
carlosmmatos commented
To align with the other scripts, which all currently save the resulting CSV file to the local (current working) directory:
- Update the benchmark script so it no longer stores the file inside the virtual env. Instead, write it to whatever directory the user runs the script from, which keeps the existing behavior of each python script correct.
- Update/refactor the benchmark logic. It's been a while since it was touched and the script needs some refinement.
- Update any necessary documentation
- Consider adding a developer doc that shows how one can test this (w/o using the benchmark script)
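For the first bullet, the fix could look something like the sketch below: resolve the output path from the current working directory rather than from a location inside the virtual env. This is only an illustration, assuming the script collects rows of results; `save_results`, the filename, and the column names are hypothetical, not taken from the actual benchmark script.

```python
import csv
from pathlib import Path

def save_results(rows, filename="benchmark_results.csv"):
    """Write benchmark rows to a CSV in the current working directory.

    `rows` is a list of dicts sharing the same keys. The output lands
    wherever the user invoked the script, not inside the virtual env.
    """
    out_path = Path.cwd() / filename  # anchor on cwd, not sys.prefix
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return out_path
```

With this approach the user can `cd` into any directory before running the benchmark and the CSV appears there, matching how the other scripts behave.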