Run notebook tests in parallel?
Closed this issue · 3 comments
The job_notebooks job of the Tests workflow takes forever and a day to execute. We might be able to speed it up at least somewhat by splitting it into two separate jobs for what are currently two steps, Run METIS Notebooks and Run MICADO Notebooks. As far as I can tell, those two steps don't depend on each other (it would be bad if they did), but only on the previous step that sets up the environment, i.e. installs the dependencies. With proper use of caching, we should be able to factor that step out. Then the two expensive jobs of actually running the notebooks could be executed in parallel.
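For illustration, a split like that could look roughly as follows. This is a sketch, not the actual workflow file: the job names, the Python version, the requirements file, and the `./runnotebooks.sh <dir>` invocation are all placeholders. The `cache: pip` option of actions/setup-python caches downloaded wheels, so the duplicated install step stays cheap across both jobs.

```yaml
# Hypothetical split of job_notebooks into two parallel jobs.
# All names and paths below are illustrative, not taken from the
# real Tests workflow.
jobs:
  metis_notebooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip               # share the pip download cache
      - run: pip install -r requirements.txt
      - name: Run METIS Notebooks
        run: ./runnotebooks.sh METIS    # placeholder invocation

  micado_notebooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip
      - run: pip install -r requirements.txt
      - name: Run MICADO Notebooks
        run: ./runnotebooks.sh MICADO   # placeholder invocation
```

Because the two jobs have no `needs:` dependency on each other, GitHub Actions schedules them concurrently, so wall-clock time drops to roughly the slower of the two plus the duplicated setup.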
Fine with me. Some thoughts:
- I think we should set up a test runner on a beefy machine in the institute. (Also to run the scripts in METIS_Simulations, and end-to-end runs of METIS_Pipeline.) Then maybe we can split the notebooks (and the other tests): run the slow and memory-hogging ones on our beefy machine (maybe only every night or so), and run only the fast ones on the GitHub CI. If that setup is our ultimate goal, then I'd rather skip the proposed changes here and instead set up the beefy machine.
- I was hoping to move the notebook running to the DevOps repo too, so we can run the notebooks in all the repos in a coherent way. E.g. maybe use them to increase the code coverage and such. But each repo has its notebooks in a different place. So maybe we can combine those two goals: convert the current script into a script in the DevOps workflow with a parameter that says where the notebooks are. Then the IRDB can just call that script twice, with two different directories.
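One way to express that parameter would be a reusable workflow. The sketch below is invented for illustration: the workflow file name, the `notebooks-dir` input, the `AstarVienna/DevOps` path, and the script invocation are all assumptions, not existing files.

```yaml
# --- Hypothetical reusable workflow in the DevOps repo ---
# .github/workflows/run_notebooks.yml (invented name)
on:
  workflow_call:
    inputs:
      notebooks-dir:
        required: true
        type: string

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./runnotebooks.sh "${{ inputs.notebooks-dir }}"

---
# --- Hypothetical caller in the IRDB repo: one call per instrument ---
jobs:
  metis:
    uses: AstarVienna/DevOps/.github/workflows/run_notebooks.yml@main
    with:
      notebooks-dir: METIS
  micado:
    uses: AstarVienna/DevOps/.github/workflows/run_notebooks.yml@main
    with:
      notebooks-dir: MICADO
```

As a side effect, the two caller jobs also run in parallel, so this would address the original speed complaint as well.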
> I think we should setup a test runner on a beefy machine in the institute. (Also to run the scripts in METIS_Simulations, and end-to-end runs of METIS_Pipeline.) Then maybe we can split the notebooks (and the other tests): run the slow and memory-hogging ones on our beefy machine (maybe only every night or so), and run only fast ones on the github CI. If that setup is our ultimate goal, then I'd rather skip the proposed changes here and instead setup the beefy machine.
I had a very similar idea a few days ago when I first read about self-hosted runners. Also, there's a small number of tests within ScopeSim itself (i.e. not notebooks) that are currently skipped, with comments along the lines of "too much for GitHub Actions". So we might be able to also run those from the chad machine in the future. Still, it might be worth splitting this here (maybe in a less complicated version, without the caching for now), to get an immediate performance increase.
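The "nightly on the beefy machine" part could be expressed with a schedule trigger and runner labels, roughly like this. The `beefy` label, the cron time, and the script name are made up; the label would be whatever the self-hosted runner is registered with.

```yaml
# Hypothetical nightly job for the slow, memory-hogging notebooks
# and the currently-skipped heavy tests. Labels and times are
# illustrative only.
on:
  schedule:
    - cron: "0 2 * * *"    # every night at 02:00 UTC

jobs:
  slow_notebooks:
    runs-on: [self-hosted, beefy]
    steps:
      - uses: actions/checkout@v4
      - run: ./run_slow_notebooks.sh   # placeholder for the slow subset
```

The fast subset would keep running on the GitHub-hosted runners on every push, so pull requests still get quick feedback.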
> I was hoping to move the notebook running to the DevOps repo too, so we can run the notebooks in all the repos in a coherent way. E.g. maybe use them to increase the code coverage and such. But each repo has their notebooks in a different place. So maybe we can combine those two goals: convert the current script into a script in the DevOps workflow with a parameter that says where the notebooks are. Then the IRDB can just call that script twice, with two different directories.
This might tie into what I'm currently doing in https://github.com/AstarVienna/ScopeSim/blob/fh/notebook-dispatch/.github/workflows/notebooks_dispatch.yml, albeit with further modifications.
Ah yes, in ScopeSim we have runnotebooks.sh. I could not see an easy way to share that script with the other repositories (well, with the IRDB), but maybe that is not necessary.
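A directory-parameterized version of such a runner could look like the sketch below. This is not the actual runnotebooks.sh from ScopeSim, just an illustration of the idea; `jupyter nbconvert --execute` is one common way to run notebooks headlessly, and the real script may do something different.

```shell
# Sketch of a runner that takes the notebooks directory as an
# argument, so the same script could serve ScopeSim, the IRDB, etc.
# NOT the actual runnotebooks.sh; purely illustrative.
run_notebooks() {
  dir="${1:?usage: run_notebooks <notebooks-dir>}"
  # Find every notebook under the given directory and execute it
  # in place, one by one.
  find "$dir" -name '*.ipynb' | while IFS= read -r nb; do
    echo "Running $nb"
    jupyter nbconvert --to notebook --execute --inplace "$nb"
  done
}
```

Called as `run_notebooks METIS` and `run_notebooks MICADO`, this is exactly the "call it twice with two different directories" pattern from the comment above.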
For now it indeed seems good to simply make two jobs: one for the MICADO notebooks and one for the METIS notebooks.