Use get-changed-files to decide which tests to run
With our test suite already fairly long, and with 4 new demos currently in open PRs, I think we should look into some ways to cut down the test suite runtime.
For example, maybe we could use https://github.com/jitterbit/get-changed-files to see which files are changed/added in a commit, and then use some conditions to reliably decide which tests to run. If a commit only changes something in go_mesh_seq.py, it doesn't make sense to run demos (and other tests) that have nothing to do with it.
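As a rough illustration of the kind of conditions I mean (the path patterns and test directory names below are made up for the sketch, not Goalie's actual layout), the action's list of changed files could be fed to a small script like this:

```python
#!/usr/bin/env python3
"""Rough sketch only: map the file paths reported by a changed-files action
(e.g. jitterbit/get-changed-files) to the test directories worth running."""
import fnmatch
import sys

# (pattern, tests to run) pairs, checked in order; first match wins.
RULES = [
    ("demos/*", {"test_demos"}),
    ("goalie/*", {"test", "test_demos"}),  # library change: run everything
    ("*.md", set()),                       # docs-only change: skip the suite
]


def tests_for(changed_paths):
    selected = set()
    for path in changed_paths:
        for pattern, targets in RULES:
            if fnmatch.fnmatch(path, pattern):
                selected |= targets
                break
        else:
            # Unrecognised path: be conservative and run the full suite.
            selected |= {"test", "test_demos"}
    return sorted(selected)


if __name__ == "__main__":
    # e.g. python select_tests.py goalie/mesh_seq.py demos/bubble_shear.py
    print(" ".join(tests_for(sys.argv[1:])))
```

The default branch is deliberately conservative: anything we don't recognise triggers the full suite.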
I'd be happy to take this on if you think it's a good idea @jwallwork23 @stephankramer.
Potentially relevant for mesh-adaptation/movement#74 also, but I don't know movement well enough.
I'm not too keen on this idea, to be honest.
I only took a glance at the repo, but it seems to be more about tackling specific coding issues than about cutting down costs. The tool doesn't appear to do any dependency checking, so how could we be sure that changes in just goalie/mesh_seq.py don't have side effects elsewhere in the codebase (which they likely would)?
Instead, the first recommendations I have for cutting down test suites are (where appropriate):
- Reduce mesh resolution
- Reduce the number of timesteps
- Don't solve to tighter tolerances than necessary
Another option is to use mocking, which is done to some extent (see #42).
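As a very rough sketch of the mocking idea (the class and method names below are hypothetical, and this isn't necessarily the pattern used in #42): patch out the expensive solve so the test only exercises the logic around it.

```python
"""Rough mocking sketch with made-up names: stub out the expensive solve so
the test only checks the bookkeeping around it."""
from unittest import mock


class MeshSeqStandIn:
    """Stand-in class, just to keep the sketch self-contained."""

    def solve_forward(self):
        raise RuntimeError("expensive PDE solve we don't want to run in CI")

    def fixed_point_iteration(self):
        # Imagine this orchestrates solves, error estimation, adaptation, ...
        self.solve_forward()
        return "converged"


def test_fixed_point_iteration_without_real_solve():
    seq = MeshSeqStandIn()
    # Replace the expensive call with a cheap mock; we only check that the
    # surrounding logic calls it and returns as expected.
    with mock.patch.object(seq, "solve_forward", return_value=None) as solve:
        assert seq.fixed_point_iteration() == "converged"
    solve.assert_called_once()
```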
Thanks for taking a look! And yes, the action only returns names of changed files - we are responsible for setting the conditions. So we could make this pretty reliable/conservative.
Re your suggestions: I did have to reduce both the resolution and the number of timesteps in my bubble shear demos, to the point where it made me wonder how much sense it even makes to solve over the whole subinterval. I have some more ideas dependent on #182, so I'll bring them up if that gets merged.
Will close this :)
> Thanks for taking a look! And yes, the action only returns names of changed files - we are responsible for setting the conditions. So we could make this pretty reliable/conservative.
Yes, but I think it would be difficult to implement ourselves. Also, we already generate a code coverage report, but at some point it would be good to put in a CI check that code coverage is not reduced by a new contribution.
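For the coverage check, something along these lines might do, assuming we use coverage.py's JSON report and keep a stored baseline somewhere (the file names below are just placeholders):

```python
"""Rough sketch of a "coverage must not drop" CI gate: compare the percentage
in coverage.py's JSON report against a stored baseline. How the baseline gets
updated is left open here."""
import json
import sys

TOLERANCE = 0.1  # allow tiny fluctuations, in percentage points


def check(report_path="coverage.json", baseline_path="coverage_baseline.txt"):
    with open(report_path) as f:
        current = json.load(f)["totals"]["percent_covered"]
    with open(baseline_path) as f:
        baseline = float(f.read().strip())
    if current + TOLERANCE < baseline:
        sys.exit(f"Coverage dropped: {current:.2f}% < baseline {baseline:.2f}%")
    print(f"Coverage OK: {current:.2f}% (baseline {baseline:.2f}%)")


if __name__ == "__main__":
    check(*sys.argv[1:])
```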
> Re your suggestions: I did have to reduce both the resolution and the number of timesteps in my bubble shear demos, to the point where it made me wonder how much sense it even makes to solve over the whole subinterval. I have some more ideas dependent on #182, so I'll bring them up if that gets merged.
Yeah - perhaps confusingly - reducing the subinterval length is what I meant by taking fewer timesteps, rather than taking longer timesteps. Apologies.