schuderer/mllaunchpad

Convenience functionality for development workflow

schuderer opened this issue

Right now, when developing a model for ML Launchpad, you have to go through the command line to try out every code change, even changes that only affect the prediction code, following a process like:

  1. edit prediction code
  2. run mllaunchpad train ...
  3. run mllaunchpad api ...
  4. test prediction through API (using a tool like Postman)
  5. start over at 1

Actually, this is not completely true: there are convenience functions like train_model, retest and predict which let you replace steps 2, 3 and 4 with code in a local test script or notebook. But they don't allow for, e.g., tweaking the API and running it immediately without re-training (and re-persisting) the model, which potentially wastes time.
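For illustration, a local test script using those convenience functions might look roughly like this (a sketch from memory of the documented API, so signatures may differ in detail; the config file name and arg_dict contents are placeholders):

```python
import mllaunchpad

# Load and validate the model configuration (file name is a placeholder)
config = mllaunchpad.get_validated_config("./my_model_cfg.yml")

# Replaces step 2: train and persist the model, returning it plus test metrics
model, metrics = mllaunchpad.train_model(config)

# Optionally re-run the test against the persisted model
metrics = mllaunchpad.retest(config)

# Replaces steps 3 and 4: get a prediction without starting the API server
# (arg_dict carries what would otherwise arrive as API query parameters)
output = mllaunchpad.predict(config, arg_dict={"some_feature": 42})
print(output)
```

Note that even in this script-based workflow, every call to predict still goes through the persisted model and prediction code, which is what the ideas below try to address.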

Some ideas:

  • Add an optional parameter debug=False to the mllaunchpad.predict convenience function in order to try out new prediction code without re-training. Also add an optional model=None parameter to directly pass the model returned by mllaunchpad.train_model (see the sketch after this list).
  • Change the behavior of the command line interface (CLI) command api to always use the live prediction code of the model instead of the persisted code (or create an option to change this behavior accordingly).
  • Same for the predict CLI command.
  • ... other ways are possible.
  • Add a variant get_validated_config_str alongside get_validated_config, accepting the configuration as a string instead of a file (or add a parameter to the existing function instead).
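To make the first idea more concrete, here is a purely hypothetical sketch of how it could look from a user's perspective. The debug and model parameters are the proposed additions from above and do not exist yet; the config file name and arg_dict contents are placeholders:

```python
import mllaunchpad

config = mllaunchpad.get_validated_config("./my_model_cfg.yml")

# Train once; keep the in-memory model around for experimentation
model, metrics = mllaunchpad.train_model(config)

# Proposed (not yet implemented): re-run the *live* prediction code
# against the already-trained model, skipping re-training/re-persisting
output = mllaunchpad.predict(
    config,
    arg_dict={"some_feature": 42},
    debug=True,   # hypothetical: use live prediction code, not the persisted code
    model=model,  # hypothetical: reuse the model object returned by train_model
)

# Also proposed (hypothetical): validate a config held in a string, not a file
# config = mllaunchpad.get_validated_config_str(config_yaml_string)
```

With something like this, the edit-predict loop for prediction-code changes would shrink to re-running a single function call.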

This needs to be discussed as it's not immediately clear which of the mentioned (or unmentioned) possibilities would be worthwhile to implement (first).

@bobplatte, @Vegacyrez, @TheMBadger: I'm interested in any workflow hiccups you might be running into and what would help with them. I have tagged this as "needs discussion" for now, meaning this issue stays dormant until more concrete use cases and needs rear their heads.

This issue overlaps with #91.