deep-diver/semantic-segmentation-ml-pipeline

Add model evaluation GitHub Action

This issue is to discuss whether it is worth adding another GitHub Action that evaluates the latest trained model whenever we have fresh data. It assumes the following scenario:

  1. We collect/sample fresh data
  2. We manually run the model evaluation GitHub Action
    • The evaluation runs inside the GitHub Action itself
    • It reveals that the latest deployed model performs worse than expected on the fresh data
  3. We then manually run the model training pipeline GitHub Action, this time including the fresh data
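To make step 2 concrete, the evaluation job could run a small script that scores the latest model on the fresh data and signals pass/fail through its return code. The sketch below is only illustrative: the metric (mean IoU, a common choice for semantic segmentation), the `IOU_THRESHOLD` value, and the function names are all assumptions, not code from this repository.

```python
# Hypothetical evaluation gate for the proposed GitHub Action.
# IOU_THRESHOLD and the toy masks are illustrative assumptions.

IOU_THRESHOLD = 0.75  # minimum acceptable mean IoU on the fresh data


def mean_iou(pred, label, num_classes):
    """Mean intersection-over-union over classes, for flat integer masks."""
    scores = []
    for c in range(num_classes):
        inter = sum(1 for p, l in zip(pred, label) if p == c and l == c)
        union = sum(1 for p, l in zip(pred, label) if p == c or l == c)
        if union:
            scores.append(inter / union)
    return sum(scores) / len(scores) if scores else 0.0


def evaluate(pred, label, num_classes=2):
    """Return 0 if the model clears the threshold, 1 if retraining is needed."""
    score = mean_iou(pred, label, num_classes)
    print(f"mean IoU on fresh data: {score:.3f}")
    # A nonzero code would fail the Action's step, signalling that
    # the training pipeline should be re-run with the fresh data.
    return 0 if score >= IOU_THRESHOLD else 1


if __name__ == "__main__":
    # Toy masks stand in for real fresh-data predictions/labels.
    pred = [0, 0, 1, 1, 1, 0]
    label = [0, 0, 1, 1, 0, 0]
    code = evaluate(pred, label)
    print("retraining suggested" if code else "model OK")
```

In a real workflow the Action would call a script like this after pulling the fresh data, and a failing step could be the manual cue (or an automated trigger) for step 3.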