VisionEval/VisionEval-Dev

Problems Re-Running Models

Closed this issue · 2 comments

A couple of problems that occur when models have already been run:

  1. If a model has already been run, reloading the visioneval.cnf specifications can fail with an error because a null list of specifications is passed for merging. Need to find a short path to reproduce that problem.
  2. If a model has been run and the visioneval.cnf is then changed, the inconsistent run parameters can cause confusing problems. Need to implement the following:
    • Add a framework function (and apply it during model loading) to compare a set of run parameters and identify differences.
    • Load the configuration first (without regard to what is in the model state/previous run). If any changes are detected (using the new framework function), warn the user and require a resolution (save or reset the existing results) before the model can run ("continue" is not an option).
    • Make sure the model can still open even with inconsistent changes (do we load the new visioneval.cnf, or the one from the existing results?). I'm inclined to load the existing results, but warn that there are "unexecuted changes" in the source visioneval.cnf, perhaps report what those are, then deliver the message about needing to "save or reset" to move on. Don't try to auto-recover, however (though perhaps give the user an opportunity to re-extract the configuration from a previous run; possibly when we copy the model, we could choose to copy the visioneval.cnf implied in the run results, or copy what is in the model "source").
    • If the only changes are scenario or model stage changes that add new model stages, those can be "continued"; need to carefully examine what changed and do an item-by-item comparison.
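The parameter-comparison function proposed above could look something like the following minimal R sketch. This is an assumption about the design, not the actual VisionEval framework API: the function name `compareRunParams` and the sample parameter names are hypothetical. It classifies entries as added, removed, or changed, which is the item-by-item comparison needed to decide whether a change can be "continued" (only additions) or requires a "save or reset".

```r
# Hypothetical sketch of the proposed framework function (not the real
# VisionEval API): compare two named lists of run parameters and report
# which entries were added, removed, or changed.
compareRunParams <- function(Previous, Current) {
  Added   <- setdiff(names(Current), names(Previous))   # only in new config
  Removed <- setdiff(names(Previous), names(Current))   # only in old config
  Common  <- intersect(names(Previous), names(Current))
  # A common entry "changed" if its old and new values are not identical
  Changed <- Common[!mapply(identical, Previous[Common], Current[Common])]
  list(Added = Added, Removed = Removed, Changed = Changed)
}

# Example: a changed Seed and a model stage added to Stages
Prev <- list(Model = "VERSPM", Seed = 1, Stages = c("stage1"))
Curr <- list(Model = "VERSPM", Seed = 2, Stages = c("stage1", "stage2"))
Diff <- compareRunParams(Prev, Curr)
# Diff$Changed is c("Seed", "Stages"); a run with any Changed or Removed
# entries would require "save or reset" rather than "continue"
```

Model loading could then call this with the parameters recorded in the model state versus those read from the source visioneval.cnf, and report `Diff` to the user as the list of "unexecuted changes".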

Fix in the pipeline along with R 4.2.2 code updates.

Fixed in development-next push 2023-03-24