berkeleybop/artificial-intelligence-ontology

model drift metrics


- Detect feature changes between training and production to catch problems before performance degrades
- Detect prediction distribution shifts between two production periods as a proxy for performance changes (especially useful when ground truth is delayed)
- Use drift as a signal for when, and how often, to retrain
- Catch feature transformation issues or pipeline breaks
- Detect default fallback values used erroneously
- Find new data to label
- Find clusters of new unstructured data that are problematic for the model
- Find anomalous clusters of data that are not in the training set
- Find drift in embeddings representing image, language, or other unstructured data
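As a concrete illustration of the first two use cases, here is a minimal sketch of the Population Stability Index (PSI), one common drift metric for comparing a training-time and a production-time distribution of a single feature (or of model prediction scores). The `psi` function, bin count, and the 0.1 / 0.25 thresholds in the comments are conventional rules of thumb, not something defined in this issue:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference ("expected", e.g.
    training) sample and a comparison ("actual", e.g. production) sample.

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift.
    """
    # Bin edges are fixed from the reference (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to a small floor so empty bins don't produce log(0) or 0-division.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)        # training-time feature values
prod_same = rng.normal(0.0, 1.0, 10_000)    # production, same distribution
prod_shift = rng.normal(1.0, 1.0, 10_000)   # production, mean shifted

print(psi(train, prod_same))   # near 0: no meaningful drift
print(psi(train, prod_shift))  # well above 0.25: significant drift
```

The same pattern covers the "delayed ground truth" case: compare the prediction-score distributions of two production windows instead of training vs. production. Alternatives such as the two-sample Kolmogorov-Smirnov test work similarly when a statistical test is preferred over a score.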

https://arize.com/blog-course/drift/