kjappelbaum/mofdscribe

models to benchmark

kjappelbaum opened this issue · 2 comments

FMcil commented

@kjappelbaum I was trying to think of the best way to benchmark moftransformer. Since it is pretrained on some tasks, is it a requirement to ensure that pretraining was not performed on the leaderboard test MOFs? Even in the case where the leaderboard tasks are very different from the pretraining tasks?

It is on my to-do list, see #417.

The reason I didn't do it so far is (indeed) that I think one needs to be a bit more careful with hyperparameter optimization and pretraining.
At least the hyperparameter optimization should happen within the cross-validation loop of the benchmark (see the nested-CV sketch below).
De-duplicating the pre-training dataset would be nice, but it is probably not as relevant as being careful with the hyperopt (a possible overlap check is sketched after that).
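A minimal sketch of what "hyperopt inside the cross-validation loop" could look like, using scikit-learn's nested cross-validation (a `GridSearchCV` wrapped by `cross_val_score`). The estimator, parameter grid, and placeholder data are assumptions for illustration, not the actual mofdscribe bench setup:

```python
# Nested cross-validation sketch (not the actual mofdscribe bench code):
# the inner loop tunes hyperparameters on each outer training fold,
# so the outer test folds never influence the chosen hyperparameters.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Placeholder data standing in for featurized MOFs and a target property.
X, y = make_regression(n_samples=200, n_features=20, random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 4]}

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop: hyperparameter search on the training portion of each outer fold.
model = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid=param_grid,
    cv=inner_cv,
    scoring="neg_mean_absolute_error",
)

# Outer loop: performance estimate of the tuned model on held-out folds.
scores = cross_val_score(model, X, y, cv=outer_cv, scoring="neg_mean_absolute_error")
print(f"MAE: {-scores.mean():.3f} +/- {scores.std():.3f}")
```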
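And a hedged sketch of the kind of overlap check meant by de-duplicating the pretraining set against the leaderboard test structures. The file names and column names are hypothetical; a more robust version would compare structures themselves (e.g. via structure hashes or pymatgen's `StructureMatcher`) rather than identifiers:

```python
# Hypothetical overlap check between a pretraining set and the leaderboard
# test set, keyed on structure identifiers (refcodes / filenames).
# The CSV paths and the "structure_id" column are illustrative only.
import pandas as pd

pretrain_ids = set(pd.read_csv("pretraining_structures.csv")["structure_id"])
test_ids = set(pd.read_csv("leaderboard_test_structures.csv")["structure_id"])

overlap = pretrain_ids & test_ids
print(f"{len(overlap)} of {len(test_ids)} test structures appear in the pretraining set")

# Drop the overlapping structures from the pretraining set before pretraining.
deduplicated_pretrain_ids = pretrain_ids - overlap
```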