lacava/few

implement 3-fold cross validation for internal updating of best model

lacava opened this issue · 1 comment

Currently the training data is split into training and validation sets, and the best model is updated whenever a model with a higher validation score is found. We could simplify the code quite a bit, and get a more robust validation measure, by replacing `train_test_split` and the associated numpy arrays / fitting-and-predicting code with a direct call to `cross_val_score(self.ml, features, labels, cv=3)` or `cross_val_score(self.ml, self.X[self.valid_loc(), :].transpose(), labels, cv=3)`.
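A minimal sketch of the proposed change, using a toy dataset and estimator in place of `self.ml` and the real feature matrix (both of those names come from the issue; the dataset and `LogisticRegression` here are stand-ins for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-ins for the estimator (self.ml) and data (features, labels):
features, labels = make_classification(n_samples=60, n_features=5, random_state=0)
ml = LogisticRegression(max_iter=1000)

# One call replaces the manual train/validation split plus fit/predict:
# cross_val_score fits the model on 2 folds and scores on the held-out
# third, three times, returning one score per fold.
scores = cross_val_score(ml, features, labels, cv=3)

# Use the mean 3-fold score as the validation measure for updating
# the best model:
cv_score = np.mean(scores)
```

The mean of the three fold scores gives a single number to compare against the current best model, without maintaining a separate validation array.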

see commit 39e9323