Model selection failed
lzw950905 opened this issue · 1 comment
lzw950905 commented
Description
When I run the model selection process, even with a very small search space (optimizing just one parameter), it always raises the following error:
ValueError: too many values to unpack.
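For context, Python raises this error whenever a call returns more values than there are names on the left-hand side of the assignment. A minimal standalone sketch of the same failure (the function name here is made up for illustration):

def returns_six():
    # Stand-in for any function that returns a 6-tuple.
    return 1, 2, 3, 4, 5, 6

# Only 5 names for 6 values:
a, b, c, d, e = returns_six()  # ValueError: too many values to unpack (expected 5)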
Actual Behavior
This is my code:
from ampligraph.datasets import load_wn18rr
from ampligraph.evaluation import select_best_model_ranking
from ampligraph.latent_features import ComplEx

X_dict = load_wn18rr()
model_class = ComplEx
# Use the template given below for doing grid search.
param_grid = {
    "batches_count": [100],
    "seed": 0,
    "epochs": [10],
    "k": [64, 128],
    "eta": [5],
    "loss": ["pairwise"],
    # We take care of mapping the params to corresponding classes
    "embedding_model_params": {
        # generate corruptions using all entities during training
        "negative_corruption_entities": "all"
    },
    "regularizer": [None, "LP"],
    "regularizer_params": {
        "p": [2],
    },
    "optimizer": ["adam"],
    "verbose": False
}
# Train the model on all possible combinations of hyperparameters.
# Models are validated on the validation set.
# It returns a model re-trained on training and validation sets.
best_model, best_params, best_mrr_train, \
ranks_test, mrr_test = select_best_model_ranking(
    model_class,  # Class handle of the model to be used
    # Dataset
    X_dict['train'],
    X_dict['valid'],
    X_dict['test'],
    # Parameter grid
    param_grid,
    max_combinations=3,
    # Use filtered set for eval
    use_filter=True,
    # Corrupt subjects and objects separately during eval
    use_default_protocol=True,
    # Log all the model hyperparams and evaluation stats
    verbose=False)
print(type(best_model).__name__, best_params, best_mrr_train, mrr_test)
Expected Behavior
The code should run successfully without raising this error.
Steps to Reproduce
Run the code above.
lukostaz commented
Hey @lzw950905.
select_best_model_ranking returns 6 values. You listed only 5 and missed experimental_history.
The snippet below will do:
best_model, best_params, best_mrr_train, \
ranks_test, mrr_test, experimental_history = select_best_model_ranking(
    ComplEx,
    # Dataset
    X_dict['train'],
    X_dict['valid'],
    X_dict['test'],
    # Parameter grid
    param_grid,
    max_combinations=3,
    # Use filtered set for eval
    use_filter=True,
    # Corrupt subjects and objects separately during eval
    use_default_protocol=True,
    # Log all the model hyperparams and evaluation stats
    verbose=False)
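If you don't need the experiment history, a common Python idiom is to bind it to a throwaway name. A minimal sketch, reusing the same X_dict and param_grid defined above:

# Same call as above; "_" discards experimental_history.
best_model, best_params, best_mrr_train, \
ranks_test, mrr_test, _ = select_best_model_ranking(
    ComplEx,
    X_dict['train'], X_dict['valid'], X_dict['test'],
    param_grid,
    max_combinations=3,
    use_filter=True,
    use_default_protocol=True,
    verbose=False)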