rickvanveen/sklvq

Add more values to the "State" parameter

Opened this issue · 6 comments

Is your feature request related to a problem? Please describe.
For certain applications it is useful to be able to pull more information out of a simple callback via the "State" parameter of a model while it is training. Currently the "State" parameter only exposes certain elements, such as "variables", "nit", "fun", "nfun", "tfun", and "step size".

Describe the solution you'd like
If possible, the Relevance Matrix and Current Prototypes should be added to the "State" parameter as well.

Describe alternatives you've considered
While it is possible to calculate these from the variables and elements already provided in the "State" parameter, having them available directly would avoid a lot of unnecessary recomputation.
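For context, this is the kind of recomputation a callback currently has to do by hand. A minimal sketch, assuming a hypothetical flat layout of "variables" (prototypes first, then a flattened omega matrix; the actual layout in sklvq may differ) — the relevance matrix itself follows the standard GMLVQ definition Λ = ΩᵀΩ:

```python
import numpy as np

# Hypothetical layout: prototypes first, flattened omega last.
# The real sklvq layout may differ -- this only illustrates the extra work.
n_prototypes, n_features = 2, 3

rng = np.random.default_rng(0)
variables = rng.normal(size=n_prototypes * n_features + n_features * n_features)

# Split the flat variables vector back into its parts.
prototypes = variables[: n_prototypes * n_features].reshape(n_prototypes, n_features)
omega = variables[n_prototypes * n_features :].reshape(n_features, n_features)

# In GMLVQ the relevance matrix is defined as lambda = omega.T @ omega,
# which is symmetric positive semi-definite by construction.
relevance_matrix = omega.T @ omega

print(relevance_matrix.shape)  # (3, 3)
```

Having the prototypes and relevance matrix (or, per the comment below, the whole model) directly in the state would make all of this bookkeeping unnecessary.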

Agree! However, I propose an easier and more general approach: instead of adding the relevance matrix and current prototypes, add a copy of the current "model". The solvers, and thus the callback function, don't and shouldn't know which exact model (GLVQ, GMLVQ, or LGMLVQ) they are dealing with.

Some additional changes are required to make this work, but the essence of it is changing
`variables=np.copy(model.get_variables()),`
to
`model=copy(model)`

I'm not sure what the best way of making a copy of a fitted sklearn estimator is though.
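For what it's worth, a plain `copy.deepcopy` is usually the safe choice here: `sklearn.base.clone` deliberately discards fitted state, and a shallow `copy` would still share the model's mutable arrays with the solver. A minimal sketch with a stand-in class (hypothetical, not part of sklvq):

```python
from copy import copy, deepcopy

import numpy as np


class ToyModel:
    """Stand-in for a fitted estimator holding mutable numpy state."""

    def __init__(self):
        self.prototypes_ = np.zeros((2, 3))


model = ToyModel()

shallow = copy(model)    # shares the same prototypes_ array as model
deep = deepcopy(model)   # owns an independent copy of the array

# Simulate the solver updating the model in place after the snapshots.
model.prototypes_ += 1.0

print(shallow.prototypes_[0, 0])  # 1.0 -- shallow snapshot changed too
print(deep.prototypes_[0, 0])     # 0.0 -- deep snapshot is unaffected
```

So `model=copy(model)` as written would still expose the snapshot to later in-place updates, whereas `deepcopy` keeps each callback's snapshot frozen at that iteration.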

@Emraldis did you create a version where the model (or the relevance matrix/prototypes) was added to the State parameter? I need to plot learning curves of the training process and would need similar functionality.
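In the meantime, a small recording callback is enough to collect a learning curve, assuming the solver invokes the callback with a state dict exposing "nit" and "fun" as listed above (the loop below only simulates that invocation):

```python
# Collected (iteration, cost) pairs, to be plotted afterwards.
history = []


def record_state(state):
    """Callback: store the iteration counter and cost function value."""
    history.append((state["nit"], state["fun"]))


# Simulated solver loop standing in for model.fit(...) with this callback;
# the real solver would call record_state once per training step.
for nit, fun in enumerate([2.0, 1.2, 0.7, 0.5], start=1):
    record_state({"nit": nit, "fun": fun})

iterations, costs = zip(*history)
print(costs)  # (2.0, 1.2, 0.7, 0.5) -- ready to plot, e.g. with matplotlib
```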

Thank you! I will look into it and might get back to you.

It would also be great if this were made into a pull request, so that next time we don't need to pass branches around :-)