Add high-level backend-specific gradient-based optimization procedures
braun-steven opened this issue · 0 comments
We want to provide high-level gradient-descent-based optimization procedures. The idea is that a user constructs some model structure with parameters and then calls

```python
def optimize(model, data, optimizer, epochs, batch_size, ...):
    ...
```

which then uses `optimizer` to maximize the data likelihood batch-wise for `epochs` number of epochs.
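For a PyTorch backend, such a procedure could look roughly like the sketch below. The body of `optimize`, the assumption that `model(batch)` returns per-sample log-likelihoods, and the `Gaussian` toy model are illustrative assumptions, not settled API:

```python
import torch

def optimize(model, data, optimizer, epochs, batch_size):
    """Maximize the data log-likelihood batch-wise for `epochs` epochs.

    Hypothetical sketch: assumes calling ``model(batch)`` returns
    per-sample log-likelihoods as a tensor.
    """
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = -model(batch).mean()  # negative log-likelihood
            loss.backward()
            optimizer.step()
    return model

# Toy usage: fit the mean of a Gaussian with fixed unit variance.
class Gaussian(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Log-likelihood up to an additive constant.
        return -0.5 * (x - self.mu) ** 2

torch.manual_seed(0)
data = torch.full((64, 1), 3.0)
model = Gaussian()
sgd = torch.optim.SGD(model.parameters(), lr=0.1)
optimize(model, data, sgd, epochs=50, batch_size=16)
# model.mu should now be close to the data mean of 3.0
```

A TensorFlow version would follow the same shape but use `tf.data.Dataset` batching and `tf.GradientTape` instead of `DataLoader` and `.backward()`.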
This is not meant to be very flexible; it should give users a simple way to train a model in a specific backend. More advanced users will most likely write their own optimization procedure.
Since tensorly does not provide any dataset/optimizer abstraction of its own, this needs to be implemented separately in each supported backend.