flowersteam/explauto

Design architecture of GMR

RxnDbr opened this issue · 4 comments

I met @sebastien-forestier last week and we have a proposal for the architecture of the GMR development. As said before, it should be possible to use GMR in any case. That is why a GMR class will be created in models, inheriting from sklearn.mixture.GMM like the GMM class from gmminf.py. In addition, it will contain the regression methods LSE, SLSE, stochastic sampling, and the sampling method described by @oudeyer, inspired from @jgrizou's notebook. The number of Gaussians is given as a parameter.

NB:

  • LSE: weighted mean.
  • SLSE: LSE regression using only one Gaussian: the one whose projection on y has the largest "weight".
  • Stochastic sampling: draw x from the distribution P(x | y) encoded by the Gaussian mixture.
  • Jonathan sampling (I do not know how to name it): find the x that maximizes the probability P(x | y), which is what Jonathan discusses in his notebook (but Ghahramani does not mention it). This requires a stochastic optimization algorithm, since there is no analytical solution. This method can also be used to learn redundant inverse models directly.
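To make the four methods above concrete, here is a minimal sketch on a toy, hand-specified two-component joint GMM over scalar (x, y). All parameters and function names are illustrative assumptions, and scipy's generic bounded optimizer stands in for the stochastic optimization mentioned above; this is not the explauto implementation:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Hypothetical 2-component joint GMM over (x, y); parameters chosen for illustration.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [3.0, 2.0]])          # per component: [mean_x, mean_y]
covs = np.array([[[1.0, 0.5], [0.5, 1.0]],
                 [[1.0, -0.3], [-0.3, 1.0]]])       # per component: 2x2 covariance

def condition_on_y(y):
    """Return weights, means, variances of the conditional mixture P(x | y)."""
    cond_w, cond_mu, cond_var = [], [], []
    for w, mu, S in zip(weights, means, covs):
        mx, my = mu
        Sxx, Sxy, Syy = S[0, 0], S[0, 1], S[1, 1]
        cond_mu.append(mx + Sxy / Syy * (y - my))       # conditional mean
        cond_var.append(Sxx - Sxy ** 2 / Syy)           # conditional variance
        cond_w.append(w * norm.pdf(y, loc=my, scale=np.sqrt(Syy)))
    cond_w = np.array(cond_w)
    return cond_w / cond_w.sum(), np.array(cond_mu), np.array(cond_var)

def lse(y):
    """LSE: weighted mean of the conditional component means."""
    w, mu, _ = condition_on_y(y)
    return np.dot(w, mu)

def slse(y):
    """SLSE: mean of the single component with the largest conditional weight."""
    w, mu, _ = condition_on_y(y)
    return mu[np.argmax(w)]

def stochastic_sample(y, rng):
    """Stochastic sampling: draw x from P(x | y)."""
    w, mu, var = condition_on_y(y)
    k = rng.choice(len(w), p=w)
    return rng.normal(mu[k], np.sqrt(var[k]))

def map_sample(y):
    """'Jonathan sampling': the x maximizing P(x | y), found numerically."""
    w, mu, var = condition_on_y(y)
    neg_density = lambda x: -np.dot(w, norm.pdf(x, loc=mu, scale=np.sqrt(var)))
    return minimize_scalar(neg_density, bounds=(mu.min() - 3.0, mu.max() + 3.0),
                           method="bounded").x
```

The conditioning step is the standard Gaussian conditional formula applied per component, with component weights re-weighted by each Gaussian's marginal likelihood of y.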

A second class, ILOGMR, will be implemented, inheriting from the SensorimotorModel abstract class. It will be available as both forward and inverse model. In addition, users will choose how they want to compute the inverse model (directly, or using optimization).
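A skeleton of the class relationships being proposed might look as follows. All names besides GMR and ILOGMR are assumptions (explauto's actual abstract class is only referred to above), and sklearn.mixture.GMM from the discussion is the class now called GaussianMixture in recent scikit-learn:

```python
from sklearn.mixture import GaussianMixture  # sklearn.mixture.GMM at the time of this thread

class GMR(GaussianMixture):
    """GMM subclass adding the regression (conditional inference) methods."""
    def infer(self, in_dims, out_dims, value, method="LSE"):
        # method would be one of "LSE", "SLSE", "stochastic", "optimization"
        raise NotImplementedError

class SensorimotorModel:
    """Stand-in for explauto's abstract sensorimotor model class."""
    def infer(self, in_dims, out_dims, value):
        raise NotImplementedError

class ILOGMR(SensorimotorModel):
    """Fits a GMR on data local to the query; usable as forward and inverse model."""
    def __init__(self, n_components=3, inverse_mode="direct"):
        self.n_components = n_components
        self.inverse_mode = inverse_mode  # "direct" or "optimization"
```

Keeping GMR a subclass of the scikit-learn mixture class preserves its fitting API, while ILOGMR only holds the hyperparameters and the user's choice of inverse computation.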

Do you think this structure is fine? Am I forgetting something?

ok,
To detail a bit more: to perform an ILO-GMR inverse inference, the user's first choice is whether to compute an inverse model (with local data) using one of the GMR methods, or to compute a forward model (with local data) using one of the GMR methods and then use an optimization method (e.g. CMA-ES) to find the m that reaches the goal s.
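The second (optimization-based) option can be sketched as follows. The forward model here is a toy stand-in for a GMR prediction, and a generic scipy optimizer replaces CMA-ES purely for illustration; function names are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(m):
    """Toy stand-in for a GMR forward prediction s = f(m)."""
    return np.sin(m) + 0.5 * m

def inverse_infer(s_goal, m0=0.0):
    """Optimization-based inverse: find the m whose predicted s is closest
    to the goal s. The thread suggests CMA-ES; a generic local optimizer
    stands in here for the sketch."""
    res = minimize(lambda m: (forward_model(m[0]) - s_goal) ** 2,
                   x0=[m0], method="Nelder-Mead")
    return res.x[0]
```

The direct option would instead fit the GMR on the joint (m, s) data and condition on s = s_goal to read off m, with no optimization loop.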

Sounds good!

In addition to gmminf.py from explauto, you can also take inspiration from https://github.com/AlexanderFabisch/gmr/tree/master/gmr, that is what I used in the gist example to compute the conditional distribution.

Hello everybody,

I have written the GMR class as described above, except that it does not inherit from sklearn.mixture.GMM (gmminf.py) but from Alexander Fabisch's GMM class.

I am going to test this, start a new notebook, and then write ILO_GMR from SensorimotorModel (cf. above) if that is fine with you.

Hi,
It's a bit late to say this, but if you want, I've implemented a GMR which extends the scikit-learn GMM class, which can be useful if you want to keep the scikit-learn API or other scikit-learn GMM methods. It is basically a strict implementation of what is described in Calinon's book "Robot Programming by Demonstration", with part of the code inspired from Alexander Fabisch.