Code Release on evaluate_imitate.py
Closed this issue · 1 comment
Hi, thanks a lot for your incredible framework and open-sourced code!
I was wondering if you have any plans to release the evaluation framework for the "Language-Conditioned Imitation" task, namely evaluate_imitate.py (currently a blank file).
It would be quite interesting to port the language-conditioned imitation code to simulation frameworks (e.g., CoppeliaSim), and I hope doing so would help facilitate further research building on your work.
I hope this issue reaches you, and thanks again for your outstanding work!
Hey @jobin2725 - sorry for the delayed response! The plan is to add the simulation evaluate_imitate.py following the new standardized environments from RoboHive (https://github.com/vikashplus/robohive/tree/main). Unfortunately, I am a bit swamped with a couple of final threads this month, but I will get to it as soon as I can!
In the meantime, you can use the R3M Evaluation Code (https://github.com/facebookresearch/r3m/tree/eval/evaluation), which is what we initially forked and used to run the evaluations in the paper!
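
If it helps while waiting, the core of a language-conditioned imitation evaluation is essentially a rollout loop that feeds the instruction to the policy alongside each observation and tallies task success over episodes. The sketch below is only an illustration of that pattern: the environment, policy signature, and `success` flag in `info` are hypothetical placeholders, not the actual interface of this repo, R3M, or RoboHive.

```python
"""Minimal sketch of a language-conditioned imitation rollout evaluation.

All names here (DummyEnv, the policy signature, the "success" key in info)
are illustrative placeholders, not the API of this repo, R3M, or RoboHive.
"""
import numpy as np


def evaluate_policy(policy, env, instruction, num_episodes=10, horizon=200):
    """Roll out `policy` conditioned on `instruction`; return the episode success rate."""
    successes = 0
    for _ in range(num_episodes):
        obs = env.reset()
        for _ in range(horizon):
            action = policy(obs, instruction)           # observation + language -> action
            obs, reward, done, info = env.step(action)
            if info.get("success", False):              # task-specific success flag (assumed)
                successes += 1
                break
            if done:
                break
    return successes / num_episodes


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    class DummyEnv:
        def reset(self):
            return np.zeros(7)

        def step(self, action):
            obs = np.random.randn(7)
            return obs, 0.0, False, {"success": np.random.rand() < 0.05}

    random_policy = lambda obs, instruction: np.random.uniform(-1, 1, size=7)
    rate = evaluate_policy(random_policy, DummyEnv(), "open the drawer")
    print(f"success rate: {rate:.2f}")
```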