
Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide IsaacGym- and OmniIsaacGym-based robot simulators for evaluation. The package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the pipeline of demonstration data collection, processing, learning, and deployment on robots.

Update News 🎉🎉🎉

Installation

Please refer to the installation guide.
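For most users, the core package can typically be installed from PyPI with `pip install rofunc`; simulator backends such as IsaacGym and OmniIsaacGym have extra prerequisites that the guide walks through.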

Documentation

See the full documentation and the example gallery.

To give you a quick overview of the rofunc pipeline, we provide an example of learning to play Taichi from human demonstrations; you can find it in the Quick start section of the documentation.
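To make the pipeline concrete before diving into the full capability table, here is a minimal sketch of what an end-to-end rofunc workflow can look like: export a recorded demonstration, fit an LfD model, then reproduce and track the motion. The module and function names (`rf.xsens.export`, `rf.tpgmm`, `rf.lqt.uni`) are assumptions modeled on the capability table below, not verified signatures; take the exact API from the Quick start guide.

```python
import rofunc as rf  # rofunc is conventionally imported as `rf`

# 1. Export and pre-process a recorded Xsens demonstration
#    (hypothetical call modeled on the `xsens.export` entry in the table below)
demo = rf.xsens.export("./data/taichi_demo.mvnx")

# 2. Fit a learning-from-demonstration model, e.g. a task-parameterized GMM
#    (hypothetical: the actual class name and constructor may differ)
model = rf.tpgmm.TPGMM(demo)
model.fit()

# 3. Reproduce the skill and track the reference with an LQT controller
#    (hypothetical: see the LQT entries in the P&C column below)
reference = model.reproduce()
u_hat, x_hat = rf.lqt.uni(reference)
```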

The available functions and development plans are listed below.

Note: ✅ Achieved, 🔃 Reformatting, ⛔ TODO

| Data | Learning | P&C | Tools | Simulator |
|------|----------|-----|-------|-----------|
| xsens.record ✅ | DMP ⛔ | LQT ✅ | config ✅ | Franka ✅ |
| xsens.export ✅ | GMR ✅ | LQTBi ✅ | logger ✅ | CURI ✅ |
| xsens.visual ✅ | TPGMM ✅ | LQTFb ✅ | datalab ✅ | CURIMini 🔃 |
| opti.record ✅ | TPGMMBi ✅ | LQTCP ✅ | robolab.coord ✅ | CURISoftHand ✅ |
| opti.export ✅ | TPGMM_RPCtl ✅ | LQTCPDMP ✅ | robolab.fk ✅ | Walker ✅ |
| opti.visual ✅ | TPGMM_RPRepr ✅ | LQR ✅ | robolab.ik ✅ | Gluon 🔃 |
| zed.record ✅ | TPGMR ✅ | PoGLQRBi ✅ | robolab.fd ⛔ | Baxter 🔃 |
| zed.export ✅ | TPGMRBi ✅ | iLQR 🔃 | robolab.id ⛔ | Sawyer 🔃 |
| zed.visual ✅ | TPHSMM ✅ | iLQRBi 🔃 | visualab.dist ✅ | Humanoid ✅ |
| emg.record ✅ | RLBaseLine(SKRL) ✅ | iLQRFb 🔃 | visualab.ellip ✅ | Multi-Robot ✅ |
| emg.export ✅ | RLBaseLine(RLlib) ✅ | iLQRCP 🔃 | visualab.traj ✅ | |
| mmodal.record ⛔ | RLBaseLine(ElegRL) ✅ | iLQRDyna 🔃 | oslab.dir_proc ✅ | |
| mmodal.sync ✅ | BCO(RofuncIL) 🔃 | iLQRObs 🔃 | oslab.file_proc ✅ | |
| | BC-Z(RofuncIL) ⛔ | MPC ⛔ | oslab.internet ✅ | |
| | STrans(RofuncIL) ⛔ | RMP ⛔ | oslab.path ✅ | |
| | RT-1(RofuncIL) ⛔ | | | |
| | A2C(RofuncRL) ✅ | | | |
| | PPO(RofuncRL) ✅ | | | |
| | SAC(RofuncRL) ✅ | | | |
| | TD3(RofuncRL) ✅ | | | |
| | CQL(RofuncRL) ⛔ | | | |
| | TD3BC(RofuncRL) ⛔ | | | |
| | DTrans(RofuncRL) ✅ | | | |
| | EDAC(RofuncRL) ⛔ | | | |
| | AMP(RofuncRL) ✅ | | | |
| | ASE(RofuncRL) ✅ | | | |
| | ODTrans(RofuncRL) ⛔ | | | |
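As a taste of the Tools column, a kinematics query through `robolab` might look like the sketch below. The helper names follow the `robolab.fk` / `robolab.ik` entries in the table, but the argument layout and return values are assumptions; consult the documentation for the real signatures.

```python
import rofunc as rf

urdf = "./urdf/franka.urdf"                # assumed example asset path
q = [0.0, -0.3, 0.0, -2.0, 0.0, 1.6, 0.8]  # a 7-DoF joint configuration

# Hypothetical forward/inverse kinematics helpers named after the
# `robolab.fk` and `robolab.ik` table entries; the actual rofunc
# signatures may differ.
ee_pose = rf.robolab.fk(urdf, q)       # end-effector pose for the configuration
q_sol = rf.robolab.ik(urdf, ee_pose)   # a joint solution reaching that pose
```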

RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as with differentiable simulators like PlasticineLab and DiffCloth. A list of robot tasks trained with RofuncRL is given below.

Note
You can customize your own project based on RofuncRL by following the RofuncRL customization tutorial.
We also provide a RofuncRL-based repository template that generates a repository following the RofuncRL structure with one click.
For more details, please check the RofuncRL documentation.
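As a reference point for what RofuncRL's trainers automate, here is a minimal, self-contained REINFORCE loop on CartPole. It deliberately uses plain gymnasium (the maintained fork of OpenAI Gym) and PyTorch rather than RofuncRL's own classes, whose exact interface you should take from the documentation; the rollout-and-update cycle shown is the pattern that RofuncRL's PPO/SAC/TD3 trainers wrap.

```python
import gymnasium as gym  # maintained fork of OpenAI Gym
import torch
import torch.nn as nn

# Schematic REINFORCE on CartPole. This is NOT RofuncRL's API; it only
# illustrates the rollout -> return estimation -> policy-gradient update
# cycle that a modular RL trainer automates.
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted Monte-Carlo returns, then one policy-gradient step
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```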

The list of all supported tasks (✅ in the ModelZoo column indicates an available pretrained model; the Animation and Performance columns of the original table are media previews, viewable in the online example gallery):

| Tasks | ModelZoo |
|-------|----------|
| Ant | ✅ |
| Cartpole | |
| Franka Cabinet | ✅ |
| Franka CubeStack | |
| CURI Cabinet | ✅ |
| CURI CabinetImage | |
| CURI CabinetBimanual | |
| CURIQbSoftHand SynergyGrasp | ✅ |
| Humanoid | ✅ |
| HumanoidAMP Backflip | ✅ |
| HumanoidAMP Walk | ✅ |
| HumanoidAMP Run | ✅ |
| HumanoidAMP Dance | ✅ |
| HumanoidAMP Hop | ✅ |
| HumanoidASE GetupSwordShield | ✅ |
| HumanoidASE PerturbSwordShield | ✅ |
| HumanoidASE HeadingSwordShield | ✅ |
| HumanoidASE LocationSwordShield | ✅ |
| HumanoidASE ReachSwordShield | ✅ |
| HumanoidASE StrikeSwordShield | ✅ |
| BiShadowHand BlockStack | ✅ |
| BiShadowHand BottleCap | ✅ |
| BiShadowHand CatchAbreast | ✅ |
| BiShadowHand CatchOver2Underarm | ✅ |
| BiShadowHand CatchUnderarm | ✅ |
| BiShadowHand DoorOpenInward | ✅ |
| BiShadowHand DoorOpenOutward | ✅ |
| BiShadowHand DoorCloseInward | ✅ |
| BiShadowHand DoorCloseOutward | ✅ |
| BiShadowHand GraspAndPlace | ✅ |
| BiShadowHand LiftUnderarm | ✅ |
| BiShadowHand HandOver | ✅ |
| BiShadowHand Pen | ✅ |
| BiShadowHand PointCloud | |
| BiShadowHand PushBlock | ✅ |
| BiShadowHand ReOrientation | ✅ |
| BiShadowHand Scissors | ✅ |
| BiShadowHand SwingCup | ✅ |
| BiShadowHand Switch | ✅ |
| BiShadowHand TwoCatchUnderarm | ✅ |

Star History

Star History Chart

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
  title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
  author = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
  year = {2023},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.10016946},
  url = {https://doi.org/10.5281/zenodo.10016946},
  dimensions = {true},
  google_scholar_id = {0EnyYjriUFMC},
}

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
  title = {Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
  author = {Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
  journal = {IEEE Robotics and Automation Letters},
  volume = {7},
  number = {2},
  pages = {5159--5166},
  year = {2022},
  publisher = {IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
  title = {SoftGPT: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
  author = {Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
  booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages = {4920--4925},
  year = {2023},
  organization = {IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@article{liu2023birp,
  title = {BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
  author = {Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
  journal = {arXiv preprint arXiv:2307.05933},
  year = {2023}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

Planning and Control

  1. Robotics codes from scratch (RCFS)