# Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions for demonstration collection, data pre-processing, LfD algorithms, planning, and control. We also provide Isaac Gym- and Omni Isaac Gym-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the pipeline of demonstration data collection, processing, learning, and deployment on robots.
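To show how these pieces are meant to fit together, here is a hypothetical end-to-end sketch. The module names mirror the function table later in this README, but every call signature below is an assumption for illustration, not the verified API; see the documentation for the real entry points.

```python
import rofunc as rf

# HYPOTHETICAL pipeline sketch: the module names mirror the function table in
# this README, but every signature below is an assumption for illustration;
# consult https://rofunc.readthedocs.io/ for the actual API.

# 1. Export a recorded Xsens demonstration into trajectory data (assumed call).
demos = rf.xsens.export("./demo_data")

# 2. Fit a task-parameterized GMM to the demonstrations (assumed entry point).
model = rf.learning.tpgmm.TPGMM(demos)
model.fit()

# 3. Reproduce a generalized trajectory and compute tracking controls with LQT
#    (assumed calls; LQT appears under the P&C column below).
traj = model.reproduce()
u = rf.planning_control.lqt.LQT(traj).solve()
```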

## Update News 🎉🎉🎉

## Installation

Please refer to the installation guide; the released package can be installed from PyPI with `pip install rofunc`.

## Documentation

Documentation · Example Gallery

To give you a quick overview of the pipeline of rofunc, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.

The available functions, grouped into Data, Learning, Planning & Control (P&C), Tools, and Simulator modules, are listed below.

Note: ✅ achieved; 🔃 reformatting; ⛔ TODO

| Data | Learning | P&C | Tools | Simulator |
|---|---|---|---|---|
| xsens.record | DMP | LQT | config | Franka |
| xsens.export | GMR | LQTBi | logger | CURI |
| xsens.visual | TPGMM | LQTFb | datalab | CURIMini 🔃 |
| opti.record | TPGMMBi | LQTCP | robolab.coord | CURISoftHand |
| opti.export | TPGMM_RPCtl | LQTCPDMP | robolab.fk | Walker |
| opti.visual | TPGMM_RPRepr | LQR | robolab.ik | Gluon 🔃 |
| zed.record | TPGMR | PoGLQRBi | robolab.fd | Baxter 🔃 |
| zed.export | TPGMRBi | iLQR 🔃 | robolab.id | Sawyer 🔃 |
| zed.visual | TPHSMM | iLQRBi 🔃 | visualab.dist | Humanoid |
| emg.record | RLBaseLine(SKRL) | iLQRFb 🔃 | visualab.ellip | Multi-Robot |
| emg.export | RLBaseLine(RLlib) | iLQRCP 🔃 | visualab.traj | |
| mmodal.record | RLBaseLine(ElegRL) | iLQRDyna 🔃 | oslab.dir_proc | |
| mmodal.sync | BCO(RofuncIL) 🔃 | iLQRObs 🔃 | oslab.file_proc | |
| | BC-Z(RofuncIL) | MPC | oslab.internet | |
| | STrans(RofuncIL) | RMP | oslab.path | |
| | RT-1(RofuncIL) | | | |
| | A2C(RofuncRL) | | | |
| | PPO(RofuncRL) | | | |
| | SAC(RofuncRL) | | | |
| | TD3(RofuncRL) | | | |
| | CQL(RofuncRL) | | | |
| | TD3BC(RofuncRL) | | | |
| | DTrans(RofuncRL) | | | |
| | EDAC(RofuncRL) | | | |
| | AMP(RofuncRL) | | | |
| | ASE(RofuncRL) | | | |
| | ODTrans(RofuncRL) | | | |
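To make the P&C column concrete, the following is a minimal, self-contained linear quadratic tracking (LQT) example in plain NumPy. It illustrates the underlying method only and is not Rofunc's LQT module; the double-integrator dynamics, horizon, and cost weights are assumptions chosen for the demo.

```python
import numpy as np

# Minimal LQT illustration (NOT Rofunc's implementation): track a reference
# trajectory with a double integrator via a finite-horizon Riccati recursion.
dt, T = 0.01, 200
A = np.array([[1.0, dt], [0.0, 1.0]])   # state x = [position, velocity]
B = np.array([[0.0], [dt]])             # control u = acceleration
Q = np.diag([1e3, 1.0])                 # tracking weights (assumed values)
R = np.array([[1e-2]])                  # control effort weight (assumed)

# Reference: ramp the position from 0 to 1, with zero target velocity.
ref = np.zeros((T, 2))
ref[:, 0] = np.linspace(0.0, 1.0, T)

# Backward pass: Riccati recursion with a linear tracking term.
P, p = Q.copy(), -Q @ ref[-1]
K = np.zeros((T - 1, 1, 2))
k = np.zeros((T - 1, 1))
for t in range(T - 2, -1, -1):
    G = np.linalg.inv(R + B.T @ P @ B)
    K[t] = G @ B.T @ P @ A              # feedback gain
    k[t] = G @ B.T @ p                  # feedforward term from the reference
    P = Q + A.T @ P @ (A - B @ K[t])
    p = (A - B @ K[t]).T @ p - Q @ ref[t]

# Forward rollout from rest; u_t = -K_t x_t - k_t.
x = np.zeros(2)
for t in range(T - 1):
    u = -K[t] @ x - k[t]
    x = A @ x + B @ u
print("final state:", x)  # ends near position 1.0 with small velocity
```

The backward pass computes time-varying feedback gains and a feedforward term from the reference; the forward pass then rolls the controller out from the initial state.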

## RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, Isaac Gym, and Omni Isaac Gym (see the example gallery), as well as with differentiable simulators such as PlasticineLab and DiffCloth. The robot tasks trained with RofuncRL are listed below.

> **Note**
> You can customize your own project based on RofuncRL by following the RofuncRL customization tutorial.
> We also provide a RofuncRL-based repository template that generates a repository following the RofuncRL structure with one click.
> For more details, please check the RofuncRL documentation.
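As a rough sketch of what a training entry point might look like (the import path, registry name, and signatures below are assumptions for illustration, not the verified RofuncRL API; the scripts in the example gallery are the authoritative reference):

```python
import gymnasium as gym

# HYPOTHETICAL sketch: the import path, registry, and method names below are
# assumptions; the real entry points are the scripts in the example gallery.
from rofunc.learning.RofuncRL.trainers import trainer_map  # assumed import

env = gym.make("Pendulum-v1")          # any Gym-style task for a smoke test
trainer = trainer_map["ppo"](env=env)  # assumed registry keyed by algorithm
trainer.train()                        # assumed training loop entry point
```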

All supported tasks are listed below. (In the rendered table, each task also has an Animation, a Performance, and a ModelZoo entry with clips, curves, and pretrained checkpoints.)

- Ant
- Cartpole
- FrankaCabinet
- FrankaCubeStack
- CURICabinet
- CURICabinetImage
- CURICabinetBimanual
- CURIQbSoftHandSynergyGrasp
- Humanoid
- HumanoidAMP: Backflip, Walk, Run, Dance, Hop
- HumanoidASE: GetupSwordShield, PerturbSwordShield, HeadingSwordShield, LocationSwordShield, ReachSwordShield, StrikeSwordShield
- BiShadowHand: BlockStack, BottleCap, CatchAbreast, CatchOver2Underarm, CatchUnderarm, DoorOpenInward, DoorOpenOutward, DoorCloseInward, DoorCloseOutward, GraspAndPlace, LiftUnderarm, HandOver, Pen, PointCloud, PushBlock, ReOrientation, Scissors, SwingCup, Switch, TwoCatchUnderarm

## Star History

Star History Chart

## Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
          title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
          author = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
          year = {2023},
          publisher = {Zenodo},
          doi = {10.5281/zenodo.10016946},
          url = {https://doi.org/10.5281/zenodo.10016946},
          dimensions = {true},
          google_scholar_id = {0EnyYjriUFMC},
}

## Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
               title={SoftGPT: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
               author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               pages={4920--4925},
               year={2023},
               organization={IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@article{liu2023birp,
        title={BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
        author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
        journal={arXiv preprint arXiv:2307.05933},
        year={2023}
}

## The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

## Acknowledgements

We would like to acknowledge the following projects:

### Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

### Planning and Control

  1. Robotics codes from scratch (RCFS)