RL_S2R


Goal: migrate the decision model trained in the virtual scene to the real scene with zero additional adaptation, while guaranteeing good adaptivity and stability.

Environments

  1. TORCS
  2. Unity
  3. Gym (a minimal interaction sketch follows this list)
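All three environments are driven through the standard Gym interaction loop. The snippet below is a minimal, illustrative sketch of that loop; "CartPole-v1" is only a stand-in, and the actual TORCS and Unity scenes are assumed to be wrapped behind the same Gym API (the classic 4-tuple step signature is assumed here).

import gym

# Stand-in environment; the TORCS/Unity scenes are assumed to expose the same Gym API.
env = gym.make("CartPole-v1")

obs = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()          # placeholder for the learned policy
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym step signature
    episode_return += reward

print(f"episode return: {episode_return}")
env.close()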

Algorithms

  1. AMDDPG
  2. AMRL
  3. PPO
  4. TRPO
  5. SAC
  6. MAML
  7. DDPG
  8. RL^2
  9. EPG
  10. DQN
  11. DDQN
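As a hedged illustration of the simplest entry in the list, the sketch below shows a vanilla DQN temporal-difference update in PyTorch; the network shape, hyperparameters, and replay handling are placeholder assumptions and do not reproduce the AMDDPG/AMRL implementations in this repository.

import torch
import torch.nn as nn

# Placeholder Q-network; the layer sizes are illustrative, not this repository's architecture.
class QNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN TD update on a sampled minibatch (obs, action, reward, next_obs, done), all tensors."""
    obs, action, reward, next_obs, done = batch
    # Q(s, a) for the actions that were actually taken.
    q_sa = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target r + gamma * max_a' Q_target(s', a'), zeroed at terminal states.
        target = reward + gamma * (1.0 - done) * target_net(next_obs).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

For DDQN, the only change is to select the greedy next action with q_net and evaluate that action with target_net when forming the target.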

Requirements

  1. python=3.9
  2. mlagents==0.29.0
  3. gym
  4. numpy==1.20.3
  5. torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
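A quick sanity check that the pinned versions and the CUDA 10.2 build were picked up, using only standard version attributes (nothing repository-specific):

import gym
import numpy
import torch

print("numpy:", numpy.__version__)                    # expected 1.20.3
print("torch:", torch.__version__)                    # expected 1.8.1+cu102
print("gym:", gym.__version__)
print("CUDA available:", torch.cuda.is_available())   # should be True on a CUDA 10.2 machine

# Installed as part of the mlagents package; importing it verifies the Unity bridge is present.
import mlagents_envs  # noqa: F401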

Functions

  1. Reinforcement learning
  2. Object detection
  3. Semantic segmentation
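These perception functions supply the semantic features that the decision model consumes. Purely as a hedged sketch (the detection/segmentation backbone actually used by this repository is not specified here), the snippet below shows how an off-the-shelf torchvision DeepLabV3 model could turn a raw RGB observation into a per-pixel semantic map that a sim-to-real policy might take as input instead of raw pixels.

import torch
import torchvision

# Assumed stand-in backbone; torchvision 0.9.1 ships a pretrained DeepLabV3-ResNet50.
seg_model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

def semantic_observation(rgb):
    """Map an RGB observation ((H, W, 3) uint8 numpy array) to a per-pixel class-index map (H, W)."""
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0   # to CHW in [0, 1]
    # Normalization constants expected by torchvision's pretrained backbones.
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    x = ((x - mean) / std).unsqueeze(0)
    with torch.no_grad():
        logits = seg_model(x)["out"]           # shape (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)     # semantic map, shape (H, W)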

How to run

1) python amddg_run.py
2) python amrl_run.py

Main paper

Xiao W., Luo X., Xie S.: Feature semantic space-based sim2real decision model. Applied Intelligence (2022). See the Citation section below for the BibTeX entry.

Reference

  • CleanRL is a learning library based on the Gym API. It is designed to cater to newcomers to the field and provides very good reference implementations.
  • Tianshou is a learning library geared towards very experienced users and is designed to allow for ease in complex algorithm modifications.
  • RLlib is a learning library that allows for distributed training and inference and supports an extraordinarily large number of features throughout the reinforcement learning space.
  • Ray is a unified framework for scaling AI and Python applications. It consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for simplifying ML compute.

Citation

@article{xiao2022feature,
  title={Feature semantic space-based sim2real decision model},
  author={Xiao, Wenwen and Luo, Xiangfeng and Xie, Shaorong},
  journal={Applied Intelligence},
  pages={1--17},
  year={2022},
  publisher={Springer}
}

License

Apache License 2.0