This is my reading list of RL and AI papers. Each paper is tracked against 7 levels (~15% to ~90%) that indicate how much of it I have read so far.
The papers are grouped into a few categories:
- Grasping
- Computer vision
- Transferring and multi-task learning
- Manipulation
- Exploration
- Sim2real
- Others
Grasping

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| A Framework for Efficient Robotic Manipulation | ✓ | ✓ | | | | | |
| Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps | ✓ | ✓ | ✓ | | | | |
| QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Dex-Net 2.1: Learning Deep Policies for Robot Bin Picking by Simulating Robust Grasping Sequences | ✓ | ✓ | ✓ | ✓ | | | |
| Dex-Net 3.0: Computing Robust Robot Suction Grasp Targets using a New Analytic Model and Deep Learning | ✓ | ✓ | ✓ | ✓ | | | |
| Combining Deep Deterministic Policy Gradient with Cross-Entropy Method | ✓ | ✓ | | | | | |
Computer vision

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| You Only Look Once: Unified, Real-Time Object Detection | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| YOLO9000: Better, Faster, Stronger | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| YOLOv3: An Incremental Improvement | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Mask R-CNN | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
Transferring and multi-task learning

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| EPOpt: Learning robust neural network policies.. | ✓ | ✓ | | | | | |
| Sim-to-Real Transfer of Robotic Control with Dynamics Randomization | ✓ | | | | | | |
| Adapting Visuomotor Representations with Weak Pairwise Constraints | ✓ | | | | | | |
Manipulation

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| Transporter Networks: Rearranging the Visual World for Robotic Manipulation | ✓ | ✓ | ✓ | ✓ | | | |
| Robotic Table Tennis with Model-Free Reinforcement Learning | ✓ | ✓ | | | | | |
Exploration

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| Deep Exploration via Bootstrapped DQN | ✓ | ✓ | ✓ | | | | |
| VIME: Variational Information Maximizing Exploration | ✓ | ✓ | | | | | |
Sim2real

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping | ✓ | ✓ | ✓ | | | | |
| 3D Simulation for Robot Arm Control with Deep Q-Learning | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| RL-CycleGAN: Reinforcement Learning Aware Simulation-to-Real | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| Sim-to-Real Transfer for Miniature Autonomous Car Racing | ✓ | ✓ | ✓ | | | | |
Others

| Paper | ~15% | ~30% | ~45% | ~60% | ~70% | ~80% | ~90% |
|---|---|---|---|---|---|---|---|
| Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search | ✓ | | | | | | |
| An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition | ✓ | ✓ | ✓ | | | | |
| ClearGrasp: 3D Shape Estimation of Transparent Objects for Manipulation | ✓ | ✓ | ✓ | | | | |