Kuka_Robotics_Arms

Application of Deep Reinforcement Learning to a robotic arm control task

Deep Reinforcement Learning for Robotic Grasping combined with Inverse Kinematics

Prior work in robotic manipulation has addressed the grasping problem with a wide range of methods, from analytic grasp metrics to learning-based approaches. Learning grasping directly from self-supervision offers considerable promise: if a robot can become progressively better at grasping through repeated experience, it may reach a very high degree of proficiency with minimal human involvement. However, these methods typically do not reason about the sequential aspect of the grasping task. Modern robots operating in real environments should flexibly adapt to new tasks, new motions, environment changes, and disturbances such as grasping deformable objects. Reinforcement learning (RL) has been commonly adopted for this purpose.

However, RL poses two important challenges. The first is its very high sample complexity: a large number of episodes is needed to train a policy. The second, specific to RL-based grasping, is generalization: can the system learn to grasp objects in configurations (positions and poses) it never saw during training?

In this work, I explore how deep reinforcement learning (DRL) algorithms can be applied to grasping tasks so that the agent (a robotic arm) learns efficiently in a dynamic environment. I compare the vanilla Deep Q-Network (DQN) and several of its improvements, namely Double DQN (DDQN), prioritized DQN, and Dueling DQN, on the same task. Additional techniques are used to speed up training; for example, inverse kinematics reduces the action space that the DRL agent must explore, so the robot converges faster. For the generalization problem, off-policy RL methods may be preferred, since they can learn from experience gathered over a wide variety of randomized object positions, which is crucial for generalization.
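The core difference between vanilla DQN and DDQN is how the bootstrap target is computed: DDQN selects the next action with the online network but evaluates it with the target network, which reduces over-estimation bias. Below is a minimal PyTorch sketch of both targets; the `q_net`/`target_net` names and the batch tensors are illustrative placeholders, not code from this repository.

```python
import torch

def dqn_targets(rewards, next_states, dones, q_net, target_net,
                gamma=0.99, double=True):
    """Compute bootstrapped TD targets for (Double) DQN.

    rewards, dones: float tensors of shape (batch,)
    next_states:    tensor of shape (batch, obs_dim)
    """
    with torch.no_grad():
        next_q_target = target_net(next_states)  # (batch, n_actions)
        if double:
            # Double DQN: online net picks the action, target net scores it.
            next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
            next_q = next_q_target.gather(1, next_actions).squeeze(1)
        else:
            # Vanilla DQN: max over the target net's own estimates.
            next_q = next_q_target.max(dim=1).values
        # Terminal transitions (dones == 1) get no bootstrap term.
        return rewards + gamma * (1.0 - dones) * next_q
```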
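Prioritized DQN replaces uniform replay sampling with sampling proportional to each transition's TD error, so surprising transitions are replayed more often. The sketch below shows the proportional rule P(i) ∝ p_i^α together with importance-sampling weights; it uses a plain list for clarity (a real implementation would use a sum-tree for O(log n) sampling), and all names are illustrative.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay, sketched without a sum-tree."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so they are
        # guaranteed to be sampled at least once.
        p = max(self.priorities, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(float(e)) + eps
```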
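Dueling DQN changes only the network head, splitting it into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). A minimal sketch with placeholder layer sizes (not the architecture used in the notebooks):

```python
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: separate value and advantage streams."""

    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, obs):
        h = self.trunk(obs)
        v = self.value(h)
        a = self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```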
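The idea behind the inverse-kinematics speed-up is to let the policy output a small Cartesian displacement of the gripper instead of raw joint commands, and have an IK solver fill in the joint angles, shrinking the action space the agent must explore. A sketch using PyBullet's `calculateInverseKinematics`; the `robot_id` and `end_effector_link` handles are assumed to come from the environment setup (e.g. a loaded Kuka URDF), and the consecutive joint indexing assumes all of the arm's joints are movable, as for the plain Kuka iiwa model.

```python
import pybullet as p

def apply_cartesian_action(robot_id, end_effector_link, dx, dy, dz):
    """Map a small Cartesian displacement of the gripper to joint targets."""
    # Current world position of the end-effector's URDF link frame.
    state = p.getLinkState(robot_id, end_effector_link)
    x, y, z = state[4]
    target = (x + dx, y + dy, z + dz)
    # IK turns the 3-D Cartesian action into a full joint configuration.
    joint_poses = p.calculateInverseKinematics(
        robot_id, end_effector_link, target)
    for joint_index, q in enumerate(joint_poses):
        p.setJointMotorControl2(robot_id, joint_index,
                                p.POSITION_CONTROL, targetPosition=q)
```

With this wrapper, the DRL agent chooses only (dx, dy, dz) rather than seven joint angles, which is one concrete way an action space can be reduced to help convergence.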