Deep Imitation Learning for Bimanual Robotic Manipulation

Abstract

We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space. A core challenge is to generalize manipulation skills to objects in different locations. We hypothesize that modeling the relational information in the environment can significantly improve generalization. To achieve this, we propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive with a recurrent graph neural network that captures interactions among scene entities, and (iii) integrate a high-level planner that composes primitives sequentially with a low-level controller that combines primitive dynamics and inverse kinematics control. The resulting model is a deep, hierarchical, modular architecture. Compared to baselines, it generalizes better and achieves higher success rates on several simulated bimanual robotic manipulation tasks. We open-source the code for simulation, data, and models at this https URL.
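To make the hierarchy concrete, here is a minimal PyTorch sketch of the three components, assuming a fully connected graph over scene entities (grippers and objects). The module names, dimensions, message-passing scheme, and planner head are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: entity count, dimensions, and the planner's
# pooling/argmax scheme are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class RecurrentGNNPrimitive(nn.Module):
    """One movement primitive: a recurrent graph network over entities
    (grippers, objects) that predicts per-entity state deltas."""

    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Edge model: computes a message for each (sender, receiver) pair.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Recurrent node update over aggregated incoming messages.
        self.node_gru = nn.GRUCell(node_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, node_dim)

    def forward(self, nodes, hidden):
        # nodes: (N, node_dim) entity states; hidden: (N, hidden_dim) GRU state.
        n = nodes.size(0)
        send = nodes.unsqueeze(1).expand(n, n, -1)  # sender features
        recv = nodes.unsqueeze(0).expand(n, n, -1)  # receiver features
        msgs = self.edge_mlp(torch.cat([send, recv], dim=-1)).sum(dim=0)
        hidden = self.node_gru(torch.cat([nodes, msgs], dim=-1), hidden)
        return self.out(hidden), hidden  # predicted per-entity state delta


class HierarchicalPolicy(nn.Module):
    """High-level planner selects a primitive; the chosen primitive's
    predicted next state is the target for a low-level (e.g. IK) controller."""

    def __init__(self, node_dim: int, num_primitives: int, hidden_dim: int = 64):
        super().__init__()
        self.primitives = nn.ModuleList(
            [RecurrentGNNPrimitive(node_dim, hidden_dim)
             for _ in range(num_primitives)]
        )
        self.planner = nn.Linear(node_dim, num_primitives)  # scores primitives

    def forward(self, nodes, hidden):
        # Choose a primitive from a pooled scene summary (argmax at test time).
        k = int(self.planner(nodes.mean(dim=0)).argmax())
        delta, hidden = self.primitives[k](nodes, hidden)
        return nodes + delta, hidden, k


# Usage: a scene with 5 entities carrying 6-D states and 3 primitives.
policy = HierarchicalPolicy(node_dim=6, num_primitives=3)
state = torch.randn(5, 6)
h = torch.zeros(5, 64)
target, h, chosen = policy(state, h)
```

At test time the planner's argmax composes primitives sequentially, matching the abstract's description; in an imitation-learning setup, one would supervise both the primitive dynamics and the planner's choices from demonstrations.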

Publication
In NeurIPS 2020
Linfeng Zhao
CS Ph.D. Student

I am a CS Ph.D. student at the Khoury College of Computer Sciences at Northeastern University, advised by Prof. Lawson L.S. Wong. My research interests include reinforcement learning, artificial intelligence, and robotics.