Manipulation

ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter

We develop ThinkGrasp, a plug-and-play vision-language grasping system for strategic part grasping in heavily cluttered environments.

Open-vocabulary Pick and Place via Patch-level Semantic Maps

We develop an approach for efficient open-vocabulary language-conditioned manipulation policy learning.

Language Conditioned Equivariant Grasp

We study how to ground language for robotic grasping while preserving the geometric structure of its symmetry.

Equivariant Single View Pose Prediction Via Induced and Restriction Representations

We train an equivariant network for pose prediction from a single 2D image by using induced and restricted representations.