We integrate perception and task planning within a belief-space planning framework to enable strategic information gathering in open-world environments, using vision-language foundation models to estimate the state and its uncertainty.
This work learns to navigate end-to-end from map input, aiming to generalize zero-shot to novel map layouts.
We enable a robot to rapidly and autonomously specialize parameterized skills by planning to practice them. The robot decides which skills to practice and how to practice them. Left alone for hours, it repeatedly practices and improves.
We formulate how differentiable planning algorithms can exploit the inherent symmetry in path-planning problems, a framework we call SymPlan, and propose practical algorithms.