Seeing is Believing: Belief-Space Planning with Foundation Models as Uncertainty Estimators

Abstract

Generalizable robotic mobile manipulation in open-world environments poses significant challenges due to long horizons, complex goals, and partial observability. A promising approach to address these challenges involves planning with a library of parameterized skills, where a task planner sequences these skills to achieve goals specified in structured languages, such as logical expressions over symbolic facts. While vision-language models (VLMs) can be used to ground these expressions, they often assume full observability, leading to suboptimal behavior when the agent lacks sufficient information to evaluate facts with certainty. This paper introduces a novel framework that leverages VLMs as a perception module to estimate uncertainty and facilitate symbolic grounding. Our approach constructs a symbolic belief representation and uses a belief-space planner to generate uncertainty-aware plans that incorporate strategic information gathering. This enables the agent to reason effectively about partial observability and property uncertainty. We demonstrate our system on a range of challenging real-world tasks that require reasoning in partially observable environments. Simulated evaluations show that our approach outperforms both vanilla VLM-based end-to-end planning and VLM-based state estimation baselines, by planning for, and executing, strategic information gathering. This work highlights the potential of VLMs to construct belief-space symbolic scene representations, enabling downstream tasks such as uncertainty-aware planning.
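To make the idea concrete, here is a minimal sketch of how a VLM-reported confidence for each symbolic fact could be mapped to a three-valued belief, with an information-gathering action inserted for facts the agent is uncertain about. This is an illustrative toy, not the paper's actual algorithm: the thresholds, the `Fact` type, and the `look_at`/`achieve` action names are all assumptions made for the example.

```python
from dataclasses import dataclass

# Assumed thresholds for mapping VLM confidence to a three-valued belief.
TRUE_T, FALSE_T = 0.9, 0.1

@dataclass(frozen=True)
class Fact:
    """A grounded symbolic fact, e.g. in(cup, cabinet)."""
    predicate: str
    args: tuple

def belief_value(confidence: float) -> str:
    """Map a VLM-reported probability that a fact holds to TRUE/FALSE/UNKNOWN."""
    if confidence >= TRUE_T:
        return "TRUE"
    if confidence <= FALSE_T:
        return "FALSE"
    return "UNKNOWN"

def build_belief(vlm_confidences: dict) -> dict:
    """Construct a symbolic belief state from per-fact VLM confidences."""
    return {fact: belief_value(c) for fact, c in vlm_confidences.items()}

def plan(belief: dict, goal_facts: list) -> list:
    """Toy belief-space planner: gather information before acting on
    UNKNOWN goal facts; act directly on facts believed FALSE."""
    actions = []
    for fact in goal_facts:
        value = belief.get(fact, "UNKNOWN")
        if value == "UNKNOWN":
            actions.append(("look_at", fact))   # information-gathering skill
            actions.append(("achieve", fact))
        elif value == "FALSE":
            actions.append(("achieve", fact))
        # TRUE: believed already satisfied, no action needed
    return actions

# Example: the VLM is confident the mug is on the table, but cannot see
# inside the cabinet, so the cup fact stays UNKNOWN and triggers a look.
cup = Fact("in", ("cup", "cabinet"))
mug = Fact("on", ("mug", "table"))
belief = build_belief({cup: 0.55, mug: 0.95})
print(plan(belief, [cup, mug]))  # [('look_at', cup), ('achieve', cup)]
```

A fully observable baseline would instead threshold at 0.5 and commit to a hard truth value, which is exactly the failure mode the abstract describes: the agent acts on a guess rather than planning to look first.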

Publication
Under Conference Review
Linfeng Zhao
CS Ph.D. Student

I am a CS Ph.D. student at the Khoury College of Computer Sciences at Northeastern University, advised by Prof. Lawson L.S. Wong. My research interests include reinforcement learning, artificial intelligence, and robotics.
