Hello there! đź‘‹

I am Linfeng Zhao (赵林风), a Postdoctoral Scholar at Stanford University working with Prof. Mykel Kochenderfer.

I finished my Ph.D. (2025) at the Khoury College of Computer Sciences at Northeastern University, advised by Prof. Lawson L.S. Wong, where I closely collaborated with the MIT LIS group of Prof. Leslie Kaelbling and with Prof. Robin Walters. I interned at Meta, Boston Dynamics AI Institute, Amazon, and Microsoft Research Asia. Before that, I worked with Prof. Hao Su at UC San Diego (2018-19).

My research focuses on building human-level, general-purpose agents that can act in the physical world: robots that navigate homes, manipulate objects, and accomplish long-horizon tasks in open-world scenarios with unseen environments and goals. To this end, I develop abstractions for decision-making that decompose complex behaviors into compositional building blocks, along with learning and planning approaches that let agents reason about the world and plan their actions at decision time, yielding scalable, generalizable, and efficient decision-making systems.

Updates

2025/10
Relocated to California to begin postdoctoral research at Stanford University.
2025/09
Defended my PhD thesis, titled "Learning and Planning with Structured Abstraction for Embodied Decision-Making" (video recording TBD).
2025/08
Gave an invited talk at the University of Washington (Dieter Fox's group).
2025/08
Gave an invited talk at Stanford University (Mykel Kochenderfer's SISL group).
2025/07
Gave an invited talk at the University of Pennsylvania (Dinesh Jayaraman's group).

Research Focus

Abstraction and Representation

Learning structured representations (symmetry, compositionality) for efficient generalization.

Planning and World Modeling

Developing world models and planning algorithms for long-horizon reasoning.

Decision-time Scaling

Scaling decision quality with more compute and interaction.

Mobile Manipulation

Enabling diverse manipulation skills on mobile robot platforms.

Selected Publications

Seeing is Believing: Belief-Space Planning with Foundation Models as Uncertainty Estimators

Linfeng Zhao, Willie McClinton*, Aidan Curtis*, Nishanth Kumar, Tom Silver, Leslie Kaelbling, Lawson L.S. Wong
arXiv 2025
TL;DR
We integrate perception and task planning within a belief-space planning framework to enable strategic information gathering in open-world environments, using vision-language foundation models to estimate the state and its uncertainty.
Practice Makes Perfect: Planning to Learn Skill Parameter Policies

Nishanth Kumar*, Tom Silver*, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Kaelbling, Jennifer Barry
RSS 2024
TL;DR
We enable a robot to rapidly and autonomously specialize parameterized skills by planning to practice them. The robot decides what skills to practice and how to practice them. The robot is left alone for hours, repeatedly practicing and improving.
E(2)-Equivariant Graph Planning for Navigation

Linfeng Zhao*, Hongyu Li*, Taskin Padir, Huaizu Jiang†, Lawson L.S. Wong†
IEEE RA-L 2024
IROS 2024 (Oral)
TL;DR
We study equivariance under the Euclidean group E(2) for navigation on geometric graphs and develop a message-passing network that exploits it.
Integrating Symmetry into Differentiable Planning with Steerable Convolutions

Linfeng Zhao, Xupeng Zhu*, Lingzhi Kong*, Robin Walters, Lawson L.S. Wong
ICLR 2023
RLDM 2022
TL;DR
We formulate how differentiable planning algorithms can exploit the inherent symmetry in path planning problems, a framework we call SymPlan, and propose practical algorithms.
Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation

Linfeng Zhao, Huazhe Xu, Lawson L.S. Wong
ICLR 2023
TL;DR
We study how implicit differentiation helps scale up differentiable planning algorithms and improve their convergence.
Toward Compositional Generalization in Object‑Oriented World Modeling

Linfeng Zhao, Lingzhi Kong, Robin Walters, Lawson L.S. Wong
ICML 2022 (Long Presentation, top 2%)
RLDM 2022
TL;DR
We formulate compositional generalization in object-oriented world modeling and propose a soft yet efficient mechanism to achieve it in practice.
Deep Imitation Learning for Bimanual Robotic Manipulation

Fan Xie*, Alexander Chowdhury*, M. Clara De Paolis Kaluza, Linfeng Zhao, Lawson L.S. Wong, Rose Yu
NeurIPS 2020
TL;DR
We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space.