Title: Toward Interpretable, Efficient, and Scalable Robot Perception

Abstract: Recent advances in foundation models have opened new opportunities for general-purpose, robot-agnostic perception systems. While these models offer strong generalization and robustness, they often overlook valuable robot-specific priors that can enhance both performance and interpretability. In this talk, I will present my group's recent efforts to develop interpretable, efficient, and scalable robot perception systems by leveraging the structure and experience of specific robots. First, I will describe a series of works on state estimation for legged robots that exploit their kinematic structures and morphological symmetries. Next, I will introduce approaches to legged robot navigation that learn terrain traversability from robot experience. Finally, I will briefly discuss a scalable, continuous semantic mapping pipeline designed to support large-scale, multi-robot deployments.

Bio: Lu Gan is an Assistant Professor in the School of Aerospace Engineering at the Georgia Institute of Technology, where she leads the Lu’s Navigation and Autonomous Robotics (Lunar) Lab. She received her Ph.D. in Robotics from the University of Michigan in 2022 and subsequently spent two years as a Postdoctoral Scholar at the California Institute of Technology. Her research interests lie in robot perception, learning, and autonomous navigation. Her group focuses on physics-informed robot learning, multi-modal 3D scene understanding, and multi-robot SLAM to advance robot autonomy in the wild.