Pandora: Articulated 3D Scene Graphs from Egocentric Vision
2026-03-30 • Robotics
Robotics, Computer Vision and Pattern Recognition
AI summary
The authors show that robots can learn about moving parts of objects, like drawers or cabinets, by watching humans explore a room with special glasses. This helps robots understand things they usually can’t explore on their own. They use simple rules to create 3D models of these objects and add them to detailed maps. These better maps help robots, like the Boston Dynamics Spot, find hidden items and handle objects more effectively.
robotic mapping, egocentric data, articulated objects, 3D scene graph, mobile manipulation, Project Aria glasses, Boston Dynamics Spot, object dynamics, scene representation, human-robot interaction
Authors
Alan Yu, Yun Chang, Christopher Xie, Luca Carlone
Abstract
Robotic mapping systems typically build metric-semantic scene representations from the robot's own sensors and cameras. However, these "first person" maps inherit the robot's limitations due to its embodiment or skillset, which may leave many aspects of the environment unexplored. For example, the robot might not be able to open drawers or access wall cabinets. In this sense, the map representation is incomplete and requires a more capable robot to fill in the gaps. We narrow these blind spots in current methods by leveraging egocentric data captured as a human naturally explores a scene wearing Project Aria glasses, giving a way to directly transfer knowledge about articulation from the human to any deployable robot. We demonstrate that, by using simple heuristics, we can leverage egocentric data to recover models of articulated object parts with quality comparable to that of state-of-the-art methods based on other input modalities. We also show how to integrate these models into 3D scene graph representations, leading to a better understanding of object dynamics and object-container relationships. Finally, we demonstrate that these articulated 3D scene graphs enhance a robot's ability to perform mobile manipulation tasks, showcasing an application where a Boston Dynamics Spot is tasked with retrieving concealed target items, given only the 3D scene graph as input.
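
The abstract describes the articulation models only at a high level. As a rough illustration of the kind of "simple heuristics" it refers to, the sketch below classifies a tracked part's motion as prismatic or revolute and estimates its joint axis from a short sequence of observed part poses. The function name, thresholds, and input format are assumptions made for this example and are not taken from the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fit_articulation(part_poses, rot_thresh_deg=10.0):
    """Heuristically fit a joint model to a moving part (illustrative sketch).

    part_poses: time-ordered list of (Rmat, t) world poses of the moving part,
    where Rmat is a 3x3 rotation matrix and t a 3-vector (e.g., a drawer front
    tracked from egocentric video).
    """
    R0, t0 = part_poses[0]
    rel_rots, rel_trans = [], []
    for Ri, ti in part_poses[1:]:
        # Motion of the part relative to its first observed pose.
        rel_rots.append(R.from_matrix(Ri @ R0.T))
        rel_trans.append(ti - t0)

    max_angle = max(np.degrees(np.linalg.norm(r.as_rotvec())) for r in rel_rots)
    trans = np.array(rel_trans)

    if max_angle < rot_thresh_deg:
        # Little rotation observed: treat as prismatic.
        # Axis = dominant translation direction (first principal component).
        _, _, vt = np.linalg.svd(trans - trans.mean(axis=0))
        axis = vt[0] / np.linalg.norm(vt[0])
        travel = float(np.ptp(trans @ axis))  # observed range of travel
        return {"type": "prismatic", "axis": axis, "range": travel}

    # Significant rotation observed: treat as revolute.
    # Axis direction taken from the largest relative rotation.
    rotvec = rel_rots[-1].as_rotvec()
    axis = rotvec / np.linalg.norm(rotvec)
    return {"type": "revolute", "axis": axis, "range_deg": float(max_angle)}
```

A fitted joint of this form could then be attached as an attribute of the corresponding object node in a 3D scene graph, alongside object-container relationships (e.g., which items sit inside which drawer), which is the role the articulated scene graph plays in the retrieval task described above.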