BEACON: Language-Conditioned Navigation Affordance Prediction under Occlusion
2026-03-10 • Robotics
Robotics • Artificial Intelligence • Computer Vision and Pattern Recognition
AI summary
The authors created BEACON, a system that helps robots find nearby locations described in language, even when those spots are hidden from view. Unlike previous methods that reason only over the visible pixels of an image, BEACON predicts a top-down map of where the robot can go, including areas behind obstacles. It combines depth information from four surround-view RGB-D cameras with a vision-language model's understanding of the instruction. Tests showed that BEACON finds hidden targets more accurately than earlier methods.
language-conditioned navigation, vision-language models, bird's-eye view (BEV), occlusion, RGB-D sensors, spatial grounding, Habitat simulator, heatmap, ego-centric mapping
Authors
Xinyu Gao, Gang Chen, Javier Alonso-Mora
Abstract
Language-conditioned local navigation requires a robot to infer a nearby traversable target location from its current observation and an open-vocabulary, relational instruction. Existing vision-language spatial grounding methods usually rely on vision-language models (VLMs) to reason in image space, producing 2D predictions tied to visible pixels. As a result, they struggle to infer target locations in regions occluded by furniture or moving humans. To address this issue, we propose BEACON, which predicts an ego-centric Bird's-Eye View (BEV) affordance heatmap over a bounded local region, including occluded areas. Given an instruction and surround-view RGB-D observations from four directions around the robot, BEACON predicts the BEV heatmap by injecting spatial cues into a VLM and fusing the VLM's output with depth-derived BEV features. Using an occlusion-aware dataset built in the Habitat simulator, we conduct detailed experimental analysis to validate both our BEV space formulation and the design choices of each module. Our method improves the accuracy averaged across geodesic thresholds by 22.74 percentage points over the state-of-the-art image-space baseline on the validation subset with occluded target locations. Our project page is: https://xin-yu-gao.github.io/beacon.
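The "depth-derived BEV features" mentioned in the abstract can be illustrated with a minimal sketch: unprojecting each surround-view depth image through a pinhole camera model and accumulating the points into an ego-centric top-down grid. This is a hypothetical illustration under assumed conventions (simple occupancy counts in place of learned features, a 10 m × 10 m window at 0.1 m resolution, yaw-only camera extrinsics), not the paper's actual implementation.

```python
import numpy as np

def depth_to_bev(depth, fx, fy, cx, cy, yaw, bev_range=5.0, bev_res=0.1):
    """Unproject one depth image into an ego-centric BEV occupancy grid.

    depth: (H, W) array of metric depths; fx, fy, cx, cy: pinhole intrinsics;
    yaw: camera heading relative to the robot body (radians).
    Returns an (n, n) grid of point counts covering +/- bev_range metres.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth                      # forward distance along the optical axis
    x = (u - cx) * z / fx          # rightward offset in the camera frame
    # rotate the camera-frame ground-plane coordinates (x, z) by the camera yaw
    gx = np.cos(yaw) * x + np.sin(yaw) * z
    gy = -np.sin(yaw) * x + np.cos(yaw) * z
    n = int(2 * bev_range / bev_res)
    grid = np.zeros((n, n), dtype=np.int32)
    valid = (z > 0) & (np.abs(gx) < bev_range) & (np.abs(gy) < bev_range)
    ix = ((gx[valid] + bev_range) / bev_res).astype(int)
    iy = ((gy[valid] + bev_range) / bev_res).astype(int)
    np.add.at(grid, (iy, ix), 1)   # unbuffered accumulation of repeated cells
    return grid

def fuse_surround(depths, intrinsics, yaws, **kw):
    """Fuse four surround views (e.g. front/left/back/right) into one grid."""
    return sum(depth_to_bev(d, *intrinsics, yaw=y, **kw)
               for d, y in zip(depths, yaws))
```

In BEACON itself these depth-derived BEV features are fused with the VLM's output to produce the affordance heatmap; the sketch above only covers the geometric projection step that makes occluded regions representable in a common top-down frame.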