Stop Wandering: Efficient Vision-Language Navigation via Metacognitive Reasoning
2026-04-02 • Robotics
Robotics · Computer Vision and Pattern Recognition
AI summary
The authors observe that current vision-language navigation agents get stuck or waste time because they cannot assess how well their exploration is going. They introduce MetaNav, which helps the agent remember where it has been, avoid revisiting places, and correct course when it notices it is not making progress. Using a large language model, MetaNav improves how the agent picks new places to explore. Experiments show that MetaNav outperforms earlier methods while using less computation.
Vision-Language Navigation (VLN) · spatial memory · 3D semantic map · metacognition · history-aware planning · reflective correction · large language model (LLM) · frontier selection · navigation agent · exploration efficiency
Authors
Xueying Li, Feng Lyu, Hao Wu, Mingliu Liu, Jia-Nan Liu, Guozi Liu
Abstract
Training-free Vision-Language Navigation (VLN) agents powered by foundation models can follow instructions and explore 3D environments. However, existing approaches rely on greedy frontier selection and passive spatial memory, leading to inefficient behaviors such as local oscillation and redundant revisiting. We argue that this stems from a lack of metacognitive capabilities: the agent cannot monitor its exploration progress, diagnose strategy failures, or adapt accordingly. To address this, we propose MetaNav, a metacognitive navigation agent integrating spatial memory, history-aware planning, and reflective correction. Spatial memory builds a persistent 3D semantic map. History-aware planning penalizes revisiting to improve efficiency. Reflective correction detects stagnation and uses an LLM to generate corrective rules that guide future frontier selection. Experiments on GOAT-Bench, HM3D-OVON, and A-EQA show that MetaNav achieves state-of-the-art performance while reducing VLM queries by 20.7%, demonstrating that metacognitive reasoning significantly improves robustness and efficiency.
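The history-aware planning idea in the abstract, penalizing revisits during frontier selection, can be illustrated with a minimal sketch. All names here (`score_frontier`, `select_frontier`, the `visit_counts` map, and the penalty weight `lambda_revisit`) are hypothetical; the paper's actual scoring function and LLM-generated corrective rules are not specified in this summary.

```python
def score_frontier(base_utility, frontier, visit_counts, lambda_revisit=0.5):
    """Hypothetical history-aware score: the semantic utility of a frontier
    minus a penalty that grows with how often the agent has already visited
    that region, discouraging local oscillation and redundant revisiting."""
    penalty = lambda_revisit * visit_counts.get(frontier, 0)
    return base_utility - penalty


def select_frontier(frontiers, utilities, visit_counts, lambda_revisit=0.5):
    """Pick the frontier with the highest penalized score."""
    return max(
        frontiers,
        key=lambda f: score_frontier(utilities[f], f, visit_counts, lambda_revisit),
    )


# Toy example: frontier "A" has the best raw utility but has been visited
# twice, so the revisit penalty tips selection toward the unexplored "B".
frontiers = ["A", "B"]
utilities = {"A": 1.0, "B": 0.8}
visits = {"A": 2, "B": 0}
print(select_frontier(frontiers, utilities, visits))  # prints "B"
```

The sketch only captures the revisit penalty; in the paper's framing, reflective correction would additionally adjust this selection when stagnation is detected.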