CapNav: Benchmarking Vision Language Models on Capability-conditioned Indoor Navigation

2026-02-20
Computer Vision and Pattern Recognition; Robotics
Vision-Language Models, Vision-Language Navigation, Mobility Constraints, Physical Capabilities, Indoor Navigation, Robotic Platforms, Spatial Reasoning, Benchmark, Embodied AI, Navigation Tasks
Authors
Xia Su, Ruiqi Chen, Benlin Liu, Jingwei Ma, Zonglin Di, Ranjay Krishna, Jon Froehlich
Abstract
Vision-Language Models (VLMs) have shown remarkable progress in Vision-Language Navigation (VLN), offering new possibilities for navigation decision-making that could benefit both robotic platforms and human users. However, real-world navigation is inherently conditioned by the agent's mobility constraints: for example, a sweeping robot cannot traverse stairs, while a quadruped can. We introduce Capability-Conditioned Navigation (CapNav), a benchmark designed to evaluate how well VLMs can navigate complex indoor spaces given an agent's specific physical and operational capabilities. CapNav defines five representative human and robot agents, each described by physical dimensions, mobility capabilities, and environmental interaction abilities. CapNav provides 45 real-world indoor scenes, 473 navigation tasks, and 2,365 QA pairs to test whether VLMs can traverse indoor environments subject to agent capabilities. We evaluate 13 modern VLMs and find that navigation performance drops sharply as mobility constraints tighten, and that even state-of-the-art models struggle with obstacle types that require reasoning about spatial dimensions. We conclude by discussing the implications for capability-aware navigation and the opportunities for advancing embodied spatial reasoning in future VLMs. The benchmark is available at https://github.com/makeabilitylab/CapNav.
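The capability-conditioned setup described in the abstract can be illustrated with a minimal sketch. All names, fields, and thresholds below are hypothetical illustrations of the idea (an agent profile gating traversability decisions), not CapNav's actual API or agent definitions:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical capability profile, in the spirit of CapNav's five agents."""
    name: str
    width_cm: float          # physical dimensions
    height_cm: float
    can_climb_stairs: bool   # mobility capability
    can_open_doors: bool     # environmental interaction ability

@dataclass
class Obstacle:
    kind: str                          # e.g. "stairs", "closed_door", "narrow_gap"
    gap_width_cm: float = float("inf")
    clearance_cm: float = float("inf")

def traversable(agent: AgentProfile, obstacle: Obstacle) -> bool:
    """Can this agent pass this obstacle, given its capabilities and dimensions?"""
    if obstacle.kind == "stairs":
        return agent.can_climb_stairs
    if obstacle.kind == "closed_door":
        return agent.can_open_doors and agent.width_cm <= obstacle.gap_width_cm
    # Generic spatial check: the agent must fit through the gap and under the clearance.
    return agent.width_cm <= obstacle.gap_width_cm and agent.height_cm <= obstacle.clearance_cm

# The abstract's example: stairs block a sweeping robot but not a quadruped.
sweeping_robot = AgentProfile("sweeping robot", 35, 10, can_climb_stairs=False, can_open_doors=False)
quadruped = AgentProfile("quadruped", 50, 60, can_climb_stairs=True, can_open_doors=False)
stairs = Obstacle("stairs")

print(traversable(sweeping_robot, stairs))  # False
print(traversable(quadruped, stairs))       # True
```

Tightening an agent's capabilities (e.g. disallowing stair climbing) prunes routes from the feasible set, which is why the same scene yields different correct navigation answers per agent.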