Panoramic Affordance Prediction
2026-03-16 • Computer Vision and Pattern Recognition • Robotics
AI summary
The authors study how AI can better understand what actions are possible in a scene by looking at 360-degree panoramic images instead of regular camera pictures. They created a large dataset called PAP-12K with very high-resolution panoramic images and detailed labels to help train and test models. They also designed a new method, PAP, that mimics how human vision focuses on parts of a big scene to handle the complexity and distortions of panoramic images. Their experiments show that older methods meant for normal photos don’t work well on panoramas, while their approach performs much better. This work suggests that panoramic views can improve how AI understands environments.
Affordance Prediction · Panoramic Imaging · 360-degree Vision · High-resolution Images · Embodied AI · Foveal Visual System · Geometric Distortion · Dataset Benchmark · Instance Segmentation
Authors
Zixin Zhang, Chenfei Liao, Hongfei Zhang, Harold Haodong Chen, Kanghao Chen, Zichen Wen, Litao Guo, Bin Ren, Xu Zheng, Yinchuan Li, Xuming Hu, Nicu Sebe, Ying-Cong Chen
Abstract
Affordance prediction serves as a critical bridge between perception and action in embodied AI. However, existing research is confined to pinhole camera models, which suffer from narrow Fields of View (FoV) and fragmented observations, often missing critical holistic environmental context. In this paper, we present the first exploration of Panoramic Affordance Prediction, utilizing 360-degree imagery to capture global spatial relationships and holistic scene understanding. To facilitate this novel task, we first introduce PAP-12K, a large-scale benchmark dataset containing over 1,000 ultra-high-resolution (12K, 11904 × 5952) panoramic images with over 12K carefully annotated QA pairs and affordance masks. Furthermore, we propose PAP, a training-free, coarse-to-fine pipeline inspired by the human foveal visual system, designed to tackle the ultra-high resolution and severe distortion inherent in panoramic images. PAP employs recursive visual routing via grid prompting to progressively locate targets, applies an adaptive gaze mechanism to rectify local geometric distortions, and utilizes a cascaded grounding pipeline to extract precise instance-level masks. Experimental results on PAP-12K reveal that existing affordance prediction methods designed for standard perspective images suffer severe performance degradation under the unique challenges of panoramic vision. In contrast, the PAP framework effectively overcomes these obstacles, significantly outperforming state-of-the-art baselines and highlighting the immense potential of panoramic perception for robust embodied intelligence.
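No code accompanies this page, but the two core ideas in the pipeline are easy to sketch. Below is a minimal, hypothetical Python illustration, not the authors' implementation: `route_to_target` assumes a `query_vlm` callable that returns the grid cell containing the target (a stand-in for whatever vision-language interface the paper uses), and `equirect_to_perspective` shows the standard gnomonic re-projection that an "adaptive gaze" step could use to rectify local equirectangular distortion. All function names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of the coarse-to-fine ideas in the abstract; not the
# authors' code. Requires numpy and opencv-python.
import numpy as np
import cv2

def route_to_target(image, instruction, query_vlm, grid=3, min_px=1024):
    """Recursive visual routing via grid prompting (illustrative).

    `query_vlm(image, instruction, grid)` is an assumed interface returning
    the (row, col) of the grid cell most likely to contain the target.
    """
    h, w = image.shape[:2]
    if max(h, w) <= min_px:
        return image  # fine enough to hand off to the grounding stage
    row, col = query_vlm(image, instruction, grid)
    ch, cw = h // grid, w // grid
    crop = image[row * ch:(row + 1) * ch, col * cw:(col + 1) * cw]
    return route_to_target(crop, instruction, query_vlm, grid, min_px)

def equirect_to_perspective(pano, yaw_deg, pitch_deg, fov_deg=90.0,
                            out_hw=(512, 512)):
    """Render a distortion-rectified pinhole view from an equirectangular pano."""
    H, W = pano.shape[:2]
    out_h, out_w = out_hw
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length

    # Unit rays through each output pixel (x right, y down, z forward).
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    dirs = np.stack([(u - out_w / 2) / f, (v - out_h / 2) / f,
                     np.ones_like(u, dtype=np.float64)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays toward the gaze direction (pitch about x, then yaw about y).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T

    # Rays -> spherical (lon, lat) -> equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    map_x = (((lon / (2 * np.pi)) + 0.5) * W % W).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * H).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

In a full coarse-to-fine loop, the routing step would first narrow the ultra-high-resolution panorama to a region of interest, a rectified fixation would then be rendered around that region, and a grounding model would extract the instance-level mask, mirroring the routing → gaze → cascaded-grounding order the abstract describes.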