Modeling and Measuring Redundancy in Multisource Multimodal Data for Autonomous Driving
2026-03-06 • Computer Vision and Pattern Recognition
AI summary
The authors studied how repeated or overlapping data (called redundancy) from multiple cameras and sensors in self-driving car datasets affects object detection performance. They found that removing some redundant labels in overlapping camera views can actually improve how well a detection algorithm (YOLOv8) works without losing important information. They also noticed considerable redundancy between camera images and LiDAR data. Their work shows that understanding and reducing redundancy is important for improving data quality in autonomous vehicle systems.
autonomous vehicles, data quality, redundancy, multimodal data, object detection, YOLOv8, nuScenes dataset, Argoverse 2 dataset, camera overlap, LiDAR
Authors
Yuhan Zhou, Mehri Sattari, Haihua Chen, Kewei Sha
Abstract
Next-generation autonomous vehicles (AVs) rely on large volumes of multisource and multimodal ($M^2$) data to support real-time decision-making. In practice, data quality (DQ) varies across sources and modalities due to environmental conditions and sensor limitations, yet AV research has largely prioritized algorithm design over DQ analysis. This work focuses on redundancy as a fundamental but underexplored DQ issue in AV datasets. Using the nuScenes and Argoverse 2 (AV2) datasets, we model and measure redundancy in multisource camera data and multimodal image-LiDAR data, and evaluate how removing redundant labels affects the YOLOv8 object detection task. Experimental results show that selectively removing redundant multisource image object labels from cameras with shared fields of view improves detection. In nuScenes, mAP$_{50}$ improves from $0.66$ to $0.70$, from $0.64$ to $0.67$, and from $0.53$ to $0.55$ on three representative overlap regions, while detection on the other overlapping camera pairs remains at the baseline even under stronger pruning. In AV2, $4.1\%$ to $8.6\%$ of labels are removed, and mAP$_{50}$ stays near the $0.64$ baseline. Multimodal analysis also reveals substantial redundancy between image and LiDAR data. These findings demonstrate that redundancy is a measurable and actionable DQ factor with direct implications for AV performance, highlighting its role in AV perception and motivating a data-centric perspective for evaluating and improving AV datasets. Code, data, and implementation details are publicly available at: https://github.com/yhZHOU515/RedundancyAD
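To make the idea of pruning redundant labels in shared fields of view concrete, the following is a minimal, hypothetical sketch of IoU-based duplicate-label removal between two overlapping cameras. It assumes boxes have already been projected into a common coordinate frame; the function names and the IoU threshold are illustrative and are not taken from the paper's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def prune_redundant(labels_cam1, labels_cam2, iou_thresh=0.7):
    """Keep all camera-1 labels; drop camera-2 labels that overlap
    any camera-1 label above the threshold (i.e., likely duplicates
    of the same physical object seen in the shared field of view)."""
    kept = []
    for b2 in labels_cam2:
        if all(iou(b1, b2) < iou_thresh for b1 in labels_cam1):
            kept.append(b2)
    return labels_cam1 + kept
```

For example, `prune_redundant([(0, 0, 10, 10)], [(0, 0, 10, 10), (20, 20, 30, 30)])` drops the exact duplicate from camera 2 and keeps the non-overlapping box. Any real pipeline would additionally need cross-camera box projection and class matching before applying such a rule.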