Perceive What Matters: Relevance-Driven Scheduling for Multimodal Streaming Perception

2026-03-13

Computer Vision and Pattern Recognition
AI summary

The authors study how robots collaborate with humans by running several perception modules to understand what's happening around them. They observe that executing every module on every frame accumulates latency in real-time settings. So they built a lightweight scheduler that uses the previous frame's outputs to decide which modules are actually needed, making perception faster without losing much accuracy. Their tests show the system speeds up perception while remaining reliable, helping robots assist humans better.

human-robot collaboration, perception modules, scene understanding, latency, real-time processing, perception scheduling, computational resources, multimodal perception, keyframe accuracy
Authors
Dingcheng Huang, Xiaotong Zhang, Kamal Youcef-Toumi
Abstract
In modern human-robot collaboration (HRC) applications, multiple perception modules jointly extract visual, auditory, and contextual cues to achieve comprehensive scene understanding, enabling the robot to provide appropriate assistance to human agents intelligently. While executing multiple perception modules on a frame-by-frame basis enhances perception quality in offline settings, it inevitably accumulates latency, leading to a substantial decline in system performance in streaming perception scenarios. Recent work in scene understanding, termed Relevance, has established a solid foundation for developing efficient methodologies in HRC. However, modern perception pipelines still face challenges related to information redundancy and suboptimal allocation of computational resources. Drawing inspiration from the Relevance concept and the information sparsity in HRC events, we propose a novel lightweight perception scheduling framework that efficiently leverages output from previous frames to estimate and schedule necessary perception modules in real-time based on scene context. The experimental results demonstrate that the proposed perception scheduling framework effectively reduces computational latency by up to 27.52% compared to conventional parallel perception pipelines, while also achieving a 72.73% improvement in MMPose activation recall. Additionally, the framework demonstrates high keyframe accuracy, achieving rates of up to 98%. The results validate the framework's capability to enhance real-time perception efficiency without significantly compromising accuracy. The framework shows potential as a scalable and systematic solution for multimodal streaming perception systems in HRC.
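The core idea in the abstract, deciding per frame which perception modules to run based on the previous frame's outputs, with periodic keyframes that run everything, can be sketched in Python. This is a minimal illustrative sketch, not the authors' implementation: the class names (`RelevanceScheduler`), the module names (`detector`, `pose`), and the gating heuristic are all assumptions for illustration.

```python
# Hypothetical sketch of relevance-driven perception scheduling.
# Assumptions (not from the paper): module names, the keyframe policy,
# and the toy heuristic "run pose estimation only if a person was seen".
from typing import Any, Callable, Dict, Set


class RelevanceScheduler:
    """Selects which perception modules to run on each frame, using the
    previous frame's outputs as scene context."""

    def __init__(self, modules: Dict[str, Callable[[Any], Any]],
                 keyframe_interval: int = 30):
        self.modules = modules                  # name -> module callable
        self.keyframe_interval = keyframe_interval
        self.frame_idx = 0
        self.prev_outputs: Dict[str, Any] = {}

    def relevant_modules(self) -> Set[str]:
        # Keyframes: run all modules to refresh the full scene context.
        if self.frame_idx % self.keyframe_interval == 0:
            return set(self.modules)
        # Intermediate frames: gate modules on previous-frame cues.
        selected = {"detector"}                 # always-on lightweight module
        if self.prev_outputs.get("detector", {}).get("person", False):
            selected.add("pose")                # only run pose if a person was seen
        return selected

    def step(self, frame: Any) -> Dict[str, Any]:
        outputs = {name: self.modules[name](frame)
                   for name in self.relevant_modules()}
        self.prev_outputs = outputs
        self.frame_idx += 1
        return outputs


# Toy modules standing in for real perception backends.
modules = {
    "detector": lambda f: {"person": "person" in f},
    "pose": lambda f: {"keypoints": []},
}
```

In this sketch the always-on detector plays the role of the cheap relevance estimator, while the expensive module (pose estimation) is activated only when the scene context suggests it is needed; the real framework's estimator and module set are more elaborate.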