OmniStream: Mastering Perception, Reconstruction and Action in Continuous Streams

2026-03-12

Computer Vision and Pattern Recognition
AI summary

The authors present OmniStream, a visual model designed to understand and act on video streams in real time by combining spatial, temporal, and semantic information in one system. It uses causal spatiotemporal attention and 3D positional encoding to process videos efficiently frame by frame. OmniStream was trained on many tasks and datasets to learn image perception, motion, 3D understanding, and language alignment all at once. The reported experiments show that it performs well across a variety of vision challenges without task-specific tuning. This work suggests a way to build a single, general visual model that handles multiple complex tasks instead of relying on many specialized ones.

visual backbone, spatiotemporal attention, 3D rotary positional embeddings, video streaming, multi-task learning, geometry reconstruction, vision-language alignment, KV-cache, embodied agents, general-purpose visual understanding
Authors
Yibin Yan, Jilan Xu, Shangzhe Di, Haoning Wu, Weidi Xie
Abstract
Modern visual agents require representations that are general, causal, and physically structured to operate in real-time streaming environments. However, current vision foundation models remain fragmented, specializing narrowly in image semantic perception, offline temporal modeling, or spatial geometry. This paper introduces OmniStream, a unified streaming visual backbone that effectively perceives, reconstructs, and acts from diverse visual inputs. By incorporating causal spatiotemporal attention and 3D rotary positional embeddings (3D-RoPE), our model supports efficient, frame-by-frame online processing of video streams via a persistent KV-cache. We pre-train OmniStream using a synergistic multi-task framework coupling static and temporal representation learning, streaming geometric reconstruction, and vision-language alignment on 29 datasets. Extensive evaluations show that, even with a strictly frozen backbone, OmniStream achieves consistently competitive performance with specialized experts across image and video probing, streaming geometric reconstruction, complex video and spatial reasoning, as well as robotic manipulation (unseen at training). Rather than pursuing benchmark-specific dominance, our work demonstrates the viability of training a single, versatile vision backbone that generalizes across semantic, spatial, and temporal reasoning, i.e., a more meaningful step toward general-purpose visual understanding for interactive and embodied agents.
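The abstract's core mechanism, frame-by-frame causal attention over a persistent KV-cache with 3D rotary positional embeddings, can be illustrated with a minimal sketch. The paper does not release this code; the function and class names below (`rope_3d`, `StreamingCausalAttention`) are hypothetical, and the single-head NumPy implementation is a simplified stand-in for the model's actual attention blocks.

```python
import numpy as np

def rope_3d(x, coords, base=10000.0):
    """Hypothetical 3D-RoPE: split the channel dim into three equal parts
    and rotate each part by the token's (t, h, w) coordinate."""
    d = x.shape[-1]
    assert d % 6 == 0, "need dim divisible by 6 (3 axes x rotation pairs)"
    part = d // 3
    out = np.empty_like(x)
    for axis in range(3):
        seg = x[..., axis * part:(axis + 1) * part]
        half = part // 2
        freqs = base ** (-np.arange(half) / half)            # (half,)
        angles = coords[:, axis:axis + 1] * freqs[None, :]   # (n, half)
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = seg[..., :half], seg[..., half:]
        out[..., axis * part:axis * part + half] = x1 * cos - x2 * sin
        out[..., axis * part + half:(axis + 1) * part] = x1 * sin + x2 * cos
    return out

class StreamingCausalAttention:
    """Single-head attention processing one frame at a time: each new
    frame's queries attend over a persistent KV-cache of all past frames,
    so causality across time holds by construction."""
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.k_cache, self.v_cache = [], []

    def step(self, frame, coords):
        # frame: (n_tokens, dim); coords: (n_tokens, 3) giving (t, h, w)
        q = rope_3d(frame @ self.wq, coords)
        k = rope_3d(frame @ self.wk, coords)
        v = frame @ self.wv
        self.k_cache.append(k)          # cache persists across frames
        self.v_cache.append(v)
        K = np.concatenate(self.k_cache)
        V = np.concatenate(self.v_cache)
        scores = q @ K.T / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)   # softmax over all cached tokens
        return w @ V
```

A two-frame usage example: feed a 2x2 grid of tokens at t=0, then the same grid at t=1; the second call attends over eight cached tokens (past plus current) without reprocessing frame 0, which is the efficiency argument for streaming with a KV-cache.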