ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model
2026-03-23 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition • Artificial Intelligence • Computation and Language • Machine Learning • Robotics
AI summary
The authors present a model that predicts future video states more accurately by combining two approaches: one that attends to many frames in fine detail (JEPA) and one that captures higher-level meaning over fewer frames (vision-language models). Their method fuses detailed motion information with high-level knowledge from the vision-language model through a dedicated representation-extraction module. This yields more accurate predictions over longer time horizons, especially for hand-manipulation tasks, and the combined model outperforms either approach used alone.
Keywords: latent world models, V-JEPA2, vision-language models, dense prediction, temporal context, hierarchical pyramid representation, trajectory prediction, action-conditioned datasets, long-horizon forecasting, multi-layer representation aggregation
Authors
Haichao Zhang, Yijiang Li, Shwai He, Tushar Nagarajan, Mingfei Chen, Jianglin Lu, Ang Li, Yun Fu
Abstract
Recent progress in latent world models (e.g., V-JEPA2) has shown promising capability in forecasting future world states from video observations. Nevertheless, dense prediction from a short observation window limits temporal context and can bias predictors toward local, low-level extrapolation, making it difficult to capture long-horizon semantics and reducing downstream utility. Vision-language models (VLMs), in contrast, provide strong semantic grounding and general knowledge by reasoning over uniformly sampled frames, but they are not ideal as standalone dense predictors due to compute-driven sparse sampling, a language-output bottleneck that compresses fine-grained interaction states into text-oriented representations, and a data-regime mismatch when adapting to small action-conditioned datasets. We propose a VLM-guided JEPA-style latent world modeling framework that combines dense-frame dynamics modeling with long-horizon semantic guidance via a dual-temporal pathway: a dense JEPA branch for fine-grained motion and interaction cues, and a uniformly sampled VLM "thinker" branch with a larger temporal stride for knowledge-rich guidance. To transfer the VLM's progressive reasoning signals effectively, we introduce a hierarchical pyramid representation extraction module that aggregates multi-layer VLM representations into guidance features compatible with latent prediction. Experiments on hand-manipulation trajectory prediction show that our method outperforms both a strong VLM-only baseline and a JEPA-predictor baseline, and yields more robust long-horizon rollout behavior.
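The dual-temporal pathway and multi-layer aggregation described in the abstract can be sketched roughly as follows. This is a toy NumPy illustration under stated assumptions, not the paper's implementation: the function names (`sample_indices`, `pyramid_aggregate`), the mean-pool-then-weighted-sum aggregation, and all window sizes, strides, and feature shapes are hypothetical placeholders.

```python
import numpy as np

def sample_indices(num_frames, dense_window, vlm_stride):
    """Two temporal views of the same clip (illustrative):
    - dense: the most recent `dense_window` frames, for fine-grained
      motion/interaction cues (the JEPA-style branch);
    - sparse: uniform sampling over the full clip with a larger stride,
      for long-horizon semantic context (the VLM 'thinker' branch)."""
    dense = np.arange(max(0, num_frames - dense_window), num_frames)
    sparse = np.arange(0, num_frames, vlm_stride)
    return dense, sparse

def pyramid_aggregate(layer_feats, weights=None):
    """Stand-in for the hierarchical pyramid module: pool each VLM
    layer's token features to a single vector, then combine layers
    with a (here uniform) weighted sum into one guidance feature."""
    pooled = np.stack([f.mean(axis=0) for f in layer_feats])  # (L, D)
    if weights is None:
        weights = np.ones(len(layer_feats)) / len(layer_feats)
    return (weights[:, None] * pooled).sum(axis=0)  # (D,)

# Toy demo: 64-frame clip, 8-dim features, 4 VLM layers of 10 tokens each.
T, D = 64, 8
dense_idx, sparse_idx = sample_indices(T, dense_window=16, vlm_stride=8)
layer_feats = [np.random.rand(10, D) for _ in range(4)]
guidance = pyramid_aggregate(layer_feats)  # one (D,) guidance vector
```

In this sketch the guidance vector would then condition a latent predictor alongside the dense-branch features; how the paper actually injects the guidance (e.g., cross-attention vs. concatenation) is not specified here.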