EgoForge: Goal-Directed Egocentric World Simulator

2026-03-20

Computer Vision and Pattern Recognition · Multimedia
AI summary

The authors developed EgoForge, a system that generates realistic first-person videos of goal-directed tasks from just one starting image, a simple instruction, and, optionally, an additional outside (third-person) view. They improved video quality with VideoDiffusionNFT, which refines the generated trajectories so they better follow the intended goal, keep the scene stable, and maintain smooth, coherent motion. Experiments show that EgoForge produces more accurate and consistent videos than competing methods, and that it works well even on real smart-glasses recordings.

Keywords

egocentric video, generative world models, video diffusion, goal-directed simulation, temporal consistency, latent human intent, first-person view, trajectory-level refinement, semantic alignment, motion fidelity
Authors
Yifan Shen, Jiateng Liu, Xinzhuo Li, Yuanzhe Liu, Bingxuan Li, Houze Yang, Wenqi Jia, Yijiang Li, Tianjiao Yu, James Matthew Rehg, Xu Cao, Ismini Lourentzou
Abstract
Generative world models have shown promise for simulating dynamic environments, yet egocentric video remains challenging due to rapid viewpoint changes, frequent hand-object interactions, and goal-directed procedures whose evolution depends on latent human intent. Existing approaches either focus on hand-centric instructional synthesis with limited scene evolution, perform static view translation without modeling action dynamics, or rely on dense supervision such as camera trajectories, long video prefixes, or synchronized multi-camera capture. In this work, we introduce EgoForge, an egocentric goal-directed world simulator that generates coherent, first-person video rollouts from minimal static inputs: a single egocentric image, a high-level instruction, and an optional auxiliary exocentric view. To improve intent alignment and temporal consistency, we propose VideoDiffusionNFT, a trajectory-level reward-guided refinement that optimizes goal completion, temporal causality, scene consistency, and perceptual fidelity during diffusion sampling. Extensive experiments show that EgoForge achieves consistent gains in semantic alignment, geometric stability, and motion fidelity over strong baselines, and performs robustly in real-world smart-glasses experiments.
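To make the trajectory-level refinement idea concrete, the sketch below shows one simple instance of reward-guided refinement over diffusion rollouts: sample several candidate videos, score each with a composite reward covering goal completion, temporal causality, and scene consistency, and keep the best-scoring trajectory. All function names, reward definitions, and weights here are illustrative assumptions, not the paper's actual VideoDiffusionNFT implementation, which refines the sampling process itself rather than merely selecting among finished rollouts; the perceptual-fidelity term is omitted since it would need a learned perceptual model.

```python
# Minimal, hypothetical sketch of trajectory-level reward-guided refinement
# via best-of-N selection over candidate diffusion rollouts. Every name and
# weight below is an illustrative placeholder, not the paper's method.
import numpy as np


def sample_rollout(rng, num_frames=16, height=8, width=8):
    """Stand-in for one diffusion rollout: returns a (T, H, W, 3) video."""
    return rng.random((num_frames, height, width, 3))


def goal_completion(video, goal_embedding):
    """Placeholder: cosine similarity of the final frame to a goal embedding."""
    final = video[-1].ravel()
    final = final / (np.linalg.norm(final) + 1e-8)
    return float(final @ goal_embedding)


def temporal_causality(video):
    """Placeholder: penalize abrupt frame-to-frame jumps (smoother = higher)."""
    diffs = np.sqrt((np.diff(video, axis=0) ** 2).sum(axis=(1, 2, 3)))
    return float(-diffs.mean())


def scene_consistency(video):
    """Placeholder: penalize drift in global scene statistics across frames."""
    means = video.mean(axis=(1, 2, 3))
    return float(-means.std())


def composite_reward(video, goal_embedding, weights=(1.0, 0.5, 0.5)):
    """Weighted sum of the reward terms; a perceptual term is omitted here."""
    w_goal, w_causal, w_scene = weights
    return (w_goal * goal_completion(video, goal_embedding)
            + w_causal * temporal_causality(video)
            + w_scene * scene_consistency(video))


def refine(goal_embedding, num_candidates=8, seed=0):
    """Sample several candidate rollouts and keep the highest-reward one."""
    rng = np.random.default_rng(seed)
    candidates = [sample_rollout(rng) for _ in range(num_candidates)]
    scores = [composite_reward(v, goal_embedding) for v in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    goal = rng.random(8 * 8 * 3)          # toy goal embedding, same dim as a frame
    goal /= np.linalg.norm(goal)
    best_video, best_score = refine(goal)
    print(f"best rollout shape={best_video.shape}, reward={best_score:.4f}")
```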