Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control
2026-02-20 • Computer Vision and Pattern Recognition
AI summary
The authors created a new way to make virtual reality environments that react closely to how a person moves their head and hands. They improved existing methods to allow detailed hand movements and interactions with things in the virtual world. Their system is trained to understand these motions and then generates first-person video scenes that feel natural and responsive. Tests with people showed that their approach makes users feel more in control and helps them perform tasks better than previous methods.
extended reality, video world models, head pose tracking, hand pose tracking, diffusion models, transformer conditioning, egocentric video, human-computer interaction, generative models, virtual environments
Authors
Linxi Xie, Lisong C. Sun, Ashley Neall, Tong Wu, Shengqu Cai, Gordon Wetzstein
Abstract
Extended reality (XR) demands generative models that respond to users' tracked real-world motion, yet current video world models accept only coarse control signals such as text or keyboard input, limiting their utility for embodied interaction. We introduce a human-centric video world model that is conditioned on both tracked head pose and joint-level hand poses. For this purpose, we evaluate existing diffusion transformer conditioning strategies and propose an effective mechanism for 3D head and hand control, enabling dexterous hand-object interactions. We train a bidirectional video diffusion model teacher using this strategy and distill it into a causal, interactive system that generates egocentric virtual environments. We evaluate this generated reality system with human subjects and demonstrate improved task performance as well as significantly higher perceived control over the performed actions, compared with relevant baselines.
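The abstract describes conditioning a diffusion transformer on tracked head pose and joint-level hand poses. The paper's exact mechanism is not given here, so the following is a minimal hypothetical sketch of one common diffusion-transformer conditioning strategy (adaptive layer norm, or adaLN), where a pose-derived vector produces a per-channel scale and shift applied to the video tokens. All dimensions, layer names, and the choice of adaLN itself are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch (not the paper's confirmed mechanism): condition video
# tokens on tracked head and hand poses via adaptive layer norm (adaLN),
# one standard diffusion-transformer conditioning strategy.

rng = np.random.default_rng(0)

D = 64                 # token embedding dim (assumed)
HEAD_DIM = 6           # head pose: 3D translation + 3D rotation (assumed)
HAND_DIM = 2 * 21 * 3  # two hands, 21 joints each, 3D positions (assumed)

# Toy random matrices standing in for learned linear projections.
W_pose = rng.standard_normal((HEAD_DIM + HAND_DIM, D)) * 0.02
W_mod = rng.standard_normal((D, 2 * D)) * 0.02   # -> per-channel scale & shift

def adaln_condition(tokens, head_pose, hand_pose):
    """Modulate video tokens with a pose-derived scale and shift."""
    cond = np.concatenate([head_pose, hand_pose]) @ W_pose   # (D,)
    scale, shift = np.split(cond @ W_mod, 2)                 # (D,), (D,)
    # Layer-normalize each token, then apply pose-conditioned modulation.
    mu = tokens.mean(-1, keepdims=True)
    sigma = tokens.std(-1, keepdims=True) + 1e-6
    normed = (tokens - mu) / sigma
    return normed * (1.0 + scale) + shift

tokens = rng.standard_normal((16, D))   # 16 video tokens for one frame
head = rng.standard_normal(HEAD_DIM)
hands = rng.standard_normal(HAND_DIM)
out = adaln_condition(tokens, head, hands)
print(out.shape)  # (16, 64)
```

In a real model the projections would be learned, the modulation would be applied per transformer block, and the distilled causal student would consume poses frame by frame for interactive generation.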