RealMaster: Lifting Rendered Scenes into Photorealistic Video

2026-03-24 · Computer Vision and Pattern Recognition

AI summary

The authors introduce RealMaster, a method that turns computer-generated videos from 3D engines into more realistic-looking videos without changing the original shapes or movements. Their approach builds paired training data from less-realistic and more-realistic versions of the same video, teaching a model to improve visual quality while keeping everything aligned with the 3D source. The technique works well on complex scenes from games like GTA-V, making videos look better without losing important details or consistency. RealMaster also handles new objects appearing mid-video and does not require special reference frames at inference time.

video diffusion models · 3D consistency · 3D engines · photorealism · geometry conditioning · IC-LoRA · anchor-based propagation · video generation · GTA-V sequences · sim-to-real gap
Authors
Dana Cohen-Bar, Ido Sobol, Raphael Bensadoun, Shelly Sheynin, Oran Gafni, Or Patashnik, Daniel Cohen-Or, Amit Zohar
Abstract
State-of-the-art video generation models produce remarkable photorealism, but they lack the precise control required to align generated content with specific scene requirements. Furthermore, without an underlying explicit geometry, these models cannot guarantee 3D consistency. Conversely, 3D engines offer granular control over every scene element and provide native 3D consistency by design, yet their output often remains trapped in the "uncanny valley". Bridging this sim-to-real gap requires both structural precision, where the output must exactly preserve the geometry and dynamics of the input, and global semantic transformation, where materials, lighting, and textures must be holistically transformed to achieve photorealism. We present RealMaster, a method that leverages video diffusion models to lift rendered video into photorealistic video while maintaining full alignment with the output of the 3D engine. To train this model, we generate a paired dataset via an anchor-based propagation strategy, where the first and last frames are enhanced for realism and propagated across the intermediate frames using geometric conditioning cues. We then train an IC-LoRA on these paired videos to distill the high-quality outputs of the pipeline into a model that generalizes beyond the pipeline's constraints, handling objects and characters that appear mid-sequence and enabling inference without requiring anchor frames. Evaluated on complex GTA-V sequences, RealMaster significantly outperforms existing video editing baselines, improving photorealism while preserving the geometry, dynamics, and identity specified by the original 3D control.
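The anchor-based propagation strategy described in the abstract (enhance the first and last frames, then propagate realism across intermediate frames under geometric conditioning, yielding paired training clips) can be sketched as follows. This is a toy illustration, not the paper's implementation: the enhancer is a trivial brightness stand-in for an image-space realism model, and the "geometric conditioning" is reduced to a scalar interpolation weight.

```python
import numpy as np

def enhance_anchor(frame):
    # Stand-in for a per-frame realism enhancer (e.g., an image diffusion
    # model in the real pipeline). Here: a trivial brightness boost so the
    # sketch is runnable.
    return np.clip(frame * 1.2, 0.0, 1.0)

def propagate(anchor_first, anchor_last, rendered):
    # Propagate appearance from the two enhanced anchors across the
    # intermediate frames. The real pipeline uses geometric conditioning
    # cues (e.g., depth/normals); this toy version just interpolates by
    # temporal position and keeps structure from the rendered frame.
    T = len(rendered)
    out = []
    for t in range(T):
        w = t / (T - 1)                       # position between anchors
        blended = (1 - w) * anchor_first + w * anchor_last
        out.append(0.5 * rendered[t] + 0.5 * blended)
    return out

def make_paired_clip(rendered):
    # Build (rendered, enhanced) frame pairs for one clip -- the paired
    # data on which a model like IC-LoRA could then be trained.
    first = enhance_anchor(rendered[0])
    last = enhance_anchor(rendered[-1])
    enhanced = propagate(first, last, rendered)
    return list(zip(rendered, enhanced))
```

Distilling such pairs into a model (rather than running the pipeline at test time) is what lets the trained model generalize beyond the pipeline's constraints, e.g., to objects that appear mid-sequence.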