VOID: Video Object and Interaction Deletion

2026-04-02

Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors created VOID, a new way to remove objects from videos that handles tricky cases where the removed object physically interacts with others, such as collisions. They built a paired training dataset showing how a scene would unfold if an object were removed and its interactions changed accordingly. At inference, a vision-language model identifies the parts of the scene affected by the removed object, and a video diffusion model then fills in those regions in a way that respects physical events. Tests show that VOID preserves realistic scene dynamics better than previous methods.

Keywords: video object removal, inpainting, physical interactions, video diffusion model, vision-language model, counterfactual data, Kubric, HUMOTO, scene dynamics, causal reasoning
Authors
Saman Motamed, William Harvey, Benjamin Klein, Luc Van Gool, Zhuoning Yuan, Ta-Ying Cheng
Abstract
Existing video object removal methods excel at inpainting content "behind" the object and correcting appearance-level artifacts such as shadows and reflections. However, when the removed object has more significant interactions, such as collisions with other objects, current models fail to correct them and produce implausible results. We present VOID, a video object removal framework designed to perform physically-plausible inpainting in these complex scenarios. To train the model, we generate a new paired dataset of counterfactual object removals using Kubric and HUMOTO, where removing an object requires altering downstream physical interactions. During inference, a vision-language model identifies regions of the scene affected by the removed object. These regions are then used to guide a video diffusion model that generates physically consistent counterfactual outcomes. Experiments on both synthetic and real data show that our approach better preserves consistent scene dynamics after object removal compared to prior video object removal methods. We hope this framework sheds light on how to make video editing models better simulators of the world through high-level causal reasoning.
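The collision case motivating the abstract can be made concrete with a toy 1D simulation. This is an illustrative sketch, not the authors' code: the `simulate` function and its stop-on-contact rule are assumptions made for the example. It shows why the region affected by a removed object extends beyond the object's own pixels, which is what the vision-language model is tasked with identifying.

```python
# Toy counterfactual: ball A rolls toward a static block B.
# Factual video: A stops when it hits B. Counterfactual (B removed): A keeps
# moving. Naive inpainting erases B's pixels but leaves A's "stopped"
# trajectory intact, producing a physically implausible result.

def simulate(ball_x, ball_v, block_x, block_present, steps, dt=1.0):
    """Return the ball's trajectory under a simple stop-on-contact rule."""
    xs = []
    for _ in range(steps):
        ball_x += ball_v * dt
        if block_present and ball_x >= block_x:
            ball_x = block_x   # ball halts at the block
            ball_v = 0.0
        xs.append(ball_x)
    return xs

factual = simulate(0.0, 1.0, block_x=5.0, block_present=True, steps=10)
counterfactual = simulate(0.0, 1.0, block_x=5.0, block_present=False, steps=10)

# Trajectories agree before contact and diverge after it, so a removal model
# must edit the ball's motion, not just the block's pixels.
```

Here the frames after contact are exactly the ones the paper's framework would need to regenerate: the removed object's mask alone does not cover them.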