MuSteerNet: Human Reaction Generation from Videos via Observation-Reaction Mutual Steering
2026-03-20 • Computer Vision and Pattern Recognition
AI summary
The authors address the challenge of generating 3D human reactions that match video content, which previous methods struggled with because they did not properly connect the video inputs to the type of reaction being performed. They propose MuSteerNet, a method that strengthens this connection through a feedback mechanism based on learned prototypes, which adjusts the video observations and refines the generated reaction motions. This approach produces more accurate, better-fitting 3D human reactions for the observed videos, and experiments show that it outperforms existing techniques.
3D human motion synthesis, video-driven generation, reaction types, relational distortion, prototype feedback, gated delta-rectification, relational margin constraint, reaction refinement, interactive AI, MuSteerNet
Authors
Yuan Zhou, Yongzhi Li, Yanqi Dai, Xingyu Zhu, Yi Tan, Qingshan Xu, Beier Zhu, Richang Hong, Hanwang Zhang
Abstract
Video-driven human reaction generation aims to synthesize 3D human motions that directly react to observed video sequences, which is crucial for building human-like interactive AI systems. However, existing methods often fail to effectively leverage video inputs to steer human reaction synthesis, resulting in reaction motions that are mismatched with the content of video sequences. We reveal that this limitation arises from a severe relational distortion between visual observations and reaction types. In light of this, we propose MuSteerNet, a simple yet effective framework that generates 3D human reactions from videos via observation-reaction mutual steering. Specifically, we first propose a Prototype Feedback Steering mechanism to mitigate relational distortion by refining visual observations with a gated delta-rectification modulator and a relational margin constraint, guided by prototypical vectors learned from human reactions. We then introduce Dual-Coupled Reaction Refinement that fully leverages rectified visual cues to further steer the refinement of generated reaction motions, thereby effectively improving reaction quality and enabling MuSteerNet to achieve competitive performance. Extensive experiments and ablation studies validate the effectiveness of our method. Code coming soon: https://github.com/zhouyuan888888/MuSteerNet.
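To make the Prototype Feedback Steering idea concrete, the sketch below shows one plausible reading of a gated delta-rectification step: observation features attend over learned reaction prototypes, the prototype context defines a correction delta, and a sigmoid gate controls how much of that correction is applied. This is a hypothetical illustration only; the function name, the attention-based prototype aggregation, and the gate parameterization are assumptions, not the paper's actual implementation.

```python
import numpy as np

def gated_delta_rectification(obs, prototypes, w_gate, b_gate):
    """Illustrative sketch (not the paper's code): rectify observation
    features toward learned reaction prototypes via a gated delta.

    obs:        (N, D) visual observation features
    prototypes: (K, D) prototypical vectors learned from human reactions
    w_gate:     (D, D) gate projection (hypothetical parameterization)
    b_gate:     (D,)   gate bias
    """
    # Attend over reaction prototypes (softmax over similarity logits).
    logits = obs @ prototypes.T                                  # (N, K)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    proto_ctx = attn @ prototypes                                # (N, D)

    # Delta points from the raw observation toward its prototype context.
    delta = proto_ctx - obs

    # Elementwise sigmoid gate in (0, 1) decides how much delta to apply.
    gate = 1.0 / (1.0 + np.exp(-(obs @ w_gate + b_gate)))

    return obs + gate * delta                                    # rectified features
```

Because the gate lies in (0, 1), each rectified feature is an elementwise interpolation between the raw observation and its prototype context, so the correction can never overshoot the prototype signal; a relational margin constraint would then be imposed as a training loss on these rectified features rather than inside this forward pass.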