MIBURI: Towards Expressive Interactive Gesture Synthesis

2026-03-03

Computer Vision and Pattern Recognition, Graphics, Human-Computer Interaction
AI summary

The authors developed MIBURI, a system that creates natural, expressive body and face movements in virtual agents while they speak in real time. Unlike past methods that produced stiff or delayed gestures, their approach generates lively motions as the speech happens. They encode motion as hierarchical, body-part-level tokens and generate them step by step, so the gestures match what is said and look more human-like. Tests show MIBURI's gestures are more natural and fit the conversation better than those of earlier models.

Embodied Conversational Agents, Large Language Models, Co-speech gesture synthesis, Autoregressive generation, Real-time motion generation, Gesture codecs, Temporal dynamics, Hierarchical motion modeling
Authors
M. Hamza Mughal, Rishabh Dabral, Vera Demberg, Christian Theobalt
Abstract
Embodied Conversational Agents (ECAs) aim to emulate human face-to-face interaction through speech, gestures, and facial expressions. Current large language model (LLM)-based conversational agents lack embodiment and the expressive gestures essential for natural interaction. Existing solutions for ECAs often produce rigid, low-diversity motions that are unsuitable for human-like interaction. Alternatively, generative methods for co-speech gesture synthesis yield natural body gestures but depend on future speech context and require long run-times. To bridge this gap, we present MIBURI, the first online, causal framework for generating expressive full-body gestures and facial expressions synchronized with real-time spoken dialogue. We employ body-part-aware gesture codecs that encode hierarchical motion details into multi-level discrete tokens. These tokens are then autoregressively generated by a two-dimensional causal framework conditioned on LLM-based speech-text embeddings, modeling both temporal dynamics and part-level motion hierarchy in real time. Further, we introduce auxiliary objectives that encourage expressive and diverse gestures while preventing convergence to static poses. Comparative evaluations demonstrate that our causal, real-time approach produces more natural and contextually aligned gestures than recent baselines. We urge the reader to explore the demo videos at https://vcai.mpi-inf.mpg.de/projects/MIBURI/.
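To make the two core ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of (1) a body-part-aware codec that quantizes each part's motion into discrete tokens, and (2) causal autoregressive generation over a (time step, body part) grid conditioned on a speech-text embedding. All names, dimensions, and the part split are illustrative assumptions rather than the authors' implementation, and the paper's multi-level token hierarchy is collapsed to a single codebook level for brevity.

```python
# Hypothetical sketch, not the MIBURI code: a per-part VQ codec plus a
# causal autoregressive prior over the (time, body part) token grid.
import torch
import torch.nn as nn

PARTS = ["face", "upper_body", "hands", "lower_body"]  # assumed part split
CODEBOOK_SIZE, TOKEN_DIM, SPEECH_DIM = 512, 256, 768   # assumed sizes


class PartCodec(nn.Module):
    """Toy VQ codec for one body part: encode pose features, snap each
    frame to its nearest codebook entry, and decode tokens back to poses."""

    def __init__(self, pose_dim: int):
        super().__init__()
        self.encoder = nn.Linear(pose_dim, TOKEN_DIM)
        self.codebook = nn.Embedding(CODEBOOK_SIZE, TOKEN_DIM)
        self.decoder = nn.Linear(TOKEN_DIM, pose_dim)

    def tokenize(self, pose: torch.Tensor) -> torch.Tensor:
        z = self.encoder(pose)                        # (T, TOKEN_DIM)
        dists = torch.cdist(z, self.codebook.weight)  # (T, CODEBOOK_SIZE)
        return dists.argmin(dim=-1)                   # discrete token ids

    def detokenize(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.codebook(tokens))


class CausalGestureModel(nn.Module):
    """Autoregressive prior over the token grid. A causally masked
    transformer sees only past frames plus already-emitted parts of the
    current frame, so generation can run online with streaming speech."""

    def __init__(self):
        super().__init__()
        # One shared embedding table; part identity is folded into the id.
        self.token_emb = nn.Embedding(CODEBOOK_SIZE * len(PARTS), TOKEN_DIM)
        self.speech_proj = nn.Linear(SPEECH_DIM, TOKEN_DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=TOKEN_DIM, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(TOKEN_DIM, CODEBOOK_SIZE)

    @torch.no_grad()
    def generate_frame(self, history: list[int],
                       speech_emb: torch.Tensor) -> list[int]:
        """Emit one token per body part for the next frame, causally.
        `history` holds part-offset token ids of all previous frames;
        `speech_emb` is the current LLM speech-text embedding, (SPEECH_DIM,)."""
        frame: list[int] = []
        for part_idx, _ in enumerate(PARTS):
            seq = history + frame
            x = (self.token_emb(torch.tensor([seq])) if seq
                 else torch.zeros(1, 1, TOKEN_DIM))   # BOS stand-in
            x = x + self.speech_proj(speech_emb)      # condition on speech
            mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
            h = self.backbone(x, mask=mask)
            logits = self.head(h[:, -1])              # next-token logits
            token = torch.multinomial(logits.softmax(-1), 1).item()
            frame.append(token + part_idx * CODEBOOK_SIZE)  # part-offset id
        return frame
```

Under this reading, the system would emit one row of the grid per frame as speech-text embeddings stream in, with each part's tokens decoded back to poses by its codec, which is what makes the approach causal and real-time rather than dependent on future speech context.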