Bridging Semantic and Kinematic Conditions with Diffusion-based Discrete Motion Tokenizer

2026-03-19 · Computer Vision and Pattern Recognition
AI summary

The authors propose a three-stage method for generating animated motions that combines two common approaches: one that controls motion precisely and another that follows motion commands well. They introduce MoTok, a tokenizer that turns motions into compact discrete tokens and uses a diffusion decoder to rebuild detailed movements from them. This keeps the meaning of motions clear while still allowing precise kinematic control. Their tests show improvements in how accurately and faithfully generated motions follow given instructions compared to previous methods.

diffusion models, discrete tokens, motion synthesis, kinematic control, semantic conditioning, motion tokenizer, HumanML3D, trajectory error, FID score, motion planning
Authors
Chenyang Gu, Mingyuan Zhang, Haozhe Xie, Zhongang Cai, Lei Yang, Ziwei Liu
Abstract
Prior motion generation largely follows two paradigms: continuous diffusion models that excel at kinematic control, and discrete token-based generators that are effective for semantic conditioning. To combine their strengths, we propose a three-stage framework comprising condition feature extraction (Perception), discrete token generation (Planning), and diffusion-based motion synthesis (Control). Central to this framework is MoTok, a diffusion-based discrete motion tokenizer that decouples semantic abstraction from fine-grained reconstruction by delegating motion recovery to a diffusion decoder, enabling compact single-layer tokens while preserving motion fidelity. For kinematic conditions, coarse constraints guide token generation during planning, while fine-grained constraints are enforced during control through diffusion-based optimization. This design prevents kinematic details from disrupting semantic token planning. On HumanML3D, our method significantly improves controllability and fidelity over MaskControl while using only one-sixth of the tokens, reducing trajectory error from 0.72 cm to 0.08 cm and FID from 0.083 to 0.029. Unlike prior methods that degrade under stronger kinematic constraints, ours improves fidelity, reducing FID from 0.033 to 0.014.
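The Perception / Planning / Control split described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the nearest-codeword quantizer standing in for MoTok's single-layer tokens, and the constraint-override step standing in for diffusion-based optimization are all illustrative assumptions.

```python
# Hypothetical sketch of the three-stage pipeline from the abstract.
# All names and shapes are illustrative, not the paper's actual code.
import math


def quantize(frame, codebook):
    """Assign a motion frame to its nearest codeword (single-layer discrete token)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(frame, codebook[i]))


def perceive(text, trajectory):
    """Stage 1 (Perception): extract condition features.
    Here, coarse kinematic constraints are just a subsampled trajectory."""
    return {"semantic": text, "coarse_kinematic": trajectory[::4]}


def plan(motion, codebook):
    """Stage 2 (Planning): map motion frames to discrete tokens.
    In the paper, a generator predicts tokens under coarse constraints;
    here we simply quantize ground-truth frames to show the token space."""
    return [quantize(f, codebook) for f in motion]


def control(tokens, codebook, fine_constraints):
    """Stage 3 (Control): a diffusion decoder would reconstruct fine-grained
    motion from tokens; here we look up codewords and overwrite constrained
    frames to mimic fine-grained constraint enforcement at decode time."""
    motion = [list(codebook[t]) for t in tokens]
    for i, target in fine_constraints.items():
        motion[i] = list(target)
    return motion


codebook = [(0.0, 0.0), (1.0, 1.0)]
tokens = plan([(0.1, 0.0), (0.9, 1.0)], codebook)       # -> [0, 1]
motion = control(tokens, codebook, {1: (0.5, 0.5)})     # frame 1 pinned to constraint
```

The point of the sketch is the division of labor the abstract emphasizes: coarse constraints influence token planning, while fine-grained constraints are applied only at the control (decoding) stage, so kinematic detail never disturbs the semantic token sequence.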