Perceptive Humanoid Parkour: Chaining Dynamic Human Skills via Motion Matching
2026-02-17 • Robotics
Robotics · Artificial Intelligence · Machine Learning
AI summary
The authors created a system called Perceptive Humanoid Parkour (PHP) that helps robots perform complex parkour movements on difficult obstacle courses using only onboard cameras and simple speed commands. They combine many small human-like skills into longer sequences using motion matching and then train the robot with reinforcement learning to follow these sequences smoothly. The robot can decide how to move around or over obstacles by itself, adapting to changes it sees in real time. Their experiments show that the robot can climb tall obstacles and navigate multiple obstacles dynamically and accurately.
humanoid robot, parkour, motion matching, reinforcement learning, skill composition, depth sensing, perception-driven control, DAgger, trajectory tracking, obstacle traversal
Authors
Zhen Wu, Xiaoyu Huang, Lujie Yang, Yuanhang Zhang, Koushil Sreenath, Xi Chen, Pieter Abbeel, Rocky Duan, Angjoo Kanazawa, Carmelo Sferrazza, Guanya Shi, C. Karen Liu
Abstract
While recent advances in humanoid locomotion have achieved stable walking on varied terrains, capturing the agility and adaptivity of highly dynamic human motions remains an open challenge. In particular, agile parkour in complex environments demands not only low-level robustness, but also human-like motion expressiveness, long-horizon skill composition, and perception-driven decision-making. In this paper, we present Perceptive Humanoid Parkour (PHP), a modular framework that enables humanoid robots to autonomously perform long-horizon, vision-based parkour across challenging obstacle courses. Our approach first leverages motion matching, formulated as nearest-neighbor search in a feature space, to compose retargeted atomic human skills into long-horizon kinematic trajectories. This framework enables the flexible composition and smooth transition of complex skill chains while preserving the elegance and fluidity of dynamic human motions. Next, we train motion-tracking reinforcement learning (RL) expert policies for these composed motions, and distill them into a single depth-based, multi-skill student policy using a combination of DAgger and RL. Crucially, the combination of perception and skill composition enables autonomous, context-aware decision-making: using only onboard depth sensing and a discrete 2D velocity command, the robot decides whether to step over, climb onto, vault over, or roll off obstacles of varying geometries and heights, and executes the chosen skill. We validate our framework with extensive real-world experiments on a Unitree G1 humanoid robot, demonstrating highly dynamic parkour skills such as climbing tall obstacles up to 1.25m (96% of the robot's height), as well as long-horizon multi-obstacle traversal with closed-loop adaptation to real-time obstacle perturbations.
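The abstract formulates motion matching as nearest-neighbor search in a feature space. As a minimal illustration of that idea (not the authors' implementation), the sketch below picks the candidate skill clip whose feature vector is closest to a query built from the current state; the feature layout here (root velocity, obstacle height, motion phase) and the clip names are hypothetical.

```python
import numpy as np

def motion_match(query_feature, clip_features, clip_ids):
    """Nearest-neighbor search: return the clip whose feature
    vector has the smallest Euclidean distance to the query."""
    dists = np.linalg.norm(clip_features - query_feature, axis=1)
    best = int(np.argmin(dists))
    return clip_ids[best], dists[best]

# Hypothetical per-clip features: [forward velocity, obstacle height, phase].
clip_features = np.array([
    [1.2, 0.0, 0.0],   # "walk"
    [1.5, 0.4, 0.0],   # "step_over"
    [0.8, 1.2, 0.0],   # "climb"
])
clip_ids = ["walk", "step_over", "climb"]

# Query: moving at 0.9 m/s toward a 1.1 m obstacle -> climbing clip matches.
skill, dist = motion_match(np.array([0.9, 1.1, 0.0]), clip_features, clip_ids)
```

In practice the feature space would also encode future root trajectory and contact information, and the search would run over every frame of a large motion database rather than three toy clips.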
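The abstract also mentions distilling expert policies into a student with DAgger. The core DAgger loop (independent of this paper's specific policies) is: roll out the *student*, label the visited states with the *expert's* actions, aggregate the dataset, and refit. A toy sketch with a hypothetical linear expert and least-squares student:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state):
    """Stand-in for a privileged motion-tracking expert policy."""
    return 2.0 * state  # hypothetical linear expert

class LinearStudent:
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))
    def act(self, state):
        return self.W @ state
    def fit(self, states, actions):
        # Least-squares regression on the aggregated dataset.
        self.W = np.linalg.lstsq(states, actions, rcond=None)[0].T

states, actions = [], []
student = LinearStudent(3)
for _ in range(5):                       # DAgger iterations
    s = rng.normal(size=3)
    for _ in range(20):                  # student-driven rollout
        states.append(s)
        actions.append(expert_action(s)) # expert labels visited states
        s = 0.9 * s + 0.1 * student.act(s) + 0.01 * rng.normal(size=3)
    student.fit(np.array(states), np.array(actions))
```

The key difference from plain behavior cloning is that training states come from the student's own rollouts, so the student learns to recover from its own mistakes; the paper additionally fine-tunes the distilled policy with RL.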