E-3DPSM: A State Machine for Event-Based Egocentric 3D Human Pose Estimation
2026-04-09 • Computer Vision and Pattern Recognition
AI Summary
The authors focus on improving 3D human pose estimation using event cameras worn on the head, which capture fast and detailed motion data. They create a new method called E-3DPSM that continuously tracks pose changes by matching the stream of events with smooth human motion, reducing errors like jitter and self-occlusion problems. Their approach combines predicted poses with event-driven updates to produce more stable and accurate 3D poses in real time. Experiments show that their method is more precise and consistent than previous techniques.
Keywords: event camera, 3D human pose estimation, egocentric vision, monocular vision, temporal resolution, self-occlusion, motion blur, pose state machine, MPJPE, temporal stability
Authors
Mayur Deshmukh, Hiroyasu Akada, Helge Rhodin, Christian Theobalt, Vladislav Golyanik
Abstract
Event cameras offer multiple advantages for monocular egocentric 3D human pose estimation from head-mounted devices, such as millisecond temporal resolution, high dynamic range, and negligible motion blur. Existing methods leverage these properties, but suffer from 3D estimation accuracy that is insufficient for many applications (e.g., immersive VR/AR). This is because their designs are not fully tailored to event streams (e.g., their asynchronous and continuous nature), leading to high sensitivity to self-occlusions and temporal jitter in the estimates. This paper rethinks the setting and introduces E-3DPSM, an event-driven continuous pose state machine for event-based egocentric 3D human pose estimation. E-3DPSM aligns continuous human motion with fine-grained event dynamics: it evolves latent states and predicts continuous changes in 3D joint positions associated with observed events, which are fused with direct 3D human pose predictions, yielding stable and drift-free final 3D pose reconstructions. E-3DPSM runs in real time at 80 Hz on a single workstation and sets a new state of the art on two benchmarks, improving accuracy by up to 19% (MPJPE) and temporal stability by up to 2.7x. See our project page for the source code and trained models.
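The abstract's core mechanism (evolving a latent state from incoming events, predicting continuous pose increments, and fusing them with direct per-frame pose predictions) can be sketched in a few lines. This is a minimal illustrative toy, not the paper's architecture: the class name `PoseStateMachine`, the linear latent update, and the fusion weight `alpha` are all assumptions standing in for the learned components described in the paper.

```python
import numpy as np

class PoseStateMachine:
    """Toy sketch of an event-driven pose state machine (hypothetical API).

    A learned model would replace the crude linear maps below; here they
    only illustrate the data flow: events -> latent state -> pose delta,
    fused with a direct 3D pose prediction.
    """

    def __init__(self, num_joints=23, latent_dim=32, alpha=0.8):
        self.alpha = alpha                      # fusion weight (assumed)
        self.latent = np.zeros(latent_dim)      # evolving latent state
        self.pose = np.zeros((num_joints, 3))   # current 3D joint positions

    def evolve(self, event_batch):
        """Update the latent state from a batch of events (x, y, t, polarity)
        and predict a continuous change in 3D joint positions."""
        feat = event_batch.mean(axis=0)         # crude per-batch event feature
        self.latent = 0.9 * self.latent + 0.1 * np.resize(feat, self.latent.shape)
        return np.resize(self.latent, self.pose.shape) * 1e-3  # pose increment

    def step(self, event_batch, direct_pose):
        """Fuse the incrementally evolved pose with a direct prediction."""
        incremental = self.pose + self.evolve(event_batch)
        self.pose = self.alpha * incremental + (1 - self.alpha) * direct_pose
        return self.pose
```

In this sketch, the state machine carries the pose forward between direct predictions, so high-rate event batches refine the estimate continuously while the fused direct prediction anchors it against drift.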