MessyKitchens: Contact-rich object-level 3D scene reconstruction
2026-03-17 • Computer Vision and Pattern Recognition
Subjects: Computer Vision and Pattern Recognition; Artificial Intelligence; Robotics
AI summary
The authors focus on improving 3D scene reconstruction from a single image, especially for messy scenes with many objects that touch or overlap. They created a new dataset called MessyKitchens, which has detailed 3D shapes, positions, and realistic contact data for objects in cluttered environments. They also enhanced a previous method by adding a Multi-Object Decoder to better reconstruct multiple objects together. Their experiments show that their dataset and method improve accuracy and reduce object overlaps compared to previous work.
Keywords: monocular 3D reconstruction, depth estimation, object pose, object shape, occlusion, scene reconstruction, 3D dataset, object contacts, Multi-Object Decoder, object-level registration
Authors
Junaid Ahmed Ansari, Ran Ding, Fabio Pizzati, Ivan Laptev
Abstract
Monocular 3D scene reconstruction has recently seen significant progress. Powered by modern neural architectures and large-scale data, recent methods achieve high performance in depth estimation from a single image. Meanwhile, reconstructing and decomposing common scenes into individual 3D objects remains challenging due to the large variety of objects, frequent occlusions and complex object relations. Notably, beyond shape and pose estimation of individual objects, applications in robotics and animation require physically plausible scene reconstruction where objects obey physical principles of non-penetration and realistic contacts. In this work we advance object-level scene reconstruction along two directions. First, we introduce MessyKitchens, a new dataset of real-world scenes featuring cluttered environments and providing high-fidelity object-level ground truth in terms of 3D object shapes, poses and accurate object contacts. Second, we build on the recent SAM 3D approach for single-object reconstruction and extend it with a Multi-Object Decoder (MOD) for joint object-level scene reconstruction. To validate our contributions, we show that MessyKitchens significantly improves over previous datasets in registration accuracy and inter-object penetration. We also evaluate our multi-object reconstruction approach on three datasets and demonstrate consistent and significant improvements of MOD over the state of the art. Our new benchmark, code and pre-trained models will become publicly available on our project website: https://messykitchens.github.io/.
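The abstract uses inter-object penetration as an evaluation criterion for physical plausibility. As a rough illustration only (not the paper's actual metric, which would operate on full 3D meshes), a minimal sketch of the idea is to approximate each reconstructed object by a bounding sphere and sum the pairwise overlap depths across the scene; a physically plausible reconstruction should drive this quantity toward zero:

```python
import numpy as np

def sphere_penetration(c1, r1, c2, r2):
    """Overlap depth between two spheres; 0.0 if they do not intersect.

    Spheres are a crude stand-in for object geometry, used here only
    to illustrate the notion of inter-object penetration.
    """
    d = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    return max(0.0, r1 + r2 - d)

def scene_penetration(objects):
    """Sum pairwise penetration over all object pairs in a scene.

    `objects` is a list of (center, radius) tuples (hypothetical format).
    """
    total = 0.0
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            (c1, r1), (c2, r2) = objects[i], objects[j]
            total += sphere_penetration(c1, r1, c2, r2)
    return total

# Two overlapping cups and one object placed clear of both:
scene = [([0, 0, 0], 1.0), ([1.5, 0, 0], 1.0), ([10, 0, 0], 1.0)]
print(scene_penetration(scene))  # 0.5: only the first pair interpenetrates
```

Real evaluations would replace the sphere test with mesh-level intersection or signed-distance queries, but the aggregation over object pairs is the same.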