PhyScensis: Physics-Augmented LLM Agents for Complex Physical Scene Arrangement

2026-02-16 · Robotics · Artificial Intelligence

AI summary

The authors developed PhyScensis, a system that uses a language model together with a physics engine to create detailed 3D scenes where objects physically interact, like books stacked on a shelf or items packed inside boxes. Their method explicitly models how objects support and touch each other to make scenes more realistic and stable. By combining suggestions from the language model with physics-based checks and feedback, the system refines the arrangement step by step. This approach allows precise control of object placement and stability, leading to higher-quality and more complex scenes than previous methods.
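
As a rough illustration of this propose-solve-refine loop, here is a minimal, self-contained Python sketch. It is not the authors' implementation: all names (propose_predicates, solve_scene, Report) are hypothetical, and toy stubs stand in for both the language model and the physics engine.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the propose -> solve -> feedback loop, with toy
# stubs in place of the LLM and the physics engine. Not the PhyScensis API.

@dataclass
class Report:
    stable: bool
    summary: str

def propose_predicates(description: str, feedback: str) -> list[tuple[str, str, str]]:
    # Stand-in for an LLM call: returns (object, relation, anchor) triples
    # and revises the proposal when the solver reports a failure.
    base = [("book_1", "on_top_of", "shelf"), ("book_2", "on_top_of", "book_1")]
    if "toppled" in feedback:
        base = base[:-1] + [("book_2", "leaning_against", "shelf_wall")]
    return base

def solve_scene(predicates) -> tuple[dict, Report]:
    # Stand-in for the physics-backed solver: place objects, then "simulate".
    poses = {obj: (0.0, 0.0, 0.1 * i) for i, (obj, _, _) in enumerate(predicates)}
    toppled = random.random() < 0.3  # pretend some stacks fall over
    report = Report(stable=not toppled,
                    summary="book_2 toppled" if toppled else "all objects settled")
    return poses, report

def generate_scene(description: str, max_rounds: int = 5):
    feedback = "none yet"
    poses, report = {}, Report(False, "empty scene")
    for _ in range(max_rounds):
        predicates = propose_predicates(description, feedback)
        poses, report = solve_scene(predicates)
        if report.stable:
            break
        feedback = report.summary  # feed failures back to the proposer
    return poses, report

if __name__ == "__main__":
    poses, report = generate_scene("a small bookshelf with stacked books")
    print(report.summary, poses)
```

The point of the loop is that the solver's failure report, not just the original text prompt, conditions the next proposal, which is what lets the arrangement improve round by round.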

Keywords
3D environment generation, robotic simulation, physical relationships, physics engine, language model (LLM), scene layout, spatial predicates, probabilistic programming, robotic manipulation, scene stability
Authors
Yian Wang, Han Yang, Minghao Guo, Xiaowen Qiu, Tsun-Hsuan Wang, Wojciech Matusik, Joshua B. Tenenbaum, Chuang Gan
Abstract
Automatically generating interactive 3D environments is crucial for scaling up robotic data collection in simulation. While prior work has primarily focused on 3D asset placement, it often overlooks the physical relationships between objects (e.g., contact, support, balance, and containment), which are essential for creating complex and realistic manipulation scenarios such as tabletop arrangements, shelf organization, or box packing. Compared to classical 3D layout generation, producing complex physical scenes introduces additional challenges: (a) higher object density and complexity (e.g., a small shelf may hold dozens of books), (b) richer supporting relationships and compact spatial layouts, and (c) the need to accurately model both spatial placement and physical properties. To address these challenges, we propose PhyScensis, an LLM agent-based framework powered by a physics engine, to produce physically plausible scene configurations with high complexity. Specifically, our framework consists of three main components: an LLM agent iteratively proposes assets with spatial and physical predicates; a solver, equipped with a physics engine, realizes these predicates as a 3D scene; and feedback from the solver informs the agent to refine and enrich the configuration. Moreover, our framework retains strong controllability through fine-grained textual descriptions and numerical parameters (e.g., relative positions, scene stability), enabled by probabilistic programming for stability and a complementary heuristic that jointly regulates stability and spatial relations. Experimental results show that our method outperforms prior approaches in scene complexity, visual quality, and physical accuracy, offering a unified pipeline for generating complex physical scene layouts for robotic manipulation.
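
To make the solver's role concrete, the sketch below implements one plausible physics-backed check: place candidate assets, let the engine settle them, and flag the layout as unstable if anything drifts. The abstract names a physics engine but not which one; PyBullet, the box primitives standing in for assets, the step count, and the drift threshold are all assumptions for illustration, as is using a simple settle test rather than the paper's probabilistic-programming formulation of stability.

```python
import pybullet as p

# Settle-test stability check, assuming a PyBullet-style engine.
# All thresholds and the box stand-ins for assets are illustrative.

def settle_test(placements, steps=240, tol=0.01):
    """placements: list of (half_extents, position) boxes standing in for assets.
    Returns True if no object drifts more than `tol` meters after settling."""
    p.connect(p.DIRECT)                       # headless physics
    p.setGravity(0, 0, -9.81)
    p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))  # static ground
    bodies = []
    for half_extents, pos in placements:
        shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=half_extents)
        bodies.append(p.createMultiBody(baseMass=0.2,
                                        baseCollisionShapeIndex=shape,
                                        basePosition=pos))
    start = [p.getBasePositionAndOrientation(b)[0] for b in bodies]
    for _ in range(steps):                    # ~1 s at PyBullet's default 240 Hz
        p.stepSimulation()
    end = [p.getBasePositionAndOrientation(b)[0] for b in bodies]
    p.disconnect()
    drift = [sum((a - b) ** 2 for a, b in zip(s, e)) ** 0.5
             for s, e in zip(start, end)]
    return all(d < tol for d in drift)

# A two-box stack: stable when aligned; shift the top box's x to make it topple.
stack = [([0.1, 0.1, 0.05], (0.0, 0.0, 0.05)),
         ([0.1, 0.1, 0.05], (0.0, 0.0, 0.15))]
print(settle_test(stack))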