SeeThrough3D: Occlusion Aware 3D Control in Text-to-Image Generation
2026-02-26 • Computer Vision and Pattern Recognition • Artificial Intelligence
AI summary
The authors focus on the problem of correctly showing objects partially hidden behind others (occlusions) in 3D scene generation based on layouts. They create a special 3D representation that shows objects as semi-transparent boxes, helping the model understand which parts are hidden from the camera view. Their method uses these visuals to guide a text-to-image model to generate scenes where the objects appear with correct depth and occlusion relationships. They also train the system on synthetic data with overlapping objects and show it works well on new objects and viewpoints.
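The semi-transparent-box idea above can be illustrated with a toy 2D stand-in: front-to-back alpha compositing of axis-aligned box masks, so that regions of a farther box still contribute through the remaining transparency of a nearer one. This is a minimal sketch under assumed conventions (2D boxes instead of rendered 3D geometry, a fixed per-box alpha); the function name and parameters are hypothetical, not the paper's implementation.

```python
import numpy as np

def composite_translucent_boxes(boxes, alpha=0.5, hw=(8, 8)):
    """Toy front-to-back alpha compositing of translucent 2D boxes.

    boxes: list of (depth, (y0, y1, x0, x1), value) tuples; boxes with
    smaller depth are nearer to the camera. Occluded regions of farther
    boxes remain partially visible, encoding occlusion in the image.
    NOTE: a 2D stand-in for rendering semi-transparent 3D boxes.
    """
    h, w = hw
    color = np.zeros((h, w))
    transmittance = np.ones((h, w))  # fraction of light not yet absorbed
    for depth, (y0, y1, x0, x1), value in sorted(boxes, key=lambda b: b[0]):
        hit = np.zeros((h, w), dtype=bool)
        hit[y0:y1, x0:x1] = True
        color[hit] += transmittance[hit] * alpha * value  # composite box
        transmittance[hit] *= (1.0 - alpha)               # attenuate behind it
    return color

# Near box (depth 1) partially occludes a far box (depth 2); in the
# overlap, both boxes contribute, so the hidden region stays visible.
img = composite_translucent_boxes([
    (1.0, (0, 4, 0, 4), 1.0),  # near box, value 1
    (2.0, (2, 6, 2, 6), 3.0),  # far box, value 3
])
```

The overlap region gets contributions from both boxes (here 0.5 from the near box plus 0.75 from the attenuated far box), which is exactly the cue a downstream model can use to reason about hidden object extents.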
3D layout-conditioned generation, occlusion reasoning, 3D scene representation, text-to-image generation, masked self-attention, camera viewpoint, synthetic dataset, flow-based generative models, object occlusion, bounding box
Authors
Vaibhav Agrawal, Rishubh Parihar, Pradhaan Bhat, Ravi Kiran Sarvadevabhatla, R. Venkatesh Babu
Abstract
We identify occlusion reasoning as a fundamental yet overlooked aspect of 3D layout-conditioned generation. It is essential for synthesizing partially occluded objects with depth-consistent geometry and scale. While existing methods can generate realistic scenes that follow input layouts, they often fail to model precise inter-object occlusions. We propose SeeThrough3D, a model for 3D layout-conditioned generation that explicitly models occlusions. We introduce an occlusion-aware 3D scene representation (OSCR), where objects are depicted as translucent 3D boxes placed within a virtual environment and rendered from the desired camera viewpoint. The transparency encodes hidden object regions, enabling the model to reason about occlusions, while the rendered viewpoint provides explicit camera control during generation. We condition a pretrained flow-based text-to-image generation model by introducing a set of visual tokens derived from our rendered 3D representation. Furthermore, we apply masked self-attention to accurately bind each object bounding box to its corresponding textual description, enabling accurate generation of multiple objects without object attribute mixing. To train the model, we construct a synthetic dataset of diverse multi-object scenes with strong inter-object occlusions. SeeThrough3D generalizes effectively to unseen object categories and enables precise 3D layout control with realistic occlusions and consistent camera control.
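The masked self-attention described in the abstract can be sketched as a block-structured attention mask that lets each object's visual tokens interact only with that object's text tokens, preventing attribute mixing between objects. The token layout (per-object text tokens followed by per-object visual tokens) and the boolean mask convention are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def object_binding_attention_mask(num_objects, text_len, vis_len):
    """Build a boolean attention mask (True = attention allowed).

    Token layout assumption: first all text tokens grouped per object,
    then all visual tokens grouped per object. Each object's text and
    visual tokens attend only within the same object, so attributes in
    one object's description cannot leak onto another object.
    """
    n_text = num_objects * text_len
    n = n_text + num_objects * vis_len
    mask = np.zeros((n, n), dtype=bool)
    for i in range(num_objects):
        t = slice(i * text_len, (i + 1) * text_len)
        v = slice(n_text + i * vis_len, n_text + (i + 1) * vis_len)
        mask[t, t] = True  # text tokens of object i attend to each other
        mask[v, v] = True  # visual tokens of object i attend to each other
        mask[t, v] = True  # object i's text may attend to its visual tokens
        mask[v, t] = True  # object i's visual tokens may attend to its text
    return mask

# Two objects, 3 text tokens and 4 visual tokens each.
m = object_binding_attention_mask(2, 3, 4)
```

In a transformer, this mask would be applied inside self-attention by setting disallowed logits to negative infinity before the softmax, a standard masking mechanism.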