Pair2Scene: Learning Local Object Relations for Procedural Scene Generation
2026-04-13 • Computer Vision and Pattern Recognition
AI summary
The authors address the problem of creating realistic 3D indoor scenes, which is hard because of limited data and complicated object relationships. They propose Pair2Scene, a method that focuses on local object relationships—how objects support or function with each other—rather than trying to understand the whole scene at once. Their system learns these local rules from a new dataset and uses physics-aware techniques to place objects properly in a scene hierarchy. Experiments show that their approach generates more complex and realistic scenes than previous methods, even for situations not seen during training.
Keywords
3D indoor scene generation, local dependencies, support relations, functional relations, scene hierarchy, collision-aware sampling, procedural generation, spatial position distribution, physics-based algorithms, scene data
Authors
Xingjian Ran, Shujie Zhang, Weipeng Zhong, Li Luo, Bo Dai
Abstract
Generating high-fidelity 3D indoor scenes remains a significant challenge due to data scarcity and the complexity of modeling intricate spatial relations. Current methods often struggle to scale beyond the training distribution to dense scenes, or rely on LLMs/VLMs that lack precise spatial reasoning ability. Building on the observation that object placement relies mainly on local dependencies rather than information-redundant global distributions, in this paper we propose Pair2Scene, a novel procedural generation framework that integrates learned local rules with scene hierarchies and physics-based algorithms. These rules capture two main types of inter-object relations: support relations that follow physical hierarchies, and functional relations that reflect semantic links. We model these rules with a network that estimates the spatial position distributions of dependent objects conditioned on the position and geometry of anchor objects. Accordingly, we curate a dataset, 3D-Pairs, from existing scene data to train the model. During inference, our framework generates scenes by recursively applying the model within a hierarchical structure, using collision-aware rejection sampling to align local rules into coherent global layouts. Extensive experiments demonstrate that our framework outperforms existing methods in generating complex environments that go beyond the training data while maintaining physical and semantic plausibility.