SEM-ROVER: Semantic Voxel-Guided Diffusion for Large-Scale Driving Scene Generation
2026-04-07 • Computer Vision and Pattern Recognition
AI summary
The authors created a new way to generate large outdoor 3D driving scenes that look realistic from many different viewpoints. They use a 3D grid representation called Σ-Voxfield, in which each occupied tiny cube (voxel) stores colored surface samples. A diffusion model trained on local regions of the grid builds big scenes gradually by expanding outward from smaller parts. This produces detailed, consistent scenes without fine-tuning for each one, and the results can be rendered into photorealistic images efficiently.
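To make the grid structure described above concrete, here is a minimal Python sketch of a sparse voxel grid in the spirit of Σ-Voxfield, where each occupied voxel stores a fixed number of colorized surface samples. All names (`VoxField`, `K_SAMPLES`, the sample layout) are illustrative assumptions, not the authors' actual data structure.

```python
# Minimal sketch of a sparse voxel grid where each *occupied* voxel stores
# a fixed number K of colorized surface samples. Names are illustrative.
from dataclasses import dataclass, field
import numpy as np

K_SAMPLES = 8  # fixed number of surface samples per occupied voxel (assumed)

@dataclass
class VoxField:
    resolution: tuple       # (X, Y, Z) number of voxels per axis
    voxel_size: float       # edge length of one voxel in metres
    origin: np.ndarray      # world coordinates of the grid corner
    # Sparse storage: only occupied voxels get an entry. Each entry holds
    # K_SAMPLES surface points, each a 3D offset inside the voxel plus an
    # RGB colour -> array of shape (K_SAMPLES, 6).
    cells: dict = field(default_factory=dict)

    def insert(self, ijk: tuple, offsets: np.ndarray, colors: np.ndarray):
        """Mark voxel `ijk` occupied and store its colorized surface samples."""
        assert offsets.shape == (K_SAMPLES, 3) and colors.shape == (K_SAMPLES, 3)
        self.cells[ijk] = np.concatenate([offsets, colors], axis=1)

    def world_points(self):
        """Return all stored samples as an (N * K_SAMPLES, 6) array of
        world-space xyz + rgb, e.g. for rendering."""
        pts = []
        for (i, j, k), samples in self.cells.items():
            corner = self.origin + np.array([i, j, k]) * self.voxel_size
            xyz = corner + samples[:, :3] * self.voxel_size  # offsets in [0,1)^3
            pts.append(np.concatenate([xyz, samples[:, 3:]], axis=1))
        return np.concatenate(pts, axis=0) if pts else np.empty((0, 6))
```

A sparse dictionary keyed by integer voxel indices keeps memory proportional to the number of occupied surface voxels rather than the full grid volume, which matters at driving-scene scale.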
3D representation • Σ-Voxfield grid • diffusion model • semantic conditioning • spatial outpainting • photorealistic rendering • multiview consistency • voxel • positional encoding • 3D scene generation
Authors
Hiba Dahmani, Nathan Piasco, Moussab Bennehar, Luis Roldão, Dzmitry Tsishkou, Laurent Caraffa, Jean-Philippe Tarel, Roland Brémond
Abstract
Scalable generation of outdoor driving scenes requires 3D representations that remain consistent across multiple viewpoints and scale to large areas. Existing solutions either rely on image or video generative models distilled to 3D space, which harms geometric coherence and restricts rendering to training views, or are limited to small-scale 3D scenes or object-centric generation. In this work, we propose a 3D generative framework based on a $\Sigma$-Voxfield grid, a discrete representation where each occupied voxel stores a fixed number of colorized surface samples. To generate this representation, we train a semantic-conditioned diffusion model that operates on local voxel neighborhoods and uses 3D positional encodings to capture spatial structure. We scale to large scenes via progressive spatial outpainting over overlapping regions. Finally, we render the generated $\Sigma$-Voxfield grid with a deferred rendering module to obtain photorealistic images, enabling large-scale, multiview-consistent 3D scene generation without per-scene optimization. Extensive experiments show that our approach generates diverse large-scale urban outdoor scenes, renderable into photorealistic images under various sensor configurations and camera trajectories, while maintaining moderate computational cost compared to existing approaches.
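As a rough illustration of the diffusion stage, the sketch below shows how a denoiser operating on a local voxel neighborhood might combine noisy voxel latents, a semantic-label volume, and sinusoidal 3D positional encodings. The architecture, channel counts, and encoding frequencies are assumptions for illustration; the paper's actual backbone is not reproduced here.

```python
# Hedged sketch: semantic-conditioned denoising on a local voxel block with
# sinusoidal 3D positional encodings. All shapes/frequencies are assumed.
import torch
import torch.nn as nn

def positional_encoding_3d(coords: torch.Tensor, n_freqs: int = 4) -> torch.Tensor:
    """coords: (..., 3) normalized voxel-center coordinates in [0, 1].
    Returns (..., 3 * 2 * n_freqs) sin/cos features."""
    freqs = 2.0 ** torch.arange(n_freqs, device=coords.device) * torch.pi
    ang = coords[..., None] * freqs                  # (..., 3, n_freqs)
    pe = torch.cat([ang.sin(), ang.cos()], dim=-1)   # (..., 3, 2 * n_freqs)
    return pe.flatten(-2)                            # (..., 3 * 2 * n_freqs)

class LocalDenoiser(nn.Module):
    """Toy stand-in for the diffusion backbone: predicts the noise in a local
    voxel block from (noisy latents, semantic labels, positional code)."""
    def __init__(self, latent_ch=8, sem_ch=16, pe_ch=24):
        super().__init__()
        in_ch = latent_ch + sem_ch + pe_ch
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv3d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv3d(64, latent_ch, 3, padding=1),
        )
    def forward(self, x_t, semantics, pe):
        return self.net(torch.cat([x_t, semantics, pe], dim=1))

# Usage on one 16^3 local block (all tensors are placeholders):
B, D = 1, 16
zs = torch.linspace(0, 1, D)
grid = torch.stack(torch.meshgrid(zs, zs, zs, indexing="ij"), dim=-1)  # (D,D,D,3)
pe = positional_encoding_3d(grid).permute(3, 0, 1, 2).unsqueeze(0)     # (1,24,D,D,D)
x_t = torch.randn(B, 8, D, D, D)   # noisy voxel latents
sem = torch.randn(B, 16, D, D, D)  # semantic-label embedding (assumed dense volume)
eps_hat = LocalDenoiser()(x_t, sem, pe)  # predicted noise, same shape as x_t
```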
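The progressive spatial outpainting step can be pictured as a sliding window over the scene grid: each new block overlaps the previously generated one, and the sampler is anchored on the known overlap (in the spirit of inpainting-guided samplers such as RePaint; whether the paper uses this exact scheme is an assumption). In the sketch below, `sample_block`, the block size, and the overlap width are all illustrative; a real implementation would run the full reverse-diffusion loop with the conditioned denoiser.

```python
# Hedged sketch of progressive spatial outpainting along one axis.
import torch

def outpaint_scene(sample_block, C=8, B=16, overlap=4, n_blocks=5):
    """Generate latents for a long scene strip of shape (C, L, B, B), where
    L = B + (n_blocks - 1) * (B - overlap), by progressive outpainting."""
    stride = B - overlap
    L = B + (n_blocks - 1) * stride
    scene = torch.zeros(C, L, B, B)
    for n in range(n_blocks):
        x0 = n * stride
        known = scene[:, x0:x0 + B].clone()   # context from prior blocks
        mask = torch.zeros_like(known)
        if n > 0:
            mask[:, :overlap] = 1.0           # overlap voxels are held fixed
        # The sampler keeps masked voxels at their known values at every
        # denoising step and fills in the rest (inpainting-style guidance).
        scene[:, x0:x0 + B] = sample_block(known, mask)
    return scene

# Trivial stand-in sampler so the sketch runs end to end; real code would
# run reverse diffusion with the semantic-conditioned denoiser above.
dummy = lambda known, mask: mask * known + (1 - mask) * torch.randn_like(known)
scene = outpaint_scene(dummy)  # -> torch.Size([8, 64, 16, 16])
```

Overlapping windows are what keep adjacent blocks geometrically and photometrically consistent: each new block is denoised while pinned to the voxels it shares with already-generated content.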
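Finally, as a toy picture of what a deferred rendering pass over the generated grid could look like: splat the grid's colorized surface samples into a pinhole camera with a z-buffer to obtain raw color and depth buffers, which a neural decoder would then refine into the photorealistic image. The paper's actual deferred rendering module is not reproduced here; the intrinsics, resolution, and painter's-algorithm splat are illustrative.

```python
# Toy z-buffered point splat of colorized surface samples into a pinhole camera.
import numpy as np

def splat(points_rgb, K, w2c, H=240, W=320):
    """points_rgb: (N, 6) world xyz + rgb, e.g. from VoxField.world_points().
    K: (3, 3) intrinsics; w2c: (4, 4) world-to-camera transform."""
    xyz = np.concatenate([points_rgb[:, :3], np.ones((len(points_rgb), 1))], 1)
    cam = (w2c @ xyz.T).T[:, :3]          # world -> camera coordinates
    front = cam[:, 2] > 1e-3              # keep points in front of the camera
    cam, rgb = cam[front], points_rgb[front, 3:]
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, rgb = u[ok], v[ok], cam[ok, 2], rgb[ok]
    image = np.zeros((H, W, 3))
    depth = np.full((H, W), np.inf)
    for idx in np.argsort(-z):            # paint far-to-near: nearest wins
        image[v[idx], u[idx]] = rgb[idx]
        depth[v[idx], u[idx]] = z[idx]
    return image, depth
```

In a neural deferred-rendering setup, the buffer would typically hold learned per-sample features rather than raw RGB, with a 2D network decoding the final frame; that decoding stage is omitted here.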