Fully Procedural Synthetic Data from Simple Rules for Multi-View Stereo
2026-04-06 • Computer Vision and Pattern Recognition
AI summary
The authors present a new way to generate training images for teaching computers to reconstruct 3D shapes from multiple pictures, using a method called SimpleProc. Instead of relying on real photos or game imagery, SimpleProc uses a small set of procedural rules and spline curves to automatically generate surfaces with displacement and texture detail. Their experiments show that at a small scale (8,000 images), their data outperforms manually curated data of the same size, and at a larger scale (352,000 images) it matches or beats models trained on over 692,000 manually curated images. The code and data are publicly available.
multi-view stereo, procedural generation, training data, NURBS, displacement mapping, texture patterns, 3D reconstruction, computer vision, dataset, machine learning
Authors
Zeyu Ma, Alexander Raistrick, Jia Deng
Abstract
In this paper, we explore the design space of procedural rules for multi-view stereo (MVS). We demonstrate that we can generate effective training data using SimpleProc: a new, fully procedural generator driven by a very small set of rules using Non-Uniform Rational Basis Splines (NURBS), as well as basic displacement and texture patterns. At a modest scale of 8,000 images, our approach achieves superior results compared to manually curated images (at the same scale) sourced from games and real-world objects. When scaled to 352,000 images, our method yields performance comparable to--and in several benchmarks, exceeding--models trained on over 692,000 manually curated images. The source code and the data are available at https://github.com/princeton-vl/SimpleProc.
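The recipe the abstract describes, smooth spline surfaces augmented with displacement and texture patterns, can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the authors' SimpleProc pipeline: it builds a smooth height field from a random control grid (a simple stand-in for a NURBS patch), adds a sinusoidal displacement, and pairs it with a procedural checker texture. All function and parameter names are illustrative.

```python
import numpy as np

def lerp_axis0(g, res):
    """Linearly upsample a 2D array along axis 0 to `res` rows."""
    n = g.shape[0]
    u = np.linspace(0, n - 1, res)
    i = np.minimum(u.astype(int), n - 2)  # left neighbor index
    t = u - i                             # fractional position
    return (1 - t)[:, None] * g[i] + t[:, None] * g[i + 1]

def procedural_patch(n_ctrl=4, res=64, seed=0):
    """Toy procedural surface: smooth base shape + displacement + texture.

    This is a sketch of the general idea only; SimpleProc's actual rules
    (NURBS evaluation, displacement and texture patterns) differ.
    """
    rng = np.random.default_rng(seed)
    # Random control heights, analogous to a spline patch's control points.
    ctrl = rng.uniform(-1.0, 1.0, size=(n_ctrl, n_ctrl))

    # Separable interpolation along both axes gives a smooth height field.
    height = lerp_axis0(lerp_axis0(ctrl, res).T, res).T

    # Sinusoidal displacement adds fine geometric detail on top.
    y, x = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res),
                       indexing="ij")
    height = height + 0.05 * np.sin(8 * np.pi * x) * np.sin(8 * np.pi * y)

    # A simple checker pattern serves as the procedural texture.
    texture = ((x * 8).astype(int) + (y * 8).astype(int)) % 2
    return height, texture
```

Rendering such patches from several camera poses would then yield image pairs with known depth, which is the kind of automatically generated supervision the paper uses at scale for multi-view stereo training.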