SegviGen: Repurposing 3D Generative Model for Part Segmentation
2026-03-17 • Computer Vision and Pattern Recognition
3D generative models, 3D part segmentation, voxel, pretrained models, interactive segmentation, geometry-aligned reconstruction, 2D guidance, labeled training data, structured priors
Authors
Lin Li, Haoran Feng, Zehuan Huang, Haohua Chen, Wenbo Nie, Shaohua Hou, Keqing Fan, Pan Hu, Sheng Wang, Buyu Li, Lu Sheng
Abstract
We introduce SegviGen, a framework that repurposes native 3D generative models for 3D part segmentation. Existing pipelines either lift strong 2D priors into 3D via distillation or multi-view mask aggregation, often suffering from cross-view inconsistency and blurred boundaries, or pursue native 3D discriminative segmentation, which typically requires large-scale annotated 3D data and substantial training resources. In contrast, SegviGen leverages the structured priors encoded in a pretrained 3D generative model to induce segmentation through distinctive part colorization, establishing a novel and efficient framework for part segmentation. Specifically, SegviGen encodes a 3D asset and predicts part-indicative colors on the active voxels of a geometry-aligned reconstruction. It supports interactive part segmentation, full segmentation, and full segmentation with 2D guidance in a unified framework. Extensive experiments show that SegviGen improves over the prior state of the art by 40% on interactive part segmentation and by 15% on full segmentation, while using only 0.32% of the labeled training data. This demonstrates that pretrained 3D generative priors transfer effectively to 3D part segmentation, enabling strong performance with limited supervision. See our project page at https://fenghora.github.io/SegviGen-Page/.
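The abstract frames segmentation as predicting distinctive per-part colors on active voxels; at a high level, such continuous color predictions can be discretized into part labels by nearest-palette assignment. A minimal sketch of that final step (the function name, palette, and nearest-color rule are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def colors_to_part_labels(voxel_colors, palette):
    """Assign each active voxel the index of its nearest palette color.

    Hypothetical helper, not from the paper:
    voxel_colors: (N, 3) predicted RGB values in [0, 1] for N active voxels
    palette:      (K, 3) distinctive colors, one per part
    returns:      (N,) integer part labels in [0, K)
    """
    # Pairwise Euclidean distance between every voxel color and every
    # palette color, via broadcasting: (N, 1, 3) - (1, K, 3) -> (N, K)
    dists = np.linalg.norm(voxel_colors[:, None, :] - palette[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy example: three voxels, three parts colored red / green / blue.
palette = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
pred = np.array([[0.9, 0.1, 0.1],   # near red  -> part 0
                 [0.1, 0.8, 0.2],   # near green -> part 1
                 [0.05, 0.1, 0.95]])  # near blue -> part 2
labels = colors_to_part_labels(pred, palette)
# labels -> array([0, 1, 2])
```

In practice the colors would be decoded by the generative model on the geometry-aligned voxel grid; the snippet only shows how distinctive colorization makes the segmentation readout trivial.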