Less Gaussians, Texture More: 4K Feed-Forward Textured Splatting

2026-03-26 · Computer Vision and Pattern Recognition

AI summary

The authors note that current feed-forward 3D Gaussian Splatting methods predict one primitive per pixel, so the primitive count grows quadratically with resolution, making very high-resolution synthesis such as 4K impractical. They propose LGTM (Less Gaussians, Texture More), which predicts fewer Gaussian primitives, each paired with its own texture, thereby decoupling geometric complexity from rendering resolution. This lets the method produce high-quality 4K images without per-scene fine-tuning and with less compute, improving the scalability of detailed 3D view synthesis.

3D Gaussian Splatting, feed-forward methods, pixel-aligned primitives, Gaussian primitives, per-primitive textures, novel view synthesis, 4K resolution, rendering scalability, scene optimization
Authors
Yixing Lao, Xuyang Bai, Xiaoyang Wu, Nuoyuan Yan, Zixin Luo, Tian Fang, Jean-Daniel Nahmias, Yanghai Tsin, Shiwei Li, Hengshuang Zhao
Abstract
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/
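The scaling barrier the abstract describes can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only and not from the paper: it assumes the simplest pixel-aligned scheme, one Gaussian per pixel per input view, and shows how the primitive count grows quadratically with image side length.

```python
# Illustrative sketch (assumption, not from the paper): pixel-aligned
# feed-forward 3DGS predicts one Gaussian per pixel of each input view,
# so primitive count scales with width * height, i.e. quadratically
# in the image side length.

def pixel_aligned_count(width: int, height: int, views: int = 1) -> int:
    """Gaussian count for a pixel-aligned predictor: one per pixel per view."""
    return width * height * views

for w, h, label in [(960, 540, "540p"), (1920, 1080, "1080p"), (3840, 2160, "4K")]:
    print(f"{label}: {pixel_aligned_count(w, h):,} Gaussians per view")
```

Doubling the side length (1080p to 4K) quadruples the primitive count, which is the quadratic growth LGTM sidesteps by keeping the Gaussian count fixed and letting per-primitive textures carry the extra resolution.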