RPiAE: A Representation-Pivoted Autoencoder Enhancing Both Image Generation and Editing
2026-03-19 • Computer Vision and Pattern Recognition
AI summary
The authors propose Representation-Pivoted AutoEncoder (RPiAE), a new method that improves image generation and editing with diffusion models. Unlike previous approaches that used fixed encoders and consequently suffered poor reconstruction, their method fine-tunes the encoder to reconstruct images more faithfully while retaining the pretrained encoder's semantic information. They also reduce the dimensionality of the latent space, making diffusion modeling easier and more efficient. Their training carefully balances generative quality against reconstruction accuracy. Experiments show the method outperforms comparable tokenizers at both generating and editing images.
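The core idea of preserving semantics while fine-tuning can be illustrated with a simple alignment penalty: the tuned encoder's features are regularized toward a frozen copy of the pretrained representation. This is a minimal sketch under assumptions of ours (cosine alignment, illustrative shapes); the paper's actual Representation-Pivot Regularization may differ in form.

```python
import numpy as np

def pivot_regularizer(tuned_feats, frozen_feats):
    """Cosine-alignment penalty pulling fine-tuned encoder features toward
    the frozen pretrained representation (illustrative, not the paper's
    exact loss). Returns mean (1 - cosine similarity) over the batch."""
    a = tuned_feats / np.linalg.norm(tuned_feats, axis=-1, keepdims=True)
    b = frozen_feats / np.linalg.norm(frozen_feats, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

rng = np.random.default_rng(1)
frozen = rng.standard_normal((4, 768))             # hypothetical pretrained features
tuned = frozen + 0.1 * rng.standard_normal((4, 768))  # small drift from fine-tuning
loss = pivot_regularizer(tuned, frozen)
print(loss)  # small: features remain semantically aligned
```

When the tuned features equal the frozen ones the penalty is exactly zero, so minimizing it alongside a reconstruction loss lets the encoder adapt without abandoning the pretrained semantic structure.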
Diffusion models, Latent diffusion, Autoencoder, Representation learning, Tokenizer, Reconstruction fidelity, Variational compression, Semantic structure, Stage-wise training, Text-to-image generation
Authors
Yue Gong, Hongyu Li, Shanyuan Liu, Bo Cheng, Yuhang Ma, Liebucha Wu, Xiaoyu Wu, Manyuan Zhang, Dawei Leng, Yuhui Yin, Lijun Zhang
Abstract
Diffusion models have become the dominant paradigm for image generation and editing, with latent diffusion models shifting denoising to a compact latent space for efficiency and scalability. Recent attempts to leverage pretrained visual representation models as tokenizer priors either align diffusion features to representation features or directly reuse representation encoders as frozen tokenizers. Although such approaches can improve generation metrics, they often suffer from limited reconstruction fidelity due to frozen encoders, which in turn degrades editing quality, as well as from overly high-dimensional latents that make diffusion modeling difficult. To address these limitations, we propose Representation-Pivoted AutoEncoder (RPiAE), a representation-based tokenizer that improves both generation and editing. We introduce Representation-Pivot Regularization, a training strategy that enables a representation-initialized encoder to be fine-tuned for reconstruction while preserving the semantic structure of the pretrained representation space, followed by a variational bridge that compresses the latent space into a compact one for better diffusion modeling. We adopt an objective-decoupled stage-wise training strategy that sequentially optimizes generative-tractability and reconstruction-fidelity objectives. Together, these components yield a tokenizer that preserves strong semantics, reconstructs faithfully, and produces latents with reduced diffusion modeling complexity. Experiments demonstrate that RPiAE outperforms other visual tokenizers on text-to-image generation and image editing, while delivering the best reconstruction fidelity among representation-based tokenizers.
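The variational bridge described above can be sketched as a reparameterized Gaussian bottleneck that projects high-dimensional representation features down to a compact latent for the diffusion model. The dimensions, weight initialization, and linear projections below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def variational_bridge(features, w_mu, w_logvar, rng):
    """Compress representation features (dim D) to a compact latent (dim d)
    via a Gaussian bottleneck with the reparameterization trick."""
    mu = features @ w_mu            # mean of the latent Gaussian
    logvar = features @ w_logvar    # per-dimension log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps  # sampled compact latent
    # KL divergence to N(0, I), regularizing the latent distribution
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return z, kl

rng = np.random.default_rng(0)
D, d = 768, 32  # hypothetical: representation dim -> compact latent dim
w_mu = rng.standard_normal((D, d)) * 0.02
w_logvar = rng.standard_normal((D, d)) * 0.02

feats = rng.standard_normal((4, D))       # a batch of 4 token features
z, kl = variational_bridge(feats, w_mu, w_logvar, rng)
print(z.shape)   # (4, 32): far lower-dimensional than the input features
```

Shrinking the latent from D to d is what the abstract means by easing diffusion modeling: the denoiser operates on a much smaller, variationally regularized space.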