PoDAR: Power-Disentangled Audio Representation for Generative Modeling

2026-05-11

Artificial Intelligence, Machine Learning, Sound
AI summary

The authors study how to improve audio generation models by reorganizing how sound information is represented inside the model. They introduce PoDAR, a method that separates the loudness (power) of the audio from its content, making the representation easier for a generative model to learn. This separation helps the model converge faster and produce speech that sounds more like the target speaker. Isolating power also allows classifier-free guidance to be applied only to the power-invariant content, keeping generation stable at higher guidance scales.

latent diffusion models, audio representation, factor disentanglement, latent space, power augmentation, latent consistency, VAE (Variational Autoencoder), speaker similarity, UTMOS, classifier-free guidance (CFG)
Authors
Alejandro Luebs, Mithilesh Vaidya, Ishaan Kumar, Sumukh Badam, Stephen W. Bailey, Matthew Bendel, Jose Sotelo, Xingzhe He
Abstract
The performance of audio latent diffusion models is primarily governed by generator expressivity and the modelability of the underlying latent space. While recent research has focused mainly on the former, along with improving the reconstruction fidelity of audio codecs, we demonstrate that latent modelability can be significantly improved through explicit factor disentanglement. We present PoDAR (Power-Disentangled Audio Representation), a framework that uses a randomized power augmentation and a latent consistency objective to decouple signal power from invariant semantic content. This factorization makes the latent space easier to model, which both accelerates the convergence of downstream generative models and improves final performance. When applied to a Stable Audio 1.0 VAE with an F5-TTS generator, PoDAR achieves about a $2\times$ acceleration in convergence to match baseline performance, while increasing final speaker similarity by 0.055 and UTMOS by 0.22 on the LibriSpeech-PC dataset. Furthermore, isolating power into dedicated channels enables the application of CFG exclusively to power-invariant content, effectively extending the stable guidance regime to higher scales.
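To make the core ideas concrete, here is a minimal toy sketch of the three mechanisms the abstract describes: a randomized power (gain) augmentation, a consistency loss applied only to power-invariant content channels, and classifier-free guidance restricted to those content channels. All function names, the single-power-channel layout, and the gain range are our own illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_augment(x, gain_db_range=(-12.0, 12.0)):
    """Scale a waveform by a random gain in dB (illustrative range)."""
    gain_db = rng.uniform(*gain_db_range)
    return x * 10.0 ** (gain_db / 20.0)

def toy_encoder(x, n_content=7):
    """Stand-in encoder: channel 0 stores log-RMS power; the remaining
    channels store power-normalized (hence gain-invariant) content."""
    rms = np.sqrt(np.mean(x ** 2) + 1e-12)
    return np.concatenate([[np.log(rms)], x[:n_content] / rms])

def latent_consistency_loss(z_a, z_b):
    """MSE over the content channels only; the power channel may differ."""
    return float(np.mean((z_a[1:] - z_b[1:]) ** 2))

def cfg_content_only(z_cond, z_uncond, scale):
    """Classifier-free guidance applied only to the content channels;
    the dedicated power channel passes through unguided."""
    out = z_uncond + scale * (z_cond - z_uncond)
    out[0] = z_cond[0]
    return out

x = rng.standard_normal(16)
z = toy_encoder(x)
z_aug = toy_encoder(power_augment(x))
# Content channels are unchanged by the gain, so the loss is near zero,
# while the power channel (index 0) absorbs the entire gain shift.
loss = latent_consistency_loss(z, z_aug)
```

In this toy layout the consistency objective never has to fight the augmentation: the random gain moves only the power channel, so minimizing the loss encourages the encoder to keep content gain-invariant, which is the disentanglement the abstract credits for easier modeling and for the extended stable CFG regime.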