ParaSpeechCLAP: A Dual-Encoder Speech-Text Model for Rich Stylistic Language-Audio Pretraining

2026-03-30 · Artificial Intelligence

Artificial Intelligence, Computation and Language, Sound
AI summary

The authors created ParaSpeechCLAP, a model that maps speech and text descriptions of speaking style, such as emotion or pitch, into the same space so computers can match them. They built separate models for speaker-level traits and for moment-to-moment speaking situations, plus a combined one. The specialized models were better at individual style details, while the combined model was better at combinations of styles. Their models also improved text-to-speech systems without extra training and beat existing methods on several tests.

contrastive learning, dual-encoder, speech style, text captions, embedding space, intrinsic descriptors, situational descriptors, style caption retrieval, speech attribute classification, text-to-speech (TTS)
Authors
Anuj Diwan, Eunsol Choi, David Harwath
Abstract
We introduce ParaSpeechCLAP, a dual-encoder contrastive model that maps speech and text style captions into a common embedding space, supporting a wide range of intrinsic (speaker-level) and situational (utterance-level) descriptors (such as pitch, texture and emotion) far beyond the narrow set handled by existing models. We train specialized ParaSpeechCLAP-Intrinsic and ParaSpeechCLAP-Situational models alongside a unified ParaSpeechCLAP-Combined model, finding that specialization yields stronger performance on individual style dimensions while the unified model excels on compositional evaluation. We further show that ParaSpeechCLAP-Intrinsic benefits from an additional classification loss and class-balanced training. We demonstrate our models' performance on style caption retrieval, speech attribute classification and as an inference-time reward model that improves style-prompted TTS without additional training. ParaSpeechCLAP outperforms baselines on most metrics across all three applications. Our models and code are released at https://github.com/ajd12342/paraspeechclap .
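The abstract describes a dual-encoder contrastive model in the CLAP family; such models are typically trained with a symmetric contrastive (InfoNCE) loss over paired speech and caption embeddings. As a rough illustration only, the sketch below shows that standard loss in NumPy; the function name, temperature value, and toy embeddings are illustrative assumptions, not details taken from the ParaSpeechCLAP release.

```python
import numpy as np

def symmetric_contrastive_loss(speech_emb, text_emb, temperature=0.07):
    """CLAP-style symmetric InfoNCE loss (illustrative sketch, not the paper's code).

    speech_emb, text_emb: (batch, dim) arrays where row i of each is a matched pair.
    """
    # L2-normalize each embedding so the dot product is a cosine similarity.
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # Similarity matrix: logits[i, j] compares speech i with caption j.
    logits = s @ t.T / temperature
    idx = np.arange(len(s))

    def cross_entropy(lg):
        # Numerically stable log-softmax over each row; matched pairs lie
        # on the diagonal, so they serve as the "correct class" targets.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the speech-to-text and text-to-speech directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned orthogonal embeddings (e.g. identity matrices for both encoders) the loss is near zero, while permuting the captions against the speech rows drives it up, which is the behavior the retrieval and reward-model applications in the abstract rely on.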