AC-Foley: Reference-Audio-Guided Video-to-Audio Synthesis with Acoustic Transfer

2026-03-16

Sound · Computer Vision and Pattern Recognition · Machine Learning · Multimedia
AI summary

The authors identify two problems in current methods that generate sound from video: text descriptions are not detailed enough to capture fine acoustic properties, and training labels often lump acoustically distinct sounds under a single coarse category. They design AC-Foley, a model that conditions on actual audio examples rather than text alone, allowing it to produce more precise, fine-grained sounds matched to the video. This lets the model transfer sound qualities such as timbre, generate novel sounds zero-shot, and improve overall audio quality. Their method outperforms others when given reference audio and still performs competitively without it.

video-to-audio generation · Foley · audio conditioning · timbre transfer · zero-shot generation · semantic granularity · micro-acoustic features · sound synthesis · text prompt ambiguity
Authors
Pengjun Fang, Yingqing He, Yazhou Xing, Qifeng Chen, Ser-Nam Lim, Harry Yang
Abstract
Existing video-to-audio (V2A) generation methods predominantly rely on text prompts alongside visual information to synthesize audio. However, two critical bottlenecks persist: semantic granularity gaps in training data, such as conflating acoustically distinct sounds under coarse labels, and textual ambiguity in describing micro-acoustic features. These bottlenecks make fine-grained sound synthesis difficult in text-conditioned settings. To address these limitations, we propose AC-Foley, an audio-conditioned V2A model that directly leverages reference audio to achieve precise and fine-grained control over generated sounds. This approach enables fine-grained sound synthesis, timbre transfer, zero-shot sound generation, and improved audio quality. By directly conditioning on audio signals, our approach bypasses the semantic ambiguities of text descriptions while enabling precise manipulation of acoustic attributes. Empirically, AC-Foley achieves state-of-the-art performance for Foley generation when conditioned on reference audio, while remaining competitive with state-of-the-art video-to-audio methods even without audio conditioning.
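The abstract does not specify AC-Foley's architecture, but the core idea it describes, conditioning a V2A generator on a reference-audio embedding that can optionally be dropped, can be sketched in a toy form. The encoder below (a pooled log-magnitude spectrum) and the zero-embedding fallback are illustrative assumptions, not the paper's actual components; the fallback mirrors how a model can remain usable "even without audio conditioning".

```python
import numpy as np

def encode_reference_audio(waveform, n_bins=16):
    """Toy reference-audio encoder (assumption, not the paper's):
    log-magnitude spectrum pooled into a fixed-size embedding."""
    spec = np.abs(np.fft.rfft(waveform))
    # Pool spectrum into n_bins so the embedding size is constant
    # regardless of the reference clip's length.
    bins = np.array_split(np.log1p(spec), n_bins)
    return np.array([b.mean() for b in bins])

def condition_on_audio(video_features, ref_waveform=None, n_bins=16):
    """Fuse video features with the audio embedding by concatenation.
    When no reference audio is given, substitute a zero embedding so the
    same model can run unconditioned (akin to condition dropout)."""
    if ref_waveform is None:
        audio_emb = np.zeros(n_bins)
    else:
        audio_emb = encode_reference_audio(ref_waveform, n_bins)
    return np.concatenate([video_features, audio_emb])
```

A generator trained on such fused features sees the same input dimensionality with or without a reference clip, which is one simple way to support both the audio-conditioned and unconditioned modes the abstract mentions.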