The goal of Audio-Visual Segmentation (AVS) is to localize and segment sounding source objects in video frames. Research on AVS suffers from data scarcity due to the high cost of fine-grained manual annotations. Recent works attempt to overcome this limited-data challenge by leveraging a vision foundation model, the Segment Anything Model (SAM), prompting it with audio to enhance its ability to segment sounding source objects. While this approach reduces the model's burden of understanding the visual modality by exploiting the knowledge of pre-trained SAM, it does not address the fundamental challenge of learning audio-visual correspondence with limited data. To address this limitation, we propose AV2T-SAM, a novel framework that bridges audio features with the text embedding space of a pre-trained text-prompted SAM. Our method leverages multimodal correspondence learned from rich text-image paired datasets to enhance audio-visual alignment. Furthermore, we introduce a novel feature, f_CLIP ⊙ f_CLAP, which emphasizes the semantics shared by the audio and visual modalities while filtering out irrelevant noise. Our approach outperforms existing methods on the AVSBench dataset by effectively utilizing pre-trained segmentation models and cross-modal semantic alignment.
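The following is a minimal sketch, assuming PyTorch and pre-computed, same-dimensional CLIP (visual) and CLAP (audio) embeddings, of how an f_CLIP ⊙ f_CLAP feature could be formed as an element-wise product before prompting a text-prompted SAM. Function and variable names are illustrative and not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fused_prompt_feature(f_clip: torch.Tensor, f_clap: torch.Tensor) -> torch.Tensor:
    """Element-wise product of L2-normalized CLIP and CLAP embeddings.

    f_clip: (B, D) image embedding from a CLIP visual encoder
    f_clap: (B, D) audio embedding from a CLAP audio encoder
    (assumed here to share the same dimension D)
    """
    f_clip = F.normalize(f_clip, dim=-1)
    f_clap = F.normalize(f_clap, dim=-1)
    # Dimensions active in both modalities are emphasized; dimensions active
    # in only one modality are pushed toward zero, filtering unshared content.
    return f_clip * f_clap

# Toy usage with random tensors standing in for real encoder outputs.
f_clip = torch.randn(2, 512)
f_clap = torch.randn(2, 512)
prompt = fused_prompt_feature(f_clip, f_clap)  # (2, 512), fed as a prompt feature
```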