BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge

The University of Queensland, Matrix Verse, Netease Fuxi AI Lab, CSIRO DATA61

Overview of our BAVS framework.

In the first stage, we utilize an off-the-shelf large multi-modal foundation model to extract visual semantics. Based on these visual semantics, we introduce a silent object-aware objective (SOAO) into our segmentation model and thus obtain labels and masks of potential sounding instances. In parallel, we employ a large pre-trained audio classification foundation model to collect semantic tags for each audio recording. In the second stage, we first build an audio-visual tree that maps sound semantics to their potential sounding sources. We then present an audio-visual semantic integration strategy (AVIS) to establish a consistent audio-visual mapping between the segmented instances and the audio semantic tags.
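
For illustration, the sketch below mocks up the first stage in Python: a hypothetical VisualSegmenter stands in for the multi-modal foundation model (with the SOAO-trained segmenter) and a hypothetical AudioTagger for the audio classification model. The class names, toy outputs, and confidence thresholds are assumptions for exposition, not the actual BAVS components.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical stand-ins for the two stage-1 foundation models: a visual
# segmenter that proposes potential sounding instances from the image alone,
# and an audio tagger that classifies the recording into semantic tags.
# Names, outputs, and thresholds here are illustrative assumptions.

@dataclass
class Instance:
    label: str     # predicted object category, e.g. "guitar"
    mask: list     # binary mask of the instance (toy nested list here)
    score: float   # confidence of the prediction


class VisualSegmenter:
    def segment(self, image) -> List[Instance]:
        # Toy output; a real model would run inference on `image`.
        return [Instance("guitar", [[1]], 0.9), Instance("dog", [[1]], 0.4)]


class AudioTagger:
    def tag(self, audio) -> List[Tuple[str, float]]:
        # Toy output; a real model would classify `audio`.
        return [("music", 0.8), ("wind", 0.6)]


def stage_one(image, audio, segmenter, tagger,
              vis_thresh: float = 0.5, aud_thresh: float = 0.3):
    """Collect instance candidates and audio tags independently, so that
    noisy audio cannot corrupt the visually localized candidates."""
    instances = [i for i in segmenter.segment(image) if i.score >= vis_thresh]
    audio_tags = [t for t, s in tagger.tag(audio) if s >= aud_thresh]
    return instances, audio_tags


if __name__ == "__main__":
    inst, tags = stage_one(image=None, audio=None,
                           segmenter=VisualSegmenter(), tagger=AudioTagger())
    print([i.label for i in inst], tags)   # -> ['guitar'] ['music', 'wind']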

Abstract

Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that every sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks that off-screen sounds and background noise often contaminate audio recordings in real-world scenarios. Such contamination makes it significantly harder for AVS models to build a consistent semantic mapping between audio and visual signals, and thus impedes precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework that incorporates multi-modal foundation knowledge. In a nutshell, our BAVS is designed to eliminate the interference of background noise and off-screen sounds in segmentation by establishing audio-visual correspondences in an explicit manner. In the first stage, we employ a segmentation model to localize potential sounding objects from the visual data without being affected by contaminated audio signals. Meanwhile, we utilize a foundation audio classification model to discern audio semantics. Since the audio tags provided by the audio foundation model are noisy, associating object masks with audio tags is not trivial. In the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the authentic sounding objects. Specifically, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. Then, we examine the label concurrency between the localized objects and the classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment genuinely sounding objects. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise.
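
As a rough illustration of the second-stage matching, the sketch below represents the audio-visual tree as a toy dictionary from coarse sound categories to object classes that can plausibly emit them, and keeps only segmented instances whose labels co-occur with a classified audio tag. The tree in BAVS is built hierarchically from sound-object correspondences and is richer than this assumed mapping.

from typing import Dict, List, Set

# Toy audio-visual tree: each coarse sound node lists visual object
# categories that can plausibly produce it. This dictionary is an
# illustrative assumption, not the tree used in the paper.
AUDIO_VISUAL_TREE: Dict[str, Set[str]] = {
    "music":  {"guitar", "piano", "violin"},
    "speech": {"person"},
    "engine": {"car", "motorcycle", "bus"},
    # Off-screen or ambient sounds map to no visible object.
    "wind":   set(),
}


def avis_match(instance_labels: List[str], audio_tags: List[str]) -> Set[str]:
    """Keep only instances whose category co-occurs with at least one
    classified audio tag when tracing the audio-visual tree."""
    sounding: Set[str] = set()
    for tag in audio_tags:
        candidates = AUDIO_VISUAL_TREE.get(tag, set())
        sounding.update(label for label in instance_labels if label in candidates)
    return sounding


if __name__ == "__main__":
    # A noisy clip tagged with both an on-screen sound ("music") and
    # background noise ("wind"); only the guitar survives the check.
    print(avis_match(["guitar", "person", "dog"], ["music", "wind"]))
    # -> {'guitar'}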

Comparisons With the AVS Method

Qualitative comparisons with the state-of-the-art TPAVI on single-sound-source and multi-sound-source cases.

Visual results of adding white noise to the original audio recordings. Red bounding boxes highlight the specific regions for comparison.
Even when the input audio-visual pairs suffer from such interference, our framework still segments sounding sources accurately.

Comparisons With VSL Methods

BibTeX

@article{liu2023bavs,
  title={BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge},
  author={Liu, Chen and Li, Peike and Zhang, Hu and Li, Lincheng and Huang, Zi and Wang, Dadong and Yu, Xin},
  journal={arXiv preprint arXiv:2308.10175},
  year={2023}
}