The Sound of Pixels

Multimodal Learning


We introduce PixelPlayer, a system that, by leveraging large amounts of unlabeled videos, learns to locate image regions that produce sound and to separate the input sounds into a set of components, each representing the sound from one pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images without requiring additional manual supervision. Experimental results on a newly collected MUSIC dataset show that our proposed Mix-and-Separate framework outperforms several baselines on source separation. Qualitative results suggest our model learns to ground sounds in vision, enabling applications such as independently adjusting the volume of each sound source.
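To make the Mix-and-Separate idea concrete, below is a minimal training-step sketch in PyTorch. It is not the released PixelPlayer code: the video and audio networks are passed in as placeholder modules, the number of audio basis channels K is a hyperparameter assumed here, and the ratio-mask target is one variant among the masks the paper discusses.

  # Hedged sketch of the Mix-and-Separate objective (not the authors' code).
  # `video_net` and `audio_net` are assumed placeholder modules: video_net maps
  # frames to a pooled K-dim pixel feature, and audio_net maps a mixture
  # spectrogram to K feature channels.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  K = 16  # number of audio basis channels (hyperparameter, assumed here)

  class Synthesizer(nn.Module):
      """Combines a visual feature with audio channels into a spectrogram mask."""
      def __init__(self, k=K):
          super().__init__()
          self.weight = nn.Linear(k, k)

      def forward(self, pixel_feat, audio_channels):
          # pixel_feat: (B, K); audio_channels: (B, K, freq, time)
          w = self.weight(pixel_feat)                        # (B, K)
          mask = (w[:, :, None, None] * audio_channels).sum(dim=1)
          return torch.sigmoid(mask)                         # (B, freq, time)

  def mix_and_separate_loss(video_net, audio_net, synth,
                            frames_a, spec_a, frames_b, spec_b):
      # Mix the two training clips; adding magnitude spectrograms approximates
      # mixing the waveforms, which is the actual mixing step in the paper.
      spec_mix = spec_a + spec_b
      channels = audio_net(spec_mix)                         # (B, K, freq, time)
      loss = 0.0
      for frames, spec in ((frames_a, spec_a), (frames_b, spec_b)):
          pixel_feat = video_net(frames)                     # (B, K)
          pred_mask = synth(pixel_feat, channels)
          # Ratio-mask target; the paper also evaluates binary mask targets.
          target = (spec / spec_mix.clamp(min=1e-8)).clamp(0.0, 1.0)
          loss = loss + F.binary_cross_entropy(pred_mask, target)
      return loss

At separation time, the same synthesizer can be queried with the feature of a single pixel rather than a pooled one, yielding that pixel's sound, which is what enables the per-source volume adjustment mentioned above.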

Please cite our work using the BibTeX below.

  @article{DBLP:journals/corr/abs-1804-03160,
    author    = {Hang Zhao and
                 Chuang Gan and
                 Andrew Rouditchenko and
                 Carl Vondrick and
                 Josh H. McDermott and
                 Antonio Torralba},
    title     = {The Sound of Pixels},
    journal   = {CoRR},
    volume    = {abs/1804.03160},
    year      = {2018},
    url       = {http://arxiv.org/abs/1804.03160},
    archivePrefix = {arXiv},
    eprint    = {1804.03160},
    timestamp = {Mon, 13 Aug 2018 16:47:59 +0200},
    biburl    = {https://dblp.org/rec/journals/corr/abs-1804-03160.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
  }
