Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
Authors
- David Harwath
- Adria Recasens
- Dídac Surís
- Galen Chuang
- Antonio Torralba
- James Glass
Published on
09/14/2018
Abstract
In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets, demonstrating that our models implicitly learn semantically coupled object and word detectors.
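To make the retrieval setup concrete, the sketch below pairs a convolutional image encoder with a convolutional audio encoder and scores an image-caption pair by pooling a dot-product "matchmap" between every image location and audio frame, trained with a ranking loss against mismatched pairs. It is a minimal sketch in PyTorch under our own assumptions: the layer sizes, pooling strategy, and margin are illustrative and do not reproduce the authors' exact architecture.

# Minimal sketch of the audio-visual matchmap idea described above (PyTorch).
# Layer sizes, pooling choice, and margin are illustrative assumptions,
# not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Maps an RGB image to a spatial grid of embedding vectors."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1),
        )

    def forward(self, images):            # (B, 3, H, W)
        return self.conv(images)          # (B, D, H', W')

class AudioEncoder(nn.Module):
    """Maps a spectrogram of a spoken caption to a temporal sequence of embeddings."""
    def __init__(self, n_mels=40, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(256, embed_dim, 5, stride=2, padding=2),
        )

    def forward(self, spectrograms):       # (B, n_mels, T)
        return self.conv(spectrograms)     # (B, D, T')

def matchmap_similarity(image_feats, audio_feats):
    """Dot-product matchmap between every image location and audio frame,
    pooled to a single image-caption similarity score."""
    B, D, H, W = image_feats.shape
    img = image_feats.view(B, D, H * W)                    # (B, D, HW)
    mm = torch.einsum('bdi,bdt->bit', img, audio_feats)    # (B, HW, T')
    # Max over image locations, then mean over audio frames (one common pooling choice).
    return mm.max(dim=1).values.mean(dim=1)                # (B,)

def ranking_loss(image_feats, audio_feats, margin=1.0):
    """Triplet ranking loss over a batch: matched image-caption pairs should
    score higher than mismatched ones obtained by shifting the batch by one."""
    pos = matchmap_similarity(image_feats, audio_feats)
    neg_audio = matchmap_similarity(image_feats, audio_feats.roll(1, dims=0))
    neg_image = matchmap_similarity(image_feats.roll(1, dims=0), audio_feats)
    return (F.relu(margin - pos + neg_audio) + F.relu(margin - pos + neg_image)).mean()

Because the matchmap keeps a score for every (image location, audio frame) pair before pooling, inspecting it after training is what lets localized object and word detectors emerge without any labels or alignments.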
Please cite our work using the BibTeX below.
@InProceedings{Harwath_2018_ECCV,
  author    = {Harwath, David and Recasens, Adria and Suris, Didac and Chuang, Galen and Torralba, Antonio and Glass, James},
  title     = {Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}