Fashion IQ: A New Dataset towards Retrieving Images by Natural Language Feedback
Authors
- Hui Wu
- Yupeng Gao
- Xiaoxiao Guo
- Ziad Al-Halah
- Steven Rennie
- Kristen Grauman
- Rogerio Feris
Published on
05/30/2019
Abstract
Conversational interfaces for the detail-oriented retail fashion domain are more natural, expressive, and user-friendly than classical keyword-based search interfaces. In this paper, we introduce the Fashion IQ dataset to support and advance research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images, together with side information consisting of real-world product descriptions and derived visual attribute labels for these images. We provide a detailed analysis of the characteristics of the Fashion IQ data, and present a transformer-based user simulator and interactive image retriever that can seamlessly integrate visual attributes with image features, user feedback, and dialog history, leading to improved performance over the state of the art in dialog-based image retrieval. We believe that our dataset will encourage further work on developing more natural and real-world applicable conversational shopping assistants.
Please cite our work using the BibTeX below.
@InProceedings{Wu_2021_CVPR,
    author    = {Wu, Hui and Gao, Yupeng and Guo, Xiaoxiao and Al-Halah, Ziad and Rennie, Steven and Grauman, Kristen and Feris, Rogerio},
    title     = {Fashion IQ: A New Dataset Towards Retrieving Images by Natural Language Feedback},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {11307-11317}
}