Research

Fashion IQ: A New Dataset towards Retrieving Images by Natural Language Feedback

CVPR

Published on 05/30/2019

Categories: Computer Vision, CVPR

Conversational interfaces for the detail-oriented retail fashion domain are more natural, expressive, and user-friendly than classical keyword-based search interfaces. In this paper, we introduce the Fashion IQ dataset to support and advance research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images, together with side information consisting of real-world product descriptions and derived visual attribute labels for these images. We provide a detailed analysis of the characteristics of the Fashion IQ data, and present a transformer-based user simulator and interactive image retriever that can seamlessly integrate visual attributes with image features, user feedback, and dialog history, leading to improved performance over the state of the art in dialog-based image retrieval. We believe that our dataset will encourage further work on developing more natural and real-world applicable conversational shopping assistants.
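To make the retrieval setting concrete, here is a minimal sketch of one turn of dialog-based image retrieval: a reference image's features are composed with an encoding of the user's natural-language feedback, and the gallery is re-ranked by similarity to the composed query. All names (`encode_feedback`, `compose`, `rank`) and the toy encoders are hypothetical stand-ins; the paper's actual system uses learned transformer-based models trained on Fashion IQ.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8
# Toy image features for a gallery of 100 garments (stand-in for a CNN encoder).
gallery = rng.normal(size=(100, DIM))

def encode_feedback(text: str) -> np.ndarray:
    """Deterministic toy text encoder standing in for a learned language model."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % DIM] += 1.0
    return vec

def compose(query_feat: np.ndarray, feedback_feat: np.ndarray) -> np.ndarray:
    """Additive composition of image and feedback features (a simple baseline)."""
    return query_feat + feedback_feat

def rank(query: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by cosine similarity to the query."""
    q = query / (np.linalg.norm(query) + 1e-8)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-8)
    return np.argsort(-(g @ q))

# One dialog turn: start from a reference garment, apply feedback, re-rank.
ref_idx = 0
feedback = "is shorter and has a floral print"
composed = compose(gallery[ref_idx], encode_feedback(feedback))
ranking = rank(composed)
print(ranking[:5])  # indices of the top-5 retrieved garments
```

In a multi-turn dialog, the composed query from each turn would carry forward as the reference for the next, which is where the dialog history modeling in the paper comes in.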

This paper has been published at CVPR 2021

Please cite our work using the BibTeX below.

@misc{wu2020fashion,
      title={Fashion IQ: A New Dataset Towards Retrieving Images by Natural Language Feedback}, 
      author={Hui Wu and Yupeng Gao and Xiaoxiao Guo and Ziad Al-Halah and Steven Rennie and Kristen Grauman and Rogerio Feris},
      year={2020},
      eprint={1905.12794},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}