Disentangling Visual and Written Concepts in CLIP
Authors
- Antonio Torralba
- Joanna Materzyńska
- David Bau
Published on
06/24/2022
The CLIP network measures the similarity between natural text and images; in this work, we investigate the entanglement of the representations of word images and natural images in its image encoder. First, we find that the image encoder has the ability to match word images with natural images of scenes described by those words. This is consistent with previous research suggesting that the meaning and the spelling of a word may be entangled deep within the network. On the other hand, we also find that CLIP has a strong ability to match nonsense words, suggesting that the processing of letters is separated from the processing of their meaning. To explicitly determine whether the spelling capability of CLIP is separable, we devise a procedure for identifying representation subspaces that selectively isolate or eliminate spelling capabilities. We benchmark our methods against a range of retrieval tasks, and we also test them by measuring the appearance of text in CLIP-guided generated images. We find that our methods are able to cleanly separate the spelling capabilities of CLIP from its visual processing of natural images.
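The core operation behind eliminating a representation subspace can be illustrated with a minimal orthogonal-projection sketch in NumPy. Everything here is an assumption for illustration: the function name, toy dimensions, and random features are placeholders, and the paper's actual procedure for learning the "spelling" subspace from data is not shown.

```python
import numpy as np

def remove_subspace(emb, basis):
    """Project embeddings onto the orthogonal complement of a subspace.

    emb:   (n, d) array of embedding vectors (random stand-ins here,
           not real CLIP features)
    basis: (k, d) orthonormal rows spanning the subspace to eliminate
           (e.g. a hypothetical learned "spelling" subspace)
    """
    return emb - emb @ basis.T @ basis

# Toy demonstration with random data.
rng = np.random.default_rng(0)
d, k, n = 64, 4, 10
q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # (d, k) orthonormal columns
basis = q.T                                       # (k, d) orthonormal rows
emb = rng.standard_normal((n, d))
cleaned = remove_subspace(emb, basis)

# Components along the removed subspace are now numerically zero.
print(np.abs(cleaned @ basis.T).max() < 1e-10)  # → True
```

Keeping only the subspace (isolating rather than eliminating spelling) would be the complementary projection, `emb @ basis.T @ basis`.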
Please cite our work using the BibTeX below.
@InProceedings{Materzynska_2022_CVPR,
author = {Materzy\'nska, Joanna and Torralba, Antonio and Bau, David},
title = {Disentangling Visual and Written Concepts in CLIP},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {16410-16419}
}