Identifying Interpretable Action Concepts in Deep Networks
Authors
- Dan Gutfreund
- Rogerio Feris
- Aude Oliva
- Kandan Ramakrishnan
- Mathew Monfort
- Barry A McNamara
- Alex Lascelles
Published on
06/20/2019
A number of recent methods for understanding neural networks have focused on quantifying the role of individual features. One such method, NetDissect, identifies interpretable features of a model using the Broden dataset of visual semantic labels (colors, materials, textures, objects, and scenes). Given the recent rise of action recognition datasets, we propose extending the Broden dataset to include actions in order to better analyze learned action models. We describe the annotation process and present results from interpreting action recognition models on the extended Broden dataset.
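To make the NetDissect-style analysis concrete, the sketch below illustrates the general idea of scoring individual convolutional units against concept segmentation masks via intersection-over-union: a unit whose thresholded activation map overlaps strongly with a concept's masks is labeled as a detector for that concept. This is a minimal illustration, not the authors' implementation; the array shapes, the `quantile` threshold, and the `iou_cutoff` value are assumptions, and it omits details such as upsampling feature maps to the mask resolution.

```python
import numpy as np

def score_units(activations, concept_masks, quantile=0.995, iou_cutoff=0.04):
    """Assign an interpretable concept label to each unit, NetDissect-style.

    activations:   (num_images, num_units, H, W) feature maps for one layer.
    concept_masks: dict mapping concept name -> (num_images, H, W) binary masks,
                   assumed already resized to the feature-map resolution.
    """
    num_units = activations.shape[1]
    labels = {}
    for u in range(num_units):
        act = activations[:, u]                  # (N, H, W) maps for this unit
        thresh = np.quantile(act, quantile)      # per-unit activation threshold
        unit_mask = act > thresh                 # binarized activation regions
        best_concept, best_iou = None, 0.0
        for concept, mask in concept_masks.items():
            inter = np.logical_and(unit_mask, mask).sum()
            union = np.logical_or(unit_mask, mask).sum()
            iou = inter / union if union > 0 else 0.0
            if iou > best_iou:
                best_concept, best_iou = concept, iou
        if best_iou >= iou_cutoff:               # keep only confidently matched units
            labels[u] = (best_concept, best_iou)
    return labels
```

Extending Broden with action labels, as proposed in the paper, would add temporally grounded concepts to the `concept_masks` dictionary so that units in action recognition models can be matched against actions as well as colors, materials, textures, objects, and scenes.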
This work was presented at CVPR 2019.
Please cite our work using the BibTeX below.
@InProceedings{Ramakrishnan_2019_CVPR_Workshops,
author = {Ramakrishnan, Kandan and Monfort, Mathew and A McNamara, Barry and Lascelles, Alex and Gutfreund, Dan and Feris, Rogerio and Oliva, Aude},
title = {Identifying Interpretable Action Concepts in Deep Networks},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}