
Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding

Neuro-Symbolic AI

Authors

Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, Joshua B. Tenenbaum

Published on

October 4, 2018

We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our neural-symbolic visual question answering (NS-VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three unique advantages. First, executing programs in a symbolic space is more robust to long program traces; our model can solve complex reasoning tasks better, achieving an accuracy of 99.8% on the CLEVR dataset. Second, the model is more data- and memory-efficient: it performs well after training on a small amount of data, and it can encode an image into a compact representation, requiring less storage than existing methods for offline question answering. Third, symbolic program execution offers full transparency into the reasoning process; we are thus able to interpret and diagnose each execution step.
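To make the pipeline concrete, below is a minimal Python sketch of the symbolic execution stage. The scene schema, the operator names (scene, filter_color, filter_shape, count), and the example program are illustrative assumptions rather than the paper's exact definitions; the point is that each reasoning step is a discrete, inspectable operation over a table of detected objects.

# Minimal sketch of symbolic program execution over a structural scene
# representation, in the spirit of NS-VQA. The schema and operator names
# below are illustrative assumptions, not the paper's exact definitions.

# Structural scene representation: one record per detected object.
scene = [
    {"shape": "cube", "color": "red", "size": "large", "material": "metal"},
    {"shape": "sphere", "color": "blue", "size": "small", "material": "rubber"},
    {"shape": "cube", "color": "blue", "size": "large", "material": "rubber"},
]

# Each operator maps the current state (a set of objects) and an
# optional argument to a new value.
def scene_op(objects, _):
    return objects

def filter_attr(attr):
    def op(objects, value):
        return [o for o in objects if o[attr] == value]
    return op

def count_op(objects, _):
    return len(objects)

OPS = {
    "scene": scene_op,
    "filter_color": filter_attr("color"),
    "filter_shape": filter_attr("shape"),
    "count": count_op,
}

def execute(program, scene):
    """Run a linear program trace step by step; every intermediate
    state is a plain Python value, so each step can be inspected."""
    state = scene
    for op_name, arg in program:
        state = OPS[op_name](state, arg)
        print(f"{op_name}({arg!r}) -> {state}")  # expose each step
    return state

# Question: "How many blue cubes are there?" parsed into a program trace.
program = [
    ("scene", None),
    ("filter_color", "blue"),
    ("filter_shape", "cube"),
    ("count", None),
]
print("answer:", execute(program, scene))  # answer: 1

Because every intermediate result is an explicit set of objects, a wrong answer can be traced to the exact step that produced it, which is the transparency property described above.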

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1810-02338,
  author    = {Kexin Yi and
               Jiajun Wu and
               Chuang Gan and
               Antonio Torralba and
               Pushmeet Kohli and
               Joshua B. Tenenbaum},
  title     = {Neural-Symbolic {VQA:} Disentangling Reasoning from Vision and Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.02338},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.02338},
  archivePrefix = {arXiv},
  eprint    = {1810.02338},
  timestamp = {Tue, 30 Oct 2018 10:49:09 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-02338.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}