BlockDrop: Dynamic Inference Paths in Residual Networks

Computer Vision


  • Zuxuan Wu
  • Tushar Nagarajan
  • Abhishek Kumar
  • Steven Rennie
  • Larry S. Davis
  • Kristen Grauman
  • Rogerio Feris


Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20% on average, going as high as 36% for some images, while maintaining the same 76.4% top-1 accuracy on ImageNet.
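The core idea above, a policy that decides per image which residual blocks to execute, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the "blocks" are simple scalar functions standing in for convolutional residual transforms, and the policy is a hand-written 0/1 vector standing in for the policy network's per-image output.

```python
# Minimal sketch of BlockDrop-style inference (illustrative toy, not the
# paper's code). The standard ResNet update is x = x + f(x); dropping a
# block skips its transform f, and the identity shortcut simply carries x
# forward. This robustness to layer dropping is what the method exploits.

def residual_forward(x, blocks, policy):
    """Run a stack of residual blocks, executing only those the policy keeps.

    x      : input activation (a float here, a tensor in practice)
    blocks : list of callables, one residual transform per block
    policy : list of 0/1 decisions (1 = execute the block, 0 = skip it)
    """
    for f, keep in zip(blocks, policy):
        if keep:
            x = x + f(x)  # standard residual update
        # else: block skipped entirely; no compute is spent on f
    return x

# Toy blocks standing in for convolutional residual transforms.
blocks = [lambda x: 0.1 * x, lambda x: -0.2 * x, lambda x: 0.05 * x]

full = residual_forward(1.0, blocks, [1, 1, 1])  # execute every block
fast = residual_forward(1.0, blocks, [1, 0, 1])  # policy drops block 2
```

In the actual method the 0/1 vector is produced in a single pass by a small policy network conditioned on the input image, so the cost of choosing a path is far lower than the savings from the skipped blocks.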

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1711-08393,
  author    = {Zuxuan Wu and
               Tushar Nagarajan and
               Abhishek Kumar and
               Steven Rennie and
               Larry S. Davis and
               Kristen Grauman and
               Rog{\'{e}}rio Schmidt Feris},
  title     = {BlockDrop: Dynamic Inference Paths in Residual Networks},
  journal   = {CoRR},
  volume    = {abs/1711.08393},
  year      = {2017},
  archivePrefix = {arXiv},
  eprint    = {1711.08393},
  timestamp = {Wed, 16 Oct 2019 14:14:57 +0200},
  bibsource = {dblp computer science bibliography}
}
