
MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning

NeurIPS

Authors

Ji Lin, Wei-Ming Chen, Han Cai, Chuang Gan, Song Han

Published on

10/28/2021

Tiny deep learning on microcontroller units (MCUs) is challenging due to the limited memory size. We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs: the first several blocks have an order of magnitude larger memory usage than the rest of the network. To alleviate this issue, we propose a generic patch-by-patch inference scheduling, which operates only on a small spatial region of the feature map and significantly cuts down the peak memory. However, a naive implementation introduces overlapping patches and computation overhead. We further propose network redistribution to shift the receptive field and FLOPs to the later stage, reducing the computation overhead. Because manually redistributing the receptive field is difficult, we automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2. Patch-based inference effectively reduces the peak memory usage of existing networks by 4-8x. Co-designed with neural networks, MCUNetV2 sets a record ImageNet accuracy on MCU (71.8%), and achieves >90% accuracy on the visual wake words dataset under only 32 kB of SRAM. MCUNetV2 also unblocks object detection on tiny devices, achieving 16.9% higher mAP on Pascal VOC compared to the state-of-the-art result. Our study largely addressed the memory bottleneck in tinyML and paved the way for various vision applications beyond image classification.
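To make the patch-by-patch idea concrete, below is a minimal PyTorch sketch; it is not the authors' implementation, and the names and settings (EarlyStage, patch_based_forward, n_patches, halo) are illustrative assumptions. It runs a memory-heavy early stage on small spatial crops, each padded with a halo of extra border pixels to cover the stage's receptive field, then stitches the per-patch outputs. The overlapping halos are exactly the recomputation overhead that MCUNetV2 reduces by redistributing the receptive field to later stages.

# Minimal sketch of patch-by-patch inference (illustrative, not the MCUNetV2 code).
import torch
import torch.nn as nn

class EarlyStage(nn.Sequential):
    """Hypothetical memory-heavy early stage: two stride-2 convs (4x downsampling)."""
    def __init__(self):
        super().__init__(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

def patch_based_forward(stage: nn.Module, x: torch.Tensor, n_patches: int = 2,
                        downsample: int = 4, halo: int = 8) -> torch.Tensor:
    """Run `stage` on an n_patches x n_patches grid of crops and stitch the outputs.

    `halo` extra input pixels around each crop cover the stage's receptive field;
    this overlap is the computation overhead of naive patch-based inference.
    """
    _, _, H, W = x.shape
    ph, pw = H // n_patches, W // n_patches          # input patch size
    oh, ow = ph // downsample, pw // downsample      # output patch size
    out = None
    for i in range(n_patches):
        for j in range(n_patches):
            # Crop the patch plus its halo, clamped to the image border.
            t, l = max(i * ph - halo, 0), max(j * pw - halo, 0)
            b, r = min((i + 1) * ph + halo, H), min((j + 1) * pw + halo, W)
            y = stage(x[:, :, t:b, l:r])             # only this crop's activations are live
            if out is None:
                out = x.new_zeros(x.shape[0], y.shape[1], H // downsample, W // downsample)
            # Discard the halo region and keep only the patch's central output.
            ot, ol = (i * ph - t) // downsample, (j * pw - l) // downsample
            out[:, :, i * oh:(i + 1) * oh, j * ow:(j + 1) * ow] = y[:, :, ot:ot + oh, ol:ol + ow]
    return out

if __name__ == "__main__":
    stage = EarlyStage().eval()
    img = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        per_patch = patch_based_forward(stage, img)
        full = stage(img)
    print(per_patch.shape, torch.allclose(per_patch, full, atol=1e-5))

With these settings (224x224 input, a 2x2 patch grid, and an 8-pixel halo for the two-conv stage), the stitched result should match running the stage on the full image, while only one crop's activations are materialized at a time; the later, lower-resolution blocks would then run once on the stitched feature map.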

This paper has been published at NeurIPS 2021

Please cite our work using the BibTeX below.

@misc{lin2021mcunetv2,
      title={MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning}, 
      author={Ji Lin and Wei-Ming Chen and Han Cai and Chuang Gan and Song Han},
      year={2021},
      eprint={2110.15352},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}