CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks
Authors
- Ruchir Puri
- David S. Kung
- Geert Janssen
- Wei Zhang
- Giacomo Domeniconi
- Vladimir Zolotov
- Julian Dolby
- Jie Chen
- Mihir Choudhury
- Lindsey Decker
- Veronika Thost
- Luca Buratti
- Saurabh Pujar
- Shyam Ramji
- Ulrich Finkler
- Susan Malaika
- Frederick Reiss
Published on
05/25/2021
Over the last several decades, software has been woven into the fabric of every aspect of our society. As software development surges and the code infrastructure of enterprise applications ages, it is now more critical than ever to increase software development productivity and modernize legacy applications. Advances in deep learning and machine learning algorithms have enabled numerous breakthroughs, motivating researchers to leverage AI techniques to improve software development efficiency. Thus, the fast-emerging research area of AI for Code has garnered new interest and gathered momentum. In this paper, we present CodeNet, a large-scale dataset consisting of over 14 million code samples and about 500 million lines of code in 55 different programming languages, aimed at teaching AI to code. In addition to its large scale, CodeNet has a rich set of high-quality annotations to benchmark and help accelerate research in AI techniques for a variety of critical coding tasks, including code similarity and classification, code translation between a large variety of programming languages, and code performance (runtime and memory) improvement techniques. Additionally, CodeNet provides sample input and output test sets for 98.5% of the code samples, which can be used as an oracle for determining code correctness and can potentially guide reinforcement learning for code quality improvements. As a usability feature, we provide several pre-processing tools in CodeNet to transform source code into representations that can be readily used as inputs into machine learning models. Results of code classification and code similarity experiments using the CodeNet dataset are provided as a reference. We hope that the scale, diversity, and rich, high-quality annotations of CodeNet will offer unprecedented research opportunities at the intersection of AI and Software Engineering.
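To illustrate the oracle idea mentioned above, the sketch below runs a code sample against a sample input and compares its output with the expected output. This is a minimal, hypothetical illustration: the function name, the assumption that the sample is a standalone Python program reading stdin, and the whitespace normalization are our own choices, not CodeNet's actual tooling or file layout.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_against_oracle(source: str, sample_input: str,
                       expected_output: str, timeout: float = 5.0) -> bool:
    """Accept/reject oracle: execute a Python code sample on a sample
    input and compare its stdout with the expected output.

    Hypothetical sketch -- assumes the sample is a self-contained
    Python program that reads stdin and writes stdout.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "solution.py"
        src.write_text(source)
        result = subprocess.run(
            [sys.executable, str(src)],
            input=sample_input,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        # Normalize trailing whitespace; judge outputs often differ only there.
        return result.stdout.strip() == expected_output.strip()
```

Such a pass/fail signal can serve as a reward for reinforcement learning on code quality, since it grades generated code by behavior rather than by textual similarity.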
This paper has been published in the NeurIPS 2021 Datasets and Benchmarks track.
Please cite our work using the BibTeX below.
@misc{puri2021codenet,
title={CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks},
author={Ruchir Puri and David S. Kung and Geert Janssen and Wei Zhang and Giacomo Domeniconi and Vladimir Zolotov and Julian Dolby and Jie Chen and Mihir Choudhury and Lindsey Decker and Veronika Thost and Luca Buratti and Saurabh Pujar and Shyam Ramji and Ulrich Finkler and Susan Malaika and Frederick Reiss},
year={2021},
eprint={2105.12655},
archivePrefix={arXiv},
primaryClass={cs.SE}
}