StarCoder: May the Source be With You!

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries.
Transactions on Machine Learning Research (TMLR), 2023

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
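The infilling capability mentioned above refers to fill-in-the-middle (FIM) prompting, where the model completes a gap given the code before and after it. The sketch below shows one way to try this with Hugging Face transformers; the checkpoint id bigcode/starcoder and the <fim_prefix>/<fim_suffix>/<fim_middle> sentinel tokens are assumptions about the public release, not details stated in the abstract.

# Minimal FIM sketch for StarCoder via Hugging Face transformers.
# NOTE: the checkpoint id and the FIM sentinel token names are
# assumptions about the public release.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # assumed Hub id of the released model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Fill-in-the-middle: the model sees the code before and after a gap
# and generates the missing middle after the <fim_middle> sentinel.
prefix = "def fibonacci(n):\n    "
suffix = "\n\nprint(fibonacci(10))\n"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))

Replacing the FIM prompt with a plain code prefix gives ordinary left-to-right completion.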

PDF available on arXiv
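For context on the HumanEval score quoted in the abstract: pass@1 is conventionally reported with the unbiased pass@k estimator of Chen et al. (2021), where n samples are drawn per problem and c of them pass the unit tests. The sketch below illustrates that formula; it is not the paper's evaluation harness.

# Unbiased pass@k estimator (Chen et al., 2021):
# pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    # Numerically stable product form of 1 - C(n-c, k) / C(n, k).
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustration: 200 samples per problem with 80 passing gives
# a pass@1 estimate of 0.40, the score format used in the abstract.
print(pass_at_k(n=200, c=80, k=1))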

@article{li:starcoder,
  title="{StarCoder}: may the source be with you!",
  author="Raymond Li and Loubna Ben Allal and Yangtian Zi and
          Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and
          Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and
          Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and
          Thomas Wang and Olivier Dehaene and Mishig Davaadorj and
          Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and
          Nicolas Gontier and Nicholas Meade and Armel Zebaze and
          Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and
          Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and
          Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and
          Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and
          Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and
          Swayam Singh and Sasha Luccioni and Paulo Villegas and
          Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and
          Nadav Timor and Jennifer Ding and Claire Schlesinger and
          Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and
          Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and
          Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and
          Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and
          Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and
          Arjun Guha and Leandro von Werra and Harm de Vries",
  journal="Transactions on Machine Learning Research",
  year=2023,
}