TypeScript and Python are two programming languages that support optional type annotations, which are useful but tedious to introduce and maintain. This has motivated automated type prediction: given an untyped program, produce a well-typed output program. Large language models (LLMs) are promising for type prediction, but there are challenges: fill-in-the-middle performs poorly, programs may not fit into the context window, generated types may not type check, and it is difficult to measure how well-typed the output program is. We address these challenges by building OpenTau, a search-based approach for type prediction that leverages large language models. We propose a new metric for type prediction quality, give a tree-based program decomposition that searches a space of generated types, and present fill-in-the-type fine-tuning for LLMs. We evaluate our work with a new dataset for TypeScript type prediction, and show that 47.4% of files type check (14.5% absolute improvement) with an overall rate of 3.3 type errors per file. All code, data, and models are available at: this https URL.
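To make the task concrete, here is a minimal, illustrative TypeScript pair of the kind the abstract describes: an untyped input whose missing annotations are the "holes" to fill, and a well-typed output that must pass the TypeScript checker. The function and identifiers are hypothetical examples, not taken from the paper or its dataset.

```typescript
// Untyped input (the annotation slots are the holes a type-prediction model fills):
//   function total(prices) { return prices.reduce((s, p) => s + p, 0); }

// A possible well-typed output: every hole is filled with a predicted annotation,
// and the resulting file is expected to type check as a whole.
function total(prices: number[]): number {
  return prices.reduce((s: number, p: number): number => s + p, 0);
}
```

The paper's metrics (fraction of files that type check, type errors per file) are measured over outputs of this form for entire files, not single functions.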
@misc{cassano:opentau,
  title        = {Type Prediction With Program Decomposition and {Fill-in-the-Type} Training},
  author       = {Cassano, Federico and Yee, Ming-Ho and Shinn, Noah and Guha, Arjun and Holtzen, Steven},
  year         = {2023},
  howpublished = {https://arxiv.org/abs/2305.17145v1},
}