How Beginning Programmers and Code LLMs (Mis)read Each Other

Hannah McLean Babe, Sydney Nguyen, Yangtian Zi, Arjun Guha, Carolyn Jane Anderson, and Molly Q Feldman
Accepted pending revisions to the ACM Conference on Human Factors in Computing Systems (CHI), 2024

Generative AI models, specifically large language models (LLMs), have made strides towards the long-standing goal of text-to-code generation. This progress has invited numerous studies of user interaction. However, less is known about the struggles and strategies of non-experts, for whom each step of the text-to-code problem presents challenges: describing their intent in natural language, evaluating the correctness of generated code, and editing prompts when the generated code is incorrect. This paper presents a large-scale controlled study of how 120 beginning coders across three academic institutions approach writing and editing prompts. A novel experimental design allows us to target specific steps in the text-to-code process and reveals that beginners struggle with writing and editing prompts, even for problems at their skill level and when correctness is automatically determined. Our mixed-methods evaluation provides insight into student processes and perceptions, with key implications for non-expert Code LLM use within and outside of education.

PDF available on arXiv