Can newbie coders use ChatGPT instead of learning to write code?
Tue 08.06.24 / Olivia Mintz
Khoury professor Arjun Guha and doctoral student Yangtian Zi recently co-authored a research paper on generative AI and large language models (LLMs). The paper, titled “How Beginning Programmers and Code LLMs (Mis)read Each Other,” was featured at the ACM CHI conference in May.
READ: Khoury researchers showcase college-record 28 works at CHI 2024
Motivated by the spread of LLMs like ChatGPT and their own experience with model development, Guha and Zi worked alongside Wellesley College’s Sydney Nguyen and Carolyn Jane Anderson, as well as Oberlin College’s Hannah McLean Babe and Molly Feldman, to gather 120 students from their schools, all with limited coding experience. Through several 75-minute Zoom calls, the researchers attempted to teach the students to complete three coding exercises. But instead of writing code themselves, the students would instruct the AI models to write it.
“If ChatGPT or other generative AI models can write code, then programming is going to be obsolete eventually,” Zi says. “But this is not really the case, because there’s a higher level of difficulty in learning how to code.”
“As I was using this technology when it first came out, I was heavily relying on my technical vocabulary,” Guha adds. “So, I thought that these wild claims that now anyone can write code were probably bunk, and I sort of set out to disprove them.”
When determining what types of problems to pose to the students, the researchers selected tasks from previously used assignments and exams.
“We know that students can do [these tasks] themselves; they can write the code directly because they’ve completed the class successfully,” Guha adds. “But when we asked [the students in our study] to not write the code directly, but to tell the model in English to write the code for them, they had a really hard time doing it successfully.”
This was because the students lacked coding experience; namely, they were unfamiliar with Python syntax and coding terminology, and thus couldn’t accurately describe the problems to the AI models.
“People still sometimes give detailed explanations, but it’s still incorrect,” Zi says. “This leads to students entering the prompt in a way that misleads the AI to generate code that is not intended for the problem.”
Part of the project’s mission was to empower students with tools to help them succeed in coding, especially for students already at a disadvantage due to a gap in resources.
“We’ve put a lot of effort in nationally, but also here [at Khoury College], to close the gaps that emerge in our lower-level classes,” Guha says. “With a little bit of extra experience, you can use this technology effectively. However, if first-generation college students have a harder time using this technology or don’t understand how to leverage it, that’s not good for the gaps that we’re trying to close.”
So in short, these models are not rendering coding instruction, or computing education as a whole, obsolete. The team emphasized that having some technical knowledge was important to completing these coding tasks, with AI merely serving as a helpful launching point. Zi adds that students who are unfamiliar with Python syntax and terminology would likely find it especially difficult to ask the models specific questions.
“Understanding the concept of programming is still very important even in this age of generative AI,” Zi says.
But despite the challenges they pose for inexperienced coders, and despite the difficulty of using them properly without sufficient accompanying resources, the researchers say that the models can benefit users who understand coding principles, and that the LLMs have been a positive advancement overall.
“They can do a lot of good,” Guha says. “And we’re trying to make them better.”