On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments

Mohammed Hassan, Yuxuan Chen, Paul Denny, Craig Zilles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Novice programmers often struggle to develop computational thinking (CT) skills in introductory programming courses. This study investigates the use of Large Language Models (LLMs) to provide scalable, strategy-driven feedback for teaching CT. Through think-aloud interviews with 17 students solving code comprehension and code writing tasks, we found that LLMs effectively guided students in decomposing problems and in using program development tools. Challenges included students seeking direct answers or pasting feedback back into the LLM without considering the suggested strategies. We discuss how instructors should integrate LLMs into assessments to support students' learning of CT.

Original language: English (US)
Title of host publication: SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education
Publisher: Association for Computing Machinery
Pages: 471-477
Number of pages: 7
ISBN (Electronic): 9798400705311
DOIs
State: Published - Feb 18, 2025
Event: 56th Annual SIGCSE Technical Symposium on Computer Science Education, SIGCSE TS 2025 - Pittsburgh, United States
Duration: Feb 26, 2025 - Mar 1, 2025

Publication series

Name: SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education
Volume: 1

Conference

Conference: 56th Annual SIGCSE Technical Symposium on Computer Science Education, SIGCSE TS 2025
Country/Territory: United States
City: Pittsburgh
Period: 2/26/25 - 3/1/25

Keywords

  • Large Language Models
  • code comprehension
  • debuggers
  • execution

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Education
