Validating, Refining, and Identifying Programming Plans Using Learning Curve Analysis on Code Writing Data

Mehmet Arif Demirtaş, Max Fowler, Nicole Hu, Kathryn Cunningham

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Abstract

Background and Context: A major difference between expert and novice programmers is the ability to recognize and apply common, meaningful patterns in code. Previous work has attempted to identify these patterns as programming plans, such as counting or filtering the items of a collection. However, these efforts relied primarily on expert opinion and yielded many varied sets of plans. No method has been applied to evaluate how well these proposed programming plans align with novices' cognitive development.

Objectives: In this work, we investigate which programming plans are learned as discrete skills. To this end, we evaluate how well students transfer their knowledge between problems that test a particular plan. Further, we explore how plan definitions can be improved to better represent student cognition, using historical data on student performance collected from a programming course.

Method: We apply learning curve analysis, a method for modeling student improvement on problems that test a particular skill, using programming plans as a skill model. Specifically, we study student submissions to code-writing exercises in Python from a CS1 class for non-majors that includes many small programming problems as well as implicit and explicit instruction on patterns. We compare the learning curves for ten programming plans across seven semesters of the same course.

Findings: Students develop the skill of using some programming plans in their solutions, indicated by consistent declines in error rates across practice opportunities for a subset of plans over multiple semesters with varying conditions. The most consistently learned plans have clear, concrete goals that can be explained in natural language, as opposed to abstract definitions or explanations in terms of language structures.

Implications: We show that learning curve analysis can be used to empirically assess the cognitive validity of proposed programming plans, as well as to compare plan models. However, our work also suggests that instructors should be cautious in assuming that introductory programming students can apply more abstract programming plans to solve new problems, as plans with greater specificity tend to better explain the learning process in our observations.
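
The core idea behind learning curve analysis is that if a programming plan is a genuine, discrete skill, students' error rates on exercises requiring that plan should decline steadily with each practice opportunity. As a rough illustration of that idea only (the data, the plan name, and the power-law model below are hypothetical assumptions, not drawn from the paper, which may use a different statistical model, such as the Additive Factors Model commonly paired with learning curve analysis), a minimal sketch in Python:

    # Hypothetical sketch: fit a power-law learning curve to aggregate
    # error rates for one plan across students' first six practice
    # opportunities. None of these numbers come from the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(opportunity, a, b):
        # Predicted error rate at the n-th practice opportunity:
        # error = a * opportunity^(-b)
        return a * opportunity ** (-b)

    # Hypothetical aggregate error rates for a plan such as
    # "count the items in a collection".
    opportunities = np.arange(1, 7)
    error_rates = np.array([0.52, 0.38, 0.30, 0.26, 0.22, 0.21])

    params, _ = curve_fit(power_law, opportunities, error_rates, p0=(0.5, 0.5))
    a, b = params
    print(f"fit: error ~ {a:.2f} * opportunity^(-{b:.2f})")

    # Interpretation: a clearly positive decay rate b (errors declining
    # with practice) is evidence that the plan behaves like a discrete,
    # learnable skill; a flat curve suggests the skill model does not
    # capture what students are actually learning.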

Original language: English (US)
Title of host publication: ICER 2024 - ACM Conference on International Computing Education Research
Publisher: Association for Computing Machinery
Pages: 263-279
Number of pages: 17
ISBN (Electronic): 9798400704765
DOIs
State: Published - Aug 13 2024
Event: 20th Annual ACM Conference on International Computing Education Research, ICER 2024 - Melbourne, Australia
Duration: Aug 13 2024 – Aug 15 2024

Publication series

Name: ICER 2024 - ACM Conference on International Computing Education Research
Volume: 1

Conference

Conference: 20th Annual ACM Conference on International Computing Education Research, ICER 2024
Country/Territory: Australia
City: Melbourne
Period: 8/13/24 – 8/15/24

Keywords

  • CS1
  • knowledge components
  • learning curve analysis
  • programming patterns
  • programming plans

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Software
  • Education
