Evaluation of LLMs and Other Machine Learning Methods in the Analysis of Qualitative Survey Responses for Accessible Engineering Education Research

Xiuhao Ding, Meghana Gopannagari, Kang Sun, Alan Tao, Delu Louis Zhao, Sujit Varadhan, Bobbi Lee Battleson Hardy, David Dalpiaz, Chrysafis Vogiatzis, Lawrence Angrave, Hongye Liu

Research output: Contribution to journal › Conference article › peer-review

Abstract

This research paper provides insights and guidance for selecting appropriate analytical tools in engineering education research. Currently, educators and researchers face difficulties in efficiently extracting insights from free-response survey data. We evaluate the effectiveness and accuracy of Large Language Models (LLMs) alongside existing methods that employ topic modeling, document clustering coupled with Support Vector Machine (SVM) and Random Forest (RF) classifiers, and the unsupervised Latent Dirichlet Allocation (LDA) method. Free responses to open-ended questions from student surveys in multiple courses at the University of Illinois Urbana-Champaign were previously collected by engineering education accessibility researchers. The data (N=129, with seven free-response questions per student) were previously analyzed to assess the effectiveness, satisfaction, and quality of adding accessible digital notes to multiple engineering courses, as well as the students' perceived belongingness and self-efficacy. Manual codes for the seven open-ended questions were generated for the qualitative tasks of sentiment analysis, topic modeling, and summarization, and were used in this study as a gold standard for evaluating automated text-analytic approaches. Raw text from the open-ended questions was converted into numerical vectors using text vectorization and word embeddings, and unsupervised document clustering and topic modeling were performed using LDA and BERT-based methods. In addition to conventional machine learning models, multiple pre-trained, open-source local LLMs (BART and LLaMA) were evaluated for summarization. OpenAI's remote, closed-model ChatGPT services (ChatGPT-3.5 and ChatGPT-4) were excluded due to subject data privacy concerns. By comparing accuracy, recall, and the depth of thematic insights derived, we evaluated how effectively each model-based method categorized and summarized students' responses across the educational research interests of effectiveness, satisfaction, and quality of educational materials. The paper presents these results and discusses the implications of our findings and conclusions.
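To make the unsupervised branch of the pipeline described above concrete, the sketch below shows count-based text vectorization of free-text responses followed by LDA topic modeling. The library choice (scikit-learn), the toy responses, and parameter values such as the number of topics are illustrative assumptions, not the authors' exact configuration or data.

```python
# Hedged sketch: vectorize free-text survey responses and fit an LDA topic model.
# Assumes scikit-learn; responses and hyperparameters below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "The accessible digital notes helped me review lectures at my own pace.",
    "Captioned videos and notes improved my confidence in the course.",
    "I rarely used the notes but found the lecture recordings useful.",
]

# LDA expects raw term counts, so we use CountVectorizer here.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # per-document topic proportions

# Inspect the top words associated with each discovered topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```

In the study itself, topics and summaries produced by automated methods such as this were compared against the manual codes (the gold standard) using accuracy and recall.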

Original language: English (US)
Journal: ASEE Annual Conference and Exposition, Conference Proceedings
State: Published - Jun 23 2024
Event: 2024 ASEE Annual Conference and Exposition - Portland, United States
Duration: Jun 23 2024 – Jun 26 2024

ASJC Scopus subject areas

  • General Engineering
