TY - GEN
T1 - Generation of Student Questions for Inquiry-based Learning
AU - Ros, Kevin
AU - Jong, Maxwell
AU - Chan, Chak Ho
AU - Zhai, Cheng Xiang
N1 - This work is supported in part by the National Science Foundation under Grant No. 1801652. We would also like to thank Dr. Heng Ji for her insightful comments.
PY - 2022
Y1 - 2022
N2 - Asking questions during a lecture is a central part of the traditional classroom setting that benefits both students and instructors in many ways. However, no previous work has studied the task of automatically generating student questions based on explicit lecture context. We study the feasibility of automatically generating student questions given the lecture transcript windows where the questions were asked. First, we create a data set of student questions and their corresponding lecture transcript windows. Using this data set, we investigate variants of T5, a sequence-to-sequence generative language model, for a preliminary exploration of this task. Specifically, we compare the effects of training with continuous prefix tuning and pre-training with search engine queries. Question generation evaluation results on two MOOCs show that pre-training on search engine queries tends to make the generation model more precise, whereas continuous prefix tuning offers mixed results.
AB - Asking questions during a lecture is a central part of the traditional classroom setting that benefits both students and instructors in many ways. However, no previous work has studied the task of automatically generating student questions based on explicit lecture context. We study the feasibility of automatically generating student questions given the lecture transcript windows where the questions were asked. First, we create a data set of student questions and their corresponding lecture transcript windows. Using this data set, we investigate variants of T5, a sequence-to-sequence generative language model, for a preliminary exploration of this task. Specifically, we compare the effects of training with continuous prefix tuning and pre-training with search engine queries. Question generation evaluation results on two MOOCs show that pre-training on search engine queries tends to make the generation model more precise, whereas continuous prefix tuning offers mixed results.
UR - http://www.scopus.com/inward/record.url?scp=85180372184&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180372184&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.inlg-main.14
DO - 10.18653/v1/2022.inlg-main.14
M3 - Conference contribution
AN - SCOPUS:85180372184
T3 - 15th International Natural Language Generation Conference, INLG 2022
SP - 186
EP - 195
BT - 15th International Natural Language Generation Conference, INLG 2022
A2 - Shaikh, Samira
A2 - Ferreira, Thiago Castro
A2 - Stent, Amanda
PB - Association for Computational Linguistics (ACL)
T2 - 15th International Natural Language Generation Conference, INLG 2022
Y2 - 18 July 2022 through 22 July 2022
ER -