TY - GEN
T1 - Investigating crowdsourcing to generate distractors for multiple-choice assessments
AU - Scheponik, Travis
AU - Golaszewski, Enis
AU - Herman, Geoffrey
AU - Offenberger, Spencer
AU - Oliva, Linda
AU - Peterson, Peter A.H.
AU - Sherman, Alan T.
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - We present and analyze results from a pilot study that explores how crowdsourcing can be used to generate distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering the responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each group. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback on the resulting new draft test items (including distractors) from additional subjects on Amazon Mechanical Turk. Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool for generating effective distractors (those attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method, in which one or more experts draft distractors informed by talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
AB - We present and analyze results from a pilot study that explores how crowdsourcing can be used to generate distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering the responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each group. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback on the resulting new draft test items (including distractors) from additional subjects on Amazon Mechanical Turk. Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool for generating effective distractors (those attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method, in which one or more experts draft distractors informed by talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
KW - Amazon Mechanical Turk
KW - Concept inventories
KW - Crowdsourcing
KW - Cybersecurity Assessment Tools (CATS) Project
KW - Cybersecurity Concept Inventory (CCI)
KW - Cybersecurity education
KW - Digital Logic Concept Inventory (DLCI)
KW - Distractors
KW - Multiple-choice questions
UR - http://www.scopus.com/inward/record.url?scp=85076089777&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076089777&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-31239-8_15
DO - 10.1007/978-3-030-31239-8_15
M3 - Conference contribution
AN - SCOPUS:85076089777
SN - 978-3-030-31238-1
T3 - Advances in Intelligent Systems and Computing
SP - 185
EP - 201
BT - National Cyber Summit (NCS) Research Track, 2019
A2 - Choo, Kim-Kwang Raymond
A2 - Morris, Thomas H.
A2 - Peterson, Gilbert L.
PB - Springer
T2 - National Cyber Summit, NCS 2019
Y2 - 4 June 2019 through 6 June 2019
ER -