TY - CHAP
T1 - Assessing the life sciences
T2 - Using evidence-centered design for accountability purposes
AU - Haertel, Geneva D.
AU - Rutstein, Daisy
AU - Cheng, Britte H.
AU - Ziker, Cindy
AU - Harris, Christopher J.
AU - D’Angelo, Cynthia
AU - Snow, Eric B.
AU - Bienkowski, Marie
AU - Vendlinski, Terry P.
AU - De Barger, Angela
AU - Ructtinger, Liliana
N1 - Publisher Copyright:
© 2016 Taylor & Francis. All rights reserved.
PY - 2016/1/19
Y1 - 2016/1/19
N2 - For over a decade, educators have been confronted by urgent demands for evidence of improved instruction and increased student learning. This same era has yielded sobering evidence that U.S. students’ proficiency and enthusiasm for learning, especially STEM learning, have flagged (National Research Council, 2005a, 2007, 2011a). Opfer, Nehm, and Ha (2012) summarize the state of assessment practice in the life sciences: Assessments of student knowledge and reasoning patterns play a central role in science teaching. At their most effective, assessment instruments provide valid and reliable inferences about student conceptual progress, thereby facilitating guidance in targeting instruction and evaluating instructional efficacy (NRC, 2001). Despite their high potential, however, assessment instruments for content-rich domains, such as biology, often lack validity in even the narrow sense described by Linn, Baker, and Dunbar (1991), that is, the ability to independently predict outcomes on real-world assessments (e.g., teacher-developed achievement tests). At their least effective, instruments may yield contradictory or false inferences about student knowledge, misconceptions, or reasoning processes (Nehm & Schonfeld, 2008). For some content areas, such as students’ understanding of evolutionary processes, there are still remarkably few tools available for validly assessing students’ progress.
UR - http://www.scopus.com/inward/record.url?scp=84967195463&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84967195463&partnerID=8YFLogxK
U2 - 10.4324/9780203781302
DO - 10.4324/9780203781302
M3 - Chapter
AN - SCOPUS:85086543533
SN - 9780415838603
SP - 267
EP - 348
BT - Meeting the Challenges to Measurement in an Era of Accountability
PB - Taylor and Francis Inc.
ER -