As much as it would please almost everyone in the country with any first-hand knowledge of education, high-stakes, government-mandated testing isn’t going away anytime soon. Standardized testing is a reality, and, unfortunately, it’s a yardstick against which districts, campuses, and even individual teachers are judged.
In response to the testing onslaught, publishers have rushed to the rescue with test-prep booklets, computer programs, and websites, all designed to help students get ready for the dreaded test. Teachers have latched onto test-prep methodologies such as assigning daily openers containing released test questions, giving practice tests in class, teaching vocabulary lists of high-frequency words gleaned from past assessments, and requiring after-school and on-the-weekend attendance at test-prep boot camps designed to give struggling students more exposure to what they can expect to see in the spring on their state tests.
The result of all this hullabaloo is that kids aren’t really doing much better on these tests and—no surprise—they hate school. When every day of the school year is another tedious encounter with test-prep materials, students have nothing to look forward to but soon learn that there’s lots to dread.
I’m led to believe that the companies we pay millions to construct and field test these state assessments have some expertise in the art of test design. They build these tests to measure student mastery of a subset of the state objectives for the course.
What would happen if we trusted that the test is actually measuring what it claims to measure and that if we simply taught the skills the state asks us to teach, our students would be fine on the test? I’m willing to take that chance for the sake of students’ love of learning.
The STAAR end-of-course exams in English, for instance, measure, in an artificial way, many of the things we do naturally in a well-taught English class. They ask students to comprehend and analyze a variety of texts, to make judgments about the author’s intention, to connect ideas or techniques between texts, and to find textual evidence that supports an assertion. These tests require students to correct errors in essays, just as they would when editing a classmate’s paper in a writer’s workshop environment. They ask students to examine an essay and make recommendations about organization, word choice, transitions, and clarity, as they would in a writing conference with a peer. Finally, they ask students to come up with an idea and support it in an organized, focused, clearly written essay of their own—something I hope every English student is doing frequently in class.
It seems to me that if we just teach our students the state objectives, formatively assess to adjust instruction and target students who need extra attention, and build up students’ self-efficacy and dispositions toward the subject, they ought to do fine on the test when spring rolls around.
Taking a practice test that asks students to analyze a poem doesn’t teach students how to analyze a poem. Answering multiple-choice revision and editing questions about an imaginary student’s fake essay doesn’t teach more about how to give feedback on a paper than an actual writing conference does. A practice test takes hours that could be used in active instruction: teaching state standards, informally gauging student understanding, and reteaching or extending as needed. Taking a practice test is boring. Good instruction isn’t.
Maybe we should trust the test—trust that the test is indeed assessing the knowledge and skills it claims to assess—and devote more of our time and energy to familiarizing ourselves with the standards the test is testing, pinpointing the places where the standards align with our curriculum, strategizing about how to teach the standards and assess our students’ mastery, and developing authentic ways to remediate during class time instead of asking struggling students to carve out time from their busy schedules to get help outside of class.
Teaching our students how to take the test won’t help them if we haven’t taught them how to do what the test is asking them to do.