If it takes you nine minutes to run one mile, do you really think you'll keep that up for thirteen?
Students stare at an input-output chart of baskets shot and missed. How many baskets will Charisse make after 265 shots? A few problems later they learn that "it takes Nancy 9 minutes to run one mile. How long will it take her to run thirteen miles?" Another time, Cho is picking random fruit from a bag and has to figure out what her chances are of picking an apple versus a peach. Finally, a catcher has to determine the distance to second base using the Pythagorean Theorem.
I've changed the problems slightly so I don't violate copyright. However, it's the district standardized pre-test, and you probably get the picture. Here are some reasons why those problems suck:
- Pseudo-context: Nobody picks random fruit from a bag. It's just not the way we choose our snacks. Nor do most catchers rely on the Pythagorean Theorem to throw a runner out at second base.
- False Problems: These go a step beyond pseudo-context; they require students to apply a specific standard that doesn't actually solve the problem at all. While the Pythagorean Theorem problem is silly, it is at least technically accurate. However, nobody who runs a nine-minute mile keeps that exact pace for a half marathon. These problems treat non-linear relationships as if they were a basis for linear problem-solving.
- Limited Focus: Students rarely find a scenario, set up a problem, solve it using multiple methods, or compare their processes. In other words, math is viewed through a tiny lens that focuses mostly on computation.
- Isolation: I get it. They're taking a test. However, that's not how math works in life. When I need to solve problems, I can work with others, use a calculator, solve on a spreadsheet, etc. Here, it's all paper and pencil and silence for two hours.
- Quantity Over Quality: The multiple choice problems tend to be easier than they need to be. However, students lose stamina as they plow through forty-five similar questions in the name of "reliability" and "validity." So we, as teachers, know very little about whether or not a student has mastered a standard based upon the short set of problems they have finished.
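As an aside, the running problem is a nice illustration of why the "false problems" bullet matters. Here's a quick sketch, not from the original test, comparing the naive linear answer with Riegel's endurance model (an empirical formula where predicted time grows as distance to the power of roughly 1.06, reflecting that pace slows over longer distances):

```python
def linear_extrapolation(mile_pace_min: float, miles: float) -> float:
    """The test's assumption: the runner holds her one-mile pace forever."""
    return mile_pace_min * miles

def riegel_prediction(t1_min: float, d1: float, d2: float, b: float = 1.06) -> float:
    """Riegel's endurance model: T2 = T1 * (D2 / D1) ** b.

    The exponent b (about 1.06 for running) is an empirical constant
    capturing how pace degrades as the distance grows.
    """
    return t1_min * (d2 / d1) ** b

linear = linear_extrapolation(9, 13)     # the "correct" test answer: 117 minutes
realistic = riegel_prediction(9, 1, 13)  # roughly 136 minutes
```

Neither model is gospel, but the roughly twenty-minute gap shows how far the linear assumption drifts from how running actually works, which is exactly the kind of comparison the test never asks students to make.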
I'm not sure there is a quick, easy answer to this problem. People want quantified data in a standardized format. However, I would love to see a shift toward fewer problems that embrace multiple methods and include more resources than just paper and pencil. I would like to see some reasoning and analysis. And I would like to see standards being applied to authentic contexts. Maybe we could grade these with performance-based rubrics rather than scanning bubble sheets.
Ultimately, I'm doubtful that districts will embrace alternative assessments, because they might lose some of the control and efficiency that the really bad multiple choice tests currently allow. And that's the sad thing. For all the talk about doing what's best for kids, the standardized pre-test is a clear reminder that it's all about what's best for the compliance-based system.