How well aligned are state assessments of student achievement with state content standards?

Polikoff, Morgan S., Porter, Andrew C., & Smithson, John. (2011). How well aligned are state assessments of student achievement with state content standards? American Educational Research Journal, 48(4), 965-995.

This article’s title clearly states the question it addresses: How well aligned are state assessments of student achievement with state content standards? Although I freely admit that I didn’t begin to fully understand the procedures the authors used to determine alignment (even after reading the Method and Data section multiple times), the answer to that clearly stated question was equally clear from their findings: State assessments and state content standards are NOT very well aligned, no matter how you look at it. Even in the best cases among the 19 states analyzed here, the alignment scores (which look similar to correlations) seemed lower than they should be. The average alignment scores were lower than .5, typically between .2 and .3, and sometimes much lower still.

What is going on here? It seems unconscionable for tests with such high stakes to be so poorly aligned with the standards states set up and communicate to districts, schools, teachers, parents, students, and everyone for whom the stakes of such assessments are high. As the authors point out near the end of the article, how can teachers be held responsible for their students’ scores on tests that do not match the standards they were told they had to teach and were told would be tested? The authors use the word “unfair” to describe this situation (p. 992). I would call “unfair” an understatement, given the possible consequences of low test scores. I’m searching mentally for a better word, but I’m not sure I have a word in my vocabulary to describe the situation this article lays out for us (Unjust? Insupportable? Appalling? Inexcusable?). I’m thinking about teachers who will be rewarded or punished, or even lose their livelihoods, because of scores on poorly aligned tests. What about young people denied graduation, or perhaps college entry, or perhaps scholarships, because of poorly aligned tests? What about school districts that get poor “report cards” that are published in newspapers and then affect property values, all based on scores from poorly aligned tests?

If we are going to put such high stakes on these tests, then we had better be sure they measure what we want them to measure, and that what is on the tests is transparent, so that everyone knows what they must make sure to teach and learn. Alignment between standards and tests makes sense. When expectations are transparent, and assessments mirror those transparent expectations, then assessment has some hope of functioning to improve education. While alignment may not be sufficient to make a test a valid measure of learning, it has to be a minimum requirement. It’s bad enough that test scores have become as important as they have; now we learn that they don’t even match well with the standards they are supposed to assess!

The article only scratches the surface on the authors’ second research question, which tries to get at the nature of the misalignment. The researchers show us a methodology that I find fascinating, though I did not completely understand it. They constructed “maps” of the alignment between standards and tests for three areas (math, English/language arts/reading, and science) across several grade levels in the 19 states studied. On these “maps” the lines of “latitude” were the subcategories of the content areas, and the lines of “longitude” were the various cognitive levels, ranging from low-level memorization to higher cognitive levels. Although I got hopelessly lost in the procedural details of how these “maps” were created, just looking at the examples provided by the authors showed me how the maps gave a visual representation that let me see where standards and tests were well aligned and where they were not. The authors dug deeper and specified various types of alignment and misalignment, and those findings come the closest of everything reported here to showing what needs to be done to raise alignment to acceptable levels. The researchers found that some content subcategories were over-tested, some were under-tested, and some were not tested at all. Even worse, they found that some categories were tested but not found in the standards! Another type of misalignment occurred when standards specified outcomes at different cognitive levels than the levels at which they were tested. That could occur in two ways: the standards might be at higher levels than the test, or the standards might be at lower levels than the test. I hope test developers are reading this and taking heed.
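For readers as curious as I was about the mechanics, here is my best guess at how a score like this gets computed. If the article’s alignment score is the familiar Porter-style index (an assumption on my part — the exact procedure in the Method and Data section is what lost me), then each document, standards or test, is summarized as a grid of proportions whose rows are content subcategories and whose columns are cognitive levels, and the calculation itself is short:

```python
# Sketch of a Porter-style alignment index. The grids and their numbers
# below are hypothetical, purely for illustration; the article's actual
# content frameworks are far larger.

def alignment_index(standards, test):
    """Return 1 - (sum of absolute cell-by-cell differences) / 2.

    Both inputs are same-shaped grids of proportions, each summing to 1
    (rows = content subcategories, columns = cognitive levels).
    The index ranges from 0 (no shared emphasis) to 1 (identical emphasis).
    """
    flat_s = [cell for row in standards for cell in row]
    flat_t = [cell for row in test for cell in row]
    total_diff = sum(abs(s - t) for s, t in zip(flat_s, flat_t))
    return 1 - total_diff / 2

# Hypothetical 2-topic x 2-level grids (low level, high level):
standards = [[0.30, 0.20],
             [0.30, 0.20]]
test      = [[0.50, 0.10],   # over-tests topic 1 at the low level,
             [0.35, 0.05]]   # under-tests both topics at the high level
print(round(alignment_index(standards, test), 2))  # -> 0.75
```

Whatever the exact formula, a structure like this makes the reported findings concrete: an over-tested cell is one where the test proportion exceeds the standards proportion, and a category tested but absent from the standards is a cell that is zero in one grid and nonzero in the other.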

The question of why this happened, and HOW tests and standards got so misaligned in the first place, is only hinted at here, and answering it looks like a complex task. For example, as a literacy educator I have long known that many important standards are NOT adequately tested (e.g., my state’s Speaking and Listening Grade Level Expectations), while others are probably over-tested (e.g., some of the reading comprehension standards). I have also long known that the lower cognitive levels are more likely to be tested than the higher levels. I can’t believe it is as simple as the over-tested topics and levels being easier and cheaper to measure, but I’m sure that is one factor in the ways tests are built. Standards tend to be idealized and abstract, but when it comes to capturing and assessing outcomes, we have concrete realities to deal with. Alignment may mean both making standards more realistic and observable AND thinking of more innovative ways to capture complex student performances than the kinds of tests we have been relying on up to now. I wonder whether technological advances may offer some new possibilities. However it happens, it is clear from this article that better alignment between standards and assessments MUST be achieved.
