The decline of comprehension-based silent reading efficiency in the United States: A comparison of current data with performance in 1960

Spichtig, A.N., Hiebert, E.H., Vorstius, C., Pascoe, J.P., Pearson, P.D., & Radach, R. (2016). The decline of comprehension-based silent reading efficiency in the United States: A comparison of current data with performance in 1960. Reading Research Quarterly, 51(2), 239-259.


This article presents some bad news. A comparison of data from a 1960 study with data from a 2011 study provides evidence that the reading skills of U.S. students declined in the half-century between the two studies. Though the results of any research study making a claim like this have to be interpreted cautiously, this study carries weight and importance for four reasons: 1) the 2011 study was meticulously designed to be as comparable to the 1960 study as the 50 intervening years allow, 2) some of the most respected and influential thinkers in the field of literacy education over the last 30 years are among the article’s authors, 3) the article appears in one of the most prestigious journals (if not the most prestigious journal) in literacy education today, and 4) the bad news is stated clearly and unequivocally in the article’s title: no inferences required, and no waffling about it. Any article with these four characteristics has to make literacy educators like me take notice.

First, it’s important to define exactly what the study’s authors say has declined. The researchers attempted to measure a construct they called “comprehension-based silent reading efficiency,” defined as the ability to “read grade-level text silently with good comprehension” (p. 240). What sets this kind of study apart from studies of oral reading fluency (where readers’ deviations from print can be observed and recorded) is that here, technology was used to observe the “hidden processes of reading” (p. 241). Eye movement data are needed to even begin to assess silent reading fluency in any meaningful way. Such data are useful when looking at oral reading processes as well (look into some of Peter Duckett’s research for intriguing work of this kind), but for capturing silent reading, eye movement data are especially powerful.

Methodology in the 2011 study was kept as close as possible to the methodology in the 1960 study so that the results would be comparable, though the researchers do not claim that the 2011 study is a replication of the 1960 study. The most obvious difference between the two time periods is in the available technology. In 1960, eye movements were captured using a device called a “Reading Eye Camera,” which produced filmstrips that were viewed and analyzed by hand. Digital devices, like the Visagraph used in the 2011 study, provide digital records and analyze the data automatically. Given the rapid progress in technology (five years is a long time in today’s world), it is likely that eye movement devices even more sophisticated and accessible than those used in 2011 are available now, and certainly will be in the future. It is conceivable that every teacher could have such a device for formative assessments of silent reading fluency, making the old “correct words per minute” assessments a thing of the past.

Of course, comprehension is the bottom line of any reading assessment, and that was true for both the 1960 and the 2011 studies. In both studies, participants were asked ten questions about the same reading passages, and they had to answer 70% of the questions correctly for a passage to be used as data. Along with that, reading rate (words per minute), the number of fixations students made while reading, the length of those fixations, and the number of regressions while reading were all measured with the eye movement technology available at the time of each study.

The researchers found that readers in 2011 had slower reading rates than readers in 1960. The difference first became apparent at Grade 4, doubled by Grade 6, tripled by Grade 8, and amounted to a 19% decline in comprehension-based reading rate by Grade 12 (p. 252). Whereas in 1960 the number of fixations gradually decreased from elementary through high school (which is expected as reading skills develop), in 2011 the number of fixations did not decrease as much or as steadily. Particularly concerning was the documentation of a middle school “stalling” (p. 252) of reading rate and of the expected decrease in fixations. Similarly, in 1960 the length of fixations steadily decreased from elementary to high school, but in 2011, decreases were not seen until high school. As for regressions, readers in 2011 made a decreasing number of regressions up to Grade 6, but after that made more regressions than the 1960 readers did.

As a literacy educator, and as a former middle school teacher, I was especially alarmed by the pattern I saw emerging in these data for middle school readers. Why does development stall out at this age? We are obviously not doing enough to help students transition between the kinds of reading they do in elementary grades and the reading they need to do in middle school and beyond. As a teacher educator, I need to do more with helping teachers learn how to better nurture readers’ development as those readers get older.

Complicating my reflection on all this is my realization that my own college students, the future teachers I work with, are probably comparable to the 2011 study participants. If reading rate has declined as much as this study suggests, no wonder my students have trouble completing what seem to me to be completely reasonable reading and writing assignments. The 1960 generation (to which people my age can be compared) read faster, and we could reasonably expect that our reading efficiency would steadily increase as we moved through school and on to college and careers. Is it such a different world for my students? At what point did steady development in literacy skills stop being an expectation?

There could be many ways to account for the disturbing findings we see here. The authors of the article make some attempts to reflect on this, though the factors influencing the findings have to be multilayered and complex. Though strong efforts were made to make the two studies comparable, the worlds of 1960 and 2011 were vastly different. It wasn’t just the technological advances that revolutionized eye movement research. The demographic landscape has also changed in ways unthinkable in 1960. As evidence of this, much less specific demographic data exist on the 1960 sample. In 2011, information on socioeconomic status, ethnicity, English Learners, and special education services was seen as important, and documentation was mandated. That was not the case in 1960, so we know less about that sample.

Why do our students seem to not be reading as well today? Could it be that they are not reading as much as they used to? Among literacy educators, it is an accepted idea that readers get better at reading by reading more. However, that’s a complex relationship. If reading is taking me longer, I won’t do it as much, and my skills will develop less, leading to even less reading. Further, is the quality of what I read, and the quality of the kind of thinking I am asked to do, as important to whether or not I engage in reading and meaning-making as how efficiently I read? Is there time for reading quality in a test-driven environment? Is there attention left for reading quality in a time when middle graders are tuned in to social media but not tuned in to sustained engagement with texts of all kinds?

Perhaps someone will update this study in 2061. I won’t be around to see it if they do, but it’s intriguing to think about. How will we be talking about literacy then? How will good reading be measured and studied? What concerns will still be disturbing us? Will we have solved some of the problems discussed in this article? What new concerns will arise?
