Assessing the cognitive demands of a century of reading curricula: An analysis of reading text and comprehension tasks from 1910 to 2000

Stevens, R. J., Lu, X., Baker, D. P., Ray, M. N., Eckert, S. A., & Gamson, D. A. (2015). Assessing the cognitive demands of a century of reading curricula: An analysis of reading text and comprehension tasks from 1910 to 2000. American Educational Research Journal, 52(3), 582-617.


Text complexity and difficulty have received renewed interest from literacy educators since the advent of the Common Core State Standards. The research reported here traces the history of text complexity in published reading programs: how the texts students have been required to read, and the tasks they have been asked to complete when reading those texts, evolved from 1910 to 2000.

The study described here would be interesting to literacy educators for its historical value alone; I have not seen a study that covers the span of time this one does. However, the history was not the most valuable thing I saw in this work. What struck me most were the changes in how we assess texts for their difficulty and complexity.

Since my days in graduate school in the 1980s, we have gained many more tools and procedures for deciding how difficult a text is. In those days, basically all we had were readability formulas. I remember doing a review of these formulas for a course, and there were probably 30 or 40 of them out there. Most of them looked at some measure of sentence length (long sentences were seen as more difficult than short ones) and word length (a multi-syllable word was seen as more difficult than a word with only one or two syllables). Some formulas introduced other ways of measuring word difficulty, including counting how many words were NOT on a list of frequently occurring words (the more words not found on the list, the more difficult the text). Other formulas used other approaches, but most were some variation on counting things like words and syllables in a text, entering those counts into a formula, and coming up with a “grade level” or some other numerical scale for leveling texts.
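To make that concrete, here is a minimal sketch of one well-known formula of this type, the Flesch-Kincaid grade level, which combines average sentence length and average syllables per word. The syllable counter is a crude vowel-group heuristic I use only for illustration; it is not how we counted by hand, and this is not one of the measures used in the study discussed here.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels. Hand counts (or a
    # pronunciation dictionary) are more accurate, but this shows the idea.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Split into sentences and words with simple punctuation rules.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Longer sentences and longer words both push the estimated grade up.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(round(flesch_kincaid_grade(
    "The cat sat on the mat. It was warm, and the cat was happy."), 1))
```

A few lines of code now do what once took an afternoon of counting by hand, which is exactly the shift I describe below.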

I made extra tuition money in grad school consulting with various organizations that wanted “readability studies” of texts they would use with children. Those studies required me to apply several of those formulas to the text in question as part of judging whether the target audience of children would be able to read and comprehend it. Remember that this was a time before average people had computers. We had computers at my university, but they were in a central location and could only be accessed by computer specialists. You handed them your data on a huge sheaf of punch cards, and they later handed you back a huge green-striped printout. In those times, you had to do the counting for readability formulas by hand, and it was tedious.

Today’s technology has made previously tedious tasks, like counting the number of words not on a list of frequently occurring words, simple. All kinds of things in texts can be scanned, counted, and classified rapidly by electronic means, and such resources are readily available. I thought about all this as I read about everything the researchers examined in reading instructional texts for third and sixth graders published over a 90-year period. Their analysis of “lexical complexity” included measures of the “sophistication” of the words found there. They looked at “lexical diversity” (the range of different kinds of words found in a text). Measures of sentence length have been replaced by measures of “syntactic complexity” that provide a more fine-grained analysis of various kinds of sentences.
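For a sense of how easily one of these newer counts can be computed now, here is a small sketch of a type-token ratio, one common (and admittedly crude) index of lexical diversity. The study’s actual measures are more sophisticated than this; treat it only as an illustration of the general idea of counting distinct word forms.

```python
import re

def type_token_ratio(text):
    # Lexical diversity as distinct word forms ("types") divided by total
    # running words ("tokens"). Simple, but sensitive to text length,
    # which more refined diversity measures try to correct for.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

print(round(type_token_ratio(
    "The dog chased the ball, and the dog caught the ball."), 2))
```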

In addition to analyzing the texts themselves, the researchers examined the complexity of the tasks required of children receiving reading instruction from these texts. They studied cognitive demands by classifying the kinds of questions asked and the kinds of reading and thinking required to complete those tasks, and they also assessed the level of processing each kind of question requires. This is especially interesting in light of current perceptions that the Common Core State Standards require a higher level of processing than what many believe children have been asked to do in the past. Those increases in cognitive demand could be seen as a good trend or a worrisome one, depending on how you look at it.

In any case, the sheer amount of data and procedures for making sense of those data boggled my mind. It is truly amazing how far we have come in our ability to look at constructs like text difficulty, text complexity, and cognitive demand. That, to me, was more striking than the actual findings of the study.

What were those findings? The researchers showed changes in text and task complexity over time, with some ups and downs, but contrary to what many believe, we do not seem to be at a low point now. On most of the measures used here, the more recently published reading programs were either high or stable. As for the comprehension tasks those programs include, there is evidence that teachers may be asking students more questions in recent years than previously, and that those questions tend to be at higher levels than questions in earlier eras.

What do we do with this information? Increased “rigor,” high cognitive demand, and higher-level processing would seem to be good things, but only if everyone has the opportunity to learn at those levels and no one is excluded. As always, it will be important for educators to do what it takes to help all children succeed at completing these tasks and acquiring the literacy skills they will need in life. It will also be important to make sure teachers have the skills they need to make that happen.
