Testing the impact of child characteristics x instruction interactions on third graders’ reading comprehension by differentiating literacy instruction

Connor, Carol McDonald; Morrison, Frederick J.; Fishman, Barry; Giuliani, Sarah; Luck, Melissa; Underwood, Phyllis S.; Bayraktar, Aysegul; Crowe, Elizabeth C.; & Schatschneider, Christopher. (2011). Testing the impact of child characteristics × instruction interactions on third graders’ reading comprehension by differentiating literacy instruction. Reading Research Quarterly, 46(3), 189–221.

Children with different abilities need different kinds of literacy instruction. Differentiating instruction makes sense to most teachers, and this article provides evidence to back up that idea. We also see evidence for how computer-generated models can predict the amounts of various kinds of instruction that students with varying ability levels (as measured by standardized tests) need. One such model, the A2i software (called a “dynamical forecasting intervention model”), was made available to teachers, who were trained to use it to tailor small-group instruction to the needs of third graders at various ability levels. The findings seemed to show that having such a resource helped the teachers tailor instruction to each group more precisely, resulting in greater gains than the control intervention, a program of high-quality vocabulary study.

The study was experimental in the sense that teachers and their classes were randomly assigned to the two interventions, but because intact classroom groups were used (33 classrooms and a total of 448 children), the study, though carefully designed, necessarily fits the “quasi-experimental” category. Thus, findings must be interpreted cautiously, and replication will be necessary before they can be generalized. The authors state all this clearly and account for the study’s limitations thoroughly.

The concerns that remain for me with this study are either accounted for by the authors or are things that will always be perplexing about such studies. As always, the findings are only as good as the measures used to define the variables. Here, the measures chosen are highly respected ones with long histories in the literacy research literature: three Woodcock-Johnson III subtests and the Gates-MacGinitie reading test. These are arguably good choices for reading assessments, but doubts remain about what such assessments really measure. How DO we measure reading comprehension anyway? Is it being able to answer questions about short passages? Is it being able to fill in the blanks in a cloze passage? Or is it something more complex and abstract that we may have great difficulty measuring? Is it more than one thing?

Another concern, one that will always be present in studies comparing two interventions, is the comparability of the conditions. To me, the two interventions here were not close to equivalent. The control condition involved quality instruction, but the experimental condition seemed to be of even higher quality. The professional development for the experimental condition seemed much more extensive, and the experimental teachers had detailed software that kept them well abreast of individual students’ progress. I wonder whether almost any kind of instruction would work better with that kind of support.

In any case, the support for differentiated instruction, for computer software that helps teachers make practical sense of assessment results, and for emphasizing meaning-based instruction over code-based instruction, especially at the third-grade level and above, all heartened me and made me hopeful that literacy research may be pointing us in some good directions here.
