Do we know a successful teacher when we see one? Experiments in the identification of effective teachers

Strong, M., Gargani, J., & Hacifazlioglu, O. (2011). Do we know a successful teacher when we see one? Experiments in the identification of effective teachers. Journal of Teacher Education, 62(4), 367-382.

This sequence of three experiments highlights the need for a valid, reliable classroom observation instrument that can accurately identify both effective and ineffective teachers (effective teachers here being those whose students’ achievement test scores increase as much as would be expected each year). According to the article’s conclusion, the development of such a tool is in progress, and we can probably expect to see it documented in future articles by the first author, who works at the Center for Research on the Teaching Profession at the University of California, Santa Cruz. As a teacher educator, I look forward to seeing that work.

It is clear from the research presented here that without a good observation instrument, our ability to identify teachers whose students show gains on tests is woefully inadequate. Even when a nationally recognized observation instrument was used, as with the CLASS instrument in Experiment 3, and when well-trained observers used it to rate extended videos of teachers teaching math lessons, accuracy was still no better than 50%, which for a two-way judgment is no better than chance. In Experiments 1 and 2, which used no instrument and no rater training but simply had observers rate the teachers, accuracy rates were even lower. This was true for educational experts with experience in classroom observation as well as for less-expert observers. Interestingly, Experiment 1 included elementary school children as raters, and they were more accurate than any of the other raters, though not by much. The researchers believe that may be because the children were essentially guessing, while the adults may have had mental processes at work that undermined accurate identification of effective teachers. I noticed that children were not employed as raters in the second or third experiments. Though I think I understand why, it might have been interesting to know more about how children rate teachers. However, since children (rightly) do not typically perform teacher evaluations, that might be a different study with a different focus than what was central here.

The authors spend some time considering why classroom observations might be as inaccurate as they are. Theory on cognitive operations is presented as a possible explanation, specifically the notion of System 1 and System 2 processes. System 1 operations are “fast, automatic, effortless, associative, and difficult to control or modify,” and they “produce shortcuts, or heuristics, that enable us to function rapidly and effectively.” System 2 operations are “slower, serial, effortful, and deliberately controlled” as well as “relatively flexible and potentially rule-governed,” and they function to “monitor the quality both of mental operations and overt behavior” (p. 369). In short, System 1 processes are so much easier and more automatic for us as humans that they will overpower System 2 processes unless we deliberately bring the latter into action. The evidence here suggests that System 2 processes are more rational and more accurate, but that observers are more likely to fall back on the less accurate, more automatic System 1 processes when they are in the midst of observations.

Having done a good deal of classroom observation work myself, I can see how that happens. Classrooms are complex environments, and observers are often immersed in them almost “cold,” with far less information than they would like. We have to make high-stakes assessments based on complex but limited data, gathered under “noisy” (sometimes both literally and figuratively) conditions. It is difficult, physically and mentally stressful, and uncertain. No wonder we rely on the cognitive routines we have developed over time rather than always sitting down and considering what we see in a completely rational manner. In the experiments here, observers appear to have relied heavily on their perceptions of whether a teacher’s students were “engaged.” Such judgments, however, can be notoriously subjective and inaccurate. One wonders exactly what visual and other cues the observers were using to decide that children were engaged. Obviously, just because children look engaged does not necessarily mean they are learning. As observers, though, we seem to rely a great deal on the body language that signifies engagement and attention to us. As humans, we have been practicing responding to body language since our parents first smiled down at us, so it is probably not surprising that we revert to those well-established ways of judging the success of a social interaction, especially in a high-pressure observation situation.

Perhaps a better instrument could reduce those cognitive demands by doing some of the work for us, nudging us more toward the rational. As the authors suggest, that might mean focusing more on specific instructional behaviors and strategies rather than on perceived engagement or other impressions of how a classroom interaction “feels” to us. I’ll be waiting to see what Strong and his colleagues come up with.
