Jones, N.D., Buzick, H.M., & Turkan, S. (2013). Including students with disabilities and English learners in measures of educator effectiveness. Educational Researcher, 42(4), 234-241.
It is important that we as educators develop good ways to assess our teaching effectiveness; if we do not, others will certainly develop them for us, and we will almost certainly be less satisfied with those than with assessments we have a hand in creating. This article is helpful in pointing out elements that Jones et al. believe need improvement in many current teaching assessment frameworks: in their view, more explicit attention needs to be paid to assessing the specific ways that teachers work with students with disabilities (SWDs) and with English learners (ELs).
With both of these populations increasing in today’s schools, I have to agree that working with these children in specific ways informed by educational research and theory is a foundational skill for teachers. These skills should be taught and assessed in professional education programs, then developed and honed through continuing education, through consultation and collaboration with specialists who work with SWDs and ELs, and through mentoring and coaching in actual assessment and teaching strategies. In short, if teachers need these skills, then they need to be supported in acquiring them, and only then should those skills be included in assessments of teacher quality. Assessing skills without giving teachers the means and opportunity to develop them would be unfair.
All of this sounds good, of course, but achieving what Jones et al. call for is not easy, and they write about a number of the problems involved. As with almost all educational assessments, capturing the thing you want to assess is a challenge. Jones et al. call for multiple measures, which is in keeping with what is currently considered good practice in assessment. How do you capture effective teaching, though? Some believe that students’ achievement test scores are one way. Those scores probably need to be looked at as one piece of the picture, but as Jones et al. point out very well here, there are numerous factors that can affect students’ test score performance, and those factors may be even more complex when we look at the test scores of SWDs and ELs. Many of those factors are outside the control of teachers.
Classroom observations can be another piece of the picture in teacher assessment, but there are challenges with that method as well. The instrument itself (usually a rubric that evaluators use to assign scaled ratings to teachers on several clusters of performance indicators) has to address the things that matter in effective teaching, and it has to produce consistent results. That is, the instrument has to be valid and reliable, which is no small challenge with such instruments. It also has to be understandable and manageable enough to be used by a wide variety of raters without an undue burden of time and energy. As I have always said about assessment, that which cannot be easily implemented will not be implemented, at least not over time. We could have the best observation form imaginable, but if it is not practical, it will go away and be replaced by something that is. The problem with many teacher observation tools is that they tend to be long and sometimes redundant. The challenge is to include enough of what is important while keeping the items discrete and as few as possible. Adding more emphasis on working with specific groups of students, as Jones et al. suggest, complicates that task.
This article did not settle any issues for me, but it did prompt me to look more closely at the teaching assessment tools I am familiar with and to explore a few more that I knew about but had never examined closely. The article prompted me to take a closer look at Danielson’s Framework for Teaching (FFT), which has an extensive and easily found website where the 2013 version of the FFT is available for download; I recommend exploring it. I wonder whether Jones et al. had seen the 2013 version when they wrote their article. As I went through the 2013 FFT, I saw plenty of specific mentions of teacher behaviors that are effective with SWDs and ELs. I would not want to see more added to the FFT, which in my view already borders on being a long assessment. Still, looking at the FFT more closely than I had before was helpful. It reminded me of things I need to renew my focus on in my own teacher education courses, and the behavioral descriptions for each rating level are clear. I went from looking at the FFT to relating it to what my own state will now be requiring, which was a good exercise for me.
I also looked more closely than I had before at the Sheltered Instruction Observation Protocol (SIOP), which is well known among educators of English learners, and at some other tools and instruments as I continued my Internet exploration. I actually spent more time looking into items mentioned in the Jones et al. article than I did reading the article itself, which is short and concise. I invite other educators to read the article, note the words and terms that pique their interest, and start searching. We all need to be involved in developing teacher assessments; all of us will either use them to assess others or have them used to assess ourselves, or perhaps both. We need to be informed and involved.