Today, Dean Dad has a post about evaluations of instructors from his perspective as an administrator. I left a comment over there that wasn't particularly well thought out (sorry, Dean Dad) but I've been thinking a lot about the post since I left the lame comment, and so I thought I'd do a more deliberate response over here. Also, I thought I might get something out of doing a more deliberate response when I'm not in the whirlwind of reading the results of my own course evaluations or cursing the incomprehensible statistical information about my evaluations. At the moment, I've got some critical distance. My evaluations are waiting to be distributed, the semester is coming to a close, and I'm not fretting over what happens with them - not yet.
Dean Dad begins:
"Evaluating faculty is one of the most important parts of my job, yet some of the most basic information needed to do it right isn’t available."
Ok, this inspires my first point: I think we need to draw a distinction between the evaluation of faculty and students' evaluations of particular courses. Why do I draw this distinction? Because when we conflate these two kinds of evaluation, we run the risk of disempowering both students and faculty:
1.) The faculty member becomes responsible for all of the learning that happens in the course, which inspires the faculty member to teach to the instrument of evaluation. When this happens, the faculty member loses the power to experiment with his/her teaching and to take risks as a teacher.
2.) The student becomes characterized as a passive receptacle of information, whose role in his or her education is defined by the reviews that he/she metes out for his/her instructors. Did the instructor deposit the right amount of wisdom in the student, according to the student? Fill in bubble A. Not exactly a model that inspires students to take ownership over their own educations.
3.) Sometimes a student's response to a course (say, a required course for the major, like the British Survey) is a response to the material of that course and not to the instructor. Yes, we might say that it's the instructor's charge to bring even the most inaccessible material to the students and to make them like it, but I think that's wrong. And sometimes, when students don't like the material - or when they feel overwhelmed by it - they will give a poor evaluation of the instructor when really what they are responding to is the course itself. The instructor may have little power to change the course, especially if it is a core requirement of the major, and so the faculty member is then screwed.
Thus, I think it would be entirely valuable to find a way to keep the evaluation of faculty as instructors distinct from the evaluation of courses, though I fear that, human nature being what it is, this is impossible. So let's table this idea and move on to the kinds of instruments for evaluation that Dean Dad describes:
Student Evaluations

I think it's important to note that students are not experts on classroom instruction, nor do they necessarily have the experience to know what constitutes good pedagogy. Thus, when we read student evaluations, we are really reading students' evaluations of their experiences in the course, not objective evaluations of the quality of instruction. Students might give very high marks to a professor who posts all class notes online as PowerPoint slides, because students read this as the instructor being prepared, or as the instructor being user-friendly to students who can't make it to class or who have other commitments. I, on the other hand, might say that this does not constitute good pedagogy: it gives students the impression that they should passively consume the instructor's notes, it encourages memorization rather than critical thinking, and it makes for limited interaction between instructor and student. Which interpretation has more legitimacy? I would argue that the student's perception is less legitimate, and yet that perception may ultimately mean that the professor who posts all notes gets higher evaluations than I, the nasty lady who expects her students to take notes.

I'm not saying that we should do away with student evaluations, but I do question their utility in the evaluation of professors, and I'm not sure that they should carry substantial weight in promotion and tenure decisions. (Oh, and I should add that one of the reasons I am very cynical about student evaluations is that at my university there is a widespread problem with young, female instructors getting low evaluations, and everybody is just like, "oh yeah, that's really a problem," and nothing is done about it and it sucks.)
Peer Observation

Dean Dad notes that peer observation is useless as an evaluative tool because the observation reports are uniformly glowing. I think he's right. But I think the reason they are uniformly glowing is precisely because they are used as an evaluative tool by administrators. Ultimately, these observations are not about improving teaching; they are about weeding out bad teaching, or at least that is the perception. One will not give honest critique in this situation - at least not in the formal letter - because one doesn't want to be responsible for doing a colleague wrong. Also, one probably wants to err on the side of niceness: one has probably observed only a single class, and the colleague being observed could have been having an "off" day. Again, it would be different if these observations were not used by administrators but rather were for the use of faculty. As it is, they are just one more hoop to jump through on the path to tenure, and an annoying and disruptive hoop at that.
Grades

I do think that grades can be an indicator of... something... if they are uniformly very high or uniformly very low. Again, though, I wonder: how do grades show anything about good teaching if they are in the "normal" range? To my mind, what matters a hell of a lot more than grades is giving students feedback on their work, which is not the same thing. How do we measure that? Also, I tend to allow revision with no penalty in writing classes, which means that grades in those courses tend to skew higher than the usual curve. Would I be penalized for this under a system in which grades were used as an indicator of good teaching? Would I be accused of grade inflation, even though the students work their asses off to get those grades? Or say you have a bad class one semester, and the grades skew low. Does that necessarily say something about the instructor? The idea that grades tell us something about teaching seems equivalent to saying that test scores tell us something about good teaching. I'm not sure that either is true, and the idea of using these to measure the "accountability" of instructors seems specious.
Course Attrition Rates
Hmmm. I'm not sure, again, whether this would be a meaningful measure of teacher effectiveness in all but the most extreme cases. Yes, there might be that instructor who has incredibly high drop rates over time across a range of courses. In that case, this would be a meaningful statistic. But what about everybody who falls in the middle?
The point, I suppose, is that I don't think there is a way to quickly and easily assess the quality of instruction, and part of the problem is that top-down assessment is geared (or is perceived to be geared) not toward improving instruction but toward getting rid of dead weight. The emphasis is not on developing quality teachers and encouraging quality teaching but on proving that students are getting their money's worth and on justifying the existence of instructors - who, as we all know, are out frolicking in the sunshine instead of worrying about their teaching, preparing for class, grading, serving on a thousand and one committees, mentoring students, etc.
I suppose the thing is, if evaluation is supposed to be a means by which instructors get honest feedback on their teaching and are encouraged to improve through that feedback, then it should be disconnected from professional advancement. If it is going to be connected to professional advancement, then evaluations of courses should be course-specific - or at the very least discipline-specific - and individual departments and instructors should have some say in the criteria by which courses are evaluated. Evaluations of instructors, meanwhile, should be just that - evaluations of their abilities as instructors - and should be disconnected from course-specific questions.
I don't know how to do any of this, and I might be wrong about half of it. But I suppose that I often think that evaluation, assessment, self-assessment, etc. get in the way of my being a good teacher and becoming a better teacher.