Day 12: When Course Evaluations Actually Improve Teaching
As I get ready for the new semester, I’m thinking about how to organize the course and discuss expectations on the first day of class so that I can help students learn as well as I can. This process gets me thinking about what worked in previous courses, as I blogged yesterday. Today I thought, I should go look at course evaluations, right? Riiiight.

I like to think that there was a time when people actually wanted to know what students thought about their courses, so they polled them with various kinds of instruments, including surveys. With the neoliberal drive to data in K-12 and higher education, and the growing suspicion of the academy, course evaluations turned into evidence of success or the lack thereof. Instead of being an opportunity for faculty to get genuine feedback about how students perceived what was happening in the course, the surveys became a judgment of the faculty. As evidence of how ridiculous this is, a colleague of mine pointed to a study showing that students who give positive evaluations went on to do worse in subsequent courses. Just yesterday, Inside Higher Ed published yet another study on gender bias in course evaluations. At my previous institution, course evaluations were closely tied to merit pay, and it wasn’t even the course evaluation as a whole but question 11, something about rating the professor overall. I have come to believe that if the purpose of course evaluations is to judge faculty, they will not help faculty learn how to be better teachers. And I want to be a better teacher.
Aristotle tells us that learning happens in the student, and so it seems that, indeed, we need to look to students to judge whether and how learning is happening. Students can offer us information about their perceptions and what they think they are learning, but students might think learning means one thing when the professor thinks it means another. For example, students often ask me for a synopsis of whatever we are reading. I never give it to them. This desire tells me that they think learning is getting information, which they can then possess. But I don’t think that is learning, so I don’t give them the information. Students can get frustrated about that because the story about learning they were told through high school has changed on them, and they begin to feel frantic — like when they realize the five-paragraph essay they’ve been trained to write their whole lives will not get them through college. In both cases, while there is initial dread and thrashing about, I think that in the long run I’m doing more to empower them, and I’m treating them more like knowers who need to learn how to read and think and struggle through, by not giving them an easy synopsis.
Another reason, and in my opinion the main one, that course evaluations are doomed if they judge faculty is that faculty won’t be able to treat them as opportunities for their own development. Universities talk like this — that course evaluations are meant to be formative, not summative. But few faculty actually believe this is the case. So instead of taking feedback as information about how to teach better, faculty look for ways to dismiss negative feedback. And they aren’t wrong: there are lots of ways to dismiss that feedback, because generic course evaluations turn out to be really poor tools.
A group of faculty at my institution have been experimenting with course evaluations. Now, I should say I recognize the privilege of being at a small liberal arts college that expects faculty to give student evaluations but does not have a campus-wide evaluation instrument. For tenure, faculty submit these evaluations, but they are not asked to demonstrate that the evaluations are uniformly excellent; rather, they must show reflection and improvement on the basis of the evaluations. (Teaching is also evaluated through interviews with students and course observation by several faculty over a semester, so other expert teachers are making judgments about faculty on the basis of sustained engagement.) Faculty construct their evaluations with an eye to what they want to achieve and what they want to have worked. This is the first thing: evaluations that improve teaching cannot have their immediate outcomes influence the promotion and tenure of faculty.
One faculty member developed an online course evaluation drawing on the key indicators from the Wabash National Study, a large-scale, longitudinal study designed to determine which factors in particular lead to success in liberal arts education. On the basis of that study, questions were constructed. This is the second thing: evaluations that improve teaching need questions tied to demonstrated evidence of what works. It turns out that students’ feeling that a faculty member is invested in them outside of class is highly correlated with how hard they are willing to work, so one question asks whether students feel that faculty are invested in them outside of class. I admit, I didn’t like this question, but when I saw how the students’ sense that a faculty member cares motivated them to work, it made more sense to me why the college wants faculty members to be engaged in campus life.
Faculty could then add several open-ended questions specific to their courses. Faculty could also offer extra-credit incentives, because they would receive the names of students who completed the evaluation without seeing the feedback itself until after grades were due. I got a much better completion rate doing it online with an extra-credit incentive than doing it in the classroom, where students could be absent and unable to make up the evaluation. This is the third thing: high rates of completion.
I was suspicious about this. I thought it might be another effort to generate data. But this project is separate from the institutional research office, and faculty are strongly invested in keeping it cordoned off from that office. If the evaluations really are about getting feedback to help you see what is and is not working, they can yield useful information. I learned that some students think they should be memorizing when I have no desire for them to memorize. That helped me think about how to frame the course and course assignments better. I learned that some students thought I didn’t make sure they had mastered material before moving on, when I didn’t really expect them to master the material over the whole semester! So I realized I needed to speak more directly and explicitly about what I expected. I needed to let them know that their sense of confusion meant that they were learning; their sense that they hadn’t yet mastered the material was a good sign, pointing to their developing capacity to think and to see the difficulties that confronted them. This is the fourth thing: evaluations need to be understood as students’ judgments and perceptions (as Philip Stark, associate dean of the Division of Mathematical and Physical Sciences and a professor of statistics at the University of California at Berkeley, and author of the study referenced above, has argued). Addressing these judgments can help students learn, but they aren’t so much judgments about the faculty. For example, no matter how quickly or slowly you return assignments, students’ sense of what counts as a “timely manner” doesn’t seem to change. But we can become better teachers by understanding students’ judgments and perceptions.
If teaching is a humanistic endeavor where who the teacher is and who the student is matters, then knowing students and their assumptions and backgrounds as students makes us better able to teach them. And that’s what I want to do.
That’s my #slatepitch for student evaluations.