Every institution generates a class of employee determined to prove that their paycheck isn’t a reckless drain of resources. These are usually mid-level bureaucrats charged with things like “Quality Assurance” or “Systems Analysis.” Many of those bureaucrats imagine that the best way to show they are invaluable is to create change. That change doesn’t necessarily have to make things better, but it does need to be big and splashy. If that change also includes technology in some unnecessary way, they get super bonus stars. Universities, I am here to tell you, are not immune to this phenomenon.
Big Midwestern University recently experienced one such change that transformed the means by which students conduct course evaluations. Previously, professors allocated about fifteen minutes of class time at the end of the semester for students to fill out anonymous paper forms which were returned to a central office for processing. But why continue with that system when we can make it needlessly more complicated and inefficient?
The higher administration decided that moving the system to an on-line format would be so much better. Why? Well, because – um ... It means that – er . . . It will just be better! It’s on-line!
Actually, they did offer us a set of fairly laughable justifications. One was that students must want to fill out evaluations on-line if there are unofficial sites like the much reviled (and often slanderous) Rate Your Professors. The logic being that the driving force of RYP wasn’t student entitlement or a means to trash instructors who required their students to work hard. Rather, it was simply the on-line format that kept students coming back. If they could fill out official evaluations on-line, then their desire for venting about their professors at RYP would be sated. It's on-line!
Another bonus that they promised was that we professors would have our evaluations instantaneously! As soon as we submitted our final grades, the evaluations would be downloadable. Isn’t that exciting? It's on-line!
Maybe I am a bad teacher, but I can’t say that I spent six weeks pining for the return of the old paper evaluations. Why, after a grueling semester, one would want to immediately read a potential list of complaints from students, I am not certain. (And, btw, this much vaunted possibility of “instant review” proved untrue, as the system became riddled with problems, delaying the release of evaluations for about six weeks, which is about the same turnaround as good ol’ paper evaluations.)
Perhaps the biggest jump in logic was that the administration predicted a major rise in the completion rate of evaluations. In what can only be explained as a stunning lack of understanding about students’ priorities, the administration predicted that students would race to their computers to fill out surveys with enthusiasm and vigor in their free time. It's on-line!
Now, I’m not saying the administration is totally out of touch with reality, but did they even try to imagine themselves as a student? Logging onto my computer and finding a set of four or five evaluations, each consisting of twenty tedious questions, isn’t going to look like a party on Friday night. At best, I might fill out evaluations for one or two classes that I really, really loved or really, really hated before losing interest and finding out who is on Facebook.
Moreover, removing the evaluations from the professional context of the classroom might lead to students taking them even less seriously. One only has to glance at RYP to discover that many students have no idea what a professional relationship looks like.
Lots of faculty tried in vain to explain these basic realities to the administration before this system went on-line. They answered these critiques with a massive advertising explosion on campus encouraging students to use the new system. After goddess-knows-how-much-money went into the new system, what was the result? Less than 50 percent of students submitted reviews for my classes. Comparing notes with my colleagues, I was lucky to get even that level of response. Keep in mind, with the old paper system, I almost always had a 95 to 100 percent response rate.
So, has the university learned a valuable lesson from this colossal failure? Not at all. They have placed the blame on faculty for “not encouraging” students to fill out these on-line forms. If we really cared about evaluations, they claim, we would have made filling out this on-line evaluation an official assignment.
If you work at another university and find yourself chuckling at BMU’s silliness, let me sober you up. Right now, as you read this, your own institution is probably planning an identical shift. My sister (there is another) reports that her college is about to institute the same on-line system (despite comparable protests from faculty on that campus). Indeed, it must be one of those things that is recommended in this month’s issue of Unnecessary University Expenses magazine. It's on-line!
I hear you asking, “Why does any of this matter?” and “When is this blog going to be about gay porn again?” Both of those are fair questions.
It matters because the value attached to student evaluations is escalating on campuses across the nation. When student evaluations first appeared, they were intended to be a means for students to provide constructive feedback so that professors could fine-tune their courses. Indeed, I think students should have a means to offer their perspective on their learning. The evaluations would also be a means to alert the administration to very serious problems that would only come to light if reported anonymously.
Over time, though, the consumer mentality started to infiltrate universities. Students stopped being students and, instead, transformed into customers. Inside Higher Ed recently reported that Texas A&M University is offering a $10,000 bonus to the faculty member who receives the highest student evaluations (That’s big money for a humanities prof, but small potatoes for a Wall Street executive). Apparently the idea had its origins in a conservative Texas think tank known as the Texas Public Policy Foundation. The Chancellor of A&M, Michael “Burger King” McKinney, explained the program as “customer satisfaction . . . It has to do with students having the opportunity to recognize good teachers and reward them with some money.” No offense to the fine students at A&M, but should a professor's career be determined by these guys?
So, with the new on-line evaluation system at BMU, many of us are wondering how we will fare in this consumer-oriented world. Since students are not likely to fill out these new evaluations unless they are throbbing with love or hate, the results will be skewed considerably.
Faculty who recoil at comparing their classroom to the gift-wrap counter at Macy’s are disregarded, or worse, assumed to be “bad teachers” who are bitter about it. But that assumption places a huge amount of faith in students’ abilities to measure what is important in their instruction (For the record, my own evaluations are usually fine – not stellar, but not a horror show. Given the amount of work that I assign, and my inclination to assign texts that are outside students’ comfort zones, I am amazed that I do as well as I do).
The same article in Inside Higher Ed pointed to many problematic assumptions about teaching evaluations, including studies that refute the accuracy of evaluations as a measure of learning. One study by three economists at Ohio State found, not surprisingly, that students are more likely to give higher course evaluations if their own grade is high. They also reaffirmed that gender and national origin impact evaluations. Women and non-U.S. faculty receive lower evaluations than their peers on average.
When those same economists charted students’ grades in subsequent classes that depended upon content from the evaluated class, they found no correlation between professor evaluations and the learning that actually took place. In other words, a student might have learned a great deal, but still hated the class and given a negative review.
Student evaluations are important and I am not suggesting their demise (though we can dump this on-line nonsense). Students' goals in the classroom, though, are often about reducing their workload and being entertained. Instead of depending on their viewpoint as the sole measure of teaching effectiveness, we need to consider tools that actually measure whether students acquire new skills. Until then, do you want fries with that history class?