Saturday, January 31, 2009

Evaluate This

Every institution generates a class of employee determined to prove that their paycheck isn’t a reckless drain of resources. These are usually mid-level bureaucrats charged with things like “Quality Assurance” or “Systems Analysis.” Many of those bureaucrats imagine that the best way to show that they are invaluable is to create change. That change doesn’t necessarily have to make things better, but it does need to be big and splashy. If that change also includes technology in some unnecessary way, they get super bonus stars. Universities, I am here to tell you, are not immune to this phenomenon.

Big Midwestern University recently experienced one such change that transformed the means by which students conduct course evaluations. Previously, professors allocated about fifteen minutes of class time at the end of the semester for students to fill out anonymous paper forms which were returned to a central office for processing. But why continue with that system when we can make it needlessly more complicated and inefficient?

The higher administration decided that moving the system to an on-line format would be so much better. Why? Well, because – um ... It means that – er . . . It will just be better! It’s on-line!

Actually, they did offer us a set of fairly laughable justifications. One was that students must want to fill out evaluations on-line if there are unofficial sites like the much reviled (and often slanderous) Rate Your Professors. The logic was that the driving force behind RYP wasn’t student entitlement or a means to trash instructors who required their students to work hard. Rather, it was simply the on-line format that kept students coming back. If they could fill out official evaluations on-line, then their desire to vent about their professors at RYP would be sated. It's on-line!

Another bonus that they promised was that we professors would have our evaluations instantaneously! As soon as we submitted our final grades, the evaluations would be downloadable. Isn’t that exciting? It's on-line!

Maybe I am a bad teacher, but I can’t say that I spent six weeks pining for the return of the old paper evaluations. Why, after a grueling semester, one would want to immediately read a potential list of complaints from students, I am not certain. (And, btw, this much-vaunted possibility of “instant review” proved untrue: the system became riddled with problems, delaying the release of evaluations for about six weeks, which is about the same turnaround as good ol’ paper evaluations.)

Perhaps the biggest leap of logic was the administration's prediction of a major rise in the completion rate of evaluations. In what can only be explained as a stunning lack of understanding of students’ priorities, they imagined that students would race to their computers in their free time to fill out surveys with enthusiasm and vigor. It's on-line!

Now, I’m not saying the administration is totally out of touch with reality, but did they even try to imagine themselves as students? Logging onto my computer and finding a set of four or five evaluations, each consisting of twenty tedious questions, isn’t going to look like a party on Friday night. At best, I might fill out evaluations for the one or two classes that I really, really loved or really, really hated before losing interest and finding out who is on Facebook.

Moreover, removing the evaluations from the professional context of the classroom might lead students to take them even less seriously. One only has to glance at RYP to discover that many students have no idea what a professional relationship looks like.

Lots of faculty tried in vain to explain these basic realities to the administration before this system went on-line. The administration answered these critiques with a massive advertising blitz on campus encouraging students to use the new system. After goddess-knows-how-much money went into the new system, what was the result? Fewer than 50 percent of students submitted reviews for my classes. Comparing notes with my colleagues, I was lucky to get even that level of response. Keep in mind, with the old paper system, I almost always had a 95 to 100 percent response rate.

So, has the university learned a valuable lesson from this colossal failure? Not at all. Instead, they have placed the blame on faculty for “not encouraging” students to fill out these on-line forms. If we really cared about evaluations, they claim, we would have made filling out the on-line evaluation an official assignment.

If you work at another university and find yourself chuckling at BMU’s silliness, let me sober you up. Right now, as you read this, your own institution is probably planning an identical shift. My sister (there is another) reports that her college is about to institute the same on-line system (despite comparable protests from faculty on that campus). Indeed, it must be one of those things recommended in this month’s issue of Unnecessary University Expenses magazine. It's on-line!

I hear you asking, “Why does any of this matter?” and “When is this blog going to be about gay porn again?” Both of those are fair questions.

It matters because the value attached to student evaluations is escalating on campuses across the nation. When student evaluations first appeared, they were intended to be a means for students to provide constructive feedback so that professors could fine-tune their courses. Indeed, I think students should have a means to offer their perspective on their learning. The evaluations were also supposed to alert the administration to very serious problems that would only come to light if reported anonymously.

Over time, though, the consumer mentality started to infiltrate universities. Students stopped being students and, instead, transformed into customers. Inside Higher Ed recently reported that Texas A&M University is offering a $10,000 bonus to the faculty member who receives the highest student evaluations (that’s big money for a humanities prof, but small potatoes for a Wall Street executive). Apparently the idea had its origins in a conservative Texas think tank known as the Texas Public Policy Foundation. The Chancellor of A&M, Michael “Burger King” McKinney, explained the program as “customer satisfaction . . . It has to do with students having the opportunity to recognize good teachers and reward them with some money.” No offense to the fine students at A&M, but should a professor's career be determined by these guys?

So, with the new on-line evaluation system at BMU, many of us are wondering how we will fare in this consumer-oriented world. Since students are not likely to fill out the new evaluations unless they are throbbing with love or hate for a class, the results will be skewed considerably.

Faculty who recoil at comparing their classroom to the gift-wrap counter at Macy’s are disregarded or, worse, assumed to be “bad teachers” who are bitter about it. But that assumption places a huge amount of faith in students’ ability to measure what is important in their instruction. (For the record, my own evaluations are usually fine – not stellar, but not a horror show. Given the amount of work that I assign, and my inclination to assign texts that are outside students’ comfort zones, I am amazed that I do as well as I do.)

The same article in Inside Higher Ed pointed to many problematic assumptions about teaching evaluations, including studies that refute the accuracy of evaluations as a measure of learning. One study by three economists at Ohio State found, not surprisingly, that students are more likely to give higher course evaluations if their own grade is high. They also reaffirmed that gender and national origin impact evaluations: women and non-U.S. faculty receive lower evaluations, on average, than their peers.

When those same economists charted students’ grades in subsequent classes that depended on content from the evaluated class, they found no correlation between professor evaluations and the learning that actually took place. In other words, a student might have learned a great deal but still hated the class and given it a negative review.
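
For the statistically curious, here is a minimal sketch of the kind of check being described: correlate instructors' average evaluation scores with their students' average grades in the follow-on course. This is my own illustration in a few lines of Python, not the economists' code or data; the numbers are invented.

    # Illustrative only: toy data standing in for the study's real dataset.
    # Each position is one instructor: mean evaluation score (1-5 scale) and
    # their students' mean grade in the subsequent, content-dependent course.
    from scipy.stats import pearsonr

    eval_scores     = [4.8, 4.5, 3.9, 3.2, 4.1, 2.8, 3.6, 4.3]   # hypothetical
    followup_grades = [2.9, 3.1, 3.3, 3.0, 2.8, 3.2, 3.1, 2.9]   # hypothetical, 4.0 scale

    r, p = pearsonr(eval_scores, followup_grades)
    print(f"Pearson r = {r:.2f}, p = {p:.2f}")
    # A near-zero r with a large p-value is what "no correlation between
    # evaluations and subsequent learning" looks like in practice.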

Student evaluations are important and I am not suggesting their demise (though we can dump this on-line nonsense). Students' goals in the classroom, though, are often about reducing their workload and being entertained. Instead of depending on their viewpoint as the sole measure of teaching effectiveness, we need to consider tools that actually measure whether students acquire new skills. Until then, do you want fries with that history class?

25 comments:

rosmar said...

I have such mixed feelings about student evaluations. On the one hand, I get a lot out of mine (sometimes). Almost every positive change I've made in my teaching has been in response to thinking about something a student said (even if I go in a different direction than the student suggested). On that same hand, I see the need for evaluations of teaching for the institution as a whole--so much of our jobs takes place behind closed doors, and I have witnessed some truly horrendous teaching in my day. (Mostly I've had good-to-excellent teachers, but when teaching goes bad, it can be really, really bad.) On the other hand, student evaluations (like all evaluations--peer evaluations show the same patterns) tend to reflect the stereotypes that pervade our society, and they are also often contradictory (in the same course I can get two students who say I lectured too much and two who say I need to lecture more, for example). And students, like all of us, sometimes are too focused on the short term to be good judges of what is in their long term interest.

Online evaluations, though, are clearly a joke. I can't believe your university went there. My college, fortunately, has already seen the low response rates on advisor evaluations, so it isn't going to turn to online student evaluations any time soon.

(Also, on the students as consumers point--I strangely often hear students as consumers mixed up with the concept of students as products--both disturbing in their own way. Fortunately, my intro class includes Plato, so I get to use the allegory of the cave early on to point out to students that education can be a long and painful process.)

Anonymous said...

1- I know it's early, but when I clicked on this post, I thought those "woosh" marks had cut off wonder woman's head, & I actually jumped away from the computer
2- I am willing to forgive you for such unintended blasphemy b/c you followed it with Doris Day

3- Has BMU had the bright idea of making the evals, or some aggregate thereof, available to anyone familiar with your university's computing system? You know, under the guise that it will help provide official channels by which students can learn from other students' experiences in your classes when picking their next semester's courses? I mean, you know, advising and informal networks be damned, they aren't online!

If they have not, brace yourself, b/c that is the next pointless step in this effort. And even better, I have colleagues whose evals-based ratings and some key quotes from the evals are available to anyone who accesses their name from within the registrar page at University of Evil. They didn't even know the info was available to the public.

Seriously tho - I do think evals have their place, when they are actually providing useful feedback and a means of exchange that positively impacts learning. And I'm unclear why we continue to be dependent on them in their current form(s) when so many studies have proven them to be biased based on race (I've got a link to the latest study somewhere on my blog), gender (does AAUW prove this every 3-5 years?), nationality (as you note), and perceived "hotness" (as noted on RYP), on top of the ease of the course. One way that I've tried to change that in my classes is to offer 1-2 informal anonymous evaluations that actually ask questions about the course material; they are written in such a way that class participants have to take out the syllabus to complete them, as well as the "learning goals" sheet I provide at the beginning of the term with the goals of the course and space for them to write their own goals. These evaluations are radically more helpful b/c they actually reflect on specific assignments, reading materials, and info delivery. And I have modified my pedagogical strategies during the course as a result of them, which means the people doing the evals actually benefit as well, not just the class that comes after them. We encourage similar thinking at the mentorship program b/c corporate college isn't just happy with "it's online"; it also uses that to somehow negate questions of embodiment.

I was Chair at the time evals shifted in my program, and I just got together our curriculum committee and asked them to generate ideas for an internal evaluation system. Ultimately, we use a paper based system with specific questions about course content & departmental & uni goals. These forms are done in addition to the "please fill out the online eval" speech. It gave us a chance to have a voice even if the bureaucrats were too busy finding out what else was online to listen to our normal channels of complaint.

(On the other hand, I have a colleague who loves the transition. She has a 30% or less return rate online, mostly from students who love her and the classes, and so she says she doesn't have to worry anymore about the stack of evals from "gen ed kids who thought they were going to color all semester." So who knows, for some there may be a silver lining.)

Tenured Radical said...

We switched to on-line as well, and I am sorry to say, I haven't even looked at mine yet, so little time have I had. And there is something to be said for having something actually come in the mail. So little of any importance comes in the mail nowadays that I am likely to be immediately curious about something official that does. NB: I had my students bring their laptops to class (which they often do anyway), I left the room for 15 minutes, and they did them there. So I expect a 100% response rate because I duplicated the old system.

That said, I recently looked at the evaluations of two t-t colleagues, and found to my dismay that although they got a decent response rate (upwards of 75%), most of the comments were far more casual than I am used to. Few were detailed, either pro or con, in a way that can often help a more senior mentor figure out what needs to be addressed in a bad eval and what doesn't.

GayProf said...

RosMar: I get a lot out of my evaluations too. Well, at least the ones that are focused on course content and not the color of my tie.

Students as consumers or as products feels like the same coin.

Susurro: Have more faith in Wonder Woman. She would never allow her head to be cut off.

I almost always give "unofficial" midterm evaluations to my classes for the reasons that you suggest. These are focused, content-oriented questions that consider how students are grappling with the material. In many cases I make adjustments based on that feedback.

One of my departments has also suggested giving supplemental paper evaluations as a means to offset the "official" computer ones at the end of the semester. So, the on-line system really just means more work for individual faculty.

Ultimately, I think there will be an ironic effect: the "official" evaluations are going to be taken less seriously by the institution because of its own on-line decision. Low response rates and skewed results are going to make them seem like questionable data.

Tenured Radical: Truth be told, I haven't really looked closely at the on-line evaluations. For mysterious reasons, though, we had to download them within a short time period; otherwise they get deleted (how is the on-line system better?). I actually think there is something healthy about waiting to read the evaluations until an entire semester passes. The distance makes them seem less personal.

I fear that the lack of professional and specific content is going to be a side effect of the on-line system for many people. There must be a better way. . .

Anonymous said...

Ha! I love this post. I had thought that the folks who run BMU had more sense than their enchantment with on-line evals suggests. And the results that you and TR report, well--do we need any other excuse not to take student evaluations seriously? One upside of your on-line experiment is that student evals will be even more worthless and dismissible than they already are.

I think student evals as traditionally administered (in class, on paper) can be useful in 1) identifying a faculty member with a drug or alcohol problem, and 2) perhaps in the first year or two of an instructor's career, or when an instructor has switched institutions. Other than that, they're generally of very limited use unless someone in a department wants to use them against a junior faculty member. (Let's just say that "this isn't a graduate seminar, you assign too many books and too much writing," when all of that was spelled out in extravagant detail on my syllabus, isn't a comment that's going to effect any changes in my teaching style, although it's a perennial student favorite!)

My department does peer reviews of probationary faculty every year, so that we can develop a context for reading student evaluations. The peer reviews carry 90% of the weight in the reviews the tenure and promotion committee writes.

Doug said...

It sounds like BMU and other universities have a similar problem to some technology companies: the management is not familiar with the work of the managed, and/or the management never worked in the jobs of those they currently manage. Therefore, all their decisions are based on unrelated theories they learned in management school or on the hype and level of kickbacks they receive from software vendors, and not on practical experience gained in the field.

Dr. No said...

Are we at the same University? We recently made the switch too. In addition to the many problems you describe, we have found that the online evaluation system invites ALL students who were EVER enrolled in a class to evaluate it...so, students who drop, withdraw, etc. all chime in. I got an evaluation from a student who admitted they dropped after one week. The evaluation: class looked hard. Ugh.

GayProf said...

HistoriAnn: My most frequent comment in evaluations is that I assign too much reading. Quite frankly, I imagine if I assigned nothing but a pamphlet, that comment would still appear.

I think that you might be right that an unintended consequence of the on-line system will be the slow demise of "official" student evaluations. The less they can be considered as "representative," the more they are going to be deemed useless.

Doug: Boy, howdy, I am very curious about the software developers' relationship with universities. The fact that on-line evaluations are suddenly becoming the norm across the country suggests that it is driven by some type of business/market scheme.

Dr. No: Oh.My.God. At least BMU stopped short of having everybody who ever glanced at our courses give an evaluation. Why not just open it up to your neighbors and that check-out guy at your supermarket, too?

Pilgrim/Heretic said...

Whew - I think we may have dodged a bullet on this one. My university at least had the sense to run a pilot program with a smallish group of courses doing the online evaluations (though with much fanfare and promotion); when they discovered that the completion rates were abysmal, they gave up (very quietly).

Anonymous said...

Oh my Gayprof. This is insane. Wait, that's what you were saying.

This idea that "ON-LINE" is wonderful in every circumstance is driving me nuts. I don't have to worry about evaluations at my level (13-year-olds basically want to "watch more movies" and "assign less homework" if asked what they think of a class), but I am dealing with this on-line mania.

The latest is that we must put our grades on line, so parents can access them any time day or night. Because posting the student's grades once a week isn't enough. Problem is, we have a great grading program, but it's not the one the district has decided we'll use. Nope. We're to use the website they have already set up for attendance.

The website which is NOT a grading program. The website which goes down about twice a week.

But we must do it. For the parents.

How did parents ever deal with their kid's grades before this? Oh, wait, they spoke to the kids, or god forbid, the teachers directly.

oh. Sorry. Kinda went on there.

I feel your pain.

Anonymous said...

Here at pov u evals are the stuff of witch hunts for most juniors veering from the "established" curriculum. Recent big tenure denial cases seem to agree. And we don't do yearlies; we do every 3 years after the initial 1st-year review (you can request more if you want).

When I get "assigned too much reading" I take it as a compliment. I actually had grads complain they had to read a book a week. Umm, you were expecting cartoons?

(I do think evals that address content are helpful whether junior or senior; if you change things around regularly it is nice to hear what they got out of it even if you already know what worked and what did not. And yet, I am looking forward to this promised land when "on-line!" makes the official ones obsolete.)

pacalaga said...

Good grief - I sometimes didn't do my homework because I couldn't be bothered to think about class outside of class (lookit me! designing bridges!). No way would I have gone online to fill out an evaluation. Gah.

pacalaga said...

And also, I suspect that my comments on your blog prove that I am not actually smart enough to be reading said blog. I think I'm bringing down the curve. ;-)

Anonymous said...

My (only) experience with an online evaluation occurred last spring. Our university tested an online system using four or five classes, and my Shakespeare course was one of the handful selected!

All the professor could tell us was that an e-mail linking to the evaluation site would be sent to our university addresses. This meant anyone not using their university address (or not receiving such mail through forwarding) was automatically eliminated as a possible evaluator.

My reaction to the process was neutral. The site operated without problems, it took very little time, and I appreciated having a short window to complete the evaluation. Missing were class-specific questions, often provided by some of our departments as a supplemental survey with the main evaluation form. Also missing were opportunities to answer open-ended questions about the strengths/weaknesses of course content and strengths/weaknesses of the professor.

Anonymous said...

I love how this sentence summarizes the modus operandi of ALL academe so neatly: "But why continue with that system when we can make it needlessly more complicated and inefficient?"

I think evals can be useful, but I resent (on behalf of all of us) when they are treated as "true" reflections of competence. Students can't possibly begin to honestly understand and evaluate, for example, "how much work the professor has put into this class" or however that criterion goes...

GayProf said...

Pilgrim/Heretic: Or they wanted you to think that they had given up. We'll see what happens next year. . .

Laverene: Whenever I issue a complaint like this, I always know that primary and secondary school teachers almost always have it worse. At least profs are shielded a wee bit by the theory that we are teaching adults (though that is being eroded).

Putting grades on-line for parents probably wouldn't be a terrible idea if it meant that parents would take responsibility for their child's work. Instead, I suspect that it is just another means to assign blame to overworked teachers.

Susurro: "Too much reading," I think, means that they had to actually read something in order to pass the class.

Pacalaga: Are you kidding? You are totally classing up this joint.

Given your confessions, though, would you mind telling me what bodies of water your bridges span?

Jonathan: Missing any chance to give qualitative evaluations sounds bad to me. If you were "neutral" about the whole system, it also makes me think that the change wasn't really needed.

Outside Voice: "But why continue with that system when we can make it needlessly more complicated and inefficient?"

That's not just a rhetorical question at BMU. It's a commitment to an entire lifestyle.

dykewife said...

an assignment? and how are you supposed to determine that everyone, down to the specific student, has filled out the forms? that's nuts! of course, that said, the u of s is probably on-line as well. i didn't do an evaluation for my 800-level 1/2 class last term but i did for the classes i took in my last term for my undergrad degree. all done on paper to be processed onto the computer and given to the respective profs.

Anonymous said...

"Missing any chance to give qualitative evaluations sounds bad to me."

Yes! I prefer opportunities to give feedback in my own words (in addition to those Likert scales). But stats are supposed to be ... what is the magic word? Oh yes, "generalizable."

tornwordo said...

This gave me a chuckle. At least you actually get to read the feedback, however unconstructive it might be. At all three of the places where I work, I never see the student evaluations of the course. Ever.

Anonymous said...

One would hope that university administrators understand that self-reports of a student's intellectual gains are--how shall I say?--worth less than a pile of dung.

We have better methods of assessing a course's effectiveness. Some of my best professors had demeanors only slightly more pleasant than those of Cheetah and Gorilla Grodd, but--goddess bless them--they were great professors who demanded intellectual rigor. Why should we look to self-reports as markers of these professors' effectiveness?

Alan said...

I had heard about these changes, but fortunately I'm not TAing, so I missed out. What a shame.

One would think that, given we have an entire School of Education and a whole Department of Statistics, someone in the administration might have actually listened to the experts on this. (Not that one really needs to be an expert to know that these things were going to be even less useful than the previous methods.)

I'm not really clear what was driving this. But I assume it was money.

Anonymous said...

I think everyone is failing to see the upside here. Everyone knows student evaluations are not probative, much less interesting. All that paper is not environmentally friendly, and the time it takes to tally scores can surely be better spent reorganizing pencil drawers, or whatever it is academic administrators do, right? (To say nothing of rekeying comments, because so many professors recognize the handwriting of their fiercest student critics.) Student evaluations will soon fall into the dustbin history should have consigned them to long ago, and there will be innumerable and endless "outcomes assessment" practices to take their place. If the cost is only assuaging the ego of some midlevel bureaucrat, well, that seems quite reasonable to me...

Anonymous said...

I teach high school and they are talking about instituting student evaluations there. I don't think I could seriously consider criticism from students who can't even bring a pencil/piece of paper to class.

Anonymous said...

Think of all the trees and jungles that are saved without all that paper for the evaluations. ;-)

Unless my grade "release" relied on the completion of an online evaluation, I never would have done them. And I see universities using these against the staff somehow, some way.
