Friday, December 30, 2011


It's been months since I posted, but the last post from September is a good one, 'cause guess what? That student sure showed me.

Several days ago, just before the holidays began in earnest, I got the statistical reports back from my two sections of the "foundations" course I had been teaching. Two sections: same syllabus, same lesson plans, same assignments, similar grade distribution. The earlier section was somewhat less talkative and had fewer pure standout students, while the later section seemed to be running preternaturally well, but by and large these were the same class, held back to back.

So when I handed out course evals in each class, I figured that they would look quite similar, and that (since both courses felt like they'd gone in familiar ways) both sets would look like most of the other sets of evaluations I had done in the past.

So, last week, when I looked at the statistical reports, I was pleased, but not surprised, to see that the later section (the first set I read) gave me quite good scores--on a 1-5 scale, most of the average scores were 4.7 and above. Statistically, these evaluations were the best of any course I'd ever taught that didn't involve an actual trip to London.

I didn't expect the next section's scores to be quite as high, but for a moment, I believed that this first batch confirmed what I had believed: that the particularly rigorous version of the course I had designed had been successful. In addition to the three graded papers that each required conferences, I had students complete 20 written exercises that sometimes took particularly ambitious students 3 pages to complete fully. I had asked them to work quite hard, but for all but one student (and not the one you may be thinking of), that work had yielded sometimes transformative dividends in their thinking and writing.

Imagine my surprise, then, when I opened the pdf for the other section and discovered that these scores were flat-out the worst of my entire teaching career. A couple of mean scores dipped below the 4.0 mark, which for me is pretty shockingly low, and in some cases fell into the 10th and 15th percentiles across all university courses.


Now, part of this stems from what seems to be one student who (I would argue, in bad faith) simply gave me straight 1's. And my guess is that the student who produced that document was the same student mentioned in the below post. But even accounting for that student, these were still statistically low evaluations. How do I account for this? Some possibilities.

1) A poisoned well: This student was so disenchanted with me and this course that hir bitching and moaning when I wasn't in the room colored the perceptions of everyone else in the room. This is something of a possibility, but this was a fairly quiet student, so it's hard to attribute the entire anomaly to this effect.

2) The truth about the course and how well it had gone lies somewhere in between the two, and the rosy view from the "better" section is no more precise a measure than the scathing one. I certainly want to believe that the great scores were the true ones and the poor ones a statistical anomaly, but perhaps to a certain degree they are both statistical anomalies.

3) The difference in student populations between the two courses had a bigger effect on student perception than I had imagined. This is possible, but the theory is contradicted by other courses in my experience. Of the 10+ sections of the survey course I've taught, my perception of student ability and enthusiasm is usually irrelevant to their perceptions of the course, and sometimes the two actually seem inversely related. Now, I went for "rigor" more vociferously here, and if anything (outside of actual learning) seems likely to produce lower course evaluations, it is more writing and "stricter" grading policies. In this case, then, the section with fewer high-performing students seems to have fostered a classroom culture that less thoroughly bought into what I was aiming for in the course.

So some lessons to learn here:
1) As I think we all know, course evaluations are an imprecise, if not downright inaccurate, way of measuring how well a given instructor is doing in a given class. Certainly trends over several sections can be telling, but the caprice that seems to have determined the wild divergence in these two sets disrupts many sureties we may have about these assessment tools.

2) Perception may matter more than actual learning in student evals. I think we all knew this too, but it underscores a dangerous trend, and one that many assessment initiatives are unable to account for. That is, given that my merit raise is keyed to my annual evaluation, and that evaluation may in fact suffer from the comparative dissatisfaction of, at maximum, five students, I am monetarily incentivized to move away from the practices that I believe created the conditions for these poor evaluations. And in at least one case, I think that practice was simply this: intellectual honesty with a poor-performing student who is ill-suited for this discipline. So, what? When I meet a student like this one in the future, I smile and nod and say, "Sure, the civil rights movement was about rainbows and ponies. What original thinking!"? No, of course not, but when that choice may in fact literally cost me hundreds, and even thousands, of dollars over the course of my lifetime (merit raises are a percentage of base pay, so the effect compounds over time)? Whew, that's a hard one. I understand why some folks have decided to simply punt on rigorous courses.

3) This is the one more personal to me: I care waaaay too much about this. This has bothered me for over a week now, and has unsettled my thinking in a number of ways. Sure, it's weird; I shouldn't still be talking about it, or at least not bringing it up in casual conversation. But the fact is, like many of my students, I derive a not-small chunk of my self-worth from external validation--it used to be grades, then conference paper acceptances, and now articles and book contracts and yes, at a predictably regular interval, course evaluations. Five students should not have this kind of sway over me, but dammit they do. And like the student with whom (I believe) I was intellectually honest, I have taken this personally. And I shouldn't.

1 comment:

Rosemary said...

Ugh--I'm right there with you, Horace. Those bad evals always stick in my craw, even--and especially--when they are in bad faith, as you put it. And they stick there for a long time.

Wish I had some sure-fire way to un-stick them, but I don't. I think for me the "stickiness" comes from a sense of betrayal.

And I agree, we need to come up with better ways to assess teaching effectiveness. I ought to be asking colleagues to observe my classes at least once a year, but I'm afraid I haven't even done that once, period. Why should that be scarier than giving student evaluations?