Whatever we may say about academia as a haven for liberals, it is systemically no less a part of global capital than Chase Corp or Northrop Grumman. So let's be clear about the realities of academia: it’s hardly the ivory tower of pure intellectual discourse that we sometimes romanticize it to be.
So with all that Military Industrial Machine business swirling about our hallowed halls, it is no wonder that we daily observe, and as frequently lament, the corporatization of the university, an institution that feels like it shouldn’t be corporatized.
All that said, my (and perhaps your) experience of a humanities education has given me the very tools I might use to locate such discourses of capital, power, and hegemony. My humanities education sometimes seems like the last hope I have personally of feeling like something more than a cog in an economic machine (with a shout out to Sisyphus, who at least acknowledges her status as an Academic Cog).
And so it seems that protecting the humanities means protecting what is, for me, the only available ground from which to build resistance, which sounds a lot more militaristic than I wish it did, and yet sometimes those seem to be the only metaphors available (thanks, Lakoff!).
And so I’ve been thinking a lot lately about how the discourse of efficiency, of high performance, even technological performance, saturates the messier and (it seems to me) more humane discourses of the arts and the humanities, particularly in the space of the classroom.
And lately, this has shown up in administrative drives for assessment. Administrators, donors, corporate sponsors, government funding agencies: they all want data that says we’re doing a good job. The Bush Administration liked to call it accountability, and we know where that has gotten us in primary and secondary education. But the drive to assess academic endeavors is essentially a drive to hold academics accountable.
Now, I’m not against being accountable in theory. I believe I do good work in the classroom and in my writing, and I want my colleagues across the university and across the profession to serve their students well, too. But the questions of assessment and accountability necessarily invoke the questions of what is being assessed, how it is being measured, to whom we are accountable, and to what ends that accountability serves.
I’m sure you, in your department, have been doing some assessment—and if not, you likely will soon. I’m now on my second committee, at my second position, where assessment has been part of a major discussion, and in my second discussion about what purpose assessment serves.
In theory, I think assessment should be formative; that is, at its best, when we study ourselves closely, when we assess our work for ourselves, we should use it to improve what we do (a drive for better performance nonetheless; but that’s hard to argue against in practice). And so I always find myself wanting to do the kinds of assessment that are messy, that yield results that are complex and multifaceted, results that try to get at the complexities of the classroom and at the complexities of the kind of thinking we ask our students to do.
I often feel my colleagues (to greater and lesser extents) groan at this suggestion, for they know, as I do, that this is Not A Good Idea, and Will Not Fly With Administration. Because administration doesn’t really want us to tackle the complex vectors of teaching critical thinking or the nuanced space of the classroom.
They want numbers that make us look good. They want to report high performance, since in this regime, money follows high performance, and low performance (even with no actual transgression) is met with cuts to program funding, and eventually elimination of whole departments, disciplines, modes of thought: Just look at the state of Classics, once the core of a humanities education, now an endangered species.
And so our conversations inevitably turn to What We Can Assess, or rather, what measurements we can make of our activities that will look good enough to administrators that they will reward us with our new budget.
This is where it gets dangerous, because this kind of rhetorical, summative assessment usually ends up assessing What Can Be Assessed: grades, concrete goals, objectives with observable results, things that can be quantified on a uniform scale.
Do you see the problems? Independent thinking, while sure, technically measurable, I guess, is not really a Thing That Can Be Assessed. Critical Analysis? Questioning Assumptions? Creativity? Contribution to a discourse community? Not so much the assessable. Hardly even measurable.
And so, because we can’t easily assess those goals, we set aside the things that, for many of us, actually mark good thinking in our students. We may continue to push for them in the way we set up our lesson plans, in the way we reward students with higher grades, and in the way we respond to their writing. But these are contingent responses, merely temporary and individual ways that we re-affirm our actual commitment to the humane work of the humanities. And the contingent, the humane, and the individual rarely if ever make it into assessment rubrics.
What does make it into assessment rubrics? I am not an expert on the whole range of tactics that have been used in humanities assessment, but my sense is that even the most sensitive of them end up using quantitative scales that enshrine either actual grades (on individual student work or on whole semesters) or some numerical scale that looks at individual skills and makes judgments based on numerical evaluations.
The things that get assessed, then, are either so broad as to be meaningless (grade distributions in required classes), completely circular (we assess ourselves based on our own assessments), so local as to lose the big picture (one assessment activity I saw proposed actually wanted to measure students’ writing performance based on the number of surface grammatical mistakes), or so uniform as to completely elide individuality.
But what, might you ask, is the harm in measuring, say, individual skills, or broad swaths of grade breakdowns? Especially if it’s being used solely for rhetorical purposes to distribute to donors and other purse-string holders?
Because it’s never used for just that: Because when we say we’re going to assess something, we often adjust our pedagogy to emphasize that element. Sometimes we write it into the goals and objectives statements for the course or for the entire department. (Really: we had an argument over whether we should say our objective for students was “apply a range of interpretive strategies to texts” vs. simply “apply interpretive strategies to texts” because no single assessment could track “range”). We revisit those assessments in department meetings, and via emails that report our assessment results. We end up, ultimately, dwelling more on the assessable and less on the humane (I’m using this word overly broadly, but I’ll let it remain a placeholder).
Let’s say that we as a department decide that we want to measure students’ knowledge of key literary terms. The knowledge of these terms is if not crucial, then at least beneficial to excellent literary study, we say, and we want our students to know them. OK. Fine.
How do we assess it? Well, let’s build in a unit in the gateway course. Fine. Let’s assess it by giving a uniform exam in all sections of the gateway. Great. Then we’ll know how much students have learned about key literary terms.
Already, the knowledge of literary terms has been taken out of the slow accretion of a student’s lexicon over semesters: she may learn “interstitial” in one class, “epistemology” in another, “synecdoche” in a third. Now, though, she’s cramming them into three weeks of her freshman year, memorizing them on notecards, divorced from their application in actual texts. And then some faculty whose students do less well on the vocabulary test start giving that unit four weeks, at the expense of, say, scansion, or an introduction to feminist theory, or critical race studies, or one-on-one paper conferences. In this system, not only are students shortchanged out of what could otherwise have been weeks of more nuanced classroom experience, but they’re also likely to forget many of the terms that they probably never really learned how to use anyway.
OK, so let’s give the exam at the end of the course of study: we all know where this is going. Students have no incentive to prepare or privilege an assessment instrument that uses a range of key literary terms, and so they blow it off, don’t study for it, and give us results that neither tell us how well we’re doing nor tell the administration that we’re doing great, thanks!
So, what? we decide to make passing that exam mandatory? We make it an exit exam upon which the degree is contingent? Of course not: no self-respecting English department is going to rest the award of a degree on passing a vocabulary test.
Obviously, this example is over-determined, but the kinds of assessment instruments that get designed still boil our very complex subject matters down into digestible measurable bits, bits that when we decide to measure them stop being observable in their natural habitat (is that Schroedinger’s Cat? Or the Heisenberg Principle? I can never remember), bits that get distorted out of proportion and minimize in their shadows the unmeasurable work that we all say we want to foster in the humanities classroom.
And to design and implement an assessment instrument that would somehow take all that into account would demand so much work from so many people that the hours and energies spent assessing on a grand scale would take away from our research, our writing, our preparing for new classes.
Listen, I assess my teaching every semester, and more often than that: on this blog, in annual reports, in post-mortem notes for future semesters. In these ways I can take things into account, and get feedback on them, in ways that invigorate my teaching and help me better serve my students in every class. But I’m not putting a vocabulary test in my syllabus. My students are much better off using those weeks reading something new, engaging their ideas, and maybe even having a big idea of their own.

(ETA: Jason Jones has a thought-provoking essay in IHE that gives me much to consider as I continue to think through these issues. So as you read and consider this, please read and consider what Jason has to say as well. I hope to revisit this post soon with some "what to do" ruminations.)