During my teaching career, I rarely read student evaluations of the courses that I taught. After the first couple of years, I never looked at them. The comments from students excerpted here? I read them only after retiring from teaching. (By the way, I didn’t offer my children a chance to write evaluations of my parenting, either.)
Didn’t I care what my students were learning? Of course I did. So how did I make sure that we weren’t failing miserably? Easy. Every graded assignment, every class discussion, every question asked (or not asked!) during or after a lecture, every group activity, every oral presentation, every response paper, every essay or research paper, every quiz, every test, every exam, every one-on-one meeting with a student in my office: all of that told me pretty clearly whether we were getting the job done.
Not to mention, this student’s eagerness to answer my questions. That one’s refusal to meet my eye. The slouch in posture over here. Over there, the way she’s begun to tutor her friends at the beginning of class. All of that, too.
But end-of-semester, college-administered, official surveys of student opinion? I would be hard-pressed to identify a more telling indication that we are failing utterly to decide what education is for. What its purpose should be. Or purposes.
If we can’t decide what it’s for, how can we evaluate how well we’re doing it? Here’s a simple question for you. What headings would you assign each of these two columns?
| ? | ? |
| --- | --- |
| Assured success | Failure as progress |
| Transaction | Facilitation and collaboration |
| Proven techniques | Intuitive leaps |
| What | How and why |
| Horizon: end of class, unit, term | Horizon: lifetime |
| Lends itself to analytical description | Resists analytical description |
| Effects easily assessed | Outcomes not easily assessed |
| What works | What’s good |
Here’s a second set of questions. How many standardized evaluations, do you think, ask students to indicate whether a course seemed designed to expose them to failure? Whether their professor sometimes left them befuddled at the end of class? Whether they themselves had more questions at the end of the term than at the beginning? How about something like this: “Write a short essay in which you reflect on what you have learned about yourself as a consequence of taking this course.”
Here’s an anecdote. I was participating in a college workshop several years ago. Some professors were working together to develop a rubric for assessing student writing in our program of first-year seminars. In other words, we were trying to develop a consensus about one of its primary purposes. “Our seminar program exists to teach students to write better? Okay, let’s decide what we mean by ‘better.’”
At some point, when conversation had broadened to include other purposes of these required seminars, I asked whether we should be assessing the degree to which students felt inspired by their experiences in our courses.
“Nope.” That was the response. Mind you, my colleagues didn’t disagree that it’s a wonderful thing when some of us succeed in “inspiring” our students, though what is meant by inspiration is certainly not obvious on its face. But it’s not the purpose of a liberal arts education, that was their point. Call it a nice add.
I’d title the left-hand column “effective,” the right-hand one something like “inspirational” or “transformational” teaching. I’m sure you’re not surprised to hear — and may remember from your own experiences — that the typical course evaluation at a college or university focuses almost exclusively on items that fit comfortably in the left-hand column, i.e., on whether the instructor was organized, well-prepared, clear, fair, etc.
In fact, we need to be talking about the right-hand column. If for no other reason than that it’s the only way we can save humanity.
Kidding. A bit.