An experiment in online course evaluation

Back when I was in the cube farm of academic technology, we tried an experiment within our then-new course management system: we had a large class (hundreds upon hundreds of students) pilot a mid-semester evaluation.  The instructor emphasized the importance of the evaluation and reminded students to take it, but our return rate was still only 8 percent.  It pretty much soured me on online evaluations, as such a low return rate renders the evals useless.  (At UC Davis at the time, veterinary students did get an invitation to chat with the dean personally if they didn’t fill out their course evals. Otherwise, there wasn’t any institutional effort to “incentivize”* students–that is, the registrar wouldn’t withhold a student’s grades until she had filled out her course evals.)

Fast forward to today. Boise State is offering online course evaluations, but recently the university announced that whether or not a course participates is not up to the instructor; each department either has to stick with in-class, paper-based evaluations or go all in with the online evals.  In the department meeting where we discussed the issue, we were leaning toward paper, and then one colleague said he had piloted online evals and was getting response rates of 90 percent.  I’d like to see the evidence of that, but whatever. . . it was persuasive enough that the sense of the meeting shifted toward a semester trial of online evaluations.

We’ve been told we should “incentivize” student participation in online evaluations, for example by offering perks (e.g. students could bring a 3″ x 5″ note card with them to the final exam or we’d drop the lowest quiz grade) if the class return rate reached, say, 80 percent.  And yes–those are the actual suggestions from the administration.  Never mind that I don’t give quizzes, and my students already can bring essay outlines on notecards to the final–I’m not going to reward students for doing something that I see as part of fulfilling the social and intellectual contract for the course.

So instead of offering to bribe my survey students, I spent an entire class talking (as I often do, but this time more frankly and comprehensively) about why I’ve taught History 111–U.S. history to 1877–the way I have.

Topics covered, and student reactions to each one:

  • memories of high school history, what they learned, and what they’ve used since then: mostly not good, dates and events, and not much, respectively.
  • experiences with, and feelings about, lectures in college, regardless of discipline: mostly bad, crappy PowerPoint presentations; suspicions that a professor or two is bullshitting them full-time.
  • political versus social and cultural history: prior to college, students haven’t been exposed, by and large, to social and cultural histories, except in very small amounts; they find it refreshing, particularly if we’re doing “history from below.”
  • students as vessels to be filled with “content”**  and the relation of this approach to online courses and pedagogies of scale: resentment, boredom, disbelief.
  • survey textbooks***: expensive, unreadable, useless–pretty much unadulterated loathing.

Our conversation lasted 45 minutes, and at the end I made another pitch for them to fill out course evaluations, saying that their feedback is not only valuable to me individually, but it also allows instructors to make a case to deans and provosts and beyond that students do think about learning in ways that should matter to us.  I then reiterated that I really do make changes in my course structure and teaching style based on student feedback, and that since I may have 30 more years (!!!!) in the classroom, they have the opportunity to make a big impact on future students’ learning experiences.  I encouraged them to take ten minutes or so to fill out the evaluation as soon as possible.

This class’s online response rate thus far, more than halfway through the response window? Twenty-one percent.  Anyone care to guess how little that number will rise, even with repeated urgings, by the time the survey closes on Friday evening?  Leave your bets in the comments.

* Worst word ever?  Possibly.

** Remember when we used to say “knowledge” instead of “content”?

*** For the record, I used Major Problems in American History, Volume I by Elizabeth Cobbs Hoffman et al.; Abraham in Arms by Ann Little; Mongrel Nation by Clarence Walker; and They Saw the Elephant by Joann Levy.  Each book takes a very different approach to history, with Little’s being the most traditional (yet also very readable!), Walker’s serving as a witty and searing examination of why different American demographic groups view the Jefferson-Hemings liaison in divergent ways, Levy’s offering thematic chapters but no footnotes or endnotes, and Major Problems bringing together eight to ten primary sources in each chapter with two essays usually excerpted from books by academic historians.  My students found Little’s book challenging at first but conceded they enjoyed each chapter more than the previous one.  Walker’s book was puzzling but made for the best class discussion because it was the most explicitly provocative.  Levy’s book was the most accessible, and my Idaho students seemed to appreciate its focus on western women’s history, as their previous exposure to regional women’s history (or, actually, any women’s history) was via pioneer wives and Sacajawea.  I suspect most students stopped reading the essays in Major Problems as early as a third of the way into the semester, and many students needed a great deal of guidance in interpreting primary sources.

Whiteboard image by Skye Christensen, and used under a Creative Commons license.

Comments

  1. Leslie:

    The ridiculousness of online course evaluations has been on my mind for months now. I guess I’ve been reluctant to weigh in since I’ve been afraid to generalize our awful experience. Not any more.

    Our school went all-online, no choice, maybe three years ago, and response rates dropped to almost nothing. People’s entire careers are now resting on making sure they don’t anger students enough to get them to actually log in and say what they think. The worst thing about it is that the administration absolutely refuses to admit that they made a terrible, terrible mistake.

  2. Our university does all online evaluations and “incentivizes” them by delaying the release of students’ grades by a week if they don’t fill them out. This means that usually 50-80% of students fill them out.

  3. So where did you end up in your response rate? I’m interested because you had the exact conversation faculty should have with their students about online evals. I’m curious if you saw a bump in your response rate as the semester closed.

  4. Leslie M-B says:

    Andrew, the return rate in my survey course was only 44%, and that’s with multiple reminders in person and by e-mail. My other, smaller, upper-division course had a return rate of 67%. Of course, my typical return rate with paper-based evaluations is between 90 and 100 percent, so I consider this experiment a failure.

Trackbacks

  1. […] evaluations. Leslie M-B beat me to it in December with a useful and brave post that you should read here if you didn’t see it the first time […]