Press "Enter" to skip to content

UO Senate’s action to kill numerical student course evaluations now part of a national movement

The Chronicle has the report here, highlighting new research showing the bias and irrelevance of the old evaluations, and new efforts to replace them with meaningful feedback from students and colleagues. A snippet:

What sparks this kind of change? Growing concern about the inequity of student course evaluations has inspired some campuses to start there, either rewriting them in ways that make them more useful or reducing their weight in determining raises and promotions. That work frequently opens the door to deeper conversations in departments and across campus about how to create a culture of teaching excellence.

University of Oregon leaders took this approach, scrapping the traditional course evaluations in favor of a new instrument called the Student Experience Survey. They created new teaching-evaluation standards, grouping them into four categories — professional, inclusive, engaged, and research-informed — and made sure the questions on the student survey aligned with those categories. And they created new tools for peer review and self-reflection.

Lee Rumbarger, associate vice provost for teaching engagement, notes that this was a multi-year process starting with the Office of the Provost and the University Senate, then moving out into colleges and departments.

For the record, the process started with two of my Economics honors students, Emily Wu and Ken Ancell. A previous Chronicle report notes:

“Having a female instructor is correlated with higher student achievement,” Wu said, but female instructors received systematically lower course evaluations. In looking at prerequisite courses, the two researchers found a negative correlation between students’ evaluations and learning. “If you took the prerequisite class from a professor with high student teaching evaluations,” Harbaugh said, “you were likely, everything else equal, to do worse in the second class.”

The team found numerous studies with similar findings. “It replicates what many, many other people found,” said Harbaugh. “But to see it at my own university, I sort of felt like I had to do something about it.”

He did. In the spring of 2017, Harbaugh assembled a task force on the issue and invited Sierra Dawson, now associate vice provost for academic affairs, to join. The last time that course evaluations had been reviewed was a decade earlier, when the university moved from a paper system to online.

This academic research result would have gone nowhere, however, without the enthusiastic support and hard work of Dawson and Rumbarger from the Provost's office, who worked with the UO Senate to come up with a new system – one that asks students about the use of specific teaching methods, not how much they like the instructor – or without the somewhat less eager but essential support of then-UO President Michael Schill.
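For the quantitatively inclined, here is a minimal sketch of the kind of regression behind the Wu–Ancell finding quoted above. The file and column names (prereq_pairs.csv, grade2, eval1, gpa) are invented for illustration; the actual study's data and specification are not reproduced here.

```python
# Hypothetical sketch of the kind of analysis described above: does the
# prerequisite instructor's evaluation score predict performance in the
# follow-on course? All file and column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student: follow-on course grade, the prereq instructor's mean
# evaluation, and the student's prior GPA as an "everything else equal" control.
df = pd.read_csv("prereq_pairs.csv")

model = smf.ols("grade2 ~ eval1 + gpa", data=df).fit()
print(model.summary())

# The finding quoted above corresponds to a *negative* coefficient on eval1:
# students of higher-rated prereq instructors did worse in the second course.
print("eval1 coefficient:", model.params["eval1"])
```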

So how’s it going? From The Chronicle:

Samantha Hopkins, head of the department of earth sciences at Oregon, sees those barriers on her campus.

Faculty members are largely happy with the reforms to the course evaluations and other changes that have made evaluations more substantive, she says. But administrators accustomed to the numbers-driven systems are finding the new process challenging. “I’ve heard a lot of people expressing a feeling that they miss the student evaluations, and they don’t like the student-experience surveys as much because it’s so much harder to pull an assessment of someone’s teaching out of it,” she says.

Hopkins doesn’t miss the old system but understands the feeling: It is so much easier to compare numbers to numbers. “It’s something I’m struggling with right now,” during annual evaluation time, she says. “It’s the challenge of looking at what people are doing and saying: Is this good enough? What is good enough?”

And then there are the old, hard-to-budge hierarchies. “You hear a lot of lip service given to the importance of teaching,” she says. “But really when it comes down to it, so much of university culture is really centered around the importance of research.”

She recalls a conversation she had with a senior administrator who objected to the idea that a senior instructional faculty member should make as much as assistant professors. It’s a view widely held across campus, she says. “This idea that someone who does only teaching and not research can’t make as much as someone who does research, even the most junior member of the research faculty, tells you where they are actually putting their money.”

7 Comments

  1. Anonymous 02/11/2024

    I was on the UO Senate when we voted for the new system, and I was happy to vote for it. But in four years on the Senate, it's the only vote I would take back. Neither I nor any of the colleagues in my department I've talked with about the Student Experience Surveys find them useful in the least. They don't tell me anything, and I barely look at them anymore. Besides, it's not like numbers don't exist under the hood; of course they do. Administrators must know that. But the real problem, as far as I'm concerned, is that they allow us to pretend a bias doesn't exist among our students, and it does. I'd much rather have it out in the open, obvious for all to see, than hiding in the data in ways no one can see. I don't quite know what the solution is, but there's got to be something better than this.

    • Dog 02/12/2024

      Facilitated end-of-term random student group interviews; but this costs money and time.

  2. Please pivot 02/13/2024

    I have respect for the folks who worked on this project, and understand why it was appealing to the Senate. That said, the implementation is not working, at least in my quadrant, which includes very large classes. Unfortunately, the majority of students and instructors do not seem to find value in the survey as it stands. The reports include small numbers of student comments that have undue impact. It's not clear that the intended outcome of improved teaching is being met, or that this is the tool to get us there.
    Sometimes it’s important to pivot.
    What about a much simpler/shorter survey, using ‘beneficial to my learning’ rather than numbers?

  3. vhils 02/16/2024

    The problems related to the clear biases of the old system have been replaced by the lack of any impetus for students to spend the time to do the new evaluations. The result is that the students who do take the time are self-selecting and tend toward either end of the evaluative spectrum. The average response rate is so low that there is almost no statistical value to the evals, but because they are a defined component of our merit and promotion processes, they receive far more weight than they should. (See the sketch after the comment thread.) This is a problem, btw, that could be fixed or made better by the Senate, without changing the actual evaluations themselves.

  4. OneSilverLining 02/17/2024

    One positive impact of the new evaluations has been that our merit reviews have actually focused primarily on the quality and quantity of each person's professional development, service, research, and other often invisible labor, rather than placing primary focus on problematic numerical teaching-evaluation scores. I do have issues with the way the evaluations (surveys) are written, as well as with the "optional" teaching reflections. I was involved in the pilot of the new course surveys and reflections, and it was crystal clear that the folks creating and implementing them (TEP and friends) were not interested in critical feedback that could have improved them at the time.

  5. Please pivot 02/18/2024

    For ethical reasons, clinical trials can be halted when it's clearly evident that a treatment either is or is not working…

    Other units on campus that have administered top-down projects have modified them in the face of feedback. TEP and OtP have been uniquely resistant.

  6. [email protected] 02/20/2024

    I was following this discussion in 2019 and recall that the intention was to sideline student commentary in the evaluation process entirely. Did I dream that?

    The biases reflected in numerical evaluations (gender bias, bias against hard graders, etc.) are surely there in written comments as well. And comments without numbers allow evaluators to more easily cherry-pick/interpret to suit a preferred narrative.
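On the response-rate point raised in comment 3: the distortion from a self-selected, tail-heavy response pool is easy to simulate. Here is a toy sketch; every number in it is invented for illustration, not drawn from UO data.

```python
# Toy simulation of the self-selection problem described in comment 3:
# opinions are roughly normal, but mostly the unhappy (and some delighted)
# students bother to respond. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
opinions = rng.normal(loc=3.5, scale=0.8, size=500)  # "true" views, 1-5 scale

# Response probability concentrated at the two ends of the spectrum.
p_respond = np.where(opinions < 3.0, 0.50,      # unhappy: very likely to respond
             np.where(opinions > 4.2, 0.35,     # delighted: fairly likely
                      0.05))                    # everyone else: rarely
responded = rng.random(opinions.size) < p_respond

print(f"response rate:        {responded.mean():.0%}")
print(f"true class mean:      {opinions.mean():.2f}")
print(f"reported survey mean: {opinions[responded].mean():.2f}")
# With a low response rate dominated by the tails, the reported mean drifts
# well away from the class's true average, and varies a lot run to run.
```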
