
UO Senate & administration among leaders in national effort to reform evaluation and improvement of teaching

When was the last time UO made it into the national higher ed press for something other than a sports scandal, a b.s. branding campaign, or because its general counsel wanted to look at a faculty member’s emails with reporters?

The UO Senate votes on the reform proposal this Wednesday. InsideHigherEd’s Colleen Flaherty has the story today:


Most institutions say they value teaching. But how they assess it tells a different story. The University of Southern California has stopped using student evaluations of teaching in promotion decisions in favor of a peer-review model. Oregon seeks to end quantitative evaluations of teaching in favor of a holistic model.

Research is reviewed in a rigorous manner, by expert peers. [UOM: Except for the $100K-a-year Academic Analytics, that is.] Yet teaching is often reviewed only or mostly by pedagogical non-experts: students. There’s also mounting evidence of bias in student evaluations of teaching, or SETs — against female and minority instructors in particular. And teacher ratings aren’t necessarily correlated with learning outcomes.

All that was enough for the University of Southern California to do away with SETs in tenure and promotion decisions this spring. Students will still evaluate their professors, with some adjustments — including a new focus on students’ own engagement in a course. But those ratings will not be used in high-stakes personnel decisions.

The changes took place earlier than the university expected. But study after recent study suggesting that SETs advantage faculty members of certain genders and backgrounds (namely white men) and disadvantage others was enough for Michael Quick, provost, to call it quits, effective immediately.

‘I’m Done’

“He just said, ‘I’m done. I can’t continue to allow a substantial portion of the faculty to be subject to this kind of bias,’” said Ginger Clark, assistant vice provost for academic and faculty affairs and director of USC’s Center for Excellence in Teaching. “We’d already been in the process of developing a peer-review model of evaluation, but we hadn’t expected to pull the Band-Aid off this fast.” …

Not Just USC

Philip B. Stark, associate dean of the Division of Mathematical and Physical Sciences and a professor of statistics at the University of California at Berkeley, who has studied SETs and argued that evaluations are biased against female instructors in so many ways that adjusting them for that bias is impossible, called the USC news “terrific.”

“Treating student satisfaction and engagement as what they are — and I do think they matter — rather than pretending that student evaluations can measure teaching effectiveness is a huge step forward,” he said. “I also think that using student feedback to inform teaching but not to assess teaching is important progress.”

Stark pointed out that the University of Oregon also is on the verge of killing traditional SETs and adopting a Continuous Improvement and Evaluation of Teaching System based on non-numerical feedback. Under the system, student evaluations would still be part of promotion decisions, but they wouldn’t reduce instructors to numbers. 

Elements of the program already have been piloted. Oregon’s Faculty Senate is due to vote on the program as a whole this week, to be adopted in the fall. The proposed system includes a midterm student experience survey (an anonymous, web-based survey that collects non-numerical course feedback, provided only to the instructor), along with an end-of-term student experience survey. An end-of-term instructor reflection survey also would be used for course improvement and teaching evaluation. Peer review and teaching evaluation frameworks, customizable to academic units, are proposed, too.

“As of Fall 2018, faculty personnel committees, heads, and administrators will stop using numerical ratings from student course evaluations in tenure and promotion reviews, merit reviews, and other personnel matters,” reads Oregon’s Faculty Senate proposal. “If units or committees persist in using these numerical ratings, a statement regarding the problematic nature of those ratings and an explanation for why they are being used despite those problems will be included with the evaluative materials.”

The motion already has administrative support, with Jayanth R. Banavar, provost, soliciting pilot participants on his website, saying, “While student feedback can be an important tool for continual improvement of teaching and learning, there is substantial peer-reviewed evidence that student course evaluations can be biased, particularly against women and faculty of color, and that numerical ratings poorly correlate with teaching effectiveness and learning outcomes.”

More than simply revising problematic evaluation instruments, the page says, Oregon “seeks to develop a holistic new teaching evaluation system that helps the campus community describe, develop, recognize and reward teaching excellence.” The goal is to “increase equity and transparency in teaching evaluation for merit, contract renewal, promotion and tenure while simultaneously providing tools for continual course improvement.” …

6 Comments

  1. terrible research 05/22/2018

    The research that the claims of bias are based on is laughable. Here are the three strains of evidence the article discusses:

    1) Non-randomized conditional correlations between gender and student evaluations. Problem: non-randomization.

    2) A study by sociologists involving about 30 observations, in which TAs for an online course were in one case assigned a male gender and in another case a female gender. Problem: the TAs were told the purpose of the study before it was run. Cue experimenter bias.

    3) My favorite: Mitchell and Martin each teach a similar online course. They do not hold constant grading, emailing, or behavior in office hours. Martin (man) does better on evals. They write a paper about it claiming gender bias! Problem: I’m not making this up.

    • uomatters Post author | 05/22/2018

      Thanks, I didn’t read that paper. (And it sounds like I shouldn’t bother). There’s plenty of other bias evidence though. One result is from an online course where instructors swapped names (and therefore perceived genders). The “man” got better evaluations than the “woman”.

      • Dog 05/22/2018

        For the most part, I support the proposed new scheme for teaching evaluation, but:

        1. I think it is too grandiose and initially will be very difficult to interpret as no real baseline exists

        2. I worry about the inclusive stuff, because the responses will then depend on

        a) class size
        b) physical nature of the room (yes this matters)
        c) amount of TA support

        For instance, a class of 200 students without TA support in a box room (like Will 100) is not a very inclusive environment, no matter how many jokes the instructor tells.

        So I think the overall distribution of responses is going to be somewhat class-size dependent, but I could be wrong.

      • #2 05/22/2018

        That’s strain #2 above. The one with the experimenter bias.

  2. terrible research 05/22/2018

    That’s study #2 above. If we’re going to overhaul policies based on research, I hope someone will at least read the research.

  3. Pollyanna 05/25/2018

    We overhauled teaching evals in the Aughts. These things aren’t sacrosanct, ffs. It’s better to revise them periodically and keep tinkering in response to research than to keep them as is because they’re familiar and the replacement isn’t perfect. (Full disclosure: the question about availability outside of class has always chapped my hide)
