No correlation between students’ course evaluations and learning

InsideHigherEd has the report on a new meta-analysis, here:

A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.

What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”

“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between [evaluation] ratings and learning.”

These findings “suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty’s teaching effectiveness,” the study says. …
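
The mechanism the study invokes, small samples plus publication bias, is worth spelling out: with only a handful of sections per study, sampling error alone can produce large correlations, and if only the impressive positive results get published, the small-study literature looks like strong evidence for SET. Here is a minimal simulation sketch of that effect (not the study's actual analysis; the zero true correlation, the section counts, and the r > 0.3 publication filter are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_study(n_sections, true_r=0.0):
    """One fake multisection study: per-section mean SET rating and mean
    learning score, drawn with true correlation true_r (assumed zero here)."""
    set_ratings = rng.normal(size=n_sections)
    noise = rng.normal(size=n_sections)
    learning = true_r * set_ratings + np.sqrt(1 - true_r**2) * noise
    return np.corrcoef(set_ratings, learning)[0, 1]

def published(results, threshold=0.3):
    """Crude publication filter: only large positive correlations get printed."""
    return [r for r in results if r > threshold]

small = [simulated_study(10) for _ in range(1000)]    # 10 sections each
large = [simulated_study(200) for _ in range(1000)]   # 200 sections each

print(f"small studies passing the filter: {len(published(small))}, "
      f"mean published r = {np.mean(published(small)):.2f}")
print(f"large studies passing the filter: {len(published(large))}")
print(f"mean r across ALL large studies = {np.mean(large):.2f}")
```

Because the sampling error of a correlation shrinks roughly as 1/√n, the small simulated studies routinely clear the publication filter by chance (and the published ones average a "moderate" r), while the large studies almost never do, which is the pattern the quoted passage describes.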

6 Comments

  1. Dog 09/22/2016

    It’s about fucking time that this kind of analysis be front and center. Yes, faculty who are entertaining or caring tend to produce higher student engagement, and that can correlate with “learning,” but only indirectly. Indeed, bad ratings often go to instructors who really focus on learning rather than coddling.

    For my career (since I have both very good and very bad ratings), the only things that produce learning are time on task, trial and error, and failure and recovery; students’ tolerance for that approach varies considerably.

  2. Let's Do Something About it 09/22/2016

    I’m not sure anyone ever really conceived of these evals as measuring “student learning”. They are mostly satisfaction surveys. If that’s what we want to measure, we should at least be up front about it and not rely on them for faculty evaluations (unless we are also measuring faculty on how well they satisfy students).

    Maybe it’s time for the Senate and its relevant committees to take up the issue and modify our approach to this. I have read about other approaches.

    For instance, we could, for starters, admit that these are at best indirect measures of something and then get more systematic and intentional about what that something is. I remember reading about one faculty member at another school who got tired of the course evals and created his own. Since we know a bit about the kinds of classroom practices that tend to produce better engagement, and maybe better learning, he simply asked students to what extent he engaged in X or Y practice. This seems to me at least marginally better than asking 20-somethings their subjective opinion about the “quality” of a class.

  3. honest Uncle Bernie 09/23/2016

    It always struck me as goofy that academia would take student evaluations so seriously over several decades. They started as a cheap way to placate rebellious students in the violent late 60s.

    In retrospect, I now value most some of the professors that I couldn’t stand.

    The notion that evaluations measure teaching effectiveness is just pretty silly.

    • Dog 09/23/2016

      Agreed; I think that was my main point. It’s not that studies haven’t been done showing this; it’s that we, at the UO, still stick to these evaluations in way too serious a way, when they are, ultimately, meaningless in terms of defining good or effective teaching (there is a difference between those two).

      They can ferret out disorganized and uniformly poor teaching (but in my experience, the UO does nothing with such evaluations).

  4. Heath Hutto 09/23/2016

    I spent three years on the FPC and only read student evaluations for the comments; I remember finding those generally uninformative, but more instructive than the numbers. It led me, as a student, to write much more concrete and detailed evaluations.
