How surprising. As reported in InsideHigherEd here:
Student evaluations of teaching, or SETs, can provide a better understanding of what is and isn’t working in classrooms. But gaining a “meaningful” understanding requires separating the “myths and realities” surrounding these evaluations, says a new report on the topic. That, in turn, requires data — lots of data.
So Campus Labs, a higher education assessment firm with 1,400 member campuses, opened its vault to create the new, myth-busting-style report. The study included more than 2.3 million evaluation responses from a dozen two- and four-year institutions that use Campus Labs’ course evaluation system, representing something of a national sample. All were collected in 2016 or later.
Philip Stark, professor of statistics at the University of California, Berkeley, and co-author of a major 2016 paper demonstrating gender bias in student evaluations, called the Campus Labs report “advertising, not science.”
“It’s particularly bad data analysis, including asking the wrong questions in the first place,” Stark said. Among his more specific criticisms were the lack of a control group, the conflation of when students submitted their evaluations with when they were in class, and “no data on gender, ethnicity, grade expectations, grades or other measures of student performance.”
Based on existing research, “the strongest predictor of evaluations is grade expectations,” he said.