
Use of student evals for promotion and tenure banned in Canadian labor case

An extensive report here:

In a precedent-setting case, an Ontario arbitrator has directed Ryerson University to ensure that student evaluations of teaching, or SETs, “are not used to measure teaching effectiveness for promotion or tenure.” The SET issue has been discussed in Ryerson collective bargaining sessions since 2003, and a formal grievance was filed in 2009.

The long-running case has been followed, and the ruling applauded, by academics throughout Canada and internationally, who for years have complained that universities rely too heavily on student surveys as a means of evaluating professors’ teaching effectiveness.

“We were delighted,” said Sophie Quigley, professor of computer science at Ryerson, and the grievance officer who filed the case back in 2009. “These are statistically correct arguments we’ve been making over the years, and it’s wonderful that reason has prevailed.”

While acknowledging that SETs are relevant in “capturing student experience” of a course and its instructor, arbitrator William Kaplan stated in his ruling that expert evidence presented by the faculty association “establishes, with little ambiguity, that a key tool in assessing teaching effectiveness is flawed.”

It’s a position faculty have argued for years, particularly as SETs migrated online and the number of students participating plummeted, even as university administrations relied more heavily on what seemed, on the surface, to be a legitimate data-driven tool.

Mr. Kaplan’s conclusion that SETs are in fact deeply problematic will “unleash debate at universities across the country,” said David Robinson, executive director of the Canadian Association of University Teachers. “The ruling really confirms the concerns members have raised.” While student evaluations have a place, Mr. Robinson argued, “they are not a clear metric. It’s disconcerting for faculty to find themselves judged on the basis of data that is totally unreliable.”

As Dr. Quigley pointed out, studies about SETs didn’t exist 15 years ago, and it was perhaps easier for universities to see the surveys as an effective means of assessment. “Psychologically, there is an air of authority in using all this data, making it seem official and sound,” she noted.

Now, however, there is much research to back up the argument against SETs as a reliable measure of teaching effectiveness, particularly when the data is used to plot averages on charts and compare faculty results. The Ontario Confederation of University Faculty Associations (OCUFA) commissioned two reports on the issue, one by Richard Freishtat, director of the Center for Teaching and Learning at the University of California, Berkeley, and another by statistician Philip B. Stark, also at Berkeley.

The findings in those two reports were accepted by Mr. Kaplan, who cited flaws in methodology and ethical concerns around confidentiality and informed consent. He also cited serious human-rights issues, with studies showing that biases around gender, ethnicity, accent, age, even “attractiveness,” may factor into students’ ratings of professors, making SETs deeply discriminatory against numerous “vulnerable” faculty. …
