UK research councils & Nature unimpressed by VP Brad Shelton’s shiny new metrics plan

2/7/2018: From The Times:

All seven of the UK’s research councils have signed up to a declaration that calls for the academic community to stop using journal impact factors as a proxy for the quality of scholarship.

The councils, which together fund about £3 billion of research each year, are among the latest to sign the San Francisco Declaration on Research Assessment, known as Dora.

Stephen Curry, the chair of the Dora steering committee, said that the backing of the research councils gives the initiative a “significant boost”.

Dora was initiated at the annual meeting of the American Society for Cell Biology in 2012 and launched the following year. It calls on researchers, universities, journal editors, publishers and funders to improve the ways they evaluate research.

It says that the academic community should not use the impact factor of journals that publish research as a surrogate for quality in hiring, promotion or funding decisions. The impact factor ranks journals according to the average number of citations that their articles receive over a set period of time, usually two years.

Professor Curry, professor of structural biology at Imperial College London, announces the new signatories to the declaration in a column published in Nature on 8 February. …
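The Times piece gives the definition in words; here is the arithmetic spelled out. This is a minimal sketch with made-up counts for a hypothetical journal (the real figures come from Clarivate's Journal Citation Reports, not from anything this simple):

```python
# Two-year journal impact factor, illustrated with made-up numbers for a
# hypothetical journal. Real figures come from Clarivate's Journal
# Citation Reports; nothing below refers to an actual journal.

citations_2017_to_2015_16_items = 1200  # citations received in 2017 by items published in 2015-16
citable_items_2015_16 = 400             # articles and reviews published in 2015-16

impact_factor_2017 = citations_2017_to_2015_16_items / citable_items_2015_16
print(f"2017 impact factor: {impact_factor_2017:.1f}")  # prints 3.0

# It is a journal-level average over a highly skewed citation distribution,
# so it says little about any individual paper or author -- which is
# exactly the proxy problem Dora is objecting to.
```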

1/26/2018: Nobel laureate unimpressed by VP Brad Shelton’s shiny new metrics plan

The 2016 Nobel Prize in Economics went to Oliver Hart and Bengt Holmstrom for their life's work on optimal incentive contracts under incomplete information. Holmstrom started out in industry, designing incentive schemes that used data-driven metrics and strong incentives to “bring the market inside the firm”. However, as he said in his Nobel Prize lecture:

Today, I know better. As I will try to explain, one of the main lessons from working on incentive problems for 25 years is that, within firms, high-powered financial incentives can be very dysfunctional and attempts to bring the market inside the firm are generally misguided. Typically, it is best to avoid high-powered incentives and sometimes not use pay-for-performance at all.

I thought that Executive Vice Provost of Academic Operations Brad Shelton and the UO administration had learned this lesson too, after the meltdown of the market-based “Responsibility Centered Management” budget model that Shelton ran. Apparently not. Today the Eugene Weekly has an article by Morgan Theophil, “Questionably measuring success,” which focuses on UO’s $100K-per-year contract with Academic Analytics for its measure of faculty research “productivity”.

Brad Shelton, UO executive vice provost of academic operations, says Academic Analytics measures faculty productivity by considering several factors: How many research papers has this faculty member published, where were the papers published, how many times have the papers been cited, and so on.

“Those are a set of metrics that very accurately measures the productivity of a math professor, for example,” Shelton says.

No, they don’t. They might accurately count a few things, but those things are not accurate or complete measures of a professor’s productivity. As Holmstrom explains later in his address – in careful mathematics and with examples such as the recent Wells Fargo case – there are many pitfalls to incentivizing inaccurate, incomplete, and easily gamed metrics. Most obviously, rewarding the easily measured part of productivity raises the opportunity cost to employees (faculty) of the work that produces the things the firm (university) actually cares about, so true productivity may actually fall.
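For those who want the mechanism rather than my assertion, here is a stripped-down version of the Holmstrom–Milgrom multitask logic. The notation is mine, not a quote from the lecture. Suppose a professor divides effort between a counted task $e_1$ (papers, grants, citations) and an uncounted one $e_2$ (teaching, mentoring, refereeing, risky long-term projects), at total cost $C(e_1+e_2)$ with $C$ increasing and convex. If the administration pays a bonus rate $\beta$ per unit of counted output while the uncounted work yields only a private return $b$, the professor solves

$$\max_{e_1,\,e_2\ge 0}\;\; \beta e_1 + b\,e_2 - C(e_1+e_2).$$

Because the two kinds of effort are perfect substitutes in the cost function, any $\beta > b$ drives $e_2$ to zero: the stronger the incentive on what gets counted, the higher the opportunity cost of everything the metric misses. If the university’s true payoff puts enough weight on $e_2$, total value falls even as measured “productivity” rises.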

As the EW article also explains, UO has spent $500K on the Academic Analytics data on faculty “productivity” (i.e. grants, pubs, and citations) over the past 5 years, prompted in part by pressure from former Interim President Bob Berdahl, who now has a part-time job with Academic Analytics as a salesman.

Despite this expenditure, UO has never used the data for decisions about merit and promotion, in part because of opposition from the faculty and the faculty union, and in part because of a study by Spike Gildea from Linguistics documenting problems with the accuracy of the AA data. And today the Chronicle has a report on the vote by the faculty at UT-Austin to join Rutgers and Georgetown in opposing use of AA’s simple-minded metrics.

Meanwhile back at UO, VP Shelton is trumpeting the fact that AA has been responsive to complaints about past data quality:

“What we found is that Academic Analytics data is very accurate — it’s always accurate. If there are small errors, they fix them right away,” Shelton says.

Always accurate at measuring what?

Word from the CAS faculty heads meeting yesterday is that UO will not require departments to use the AA data – but that we’ll keep paying $100K for it, or about the salary of one scarce professor. Why? Because some people in Johnson Hall don’t understand another basic economic principle: when you’re in a hole, stop digging.

I forget who got the Nobel Prize for that one.

Here’s a draft of the sort of departmental incentive policy now floating around in response to Shelton’s call:

Keep in mind that even if your department decides to develop a more rational evaluation system for itself, there will be nothing to prevent the Executive Vice Provost of Academic Operations from using the Academic Analytics data to run his own parallel evaluation system.

The Tyranny of Metrics

InsideHigherEd’s interview with Jerry Muller about his new book, published by the high-impact-factor Princeton University Press. One excerpt:

Q: Some colleges, government agencies and businesses promote tools to evaluate faculty productivity — number of papers written, number of citations, etc. What do you make of this use of metrics?

A: Here too, metrics have a place, but only if they are used together with judgment. There are many snares. The quantity of papers tells you nothing about their quality or significance. In some disciplines, especially in the humanities, books are a more important form of scholarly communication, and they don’t get included in such metrics. Citation counts are often distorted, for example by including only journals within a particular discipline, thereby marginalizing works that have a transdisciplinary appeal. And then of course evaluating faculty productivity by numbers of publications creates incentives to publish more articles, on narrower topics, and of marginal significance. In science, it promotes short-termism at the expense of developing long-term research capacity.

More on the $600K Brad Shelton has dropped on Academic Analytics so far here.