Live from Provost Banavar’s Metrics Town Hall:


Sorry, I can’t type fast enough to get everything. Some highlights from the town hall:

Banavar, Berkman, and Pratt are on stage. Shelton (EW interview with some unfortunate quotes here) has been relegated to the admin table toward the back. Obviously the administration is backing away from past proposals as fast as it can, and the adults are now in charge.

Banavar announces he’s pushing back the deadline for departments to provide their metrics plans and data to JH from April 6 to June 6.

He also announces that he’s signed an MOU with the faculty union that will ensure that, whatever the administration decides on, there will be faculty input and negotiation.

The link to Berkman’s Metrics “blog” is here. No comments allowed – or at least there are none posted.

The faculty and heads are asking many very skeptical questions about how these metrics will guide resource allocations and influence faculty research goals.

Berkman closes by saying that Harbaugh’s criticisms of the metrics proposal, based on the work of Nobel Laureate Bengt Holmstrom, are off base because those relate to “strong financial incentives” and these metrics will only provide weak incentives.

It’s hard to respond to that when we don’t know what the departments’ metrics plans will actually be, but inevitably they will become guidelines for junior faculty to follow if they want tenure, and for everyone to follow if they want merit raises, new colleagues, responses to outside offers, and to be seen as good department and university citizens. Those are pretty strong incentives, financial or not, and they will encourage gaming and discourage work that is not measured, just as Holmstrom’s research shows.

My takeaway is that this has been a botched two-year effort by the administration, and it has taken a huge amount of faculty effort – away from our other jobs – to push back and try to turn it into something reasonable. We’ll see what happens.

Banavar, Pratt, and Berkman did not discuss the “faculty tracking software” that UO will be purchasing next year. This software will allow them to track faculty activities, and will generate reports comparing those activities across faculty, across departments, over time, etc.

There appears to be no truth to the rumors that this software will interface with the mandatory new faculty ankle bracelets to provide JH with real-time GPS location tracking, or that this is all part of the Tracktown 2021 championship plan.

Update: Rumor has it that the UO administration’s obsession with research metrics and Academic Analytics started with the hiring of Kimberly Espy as VPR.

After alienating everyone on campus except former Interim Provost Jim Bean, Espy was finally forced out thanks to the UO Senate’s threatened vote of no confidence and a blunt report written by CAS Assoc Dean Bruce Blonigen. History here.

Gottfredson appointed Brad Shelton as her interim replacement, and new VPR David Conover is still picking up the pieces.

Part of Espy’s legacy was UO’s ~$100K-a-year contract with Academic Analytics, which finally expires this December, for a total of $600K down the hole. While Shelton enthusiastically defends this sunk cost in the Eugene Weekly, no one else in the UO administration will admit to ever using Academic Analytics data as an input for any decision.

Despite this craziness, it’s still an open question whether Shelton, Conover, and Banavar will renew the contract, which Academic Analytics and their salesman, former UO Interim President Bob Berdahl, are now pitching at $160K a year.

3/12/2018: UO physicist, Psychology Dept kick off Provost’s Friday Metrics Town Hall early, propose sensible alternatives to Brad Shelton’s silly metrics plan

A week or two back CAS started a “metrics blog” to collect suggestions on how departments could respond to the call from VPxyz Brad Shelton for simple metrics that the administration could use to rank departments and detect changes over time to help decide who will get new faculty lines. Or maybe the call was for information that they could show to Chuck Lillis and the trustees about how productive/unproductive UO’s faculty are. Or maybe it was a call for departments to provide information that Development could pitch to potential donors. All I know for sure is that departments are supposed to respond by April 6th with their perfect algorithm.

Raghu Parthasarathy from Physics has taken up the challenge, on his Eighteenth Elephant blog:

… These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.

Is it “better” to have a single author paper with 205 citations, or a 2900-author paper with 11000 citations? One could argue that the former is better, since the citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies an overall greater impact. Really, though, the question is silly and unanswerable.

Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. …
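
To spell out the arithmetic behind Parthasarathy’s comparison (using his illustrative numbers, nothing more):

```python
# Per-author citation counts for the two hypothetical papers in the quote above.
papers = {
    "single-author paper": {"authors": 1, "citations": 205},
    "2900-author paper": {"authors": 2900, "citations": 11_000},
}

for name, p in papers.items():
    per_author = p["citations"] / p["authors"]
    print(f"{name}: {p['citations']} total citations, {per_author:.1f} per author")

# single-author paper: 205 total citations, 205.0 per author
# 2900-author paper: 11000 total citations, 3.8 per author
```

Either paper “wins” depending on which denominator you pick, which is exactly why the question is unanswerable.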

In other words, this particular silly question is worse than a waste of time. Ulrich Mayr, chair of UO’s Psychology department (UO’s top research department, according to the National Research Council’s metrics, FWIW), has met with his faculty, and they have a better idea:

As obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses.  The department-specific grading rubrics may seem like a step in the right direction, as they allow building idiosyncratic context into the metrics.  However, this eliminates any basis for comparisons and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and danger of trickle-down to evaluation on the individual level.  I think many of us agree that we would like our faculty to think about producing serious scholarly work, not how to achieve points on a complex score scheme.

Within Psychology, we would therefore like to try an alternative procedure, namely an annual, State of the Department report that will be made available at the end of every academic year.

Authored by the department head (and with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.). Importantly, the account would marry no-frills, basic quantitative metrics with contextualizing narrative. For example, the section on research may present the number of peer-reviewed publications or acquired grants during the preceding year, it may compare these numbers to previous years, or, as far as available, to numbers in peer institutions. It can also highlight particularly outstanding contributions as well as areas that need further development.

Currently, we are thinking of a 3-part structure: (I) A very short executive summary (1 page). (II) A somewhat longer, but still concise narrative, potentially including tables or figures for metrics. (III) An appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.


––When absolutely necessary, the administration can make use of the simple quantitative metrics.

––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems.  This preserves an element of expert judgment (after all, the cornerstone of evaluation in academia) and it reduces the risk of decision errors from taking numbers at face value.

––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors). Yet, to many of us it is not obvious how this would be helped through department-specific grading systems. Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.

––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what. We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already. The administration could piggyback on such efforts and provide a standard reporting format to facilitate comparisons across departments.


––More work for heads (I am done in 2019).

So sensible it must be dead in the water. But if you haven’t given up hope in UO yet, show up at Provost Banavar’s Town Hall this Friday at 11:

Metrics and the evaluation of excellence will be at the center of a town hall-style discussion with Jayanth Banavar, provost and senior vice president, from 11 a.m. to noon in Room 156, Straub Hall on Friday, March 16.

The session was announced in a recent memo from the provost, who calls the event a “two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.”

“I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as ‘ranking’ faculty members or departments,” Banavar wrote. “I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be ‘cut’ if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.”

Faculty members who are unable to attend are encouraged to share thoughts, concerns or ideas with the Office of the Provost at

“As we continue our work on the development of these metrics, we welcome your advice and input,” the memo reads. “The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.”

I do wonder who writes this nonsense.

UK research councils & Nature unimpressed by VP Brad Shelton’s shiny new metrics plan

2/7/2018: From The Times:

All seven of the UK’s research councils have signed up to a declaration that calls for the academic community to stop using journal impact factors as a proxy for the quality of scholarship.

The councils, which together fund about £3 billion of research each year, are among the latest to sign the San Francisco Declaration on Research Assessment, known as Dora.

Stephen Curry, the chair of the Dora steering committee, said that the backing of the research councils gives the initiative a “significant boost”.

Dora was initiated at the annual meeting of the American Society for Cell Biology in 2012 and launched the following year. It calls on researchers, universities, journal editors, publishers and funders to improve the ways they evaluate research.

It says that the academic community should not use the impact factor of journals that publish research as a surrogate for quality in hiring, promotion or funding decisions. The impact factor ranks journals according to the average number of citations that their articles receive over a set period of time, usually two years.

Professor Curry, professor of structural biology at Imperial College London, announces the new signatories to the declaration in a column published in Nature on 8 February. …
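
For anyone who hasn’t looked under the hood, the journal impact factor that Dora objects to is just a two-year average, roughly computed like this (the counts below are invented for illustration; they aren’t any journal’s actual numbers):

```python
# Two-year impact factor for year Y: citations received in year Y to items
# published in years Y-1 and Y-2, divided by the number of citable items
# published in those two years. Illustrative numbers only.
citations_in_2017_to = {2015: 1200, 2016: 950}
citable_items_published = {2015: 310, 2016: 290}

impact_factor_2017 = (
    sum(citations_in_2017_to.values()) / sum(citable_items_published.values())
)
print(f"2017 impact factor: {impact_factor_2017:.2f}")  # 2150 / 600 = 3.58
```

Because citation distributions are heavily skewed, a handful of highly cited papers can drive that average, which is one reason Dora argues it says little about the quality of any individual article or author.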

1/26/2018: Nobel laureate unimpressed by VP Brad Shelton’s shiny new metrics plan

The 2016 Nobel Prize for Economics went to Oliver Hart and Bengt Holmstrom for their work on optimal incentive contracts under incomplete information. Holmstrom started out in industry, designing incentive schemes that used data-driven metrics and strong incentives to “bring the market inside the firm”. However, as he said in his Nobel Prize lecture:

Today, I know better. As I will try to explain, one of the main lessons from working on incentive problems for 25 years is that, within firms, high-powered financial incentives can be very dysfunctional and attempts to bring the market inside the firm are generally misguided. Typically, it is best to avoid high-powered incentives and sometimes not use pay-for-performance at all.

I thought that Executive Vice Provost of Academic Operations Brad Shelton and the UO administration had learned this lesson too, after the meltdown of the market-based “Responsibility Centered Management” budget model that Shelton ran. Apparently not. Today the Eugene Weekly has an article by Morgan Theophil, “Questionably measuring success,” which focuses on UO’s $100K-per-year contract with Academic Analytics for their measure of faculty research “productivity”.

Brad Shelton, UO executive vice provost of academic operations, says Academic Analytics measures faculty productivity by considering several factors: How many research papers has this faculty member published, where were the papers published, how many times have the papers been cited, and so on.

“Those are a set of metrics that very accurately measures the productivity of a math professor, for example,” Shelton says.

No they don’t. They might accurately count a few things, but those things are not accurate or complete measures of a professor’s productivity, and as Holmstrom explains later in his address – in careful mathematics and with examples such as the recent Wells Fargo case – there are many pitfalls to incentivizing inaccurate, incomplete, and easily gamed metrics. Most obviously, incentivizing the easily measured part of productivity raises the opportunity cost to employees (faculty) of the work that produces the things the firm (university) actually cares about, so true productivity may actually fall.
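
To make the crowding-out point concrete, here is a toy sketch of that logic with made-up numbers of my own – a stylized illustration, not Holmstrom’s actual model:

```python
# Stylized story: a faculty member splits one unit of effort between work the
# metric captures (t_measured) and work it doesn't (1 - t_measured). Assume,
# purely for illustration, that the unmeasured work is worth twice as much to
# the university, but only the measured part earns a bonus.

def university_value(t_measured: float) -> float:
    t_unmeasured = 1.0 - t_measured
    return 1.0 * t_measured + 2.0 * t_unmeasured

def faculty_payoff(t_measured: float, bonus: float) -> float:
    # The faculty member also privately values the unmeasured work a little.
    return bonus * t_measured + 0.5 * (1.0 - t_measured)

for bonus in [0.0, 0.4, 1.0]:
    # Effort split that maximizes the faculty member's own payoff.
    best_t = max((i / 100 for i in range(101)), key=lambda t: faculty_payoff(t, bonus))
    print(f"bonus={bonus:.1f}: measured effort={best_t:.2f}, "
          f"value to university={university_value(best_t):.2f}")

# bonus=0.0: measured effort=0.00, value to university=2.00
# bonus=0.4: measured effort=0.00, value to university=2.00
# bonus=1.0: measured effort=1.00, value to university=1.00
```

Crank up the reward on what gets counted and effort shifts toward it; the measured number goes up while total value to the university goes down. That is the qualitative problem, whether the incentives are “financial” or just tenure, raises, and new lines.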

As the EW article also explains, UO has spent $500K on the Academic Analytics data on faculty “productivity” (i.e. grants, pubs, and citations) over the past 5 years, prompted in part by pressure from former Interim President Bob Berdahl, who now has a part-time job with Academic Analytics as a salesman.

Despite this expenditure, UO has never used the data for decisions about merit and promotion, in part because of opposition from the faculty and the faculty union, and in part because of a study by Spike Gildea from Linguistics documenting problems with the accuracy of the AA data. And today the Chronicle has a report on the vote by the faculty at UT-Austin to join Rutgers and Georgetown in opposing use of AA’s simple-minded metrics.

Meanwhile back at UO, VP Shelton is trumpeting the fact that AA has been responsive to complaints about past data quality:

“What we found is that Academic Analytics data is very accurate — it’s always accurate. If there are small errors, they fix them right away,” Shelton says.

Always accurate at measuring what?

Word from the CAS faculty heads meeting yesterday is that UO will not require departments to use the AA data – but that we’ll keep paying $100K, or about the salary of one scarce professor, for it. Why? Because some people in Johnson Hall don’t understand another basic economic principle: when you’re in a hole, stop digging.

I forget who got the Nobel Prize for that one.

Here’s a draft of the sort of departmental incentive policies that are now floating around, in response to Shelton’s call:

Keep in mind that even if your department decides to develop a more rational evaluation system for itself, there will be nothing to prevent the Executive Vice Provost of Academic Operations from using the Academic Analytics data to run his own parallel evaluation system.

Elsevier buys Academic Analytics competitor


10/27/2016: Provost drops $100K subscription to faulty Academic Analytics faculty data

This is great news. The $100K that Provost Coltrane just saved will allow UO to hire a tenure-track humanities professor.

Oh wait, sorry. This comes from the Provost of Georgetown University, Robert Groves. Read his full blog post (yes, their provost has a real blog, with comments) here:

With the rise of the Internet and digital records of publications, comparisons of quality of universities are increasingly utilizing statistics based on this documentation (e.g., the Times Higher Education university rankings). Many academic fields themselves are comparing the product of scholars by using counts of citations to work (through h-indexes and other statistics). Journals are routinely compared on their impact partially through such citation evidence. Some academic fields have rated their journals into tiers of “quality” based on these numbers. Platforms like Google Scholar and ResearchGate are building repositories of documentation of the work of scholars. …

In short, the quality of AA coverage of the scholarly products of those faculty studied is far from perfect. Even with perfect coverage, the data have differential value across fields that vary in book versus article production and in their cultural supports for citations of others’ work. With inadequate coverage, it seems best for us to seek other ways of comparing Georgetown to other universities. For that reason, we will be dropping our subscription to Academic Analytics.

12/11/2015: Faculty object to use of secret Academic Analytics data in tenure decisions

This is at Rutgers; Inside Higher Ed has the report by Colleen Flaherty here. UO has had a contract with AA for several years, at about $100K a year.

The data available includes reports on individual faculty, such as this, from their website:

[Screenshot: a sample Academic Analytics individual faculty report, from their website]

Obviously more information is good, but the administration holds these reports pretty tight to the vest – even the departmental level ones. Maybe our Senate will need to look into how these data are being used.

Board of Trustees ASAC to meet Wed by phone to approve CoE diversity plan

Wednesday, April 13, 2016 at 2:30 pm, HEDCO Education Building, Room 240. It’s a telephonic meeting, but apparently there will be a phone there to listen in. The full draft of the proposal is here.

From what I can tell this is the first specific reference to the administration’s use of the confidential Academic Analytics data to rank departments.

[Screenshots: pages from the CoE diversity plan draft, including its references to the Academic Analytics department rankings]