
Liveblog:
Sorry, I can’t type fast enough to get everything. Some highlights from the town hall:
Banavar, Berkman, and Pratt are on stage. Shelton (EW interview with some unfortunate quotes here) has been relegated to the admin table towards the back. Obviously the administration is backing away as fast as it can from past proposals and the adults are now in charge.
Banavar announces he’s pushing back the deadline for departments to provide their metrics plans and data to JH from April 6 to June 6.
He also announces that he’s signed an MOU with the faculty union that will ensure that, whatever the administration decides on, there will be faculty input and negotiation.
The link to Berkman’s Metrics “blog” is here. No comments allowed – or at least there are none posted.
The faculty and heads are asking many very skeptical questions about how these metrics will guide resource allocations and influence faculty research goals.
Berkman closes by saying that Harbaugh’s criticisms of the metrics proposal, based on the work of Nobel Laureate Bengt Holmstrom, are off base because those relate to “strong financial incentives” and these metrics will only provide weak incentives.
It’s hard to respond to that when we don’t know what the departments’ metrics plans will actually be, but inevitably they will become guidelines for junior faculty to follow if they want tenure, and for everyone to follow if they want merit raises, new colleagues, responses to outside offers, and to be seen as good department and university citizens. Those are pretty strong incentives, financial or not, and they will result in gaming and in discouraging work that is not measured, just as Holmstrom’s research shows.
My takeaway is that this has been a botched two-year effort by the administration, and it has taken a huge amount of faculty effort – away from our other jobs – to push back and try to turn it into something reasonable. We’ll see what happens.
Banavar, Pratt, and Berkman did not discuss the “faculty tracking software” that UO will be purchasing next year. This software will allow them to track faculty activities, and will generate reports comparing those activities across faculty, across departments, over time, etc.
There appears to be no truth to the rumors that this software will interface with the mandatory new faculty ankle bracelets to provide JH with real-time GPS location tracking, or that this is all part of the Tracktown 2021 championship plan.
Update: Rumor has it that the UO administration’s obsession with research metrics and Academic Analytics started with the hiring of Kimberly Espy as VPR.
After alienating everyone on campus except former Interim Provost Jim Bean, Espy was finally forced out thanks to the UO Senate’s threatened vote of no confidence and a blunt report written by CAS Assoc Dean Bruce Blonigen. History here.
Gottfredson appointed Brad Shelton as her interim replacement, and new VPR David Conover is still picking up the pieces.
Part of Espy’s legacy was UO’s ~$100K contract with Academic Analytics, which finally expires this December, for a total of $600K down the hole. While Shelton enthusiastically defends this sunk cost in the Eugene Weekly, no one else in the UO administration will admit to ever using Academic Analytics data as an input for any decision.
Despite this craziness, it’s still an open question as to whether or not Shelton, Conover, and Banavar will renew the contract, which Academic Analytics and their salesman and former UO Interim President Bob Berdahl are now pitching at $160K a year.
3/12/2018: UO physicist, Psychology Dept kick off Provost’s Friday Metrics Town Hall early, propose sensible alternatives to Brad Shelton’s silly metrics plan
A week or two back CAS started a “metrics blog” to collect suggestions on how departments could respond to the call from VPxyz Brad Shelton for simple metrics that the administration could use to rank departments and detect changes over time to help decide who will get new faculty lines. Or maybe the call was for information that they could show to Chuck Lillis and the trustees about how productive/unproductive UO’s faculty are. Or maybe it was a call for departments to provide information that Development could pitch to potential donors. All I know for sure is that departments are supposed to respond by April 6th with their perfect algorithm.
Raghu Parthasarathy from Physics has taken up the challenge, on his Eighteenth Elephant blog:
… These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.
Is it “better” to have a single author paper with 205 citations, or a 2900-author paper with 11000 citations? One could argue that the former is better, since the citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies an overall greater impact. Really, though, the question is silly and unanswerable.
Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. …
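For concreteness, the two rankings in the excerpt really do point in opposite directions. This is a minimal sketch (the paper numbers are Parthasarathy’s hypothetical examples, and the one-line ratio metric is an illustration, not anything the administration has actually proposed):

```python
# Two hypothetical papers from the excerpt above:
# a solo-author paper and a large-collaboration paper.
papers = {
    "solo biophysics paper": {"citations": 205, "authors": 1},
    "high-energy collaboration": {"citations": 11000, "authors": 2900},
}

def per_author(p):
    """Citations divided evenly among co-authors -- one crude metric."""
    return p["citations"] / p["authors"]

def total(p):
    """Raw citation count -- another crude metric."""
    return p["citations"]

# The two metrics disagree about which paper is "better".
best_by_ratio = max(papers, key=lambda k: per_author(papers[k]))
best_by_total = max(papers, key=lambda k: total(papers[k]))

print(best_by_ratio)  # solo biophysics paper (205.0 citations/author vs ~3.8)
print(best_by_total)  # high-energy collaboration (11000 citations vs 205)
```

Any single-number scheme has to pick one of these orderings (or some weighting between them), which is exactly the choice the excerpt calls silly and unanswerable.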
In other words, this particular silly question is worse than a waste of time. Ulrich Mayr, chair of UO’s Psychology department (UO’s top research department, according to the National Research Council’s metrics, FWIW), has met with his faculty, and they have a better idea:
As obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses. The department-specific grading rubrics may seem like a step in the right direction, as they allow building idiosyncratic context into the metrics. However, this eliminates any basis for comparisons and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and danger of trickle-down to evaluation on the individual level. I think many of us agree that we would like our faculty to think about producing serious scholarly work, not how to achieve points on a complex score scheme.
Within Psychology, we would therefore like to try an alternative procedure, namely an annual, State of the Department report that will be made available at the end of every academic year.
Authored by the department head (and with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.). Importantly, the account would marry no-frills, basic quantitative metrics with contextualizing narrative. For example, the section on research may present the number of peer-reviewed publications or acquired grants during the preceding year, it may compare these numbers to previous years, or, as far as available, to numbers in peer institutions. It can also highlight particularly outstanding contributions as well as areas that need further development.
Currently, we are thinking of a 3-part structure: (I) A very short executive summary (1 page). (II) A somewhat longer, but still concise, narrative, potentially including tables or figures for metrics. (III) An appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.
Advantages:
––When absolutely necessary, the administration can make use of the simple quantitative metrics.
––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems. This preserves an element of expert judgment (after all, the cornerstone of evaluation in academia) and it reduces the risk of decision errors from taking numbers at face value.
––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors). Yet, to many of us it is not obvious how this would be helped through department-specific grading systems. Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.
––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what. We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already. The administration could piggyback onto such efforts and provide a standard reporting format to facilitate comparisons across departments.
Disadvantages:
––More work for heads (I am done in 2019).
So sensible it must be dead in the water. But if you haven’t given up hope in UO yet, show up at Provost Banavar’s Town Hall this Friday at 11:
Metrics and the evaluation of excellence will be at the center of a town hall-style discussion with Jayanth Banavar, provost and senior vice president, from 11 a.m. to noon in Room 156, Straub Hall on Friday, March 16.
The session was announced in a recent memo from the provost, who calls the event a “two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.”
“I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as ‘ranking’ faculty members or departments,” Banavar wrote. “I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be ‘cut’ if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.”
Faculty members who are unable to attend are encouraged to share thoughts, concerns or ideas with the Office of the Provost at [email protected].
“As we continue our work on the development of these metrics, we welcome your advice and input,” the memo reads. “The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.”
I do wonder who writes this nonsense.