Last updated on 03/17/2018
Liveblog:
Sorry, I can’t type fast enough to get everything. Some highlights from the town hall:
Banavar, Berkman, and Pratt are on stage. Shelton (EW interview with some unfortunate quotes here) has been relegated to the admin table towards the back. Obviously the administration is backing away as fast as they can from past proposals, and the adults are now in charge.
Banavar announces he’s pushing back the deadline for departments to provide their metrics plans and data to JH from April 6 to June 6.
He also announces that he’s signed an MOU with the faculty union that will ensure that, whatever the administration decides on, there will be faculty input and negotiation.
The link to Berkman’s Metrics “blog” is here. No comments allowed – or at least there are none posted.
The faculty and heads are asking many very skeptical questions about how these metrics will guide resource allocations and influence faculty research goals.
Berkman closes by saying that Harbaugh’s criticisms of the metrics proposal, based on the work of Nobel Laureate Bengt Holmstrom, are off base because those relate to “strong financial incentives” and these metrics will only provide weak incentives.
It’s hard to respond to that when we don’t know what the departments’ metrics plans will actually be, but inevitably they will become guidelines for junior faculty to follow if they want tenure, and for everyone to follow if they want merit raises, new colleagues, responses to outside offers, and to be seen as good department and university citizens. Those are pretty strong incentives, financial or not, and they will result in gaming and in discouraging work that is not measured, just as Holmstrom’s research shows.
My takeaway is that this has been a botched two-year effort by the administration, and it has taken a huge amount of faculty effort – away from our other jobs – to push back and try to turn it into something reasonable. We’ll see what happens.
Banavar, Pratt, and Berkman did not discuss the “faculty tracking software” that UO will be purchasing next year. This software will allow them to track faculty activities, and will generate reports comparing those activities across faculty, across departments, over time, etc.
There appears to be no truth to the rumors that this software will interface with the mandatory new faculty ankle bracelets to provide JH with real-time GPS location tracking, or that this is all part of the Tracktown 2021 championship plan.
Update: Rumor has it that the UO administration’s obsession with research metrics and Academic Analytics started with the hiring of Kimberly Espy as VPR.
After alienating everyone on campus except former Interim Provost Jim Bean, Espy was finally forced out thanks to the UO Senate’s threatened vote of no confidence and a blunt report written by CAS Assoc Dean Bruce Blonigen. History here.
Gottfredson appointed Brad Shelton as her interim replacement, and new VPR David Conover is still picking up the pieces.
Part of Espy’s legacy was UO’s ~$100K-a-year contract with Academic Analytics, which finally expires this December, for a total of $600K down the hole. While Shelton enthusiastically defends this sunk cost in the Eugene Weekly, no one else in the UO administration will admit to ever using Academic Analytics data as an input for any decision.
Despite this craziness, it’s still an open question whether Shelton, Conover, and Banavar will renew the contract, which Academic Analytics and their salesman, former UO Interim President Bob Berdahl, are now pitching at $160K a year.
3/12/2018: UO physicist, Psychology Dept kick off Provost’s Friday Metrics Town Hall early, propose sensible alternatives to Brad Shelton’s silly metrics plan
A week or two back CAS started a “metrics blog” to collect suggestions on how departments could respond to the call from VPxyz Brad Shelton for simple metrics that the administration could use to rank departments and detect changes over time to help decide who will get new faculty lines. Or maybe the call was for information that they could show to Chuck Lillis and the trustees about how productive/unproductive UO’s faculty are. Or maybe it was a call for departments to provide information that Development could pitch to potential donors. All I know for sure is that departments are supposed to respond by April 6th with their perfect algorithm.
Raghu Parthasarathy from Physics has taken up the challenge on his Eighteenth Elephant blog:
… These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.
Is it “better” to have a single author paper with 205 citations, or a 2900-author paper with 11000 citations? One could argue that the former is better, since the citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies an overall greater impact. Really, though, the question is silly and unanswerable.
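The two orderings in the quoted example really do flip depending on which ratio you privilege, which is easy to check with the numbers given (the figures below come straight from the quote; the labels are mine):

```python
# The two hypothetical papers from the quote above: a single-author paper
# with 205 citations vs. a 2,900-author paper with 11,000 citations.
papers = [
    {"name": "single-author paper", "authors": 1, "citations": 205},
    {"name": "2,900-author paper", "authors": 2900, "citations": 11000},
]

for p in papers:
    per_author = p["citations"] / p["authors"]
    print(f'{p["name"]}: {p["citations"]} total citations, '
          f'{per_author:.2f} per author')
```

By total citations the big collaboration "wins" (11,000 vs. 205); per author the single-author paper "wins" (205.00 vs. about 3.79). Same data, opposite rankings, which is the point.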
Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. …
In other words, this particular silly question is worse than a waste of time. Ulrich Mayr, chair of UO’s Psychology department (UO’s top research department, according to the National Research Council’s metrics, FWIW), has met with his faculty, and they have a better idea:
As obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses. The department-specific grading rubrics may seem like a step in the right direction, as they allow building idiosyncratic context into the metrics. However, this eliminates any basis for comparisons and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and danger of trickle-down to evaluation on the individual level. I think many of us agree that we would like our faculty to think about producing serious scholarly work, not how to achieve points on a complex score scheme.
Within Psychology, we would therefore like to try an alternative procedure, namely an annual, State of the Department report that will be made available at the end of every academic year.
Authored by the department head (with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.). Importantly, the account would marry no-frills, basic quantitative metrics with contextualizing narrative. For example, the section on research may present the number of peer-reviewed publications or acquired grants during the preceding year; it may compare these numbers to previous years or, as far as available, to numbers at peer institutions. It can also highlight particularly outstanding contributions as well as areas that need further development.
Currently, we are thinking of a 3-part structure: (I) a very short executive summary (1 page); (II) a somewhat longer, but still concise, narrative, potentially including tables or figures for metrics; (III) an appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.
Advantages:
––When absolutely necessary, the administration can make use of the simple quantitative metrics.
––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems. This preserves an element of expert judgment (after all, the cornerstone of evaluation in academia) and it reduces the risk of decision errors from taking numbers at face value.
––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors). Yet, to many of us it is not obvious how this would be helped through department-specific grading systems. Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.
––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what. We intend to do this irrespectively of the outcome of the metrics discussion and I have heard rumors that some departments on campus are doing this already. The administration could piggy-back on to such efforts and provide a standard reporting format to facilitate comparisons across departments.
Disadvantages:
––More work for heads (I am done in 2019).
So sensible it must be dead in the water. But if you haven’t given up hope in UO yet, show up at Provost Banavar’s Town Hall this Friday at 11:
Metrics and the evaluation of excellence will be at the center of a town hall-style discussion with Jayanth Banavar, provost and senior vice president, from 11 a.m. to noon in Room 156, Straub Hall on Friday, March 16.
The session was announced in a recent memo from the provost, who calls the event a “two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.”
“I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as ‘ranking’ faculty members or departments,” Banavar wrote. “I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be ‘cut’ if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.”
Faculty members who are unable to attend are encouraged to share thoughts, concerns or ideas with the Office of the Provost at [email protected].
“As we continue our work on the development of these metrics, we welcome your advice and input,” the memo reads. “The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.”
I do wonder who writes this nonsense.
Hmm, “Maximal Enhancement of Excellence.” I think there are pills you can order on the Internet for that sort of outcome …
For the record, the Provost has asked departments to come up with their own metrics – they are supposed to be determined by faculty. So the posts on the blog are exactly what OPAA is looking for.
The point of the CAS Metrics blog is to help departments in this task by providing a venue for sharing information and feedback.
My understanding is that OPAA has seen psychology’s idea and is fine with it. We can build on this idea. For example, we could put together a “state-of-the-department” template with subsections that correspond to research activities (e.g., scholarship, mentorship, outreach, etc) and departments can fill in with narrative and numbers as appropriate. The template would save work for heads and provide at least some standardization across departments.
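The template idea above could be prototyped very cheaply. A minimal sketch, assuming nothing about what OPAA would actually require (the section names and guidance notes below are my own guesses based on the Psychology proposal, not an official format):

```python
# Hypothetical "state of the department" template generator. Section
# names and guidance notes are illustrative assumptions only.
SECTIONS = {
    "Executive summary": "One page, plain language.",
    "Research": "Publications, grants; compare to prior years and peers where available.",
    "Undergraduate and graduate education": "Teaching, mentorship, student outcomes.",
    "Diversity, outreach, and service": "Contributions beyond the department.",
    "Appendix: departmental CV": "All products of the year (articles, books, grants, etc.).",
}

def render_template(department: str, year: int) -> str:
    """Render an empty markdown skeleton a head could fill in."""
    lines = [f"# State of the Department: {department}, {year}"]
    for title, guidance in SECTIONS.items():
        lines.append(f"\n## {title}\n\n<!-- {guidance} -->")
    return "\n".join(lines)

print(render_template("Psychology", 2018))
```

A shared skeleton like this is what would actually buy the standardization across departments: heads fill in narrative and numbers, and the headings stay comparable.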
It could also save the $160K that Academic Analytics is proposing to charge us for next year.
I think in the real world “incentives to pursue particular research areas” is more strongly driven by available funding in various areas than anything else.
But, what do dogs know?
Back when I was a lad, we used to speak with disdain about the “bean counters” running the university. (This was at another school.)
Little did we know what was coming: the “corporatization” of the university with the rise of computers leading to the “metrification” of everything, promulgated by ever-more-grossly overpaid administrators, along with their ever-growing legions of bureaucrats. (The administrative bloat phenomenon.)
Now I look at the bizarre contemporary university, and find myself asking, what have the metrifiers (metricators?) really achieved in terms of metrics that might bear some relation to something significant?
As the present regime apparently proceeds to turn UO upside down, how is the research funding metric doing? The all-important enrollment metric? (Maybe Moffitt and Shelton can help with the latter.) How has the SAT metric been doing? Perhaps you can come up with your own favorites. What are these people achieving with all their finely honed tools with their bogus precision?
My guess is that if they were really achieving anything, we would be hearing about it. But beneath the background noise, all I hear is silence. Am I missing something? (I do know about the Knight campus; that was kind of a singular event; I doubt that it had much to do with metrics.)
Espy was a premonition of Donald Trump…surrounded by sycophantic idiots without leadership abilities, and enabled by Gottfredson’s disconnected and oblivious “leadership”. To make the comparison even more spot on, throw in the blatant conflicts of interest as she attempted to build her own research institute centered on her own mediocre work.
The former VPR (hey, don’t forget the -IGE!)’s epic awfulness has become an indelible part of our campus lore. Though I am at times an “on-topic!” stickler, I’ll admit that I thoroughly enjoy it whenever SHE makes an unexpected cameo anywhere on the blog. (Trump as well.)
Kimberly Espy: may her legend and her UOM content tag live forever.
I’m a forgive and forget kind of guy – what was the IGE?
Why, Innovation and Graduate Education, of course!
So it was written. You will be forgiven for forgetting. Keeping up is like trying to read a bowl of alphabet soup sometimes.
On the contrary, this is an NSF funded program, and Oregon State did quite well in it: http://nrtige.oregonstate.edu/recruitment
For me, one sensible way for a Research University to evolve is to invest in emerging funding areas (hopefully the Knight campus will do this).
The history of the UO is one of cloning ourselves to maintain our discipline niches. While there is not anything wrong with that, it does prevent the kind of agile evolution the Research University needs to have to better perform in a changing world.
Where is this all going? Will we need hyper-metrics to assess decisions based on metrics? Suppose Dept X’s self-proposed metric decreases; presumably it is denied some resources based on this. Its metric continues to decline. We can all speculate on the counterfactual about what would have happened had the resources been increased instead. But how will we know if this was a good decision? Since the people in charge are so keen on metrics, shouldn’t we translate the overall performance of the University into a tidy metric to keep track of how these decisions play out? Should we just aggregate all the individual metrics into some hyper-metric?
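For what it’s worth, the hyper-metric is trivial to build and that is exactly the problem: the ranking it produces depends entirely on arbitrary weights. A toy sketch (departments, scores, and weights all invented for illustration):

```python
# Toy "hyper-metric": min-max normalize each metric across departments,
# then take a weighted average. All numbers below are made up.

def normalize(values):
    """Scale a list of raw metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def hyper_metric(metric_columns, weights):
    """Weighted average of normalized metrics, one score per department."""
    normalized = [normalize(col) for col in metric_columns]
    return [
        sum(w * col[i] for w, col in zip(weights, normalized))
        for i in range(len(metric_columns[0]))
    ]

# Three departments, two metrics (say, publications and grant $M).
pubs   = [40, 10, 25]
grants = [0.5, 3.0, 1.2]

print(hyper_metric([pubs, grants], weights=[0.8, 0.2]))  # dept 1 on top
print(hyper_metric([pubs, grants], weights=[0.2, 0.8]))  # dept 2 on top
```

Weight publications heavily and the first department wins; weight grants heavily and the second does. The formula is tidy; the choice of weights is the entire decision, smuggled in.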
I guess the alternative is to think clearly about what kind of institution we want to be, and sensibly put resources towards getting there. The admins would argue that this is what the metrics are about, but since they love measurement so much, I am looking forward to hearing how they measure their progress towards this goal.
https://www.usnews.com/best-colleges ?
Metrics are like the gods, ἵππος. Some say that many kinds exist and together share dominion over the universe. Others believe there is but one Metric, all-knowing and all-powerful. It is indeed an eternal mystery.
So it is written, for all have sinned and come short of the glory of the algorithm.
Regarding Holmstrom’s work, I think faculty performance follows what he calls a “career concerns” model (e.g., https://academic.oup.com/restud/article-abstract/66/1/169/1666374). Given the employment market for tenure-track faculty, the strongest incentive for faculty is to make ourselves look good to competitors (i.e., other universities) to get better offers or retention raises. This pressure is asymmetrical by career stage: it is strongest on junior faculty and drops off precipitously once our careers are established (e.g., after tenure).
The other model of Holmstrom’s that is useful is how incentives work in multi-tasking jobs (such as ours) where performance is multidimensional. In those cases, when employees own the work product (which we do) and when employers do not directly restrict or control work time (which universities do not), weak incentives can improve performance (e.g., http://www.jstor.org/stable/764957).
So: together one could hypothesize that weak and indirect incentives (e.g., marginal changes in unit-level TTF) should be applied to performance only of senior faculty. I’m not sure if that is a ludicrous proposal or not, but it does seem to square with the theoretical and empirical literature in this area.
Yes, psychology/economic lens battle – not very helpful for us ignorant masses.
The only incentive that I ever responded to or was interested in was that related to better support of my various research directions. That was far more important than merit raises and shit like that.
Ha! My apologies for thinking that bringing relevant scholarship to a discussion would be persuasive to academics. There I go being naive again!
The papers are quite well written and accessible even to canines. We can all introspect about what drives us personally, but psychology and economics have studied the various factors that motivate behavior in a variety of situations. Economics tends to focus on external motivators (e.g., money) and psychology more in internal ones (e.g., desire for achievement or mastery in a domain), which maps on to your point I think.
Dog tells us that “The history of the UO is one of cloning ourselves to maintain our discipline niches.” Perhaps Dog is too young to know that in the late 1950s the UO’s distinctly traditional Departments of Chemistry, Biology, and Physics backed the creation of a new (the world’s first) research institute that brought together chemists, biologists, and physicists dedicated to understanding life at the molecular level. This radical departure from a tradition of self-cloning was soon widely recognized as a success. It is my impression that the Sciences have profited and learned from that example.
Yes, the initial formation of the research institutes was radical, but that was long over by 1990 or so. Change is measured as integrated over time, and not as single moments. Change is continuous, not punctuated equilibrium.
A veritable process for change. Remember that?
Sounds like a case of Intelligent Design to me!
1999