AVP Ellen Herman has no records on status of Faculty Tracking Software

4/23/2020: 

Another year, another budget crisis, more questions about where UO’s money is going. I emailed VP Herman, who is in charge of this project, on March 25th:

Hi Ellen,

I’ve heard a rumor that the administration has abandoned or perhaps just delayed this effort. I’m hoping that you can provide some details on where this proposal currently stands. Thanks,

Bill Harbaugh

She didn’t answer, so on April 1st I filed a public records request. Yesterday I got this response:

Dear Mr. Harbaugh,

The University has searched for, but was unable to locate, records responsive to your request for “…a public record showing the current status of the Faculty tracking / Insights project”, made 4/1/2020.

It is the office’s understanding that this project has been placed on hold; however, there are no records documenting this decision.

The office considers this to be fully responsive to your request, and will now close your matter. Thank you for contacting the office with your request.

Sincerely, Office of Public Records

5/8/2019 update: 

With the budget crisis, you’d think this proposal would be in the trash can. Apparently not.

3/18/2019 Faculty tracking software vendor explains time-suck & “thought leadership programming” junket

So why isn’t the provost’s office being clear about what this will cost?

UO’s 100 most excellent faculty, ranked in a convenient spreadsheet

The Board of Trustees is meeting this week, and Chairman Chuck Lillis is obsessed with the idea that UO’s faculty are overpaid deadwood. So I’ve prepared this helpful Spreadsheet of Excellence. As a bonus I added another 20 excellent faculty at the bottom to get it to 120, to offset the fact that some of our most cited researchers are post-docs, retired, have left UO but haven’t updated their profile, or are just plain dead – which, according to this new economics paper, can be good for their field, at least in the life sciences.

Of course this list only includes those who’ve added their profile to Google Scholar, which is easy. As of 9/3/2019, here.

If you’re interested in what’s wrong with these numbers and their inevitable misuse by UO administrators check out the posts and discussion in the metrics tag below. This post on the new Faculty Tracking Software – not a joke, but an actual initiative from UO Vice Provost Ellen Herman that will go out for bids soon – is a good place to start.

Or just indulge yourself in a little gratuitous ranking voyeurism:

SPREADSHEET OF FACULTY EXCELLENCE:

Rank: name, title/affiliation, citation count, research areas
1 Paul Slovic
Decision Research and University of Oregon
Cited by 229411
2 Eric Torrence
University of Oregon
Cited by 206954
High Energy Physics Particle Physics Physics
3 David M Strom
Professor of Physics, University of Oregon
Cited by 181282
Particle Physics High Energy Physics
4
Prof Emeritus of psychology University of Oregon
Cited by 139751
attention
5 Mark Johnson
Professor of Philosophy, University of Oregon
Cited by 106280
cognitive science philosophy of language moral theory aesthetics American Philosophy
6 Raymond Frey
Department of Physics, University of Oregon
Cited by 103521
astrophysics high-energy physics
7 Jacob Searcy
University of Oregon
Cited by 101308
8
Professor of Physics, University of Oregon
Cited by 57854
atomic physics
9
Professor, University of Oregon; Professor Emeritus, Yale University
Cited by 42664
Neuroscience
10 John R Seeley
University of Oregon
Cited by 39491
Emotional and Behavioral Disorders Substance Abuse Suicide Prevention Mental Health Promotion
11
Professor of Biology, University of Oregon
Cited by 32410
Developmental biology molecular genetics genomics evolution of development
12
Postdoctoral Research Scholar, University of Oregon
Cited by 30843
13 Davison Soper
University of Oregon
Cited by 26527
14 Alan D. Meyer
Professor of Management, University of Oregon
Cited by 24930
Organization design change innovation technology
15
Professor of Geography, University of Oregon
Cited by 24194
geography physical geography climatology paleoclimatology paleoecology
16 Ellen Peters
University of Oregon
Cited by 23870
decision making risk perception affect/emotion numeracy communication
17 Andrew Kern
Evergreen Associate Professor of Biology, University of Oregon
Cited by 23703
Population Genetics Evolutionary biology Genetics Genomics Computational Biology
18
Professor of Sociology, University of Oregon
Cited by 23462
Political Economy Environmental Sociology Marxism
19
University of Oregon
Cited by 23258
20
University of Oregon
Cited by 22339
Physics General relativity interferometer calibration
21 Helen Neville
Professor, Psychology and Neuroscience, University of Oregon
Cited by 21807
22 cq doe
Univ Oregon
Cited by 21322
23 Joan Acker
Sociology, University of Oregon
Cited by 21169
sociology gender work organizations feminism
24 Don M. Tucker
University of Oregon
Cited by 20975
emotion psychopathology cognitive neuroscience EEG
25 Eric Selker
Professor of Biology, University of Oregon
Cited by 20883
epigenetics DNA methylation chromatin RIP heterochromatin
26
University of Oregon
Cited by 20860
27 Nicholas Allen
Ann Swindells Professor of Psychology, University of Oregon
Cited by 20045
Developmental Psychopathology Adolescence Brain Development Prevention Science Sleep
28 Linda Price
Professor of Marketing, University of Oregon
Cited by 19978
Marketing consumer identity family research consumer behavior
29 Jon Erlandson
Professor of Anthropology, Executive Director of the Museum of Natural & Cultural History …
Archaeology Anthropology Historical Ecology Human Migrations Seafaring and Maritime Adaptations
30
University of Oregon
Cited by 18343
Statistical physics Ecology Proteins Neuroscience River networks
31
Professor Emerita at the University of Oregon
32
Professor of Psychology, University of Oregon
Cited by 18087
Interpersonal perception Emotions Personality Development
33 Lynn Kahle
Professor of Marketing, University of Oregon
Cited by 17981
marketing sports values psychology attitudes
34
University of Oregon
Cited by 17575
geosciences paleontology paleopedology paleobotany paleoclimatology
35
Vice President and Robert and Leona DeArmond Executive Director, Knight Campus …
Cited by 17311
Musculoskeletal regenerative medicine tissue engineering and biomechanics
36
Assoc. Prof. Chemistry, University of Oregon
Cited by 17178
37 Hill M. Walker
Professor Emeritus, University of Oregon
Cited by 17174
behavior disorders school safety bullying early intervention social skills
38
Professor of Environmental Studies and Biology, University of Oregon
Cited by 16654
39
University of Oregon
Cited by 14372
ecology evolution fisheries marine science
40 Jennifer Freyd
University of Oregon
Cited by 14271
Psychology of Trauma Psychology of Gender
41 William Cresko
University of Oregon
Cited by 13230
Evolution Genomics Quantitative Biology
42
Lokey-Harrington Chair in Chemistry, University of Oregon
Cited by 12882
Nanoscience molecular recognition surface chemistry green chemistry
43
Professor of Physics, Department of Physics and Oregon Center for Optics, University of …
Cited by 12472
Quantum optics Nonlinear optics Quantum information
44
Oregon Retina, Oregon Health Sciences University, University of Oregon, Mayo Clinic …
Cited by 12116
Ophthalmology Retinal diseases and surgery Macular and diabetic eye diseases Uveitis Ocular oncology
45 ulrich mayr
University of Oregon
Cited by 11835
cognitive control cognitive aging decision making
46 Phil Fisher
University of Oregon
Cited by 11746
stress neurobiology prevention science foster care adversity
47
University of Oregon
Cited by 11692
Microbial Ecology Biodiversity Science Architecture
48
Institute of Molecular Biology, University of Oregon
Cited by 11528
cell division cell polarity cytoskeleton
49
Emeritus Professor of Psychology, University of Oregon
Cited by 11093
50 Dare Baldwin
University of Oregon
Cited by 10889
event processing social cognition development
51
Richard M. & Patricia H. Noyes Professor of Chemistry, University of Oregon
Cited by 10827
52
Professor of Marketing, University of Oregon
Cited by 10822
Sponsorship Advertising Communications Marketing Health
53 Bruce Blonigen
University of Oregon
Cited by 10761
54
Professor of Human Physiology, University of Oregon
Cited by 10758
Cardiovascular Physiology Thermoregulation Sex Steroids
55 Nash Unsworth
University of Oregon
Cited by 9701
working memory memory attention individual differences
56
University of Oregon
Cited by 9652
57 Gerard Saucier
Professor of Psychology, University of Oregon
Cited by 9628
Personality Cultural Psychology Moral Psychology Political Psychology Psychology of Religion
58
University of Oregon
Cited by 9413
Biomechanics
59 Richard York
Professor of Sociology and Environmental Studies, University of Oregon
Cited by 9335
environmental sociology ecological economics human ecology animal studies sociology of science
60 Craig M. Young
Professor of Biology, University of Oregon
Cited by 9304
subtidal and deep-sea ecology larval development invertebrate zoology
61
University of Oregon
Cited by 9283
Observational Cosmology Climate Change Energy Policy and Sustainability Data Science Complexity
62 Alice Barkan
University of Oregon
Cited by 9158
63 Graham Kribs
Professor of Physics, University of Oregon
Cited by 8948
64
Professor of Biology, Institute for Ecology and Evolution, University of Oregon
Cited by 8856
Evolution Evolutionary Genetics Quantitative Genetics Genomics Behavior
65 Scott Bridgham
Professor of Biology and Environmental Studies, University of Oregon
ecosystem ecology wetlands climate change
66 alan l shanks
university of oregon
Cited by 8746
marine biology
67 Dennis Howard
Professor of Marketing, University of Oregon
economics of sport finance
68
Professor of Sociology, University of Oregon
Criminology Demography Quantitative Methods Sociology
69
Professor of Chemistry, University of Oregon
Cited by 8529
Materials Science Solid State Chemistry Electrochemistry
70
Department of Human Physiology, University of Oregon
Cited by 8426
Post-exercise hypotension Recovery from exercise
71
Lundquist Professor of Sustainable Management, University of Oregon
Cited by 8339
72 Reza Rejaie
Professor of Computer and Information Science, University of Oregon
Cited by 8290
Network Measurement Online Social Networks P2P Streaming P2P Networks Congestion Control
73 John Conery
Professor of Biology, University of Oregon
Cited by 8094
bioinformatics computational science high performance computing
74 Paul J. Wallace
University of Oregon
Cited by 7991
petrology geochemistry volcanology geology
75
RF Mikesell Professor of Environmental and Resource Economics, University of Oregon
Cited by 7900
Environmental Economics Environmental Health Benefits Climate Change Mitigation and Adaptation Valuation of Ecosystem Benefit
76 Jean Stockard
University of Oregon
Cited by 7889
Sociology
77 Leslie Leve
University of Oregon
Cited by 7795
adoption foster care delinquency prevention science interventions
78 Stephen Fickas
Professor of Computer and Information Science University of Oregon
Cited by 7772
software engineering requirements engineering
79
Professor of Political Science, University of Oregon
Cited by 7760
International relations International environmental politics
80
University of Oregon
Cited by 7573
81
University of Oregon
Cited by 7276
parallel computing performance analysis
82 Ilya Bindeman
Professor of Geology, U of Oregon
Cited by 7055
Isotope geochemistry volcanology
83
Professor, University of Oregon
Cited by 7004
developmental social neuroscience adolescence self-evaluation emotion translational neuroscience
84 Marjorie Taylor
University of Oregon
Cited by 6906
85 Li-Shan Chou
University of Oregon
Cited by 6577
Human movement analysis balance control traumatic brain injury
86 Kim Sheehan
University of Oregon
Cited by 6432
Communication New Media Ethics Advertising
87 Hailin Wang
Professor, Department of Physics, University of Oregon, Eugene, Oregon, USA
Cited by 6422
Optical Physics Semiconductor Physics Quantum Information and Quantum …
88
Professor of Biology, University of Oregon
Cited by 6391
microbiota zebrafish symbiosis intestinal development Helicobacter
89 David Krinsley
Courtesy Professor of Earth Sciences, University of Oregon
Cited by 6300
Nanotechnology in Geology Rock varnish Rock varnish on Mars
90
Professor of Economics, University of Oregon
Cited by 6249
econ sophisticated brain imaging bodily fluids & the odd survey
91 CJ Pascoe
Associate Professor of Sociology, University of Oregon
Cited by 6211
sociology gender youth sexuality inequality
92 Ray Weldon
Professor of Geology, University of Oregon
Cited by 6168
neotectonics paleoseismology seismic hazards structural geology
93 Yuan Xu
Professor of Mathematics, University of Oregon
Cited by 6161
Approximation theory Orthogonal polynomials Harmonic analysis Special functions Numerical analysis
94 Laura Pulido
University of Oregon
Cited by 6129
95
Assistant Professor of Chemistry, University of Oregon
Cited by 5994
Materials Modeling Boundary Pushing Coffee
96 Jane Squires
Early Intervention/Special Education, University of Oregon
Cited by 5827
developmental screening social emotional competence and testing
97 SJ van Enk
University of Oregon
Cited by 5774
Quantum Information Theory Quantum Optics
98 Josh Roering
Professor, Department of Earth Sciences, University of Oregon
Cited by 5757
Geomorphology Surface Processes Landscape Evolution Landslides
99 Dietrich Belitz
University of Oregon
Cited by 5612
Strongly Correlated Electrons Quantum Phase Transitions
100 Scott DeLancey
University of Oregon
Cited by 5530
linguistic typology Sino-Tibetan Tibeto-Burman Penutian grammaticalization
101
Head of Physics Department, University of Oregon
Cited by 5505
Nanoelectronics Fractals Retinal Implants Solar Energy
102 Seth C. Lewis
Shirley Papé Chair in Emerging Media, University of Oregon
Cited by 5392
Journalism Emerging Media Media Sociology Journalism Studies Digital Technologies
103 Michael Pluth
Associate Professor, University of Oregon
Cited by 5389
Organic Chemistry Chemical Biology Bioinorganic Chemistry
104 Daniel G. Gavin
Professor, Department of Geography, University of Oregon
Cited by 5292
paleoecology climate change biogeography forest ecology refugia
105
University of Oregon
Cited by 5251
biology cell biology developmental biology invertebrate biology
106 Holly Arrow
Professor of Psychology, University of Oregon
Cited by 5236
Group Dynamics Psychology of War Complexity Theory
107
Professor, Counseling Psychology, University of Oregon
Cited by 5047
prevention science intervention family parenting
108
University of Oregon
Cited by 5027
Earth sciences marine geophysics mid-ocean ridges hotspots subduction zones
109 Jeremy Piger
Professor of Economics, University of Oregon
Cited by 4819
Macroeconomics Time-Series Econometrics Bayesian Econometrics
110 Ken Prehoda
Professor of Chemistry, University of Oregon
Cited by 4782
Cell biology stem cells protein structure and function
111
Professor of Chemistry and Biochemistry, University of Oregon
Cited by 4688
Bioinorganic chemistry nucleic acids RNA spectroscopy
112
University of Oregon
Cited by 4616
Teacher-Student Relationships Transition Among Students with Disabilities
113
The University of Oregon, Department of Physics
Cited by 4569
Biophysics Microscopy Microbiology Membranes Gut microbiota
114 Lynn Stephen
University of Oregon
Cited by 4539
Indigenous Communities in the Americas Race Gender Social Movements Transborder migration
115
University of Oregon
Cited by 4534
116 Kent McIntosh
Verified email at uoregon.edu
Cited by 4475
117 Kryn Stankunas
Associate Professor of Biology, Institute of Molecular Biology, University of Oregon
Cited by 4440
118
Director, Performance Research Laboratory, University of Oregon and President, ParaTools …
Cited by 4424
Performance Evaluation Tools Instrumentation Measurement Runtime Systems
120 Hans C. Dreyer
University of Oregon
Cited by 4380

Bad metrics and strong incentives are a nasty combination

Bengt Holmstrom got a 2016 Nobel for this basic insight, which, given how little I’ve heard about research metrics lately, may even have taken hold with our administrative leadership.

Now Stanford economists Caroline Hoxby and Sarah Turner have a new paper on the consequences for low SES students, as explained in Inside Higher Ed:

… The two charts below show how the adjusted gross family incomes of the two colleges’ enrolled students changed from 2008 (when pressure was just beginning to build on selective colleges to enroll low-income students) to 2016. In 2008 (the chart with the blue bars), the colleges’ enrollment of students just below and above the Pell Grant threshold (the red line) was roughly in proportion to what one would have expected based on what the researchers describe as the colleges’ “relevant pool” based on their geography and mission.

By 2016, in contrast, the colleges were admitting almost twice as many students with incomes just below the Pell threshold as their relevant pools would have predicted, while they were admitting far fewer (still needy) students with incomes above the Pell threshold.

These data showing a “large discontinuity” between students just under and just over the Pell Grant threshold pretty clearly suggest, Turner said, that these institutions are “actively targeting Pell Grant recipients.” Colleges like these are almost certainly giving significant financial support to those students to ensure they enroll, and because their financial aid dollars are limited, they are in all likelihood giving less financial aid to those low- and middle-income students who the graphs show to be enrolling at those colleges in smaller numbers than they were before.

But “if you’re a student who’s just above [the Pell threshold], you still need just as much financial aid as somebody who is just below,” Turner said. “We know that there’s distortion going on there” in the colleges’ aid policies.

Hoxby and Turner say they’re not trying to cast aspersions on the work done by the researchers who have focused on Pell eligibility or the bottom income quartile as benchmarks — especially since many of them are former students of Hoxby’s, she notes.

But by embracing flawed measures and deciding to “do something that’s a little bit sexy or prurient by putting people in rankings,” Hoxby said, these analyses “may not actually be helping the situation, but making the situation somewhat worse. These measures are not measuring what they’re supposed to be measuring.”

New faculty tracking software will implement Provost’s metrics scheme

A letter from Provost Banavar, here:

The project, called Faculty Insights, will result in a sophisticated online system that enhances our ability to capture the wide range of research and creative activities that our faculty do. The primary purpose of the system will be to manage the faculty review process university-wide – including promotion, tenure, and post-tenure review – more efficiently and effectively. Introducing a Faculty Insights system at UO will enhance our ability to streamline faculty personnel processes and make the achievements and instructional activities of faculty in all the schools and colleges more visible, within the campus community and to the broader public. The system will also support the local metrics process and the production of annual unit-level research reports.

 

Bad news from Italy for Brad Shelton’s metrics scheme

At this point the whole metrics fiasco is so toxic that (almost) everyone involved wants to just drop it. Yet it is a well-known fact that Johnson Hall never makes a mistake. What a dilemma!

At his recent metrics town hall, after hearing the litany of objections from faculty and heads, Provost Banavar offered the ingenious solution of saying that any proposal departments submit, even purely verbal descriptions of faculty research productivity that refuse to categorize journals and presses by quality, will count as “metrics”.

Dilemma resolved!

Meanwhile here’s the news from Italy – thanks to UO Psych department prof Sanjay Srivastava for the tip.

Self-citations as strategic response to the use of metrics for career decisions

Marco Seeber, Mattia Cattaneo, Michele Meoli, Paolo Malighetti

There is limited knowledge on the extent to which scientists may strategically respond to metrics by adopting questionable practices, namely practices that challenge the scientific ethos, and the individual and contextual factors that affect their likelihood. This article aims to fill these gaps by studying the opportunistic use of self-citations, i.e. citations of one’s own work to boost metric scores. Based on sociological and economic literature exploring the factors driving scientists’ behaviour, we develop hypotheses on the predictors of strategic increase in self-citations. We test the hypotheses in the Italian Higher Education system, where promotion to professorial positions is regulated by a national habilitation procedure that considers the number of publications and citations received. The sample includes 886 scientists from four of science’s main disciplinary sectors, employs different metrics approaches, and covers an observation period beginning in 2002 and ending in 2014. We find that the introduction of a regulation that links the possibility of career advancement to the number of citations received is related to a strong and significant increase in self-citations among scientists who can benefit the most from increasing citations, namely assistant professors, associate professors and relatively less cited scientists, and in particular among social scientists. Our findings suggest that while metrics are introduced to spur virtuous behaviours, when not properly designed they favour the usage of questionable practices.
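
The paper’s key outcome is a self-citation rate. As a hedged sketch (my own toy definition for illustration; the authors’ actual measures are more elaborate), one common operational version counts the share of an author’s incoming citations whose citing paper includes that author:

```python
# Toy sketch of a self-citation rate (illustrative definition, not the
# Seeber et al. methodology): each citation is recorded as the set of
# authors on the citing paper; a self-citation is one where the cited
# author appears among them.
def self_citation_rate(author, citing_author_lists):
    """Fraction of citations whose citing paper includes `author`."""
    if not citing_author_lists:
        return 0.0
    self_cites = sum(author in authors for authors in citing_author_lists)
    return self_cites / len(citing_author_lists)

citations = [
    {"Seeber", "Cattaneo"},  # co-authored citing paper: a self-citation
    {"Smith"},               # independent citation
    {"Seeber"},              # sole-authored self-citation
]
print(self_citation_rate("Seeber", citations))  # 2 of 3 citations
```

Under a rule that rewards raw citation counts, every self-citation raises the metric at essentially zero cost, which is exactly the opportunistic margin the paper studies.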

Live from Provost Banavar’s Metrics Town Hall:

Liveblog:

Sorry, I can’t type fast enough to get everything. Some highlights from the town hall:

Banavar, Berkman, and Pratt are on stage. Shelton (EW interview with some unfortunate quotes here) has been relegated to the admin table toward the back. Obviously the administration is backing away as fast as they can from past proposals and the adults are now in charge.

Banavar announces he’s pushing back the deadline for departments to provide their metrics plans and data to JH from April 6 to June 6.

He also announces that he’s signed an MOU with the faculty union that will ensure that, whatever the administration decides on, there will be faculty input and negotiation.

The link to Berkman’s Metrics “blog” is here. No comments allowed – or at least there are none posted.

The faculty and heads are asking many very skeptical questions about how these metrics will guide resource allocations and influence faculty research goals.

Berkman closes by saying that Harbaugh’s criticisms of the metrics proposal, based on the work of Nobel Laureate Bengt Holmstrom, are off base because those relate to “strong financial incentives” and these metrics will only provide weak incentives.

It’s hard to respond to that when we don’t know what the departments’ metrics plans will actually be, but inevitably they will become guidelines for junior faculty to follow if they want tenure, and for everyone to follow if they want merit raises, new colleagues, responses to outside offers, and to be seen as good department and university citizens. Those are pretty strong incentives, financial or not, and they will result in gaming and in discouraging work that is not measured, just as Holmstrom’s research shows.
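
Holmstrom’s multitasking point is easy to see in a toy model. This is my own illustration, not anything from UO’s documents: an agent with a fixed effort budget splits time between a “measured” task rewarded at rate b (the metric) and an “unmeasured” task the agent intrinsically values at rate a.

```python
# Toy multitasking sketch in the spirit of Holmstrom's work (my own
# illustration with made-up payoffs, not the provost's model): an agent
# splits one unit of effort between a measured task rewarded at rate b
# and an unmeasured task valued at rate a, with linear payoffs.
def best_split(a, b, steps=100):
    """Grid-search the share of effort on the measured task that
    maximizes b*e_measured + a*e_unmeasured, with the shares summing to 1."""
    best = max(range(steps + 1),
               key=lambda i: b * (i / steps) + a * (1 - i / steps))
    return best / steps  # share of effort devoted to the measured task

print(best_split(a=1.0, b=0.5))  # weak incentive: measured share 0.0
print(best_split(a=1.0, b=1.5))  # strong incentive: measured share 1.0
```

With linear payoffs the solution is bang-bang: once the metric’s reward exceeds the agent’s valuation of the unmeasured work, effort on the unmeasured task collapses to zero. That is the distortion, whether the incentive is cash or tenure.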

My takeaway is that this has been a botched two-year effort by the administration, and it has taken a huge amount of faculty effort – away from our other jobs – to push back and try to turn it into something reasonable. We’ll see what happens.

Banavar, Pratt, and Berkman did not discuss the “faculty tracking software” that UO will be purchasing next year. This software will allow them to track faculty activities, and will generate reports comparing those activities across faculty, across departments, over time, etc.

There appears to be no truth to the rumors that this software will interface with the mandatory new faculty ankle bracelets to provide JH with real-time GPS location tracking, or that this is all part of the Tracktown 2021 championship plan.

Update: Rumor has it that the UO administration’s obsession with research metrics and Academic Analytics started with the hiring of Kimberly Espy as VPR.

After alienating everyone on campus except former Interim Provost Jim Bean, Espy was finally forced out thanks to the UO Senate’s threatened vote of no confidence and a blunt report written by CAS Assoc Dean Bruce Blonigen. History here.

Gottfredson appointed Brad Shelton as her interim replacement, and new VPR David Conover is still picking up the pieces.

Part of Espy’s legacy was UO’s ~$100K contract with Academic Analytics, which finally expires this December, for a total of $600K down the hole. While Shelton enthusiastically defends this sunk cost in the Eugene Weekly, no one else in the UO administration will admit to ever using Academic Analytics data as an input for any decision.

Despite this craziness, it’s still an open question whether Shelton, Conover, and Banavar will renew the contract, which Academic Analytics and their salesman, former UO Interim President Bob Berdahl, are now pitching at $160K a year.

3/12/2018: UO physicist, Psychology Dept kick off Provost’s Friday Metrics Town Hall early, propose sensible alternatives to Brad Shelton’s silly metrics plan

A week or two back CAS started a “metrics blog” to collect suggestions on how departments could respond to the call from VPxyz Brad Shelton for simple metrics that the administration could use to rank departments and detect changes over time to help decide who will get new faculty lines. Or maybe the call was for information that they could show to Chuck Lillis and the trustees about how productive/unproductive UO’s faculty are. Or maybe it was a call for departments to provide information that Development could pitch to potential donors. All I know for sure is that departments are supposed to respond by April 6th with their perfect algorithm.

Raghu Parthasarathy from Physics has taken up the challenge on his blog, The Eighteenth Elephant:

… These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.

Is it “better” to have a single author paper with 205 citations, or a 2900-author paper with 11000 citations? One could argue that the former is better, since the citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies an overall greater impact. Really, though, the question is silly and unanswerable.

Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. …
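
Parthasarathy’s comparison is easy to make concrete. A minimal sketch with the toy numbers from his example:

```python
# Toy numbers from Parthasarathy's example: is a single-author paper
# with 205 citations "better" than a 2,900-author paper with 11,000?
papers = {
    "single-author biophysics paper": {"citations": 205, "authors": 1},
    "large HEP collaboration paper": {"citations": 11000, "authors": 2900},
}

for name, p in papers.items():
    print(f"{name}: {p['citations']} citations, "
          f"{p['citations'] / p['authors']:.1f} per author")
```

Total citations crowns the collaboration paper; citations per author crowns the solo paper by a factor of more than fifty. The ranking is an artifact of the normalization, not of the science, which is his point.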

In other words, this particular silly question is worse than a waste of time. Ulrich Mayr, chair of UO’s Psychology department (UO’s top research department, according to the National Research Council’s metrics, FWIW), has met with his faculty, and they have a better idea:

As obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses.  The department-specific grading rubrics may seem like a step in the right direction, as they allow building idiosyncratic context into the metrics.  However, this eliminates any basis for comparisons and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and danger of trickle-down to evaluation on the individual level.  I think many of us agree that we would like our faculty to think about producing serious scholarly work, not how to achieve points on a complex score scheme.

Within Psychology, we would therefore like to try an alternative procedure, namely an annual, State of the Department report that will be made available at the end of every academic year.

Authored by the department head (and with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.).  Importantly, the account would marry no-frills, basic quantitative metrics with contextualizing narrative.  For example, the section on research may present the number of peer-reviewed publications or acquired grants during the preceding year, it may compare these numbers to previous years, or—as far as available—to numbers in peer institutions.  It can also highlight particularly outstanding contributions as well as areas that need further development.

Currently, we are thinking of a 3-part structure: (I) A very short executive summary (1 page). (II) A somewhat longer, but still concise narrative, potentially including tables or figures for metrics. (III) An appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.

Advantages:

––When absolutely necessary, the administration can make use of the simple quantitative metrics.

––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems.  This preserves an element of expert judgment (after all, the cornerstone of evaluation in academia) and it reduces the risk of decision errors from taking numbers at face value.

––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors).  Yet, to many of us it is not obvious how this would be helped through department-specific grading systems.  Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.

––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what.  We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already.  The administration could piggyback onto such efforts and provide a standard reporting format to facilitate comparisons across departments.

Disadvantages:

––More work for heads (I am done in 2019).

So sensible it must be dead in the water. But if you haven’t given up hope in UO yet, show up at Provost Banavar’s Town Hall this Friday at 11:

Metrics and the evaluation of excellence will be at the center of a town hall-style discussion with Jayanth Banavar, provost and senior vice president, from 11 a.m. to noon in Room 156, Straub Hall on Friday, March 16.

The session was announced in a recent memo from the provost, who calls the event a “two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.”

“I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as ‘ranking’ faculty members or departments,” Banavar wrote. “I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be ‘cut’ if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.”

Faculty members who are unable to attend are encouraged to share thoughts, concerns or ideas with the Office of the Provost at [email protected].

“As we continue our work on the development of these metrics, we welcome your advice and input,” the memo reads. “The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.”

I do wonder who writes this nonsense.

On the Work of the University, from Prof Ken Calhoon

It’s not just Nobel Prize winning economists and the UK Research Councils who think the administration’s research metrics plan is a mistake. Ken Calhoon, head of UO’s Dept of Comparative Literature, provides a less mathematical but no less thorough dissection:

February 27th, 2018

Dear Friends and Colleagues,

Mozart wrote forty-one symphonies, Beethoven only nine. I have written none, but I offer these thoughts on metrics. I apologize in advance for the naiveté, as well as the pathos.

On September 14th, at the beginning of the current academic year, University Provost and Senior Vice President Jayanth Banavar hosted a retreat for “academic leaders” in the EMU Ballroom. The high point of the assembly, in my view, was Jayanth’s own (seemingly impromptu) description of the research of David Wineland, the Nobel Laureate who recently joined the UO’s Department of Physics as a Knight Professor. In a manner that suggested that he himself must have been a gifted teacher, Jayanth provided a vivid and accessible account of Wineland’s signature accomplishment—speculative work aimed at increasing the computational speed of computers by “untrapping” atoms, enabling them to exist at more than one energy level at a time. With a humorous gesture to his own person, Jayanth ventured that it might be hard to imagine his body being in two rooms at once, but Wineland had figured out how, in the case of very small particles, this is possible. My own knowledge of quantum physics is limited to the few dismissive quips for which Einstein was notorious, e.g., “God is subtle but not malicious.” In any event, Wineland’s work was made to sound original and impressive. Equally impressive was the personable, humane and effective fashion in which Jayanth, with recourse to imagery and physical self-reference, sought to convey the essence of his fellow physicist’s work across all the disciplines represented in the room—and at the University.

I was inspired by the experience of seeing one person so animated by the work of another. However, my enthusiasm is measured today against the discouragement and disaffection that I and so many of my colleagues feel at the University’s current push, without meaningful debate, to metricize excellence—to evaluate our research in terms quite alien to the values our work embodies. As a department head with a long history at this institution, I must say that I feel helpless before the task of breaking our work down into increments and assigning numerical values to them. It can be done, of course, but the resulting currency would be counterfeit.

Over the course of my thirty-one-year career at the University of Oregon, I have presided over quite a few tenure and promotion cases and have been party to many more, both as departmental participant and as a member, for a two-year stint, of the Dean’s Advisory Committee in the College of Arts and Sciences. I am also routinely asked to evaluate faculty for tenure and promotion at other colleges and universities, where the process is more or less identical to ours. In past years I have been asked to write for faculty at Cornell, Harvard (twice), Johns Hopkins (twice), Washington University, University of Chicago, University of Pennsylvania, University of Minnesota (twice), Penn State, and Irvine, among others. I mention this not to boast—god forbid!—but to emphasize that institutions of the highest standing readily recruit faculty from the UO to assist in their internal decisions on professional merit and advancement.

For such decisions at the UO, department heads solicit evaluations from outside reviewers who are not only experts in the relevant field but are also well placed. They are asked to submit, along with their review, their own curriculum vitae and a biographical sketch. Reviewers are instructed to identify the most significant scholarly contributions which the individual under review has made, and to assess the impact of those contributions on the discipline. They are also asked to discuss the “appropriateness” of the publication venues, and also to “contextualize” their remarks with regard to common practices within the discipline or sub-field. They are asked to compare, “both qualitatively and quantitatively,” the work of the individual under review with that of other scholars in the field at comparable stages in their academic careers. Finally, the outside reviewers are asked to state whether the research record under consideration would meet the standards for tenure and promotion at their home institution. These instructions, which follow a template provided by Academic Affairs, differ little if at all from those I have received from other universities.

In response to these requests, we typically receive narratives, often three and four pages in length, in which reviewers—in accordance with the instructions but also with the conventions of professional service—not only discuss the candidate’s work in detail but also contextualize that work in relation, for example, to the evolving nature of the field, to others working on the same or similar material, not to mention the human content of that material. (I am usually asked to review the work of scholars working on the history of German literature and thought, as well as literary and film theory.) Looking back over the reports I have authored, I see that they contain phrases like “body of work,” “breadth of learning,” “intellectual energy,” “daunting command,” “surprising intervention,” “dazzling insight,” “staggering productivity,” etc. These formulations are subjective. As such, they are consistent with the process whereby one mind comes to grips with another. I am inclined to say that this process is particular to the humanities, but Jayanth Banavar’s lively and lucid presentation of David Wineland’s research would prove me wrong. It conveyed excitement.

What distinguishes the humanities from the sciences and many of the other, empirically oriented fields is that our disciplines are not consensus-based. We disagree among ourselves, often sharply, on questions of approach or method, on the validity and importance of the materials studied, on how arguments or interpretations should be structured or conceptualized. These disagreements may take place between departments at different universities, or within a single department. Disciplines within the humanities are in flux, and we suffer the additional burden of finding ourselves in a social and cultural world whose regard for humanistic work is markedly diminished. We often scramble to re-define our relevance while the ground shifts beneath our feet. To seek a stable set of ostensibly objective standards for measuring our work is to misrecognize the very essence of our work. These same standards risk becoming the instruments of this misrecognition.

In any case, the process of review for tenure and promotion, as formalized by Academic Affairs and by the more extensive guidelines which each unit has created, and for which each unit has secured approval both by its respective college and by Academic Affairs, already accounts for such factors as the stature of a press or journal, the rigor with which books and articles are reviewed, the quantity of publications balanced against their quality, and the impact which the faculty member’s research has had, or may be expected to have. But why the need to strip these judgments of their connective tissue? And for whom?

Curriculum vitae – “the course of [one’s] life.” When I was an undergraduate (at the University of Louisville, no less), I was greatly influenced by an historian of seventeenth-century Britain, Arthur J. Slavin. The dean of the college, he had been a friend of the mathematician Jacob Bronowski, recently deceased at the time, best known for his PBS series The Ascent of Man. One episode of the series begins with a blind woman carefully running her fingers over the face of an elderly, gaunt gentleman and speculating as to the hard course of his life. “The lines of his face could be lines of possible agony,” she says. The judgment is subjective, but accurate: The man, like Bronowski a Polish Jew, had survived Auschwitz, the remnants of which provide Bronowski with a physical backdrop for the dramatic and moving summation of an episode dedicated to the ramifications of the Principle of Uncertainty, which had been formulated by Werner Heisenberg just as all of Europe was about to fall victim to a despotic belief in absolute certainty. “It is said that science will dehumanize people and turn them into numbers. That is false: tragically false. Look for yourself…. This is where people were turned into numbers.”

I don’t mean to overdramatize the analogy, or even really to suggest one. I am more interested in Bronowski’s general statement that “[all] knowledge, all information between human beings, can only be exchanged within a play of tolerance. And that is true whether the exchange is in science, or in literature, or in religion, or in politics, or in any form of thought that aspires to dogma.” The dogma we are faced with today is that of corporate thinking, which is despotic in the sense that it mystifies. We in this country are inclined to think that people who have amassed great wealth know something we don’t—that they have the magic touch. It is from them and their public advocates that we hear the constant calls for governments, universities, prisons, hospitals, museums, utilities, national forests and parks to be run more like businesses. Why? (And which businesses? IBM? TWA? Pan Am? Bear Stearns? Enron? Wells Fargo?) Why is the business model the presumed natural guarantor of good organization? Why not a symphony? an eco-system? a cooperative? a republic? a citizenry? Why is the university not a model for business? Businesses certainly benefit from the talent we cultivate and send their way, outfitted with the knowledge, the verbal agility, the conceptual power that make up our stock in trade.

Our current national political scene presents us with constant images of promiscuous, self-reproducing wealth. Within this context, which is an extreme one, it is urgent that we as a collective make our case, and in terms commensurate with our self-understanding as researchers, thinkers, writers, fine artists, and teachers, not in terms that conform so transparently to the prevailing model of worker productivity.

Those who maintain that inert numbers are the only means we have for communicating our value have already been proven wrong by our own provost. I call upon our president, our provost and our many deans to bring their considerable talents, their public stature, as well as their commitment to the University, to bear on our cause. Many of us, I’m sure, are ready to support you.

With respect and thanks,

Ken

Kenneth S. Calhoon, Head
Department of Comparative Literature
University of Oregon
Eugene, OR 97403-5242

CAS faculty meet today at 2PM for “Metrics, Humanities, and Social Science”

Dear Humanities and Social Science faculty,

Please join your colleagues Scott DeLancey (Linguistics), Spike Gildea (Linguistics), Volya Kapatsinski (Linguistics), Leah Middlebrook (Comparative Literature), Lanie Millar (Romance Languages), and Lynn Stephen (Anthropology) for a discussion of metrics for measuring our departmental research quality and the quality of our graduate programs. The panel will briefly summarize work done in some of our departments to identify what we value in our own work, ways to measure how well we achieve goals we value, and how we might take leadership in moving comparator institutions towards identifying and measuring their goals in comparable ways.

Tuesday, February 27 2:00-3:30 pm Gerlinger Lounge

Thanks to Lanie, Leah, Lynn, Scott, Spike, and Volya for their willingness to lead a timely discussion as we all consider how to create meaningful and useful metrics for our departments and disciplines.

Karen Ford and Phil Scher

More misguided metrics – this time it’s “learning outcomes” assessment

UNC History Professor Molly Worthen in the NYT on learning outcomes assessment:

I teach at a big state university, and I often receive emails from software companies offering to help me do a basic part of my job: figuring out what my students have learned.

If you thought this task required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.

It’s true that old-fashioned course grades, skewed by grade inflation and inconsistency among schools and disciplines, can’t tell us everything about what students have learned. But the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. …

No intellectual characteristic is too ineffable for assessment. Some schools use lengthy surveys like the California Critical Thinking Disposition Inventory, which claims to test for qualities like “truthseeking” and “analyticity.” The Global Perspective Inventory, administered and sold by Iowa State University, asks students to rate their agreement with statements like “I do not feel threatened emotionally when presented with multiple perspectives” and scores them on metrics like the “intrapersonal affect scale.” …

UO’s federal accreditor is the not very transparent Northwest Commission on Colleges and Universities (NWCCU). Their website has a message from their interim president:

I am writing to thank you for your participation in and support of the activities we initiated last November to gather information from you about how NWCCU can better achieve its mission of assuring educational quality, enhancing institutional effectiveness, and fostering continuous improvement. Your response to the survey and participation in the Annual Meeting and Town Halls guided development of a report from the Task Force on Renewal of Recognition that was accepted by the Board of Commissioners at its January 2018 meeting.

One of the most consistent recommendations received was that we improve communication with the member institutions. This message is part of a larger communication strategy that we are implementing to move forward on the recommendations of the Task Force.

Speaking of communication, good luck trying to find the Task Force report on their website.

UO’s website at https://accreditation.uoregon.edu/ documents the years of work faculty and administrators have spent on this assessment crap on orders from the NWCCU. More is coming.

UK research councils & Nature unimpressed by VP Brad Shelton’s shiny new metrics plan

2/7/2018: From The Times:

All seven of the UK’s research councils have signed up to a declaration that calls for the academic community to stop using journal impact factors as a proxy for the quality of scholarship.

The councils, which together fund about £3 billion of research each year, are among the latest to sign the San Francisco Declaration on Research Assessment, known as Dora.

Stephen Curry, the chair of the Dora steering committee, said that the backing of the research councils gives the initiative a “significant boost”.

Dora was initiated at the annual meeting of the American Society for Cell Biology in 2012 and launched the following year. It calls on researchers, universities, journal editors, publishers and funders to improve the ways they evaluate research.

It says that the academic community should not use the impact factor of journals that publish research as a surrogate for quality in hiring, promotion or funding decisions. The impact factor ranks journals according to the average number of citations that their articles receive over a set period of time, usually two years.

Professor Curry, professor of structural biology at Imperial College London, announces the new signatories to the declaration in a column published in Nature on 8 February. …
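For readers who haven’t seen it spelled out: the two-year impact factor described above is just an average, and that simplicity is exactly why Dora objects to using it as a proxy for the quality of any individual paper. A minimal sketch of the arithmetic (the journal numbers here are made up for illustration):

```python
def impact_factor(citations_to_prior_two_years, items_prior_two_years):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of citable
    items published in those two years."""
    if items_prior_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prior_two_years / items_prior_two_years


# Hypothetical journal: 480 citations in 2018 to the 200 articles it
# published in 2016-2017 yields an impact factor of 2.4. Note that the
# average says nothing about any single article: citation distributions
# are highly skewed, so a handful of papers can carry the whole number.
print(impact_factor(480, 200))  # → 2.4
```

Because it is a journal-level average over a skewed distribution, two papers in the same journal can differ in citations by orders of magnitude while sharing the same impact factor, which is the core of Dora’s objection.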

1/26/2018: Nobel laureate unimpressed by VP Brad Shelton’s shiny new metrics plan

The 2016 Nobel Prize for Economics went to Oliver Hart and Bengt Holmstrom, for their life work on optimal incentive contracts under incomplete information. Holmstrom started out in industry, designing incentive schemes that used data-driven metrics and strong incentives to “bring the market inside the firm”. However, as he said in his Nobel Prize lecture:

Today, I know better. As I will try to explain, one of the main lessons from working on incentive problems for 25 years is, that within firms, high-powered financial incentives can be very dysfunctional and attempts to bring the market inside the firm are generally misguided. Typically, it is best to avoid high-powered incentives and sometimes not use pay-for-performance at all.

I thought that Executive Vice Provost of Academic Operations Brad Shelton and the UO administration had learned this lesson too, after the meltdown of the market-based “Responsibility Centered Management” budget model that Shelton ran. Apparently not. Today the Eugene Weekly has an article by Morgan Theophil on “Questionably measuring success” which focuses on UO’s $100K per year contract with Academic Analytics for their measure of faculty research “productivity”.

Brad Shelton, UO executive vice provost of academic operations, says Academic Analytics measures faculty productivity by considering several factors: How many research papers has this faculty member published, where were the papers published, how many times have the papers been cited, and so on.

“Those are a set of metrics that very accurately measures the productivity of a math professor, for example,” Shelton says.

No they don’t. They might accurately count a few things, but those things are not accurate or complete measures of a professor’s productivity, and as Holmstrom explains later in his address – in careful mathematics and with examples such as the recent Wells Fargo case – there are many pitfalls to incentivizing inaccurate, incomplete, and easily-gamed metrics. Most obviously, incentivizing the easily measured part of productivity raises the opportunity cost to employees (faculty) of the work that produces the things that the firm (university) actually cares about, so true productivity may actually fall.

As the EW article also explains, UO has spent $500K on the Academic Analytics data on faculty “productivity” (i.e. grants, pubs, and citations) over the past 5 years, prompted in part by pressure from former Interim President Bob Berdahl, who now has a part-time job with Academic Analytics as a salesman.

Despite this expenditure, UO has never used the data for decisions about merit and promotion, in part because of opposition from the faculty and the faculty union, and in part because of a study by Spike Gildea from Linguistics documenting problems with the accuracy of the AA data. And today the Chronicle has a report on the vote by the faculty at UT-Austin to join Rutgers and Georgetown in opposing use of AA’s simple-minded metrics.

Meanwhile back at UO, VP Shelton is trumpeting the fact that AA has been responsive to complaints about past data quality:

“What we found is that Academic Analytics data is very accurate — it’s always accurate. If there are small errors, they fix them right away,” Shelton says.

Always accurate at measuring what?

Word from the CAS faculty heads meeting yesterday is that UO will not require departments to use the AA data – but that we’ll keep paying $100K for it, or about the salary of one scarce professor. Why? Because some people in Johnson Hall don’t understand another basic economic principle: when you’re in a hole, stop digging.

I forget who got the Nobel Prize for that one.

Here’s a draft of the sort of departmental incentive policies that are now floating around, in response to Shelton’s call:

Keep in mind that even if your department decides to develop a more rational evaluation system for itself, there will be nothing to prevent the Executive Vice Provost of Academic Operations from using the Academic Analytics data to run its own parallel evaluation system.

The Tyranny of Metrics

InsideHigherEd’s interview with Jerry Muller about his new book. Published by the high impact-factor Princeton University Press. One excerpt:

Q: Some colleges, government agencies and businesses promote tools to evaluate faculty productivity — number of papers written, number of citations, etc. What do you make of this use of metrics?

A: Here too, metrics have a place, but only if they are used together with judgment. There are many snares. The quantity of papers tells you nothing about their quality or significance. In some disciplines, especially in the humanities, books are a more important form of scholarly communication, and they don’t get included in such metrics. Citation counts are often distorted, for example by including only journals within a particular discipline, thereby marginalizing works that have a transdisciplinary appeal. And then of course evaluating faculty productivity by numbers of publications creates incentives to publish more articles, on narrower topics, and of marginal significance. In science, it promotes short-termism at the expense of developing long-term research capacity.

More on the $600K Brad Shelton has dropped on Academic Analytics so far here.