Bad news from Italy for Brad Shelton’s metrics scheme

At this point the whole metrics fiasco is so toxic that (almost) everyone involved wants to just drop it. Yet it is a well-known fact that Johnson Hall never makes a mistake. What a dilemma!

At his recent metrics town hall, after hearing the litany of objections from faculty and heads, Provost Banavar offered the ingenious solution of saying that any proposal departments submit, even purely verbal descriptions of faculty research productivity that refuse to categorize journals and presses by quality, will count as “metrics”.

Dilemma resolved!

Meanwhile here’s the news from Italy – thanks to UO Psych department prof Sanjay Srivastava for the tip.

Self-citations as strategic response to the use of metrics for career decisions

Marco Seeber, Mattia Cattaneo, Michele Meoli, Paolo Malighetti

There is limited knowledge on the extent to which scientists may strategically respond to metrics by adopting questionable practices, namely practices that challenge the scientific ethos, and the individual and contextual factors that affect their likelihood. This article aims to fill these gaps by studying the opportunistic use of self-citations, i.e. citations of one’s own work to boost metric scores. Based on sociological and economic literature exploring the factors driving scientists’ behaviour, we develop hypotheses on the predictors of strategic increase in self-citations. We test the hypotheses in the Italian Higher Education system, where promotion to professorial positions is regulated by a national habilitation procedure that considers the number of publications and citations received. The sample includes 886 scientists from four of science’s main disciplinary sectors, employs different metrics approaches, and covers an observation period beginning in 2002 and ending in 2014. We find that the introduction of a regulation that links the possibility of career advancement to the number of citations received is related to a strong and significant increase in self-citations among scientists who can benefit the most from increasing citations, namely assistant professors, associate professors and relatively less cited scientists, and in particular among social scientists. Our findings suggest that while metrics are introduced to spur virtuous behaviours, when not properly designed they favour the usage of questionable practices.
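The quantity at stake in the study – the share of a scientist's citations that come from citing their own work – is simple to state. Here's a minimal sketch in Python; the function name and the numbers are hypothetical, not from the paper:

```python
# A raw citation count cannot distinguish organic impact from
# self-promotion. This helper computes the self-citation share,
# the fraction of a citation count contributed by citing one's own work.

def self_citation_share(total_citations, self_citations):
    """Fraction of citations that are self-citations (0.0 if uncited)."""
    if total_citations == 0:
        return 0.0
    return self_citations / total_citations

# Two records a citation-counting metric would score identically:
print(self_citation_share(100, 5))   # a typical background rate
print(self_citation_share(100, 40))  # the strategic pattern the paper finds
```

A promotion rule that counts only the numerator invites exactly the behavior the authors document.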

Live from Provost Banavar’s Metrics Town Hall:

Liveblog:

Sorry, I can’t type fast enough to get everything. Some highlights from the town hall:

Banavar, Berkman, and Pratt are on stage. Shelton (EW interview with some unfortunate quotes here) has been relegated to the admin table toward the back. Obviously the administration is backing away as fast as they can from past proposals and the adults are now in charge.

Banavar announces he’s pushing back the deadline for departments to provide their metrics plans and data to JH from April 6 to June 6.

He also announces that he’s signed an MOU with the faculty union that will ensure that, whatever the administration decides on, there will be faculty input and negotiation.

The link to Berkman’s Metrics “blog” is here. No comments allowed – or at least there are none posted.

The faculty and heads are asking many very skeptical questions about how these metrics will guide resource allocations and influence faculty research goals.

Berkman closes by saying that Harbaugh’s criticisms of the metrics proposal, based on the work of Nobel Laureate Bengt Holmstrom, are off base because those relate to “strong financial incentives” and these metrics will only provide weak incentives.

It’s hard to respond to that when we don’t know what the departments’ metrics plans will actually be, but inevitably they will become guidelines for junior faculty to follow if they want tenure, and for everyone to follow if they want merit raises, new colleagues, responses to outside offers, and to be seen as good department and university citizens. Those are pretty strong incentives, financial or not, and they will result in gaming and in discouraging work that is not measured, just as Holmstrom’s research shows.

My takeaway is that this has been a botched 2 year effort by the administration, and it has taken a huge amount of faculty effort – away from our other jobs – to push back and try and turn it into something reasonable. We’ll see what happens.

Banavar, Pratt, and Berkman did not discuss the “faculty tracking software” that UO will be purchasing next year. This software will allow them to track faculty activities, and will generate reports comparing those activities across faculty, across departments, over time, etc.

There appears to be no truth to the rumors that this software will interface with the mandatory new faculty ankle bracelets to provide JH with real-time GPS location tracking, or that this is all part of the Tracktown 2021 championship plan.

Update: Rumor has it that the UO administration’s obsession with research metrics and Academic Analytics started with the hiring of Kimberly Espy as VPR.

After alienating everyone on campus except former Interim Provost Jim Bean, Espy was finally forced out thanks to the UO Senate’s threatened vote of no confidence and a blunt report written by CAS Assoc Dean Bruce Blonigen. History here.

Gottfredson appointed Brad Shelton as her interim replacement, and new VPR David Conover is still picking up the pieces.

Part of Espy’s legacy was UO’s ~$100K-a-year contract with Academic Analytics, which finally expires this December, for a total of $600K down the hole. While Shelton enthusiastically defends this sunk cost in the Eugene Weekly, no one else in the UO administration will admit to ever using Academic Analytics data as an input for any decision.

Despite this craziness, it’s still an open question whether Shelton, Conover, and Banavar will renew the contract, which Academic Analytics and their salesman, former UO Interim President Bob Berdahl, are now pitching at $160K a year.

3/12/2018: UO physicist, Psychology Dept kick off Provost’s Friday Metrics Town Hall early, propose sensible alternatives to Brad Shelton’s silly metrics plan

A week or two back CAS started a “metrics blog” to collect suggestions on how departments could respond to the call from VPxyz Brad Shelton for simple metrics that the administration could use to rank departments and detect changes over time to help decide who will get new faculty lines. Or maybe the call was for information that they could show to Chuck Lillis and the trustees about how productive/unproductive UO’s faculty are. Or maybe it was a call for departments to provide information that Development could pitch to potential donors. All I know for sure is that departments are supposed to respond by April 6th with their perfect algorithm.

Raghu Parthasarathy from Physics has taken up the challenge, on his Eighteenth Elephant blog:

… These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.

Is it “better” to have a single author paper with 205 citations, or a 2900-author paper with 11000 citations? One could argue that the former is better, since the citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies an overall greater impact. Really, though, the question is silly and unanswerable.

Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. …
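Parthasarathy's two hypothetical papers make his point concrete: the ranking flips depending on which metric you pick. A toy calculation, using his numbers (the dictionary layout is mine):

```python
# Score the same two papers by two plausible metrics and watch
# the ranking reverse.

papers = {
    "biophysics": {"authors": 1, "citations": 205},
    "high_energy": {"authors": 2900, "citations": 11000},
}

for name, p in papers.items():
    per_author = p["citations"] / p["authors"]
    print(name, "total:", p["citations"], "per author:", round(per_author, 2))

# By total citations:      high_energy (11000) beats biophysics (205).
# By citations per author: biophysics (205.0) beats high_energy (~3.79).
```

Any single-number algorithm has to pick one of these orderings, which is exactly why the question is "silly and unanswerable."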

In other words, this particular silly question is worse than a waste of time. Ulrich Mayr, chair of UO’s Psychology department (UO’s top research department, according to the National Research Council’s metrics, FWIW) has met with his faculty, and they have a better idea:

As obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses.  The department-specific grading rubrics may seem like a step in the right direction, as they allow building idiosyncratic context into the metrics.  However, this eliminates any basis for comparisons and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and danger of trickle-down to evaluation on the individual level.  I think many of us agree that we would like our faculty to think about producing serious scholarly work, not how to achieve points on a complex score scheme.

Within Psychology, we would therefore like to try an alternative procedure, namely an annual, State of the Department report that will be made available at the end of every academic year.

Authored by the department head (and with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.). Importantly, the account would marry no-frills, basic quantitative metrics with contextualizing narrative. For example, the section on research may present the number of peer-reviewed publications or acquired grants during the preceding year, it may compare these numbers to previous years, or—as far as available—to numbers in peer institutions. It can also highlight particularly outstanding contributions as well as areas that need further development.

Currently, we are thinking of a 3-part structure: (I) A very short executive summary (1 page). (II) A somewhat longer, but still concise narrative, potentially including tables or figures for metrics. (III) An appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.

Advantages:

––When absolutely necessary, the administration can make use of the simple quantitative metrics.

––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems.  This preserves an element of expert judgment (after all, the cornerstone of evaluation in academia) and it reduces the risk of decision errors from taking numbers at face value.

––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors).  Yet, to many of us it is not obvious how this would be helped through department–specific grading systems.  Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.

––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what. We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already. The administration could piggyback on such efforts and provide a standard reporting format to facilitate comparisons across departments.

Disadvantages:

––More work for heads (I am done in 2019).

So sensible it must be dead in the water. But if you haven’t given up hope in UO yet, show up at Provost Banavar’s Town Hall this Friday at 11:

Metrics and the evaluation of excellence will be at the center of a town hall-style discussion with Jayanth Banavar, provost and senior vice president, from 11 a.m. to noon in Room 156, Straub Hall on Friday, March 16.

The session was announced in a recent memo from the provost, who calls the event a “two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.”

“I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as ‘ranking’ faculty members or departments,” Banavar wrote. “I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be ‘cut’ if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.”

Faculty members who are unable to attend are encouraged to share thoughts, concerns or ideas with the Office of the Provost at provost@uoregon.edu.

“As we continue our work on the development of these metrics, we welcome your advice and input,” the memo reads. “The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.”

I do wonder who writes this nonsense.

On the Work of the University, from Prof Ken Calhoon

It’s not just Nobel Prize winning economists and the UK Research Councils who think the administration’s research metrics plan is a mistake. Ken Calhoon, head of UO’s Dept of Comparative Literature, provides a less mathematical but no less thorough dissection:

February 27th, 2018

Dear Friends and Colleagues,

Mozart wrote forty-one symphonies, Beethoven only nine. I have written none, but I offer these thoughts on metrics. I apologize in advance for the naiveté, as well as the pathos.

On September 14th, at the beginning of the current academic year, University Provost and Senior Vice President Jayanth Banavar hosted a retreat for “academic leaders” in the EMU Ballroom. The highpoint of the assembly, in my view, was Jayanth’s own (seemingly impromptu) description of the research of David Wineland, the Nobel Laureate who recently joined the UO’s Department of Physics as a Knight Professor. In a manner that suggested that he himself must have been a gifted teacher, Jayanth provided a vivid and accessible account of Wineland’s signature accomplishment—speculative work aimed at increasing the computational speed of computers by “untrapping” atoms, enabling them to exist at more than one energy level at a time. With a humorous gesture to his own person, Jayanth ventured that it might be hard to imagine his body being in two rooms at once, but Wineland had figured out how, in the case of very small particles, this is possible. My own knowledge of quantum physics is limited to the few dismissive quips for which Einstein was notorious, e. g. “God is subtle but not malicious.” In any event, Wineland’s work was made to sound original and impressive. Equally impressive was the personable, humane and effective fashion in which Jayanth, with recourse to imagery and physical self-reference, sought to convey the essence of his fellow physicist’s work across all the disciplines represented in the room—and at the University.

I was inspired by the experience of seeing one person so animated by the work of another. However, my enthusiasm is measured today against the discouragement and disaffection that I and so many of my colleagues feel at the University’s current push, without meaningful debate, to metricize excellence—to evaluate our research in terms quite alien to the values our work embodies. As a department head with a long history at this institution, I must say that I feel helpless before the task of breaking our work down into increments and assigning numerical values to them. It can be done, of course, but the resulting currency would be counterfeit.

Over the course of my thirty-one-year career at the University of Oregon, I have presided over quite a few tenure and promotion cases and have been party to many more, both as departmental participant and as a member, for a two-year stint, of the Dean’s Advisory Committee in the College of Arts and Sciences. I am also routinely asked to evaluate faculty for tenure and promotion at other colleges and universities, where the process is more or less identical to ours. In past years I have been asked to write for faculty at Cornell, Harvard (twice), Johns Hopkins (twice), Washington University, University of Chicago, University of Pennsylvania, University of Minnesota (twice), Penn State, and Irvine, among others. I mention this not to boast—god forbid!—but to emphasize that institutions of the highest standing readily recruit faculty from the UO to assist in their internal decisions on professional merit and advancement.

For such decisions at the UO, department heads solicit evaluations from outside reviewers who are not only experts in the relevant field but are also well placed. They are asked to submit, along with their review, their own curriculum vitae and a biographical sketch. Reviewers are instructed to identify the most significant scholarly contributions which the individual under review has made, and to assess the impact of those contributions on the discipline. They are also asked to discuss the “appropriateness” of the publication venues, and also to “contextualize” their remarks with regard to common practices within the discipline or sub-field. They are asked to compare, “both qualitatively and quantitatively,” the work of the individual under review with that of other scholars in the field at comparable stages in their academic careers. Finally, the outside reviewers are asked to state whether the research record under consideration would meet the standards for tenure and promotion at their home institution. These instructions, which follow a template provided by Academic Affairs, differ little if at all from those I have received from other universities.

In response to these requests, we typically receive narratives, often three and four pages in length, in which reviewers—in accordance with the instructions but also with the conventions of professional service—not only discuss the candidate’s work in detail but also contextualize that work in relation, for example, to the evolving nature of the field, to others working on the same or similar material, not to mention the human content of that material. (I am usually asked to review the work of scholars working on the history of German literature and thought, as well as literary and film theory.) Looking back over the reports I have authored, I see that they contain phrases like “body of work,” “breadth of learning,” “intellectual energy,” “daunting command,” “surprising intervention,” “dazzling insight,” “staggering productivity,” etc. These formulations are subjective. As such, they are consistent with the process whereby one mind comes to grips with another. I am inclined to say that this process is particular to the humanities, but Jayanth Banavar’s lively and lucid presentation of David Wineland’s research would prove me wrong. It conveyed excitement.

What distinguishes the humanities from the sciences and many of the other, empirically oriented fields is that our disciplines are not consensus-based. We disagree among ourselves, often sharply, on questions of approach or method, on the validity and importance of the materials studied, on how arguments or interpretations should be structured or conceptualized. These disagreements may take place between departments at different universities, or within a single department. Disciplines within the humanities are in flux, and we suffer the additional burden of finding ourselves in a social and cultural world whose regard for humanistic work is markedly diminished. We often scramble to re-define our relevance while the ground shifts beneath our feet. To seek a stable set of ostensibly objective standards for measuring our work is to misrecognize the very essence of our work. These same standards risk becoming the instruments of this misrecognition.

In any case, the process of review for tenure and promotion, as formalized by Academic Affairs and by the more extensive guidelines which each unit has created, and for which each unit has secured approval both by its respective college and by Academic Affairs, already accounts for such factors as the stature of a press or journal, the rigor with which books and articles are reviewed, the quantity of publications balanced against their quality, and the impact which the faculty member’s research has had, or may be expected to have. But why the need to strip these judgments of their connective tissue? And for whom?

Curriculum vitae – “the course of [one’s] life.” When I was an undergraduate (at the University of Louisville, no less), I was greatly influenced by an historian of seventeenth-century Britain, Arthur J. Slavin. The dean of the college, he had been a friend of the mathematician Jacob Bronowski, recently deceased at the time, best known for his PBS series The Ascent of Man. One episode of the series begins with a blind woman carefully running her fingers over the face of an elderly, gaunt gentleman and speculating as to the hard course of his life. “The lines of his face could be lines of possible agony,” she says. The judgment is subjective, but accurate: The man, like Bronowski a Polish Jew, had survived Auschwitz, the remnants of which provide Bronowski with a physical backdrop for the dramatic and moving summation of an episode dedicated to the ramifications of the Principle of Uncertainty, which had been formulated by Werner Heisenberg just as all of Europe was about to fall victim to a despotic belief in absolute certainty. “It is said that science will dehumanize people and turn them into numbers. That is false: tragically false. Look for yourself…. This is where people were turned into numbers.”

I don’t mean to overdramatize the analogy, or even really to suggest one. I am more interested in Bronowski’s general statement that “[all] knowledge, all information between human beings, can only be exchanged within a play of tolerance. And that is true whether the exchange is in science, or in literature, or in religion, or in politics, or in any form of thought that aspires to dogma.” The dogma we are faced with today is that of corporate thinking, which is despotic in the sense that it mystifies. We in this country are inclined to think that people who have amassed great wealth know something we don’t—that they have the magic touch. It is from them and their public advocates that we hear the constant calls for governments, universities, prisons, hospitals, museums, utilities, national forests and parks to be run more like businesses. Why? (And which businesses? IBM? TWA? Pan Am? Bear Stearns? Enron? Wells Fargo?) Why is the business model the presumed natural guarantor of good organization? Why not a symphony? an eco-system? a cooperative? a republic? a citizenry? Why is the university not a model for business? Businesses certainly benefit from the talent we cultivate and send their way, outfitted with the knowledge, the verbal agility, the conceptual power that make up our stock in trade.

Our current national political scene presents us with constant images of promiscuous, self-reproducing wealth. Within this context, which is an extreme one, it is urgent that we as a collective make our case, and in terms commensurate with our self-understanding as researchers, thinkers, writers, fine artists, and teachers, not in terms that conform so transparently to the prevailing model of worker productivity.

Those who maintain that inert numbers are the only means we have for communicating our value have already been proven wrong by our own provost. I call upon our president, our provost and our many deans to bring their considerable talents, their public stature, as well as their commitment to the University, to bear on our cause. Many of us, I’m sure, are ready to support you.

With respect and thanks,

Ken

Kenneth S. Calhoon, Head
Department of Comparative Literature
University of Oregon
Eugene, OR 97403-5242

CAS faculty meet today at 2PM for “Metrics, Humanities, and Social Science”

Dear Humanities and Social Science faculty,

Please join your colleagues Scott DeLancey (Linguistics), Spike Gildea (Linguistics), Volya Kapatsinski (Linguistics), Leah Middlebrook (Comparative Literature), Lanie Millar (Romance Languages), and Lynn Stephen (Anthropology) for a discussion of metrics for measuring our departmental research quality and the quality of our graduate programs. The panel will briefly summarize work done in some of our departments to identify what we value in our own work, ways to measure how well we achieve goals we value, and how we might take leadership in moving comparator institutions towards identifying and measuring their goals in comparable ways.

Tuesday, February 27 2:00-3:30 pm Gerlinger Lounge

Thanks to Lanie, Leah, Lynn, Scott, Spike, and Volya for their willingness to lead a timely discussion as we all consider how to create meaningful and useful metrics for our departments and disciplines.

Karen Ford and Phil Scher

More misguided metrics – this time it’s “learning outcomes” assessment

UNC History Professor Molly Worthen in the NYT on learning outcomes assessment:

I teach at a big state university, and I often receive emails from software companies offering to help me do a basic part of my job: figuring out what my students have learned.

If you thought this task required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.

It’s true that old-fashioned course grades, skewed by grade inflation and inconsistency among schools and disciplines, can’t tell us everything about what students have learned. But the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. …

No intellectual characteristic is too ineffable for assessment. Some schools use lengthy surveys like the California Critical Thinking Disposition Inventory, which claims to test for qualities like “truthseeking” and “analyticity.” The Global Perspective Inventory, administered and sold by Iowa State University, asks students to rate their agreement with statements like “I do not feel threatened emotionally when presented with multiple perspectives” and scores them on metrics like the “intrapersonal affect scale.” …

UO’s federal accreditor is the not very transparent Northwest Commission on Colleges and Universities (NWCCU). Their website has a message from their interim president:

I am writing to thank you for your participation in and support of the activities we initiated last November to gather information from you about how NWCCU can better achieve its mission of assuring educational quality, enhancing institutional effectiveness, and fostering continuous improvement. Your response to the survey and participation in the Annual Meeting and Town Halls guided development of a report from the Task Force on Renewal of Recognition that was accepted by the Board of Commissioners at its January 2018 meeting.

One of the most consistent recommendations received was that we improve communication with the member institutions. This message is part of a larger communication strategy that we are implementing to move forward on the recommendations of the Task Force.

Speaking of communication, good luck trying to find the Task Force report on their website.

UO’s website at https://accreditation.uoregon.edu/ documents the years of work faculty and administrators have spent on this assessment crap on orders from the NWCCU. More is coming.

UK research councils & Nature unimpressed by VP Brad Shelton’s shiny new metrics plan

2/7/2018: From The Times:

All seven of the UK’s research councils have signed up to a declaration that calls for the academic community to stop using journal impact factors as a proxy for the quality of scholarship.

The councils, which together fund about £3 billion of research each year, are among the latest to sign the San Francisco Declaration on Research Assessment, known as Dora.

Stephen Curry, the chair of the Dora steering committee, said that the backing of the research councils gives the initiative a “significant boost”.

Dora was initiated at the annual meeting of the American Society for Cell Biology in 2012 and launched the following year. It calls on researchers, universities, journal editors, publishers and funders to improve the ways they evaluate research.

It says that the academic community should not use the impact factor of journals that publish research as a surrogate for quality in hiring, promotion or funding decisions. The impact factor ranks journals according to the average number of citations that their articles receive over a set period of time, usually two years.

Professor Curry, professor of structural biology at Imperial College London, announces the new signatories to the declaration in a column published in Nature on 8 February. …
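For anyone who hasn't seen it written out, the two-year impact factor described in the excerpt reduces to a single division. A minimal sketch, with made-up example numbers:

```python
# The standard two-year journal impact factor: citations received in
# year Y to items the journal published in Y-1 and Y-2, divided by the
# number of citable items it published in those two years.

def impact_factor(citations_to_prior_two_years, items_prior_two_years):
    """Average citations per recent citable item, per the two-year window."""
    return citations_to_prior_two_years / items_prior_two_years

# A journal that published 200 citable items in 2016-17, whose 2018
# citations to those items total 900:
print(impact_factor(900, 200))
```

Note that this is a property of the journal, not of any paper in it – which is precisely why Dora objects to using it as a surrogate for the quality of individual work.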

1/26/2018: Nobel laureate unimpressed by VP Brad Shelton’s shiny new metrics plan

The 2016 Nobel Prize for Economics went to Oliver Hart and Bengt Holmstrom, for their life work on optimal incentive contracts under incomplete information. Holmstrom started out in industry, designing incentive schemes that used data driven metrics and strong incentives to “bring the market inside the firm”. However, as he said in his Nobel Prize lecture:

Today, I know better. As I will try to explain, one of the main lessons from working on incentive problems for 25 years is, that within firms, high-powered financial incentives can be very dysfunctional and attempts to bring the market inside the firm are generally misguided. Typically, it is best to avoid high-powered incentives and sometimes not use pay-for-performance at all.

I thought that Executive Vice Provost of Academic Operations Brad Shelton and the UO administration had learned this lesson too, after the meltdown of the market-based “Responsibility Centered Management” budget model that Shelton ran. Apparently not. Today the Eugene Weekly has an article by Morgan Theophil on “Questionably measuring success” which focuses on UO’s $100K per year contract with Academic Analytics for their measure of faculty research “productivity”.

Brad Shelton, UO executive vice provost of academic operations, says Academic Analytics measures faculty productivity by considering several factors: How many research papers has this faculty member published, where were the papers published, how many times have the papers been cited, and so on.

“Those are a set of metrics that very accurately measures the productivity of a math professor, for example,” Shelton says.

No they don’t. They might accurately count a few things, but those things are not accurate or complete measures of a professor’s productivity, and as Holmstrom explains later in his address – in careful mathematics and with examples such as the recent Wells Fargo case – there are many pitfalls to incentivizing inaccurate, incomplete, and easily-gamed metrics. Most obviously, incentivizing the easily measured part of productivity raises the opportunity cost to employees (faculty) of the work that produces the things that the firm (university) actually cares about, so true productivity may actually fall.
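Holmstrom's multitasking logic fits in a few lines. A toy model, with illustrative numbers of my own choosing (not from his lecture): a professor splits one unit of effort between a measured task (countable publications) and an unmeasured one (mentoring, refereeing, long-term projects) that the university is assumed to value more.

```python
# Toy multitasking model: pay tied to the measured task alone pulls
# all effort toward it, even when that lowers true value.

def university_value(effort_measured):
    """True value to the university when the unmeasured task is
    (by assumption) worth twice as much per unit of effort."""
    effort_unmeasured = 1.0 - effort_measured
    return 1.0 * effort_measured + 2.0 * effort_unmeasured

balanced = university_value(0.5)  # no metric incentive: effort split evenly
gamed = university_value(1.0)     # pay tied to the metric: all effort chases it

assert gamed < balanced  # measured "productivity" rose; true value fell
```

The measured output doubles, the university is worse off – which is the whole of Holmstrom's warning in miniature.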

As the EW article also explains, UO has spent $500K on the Academic Analytics data on faculty “productivity” (i.e. grants, pubs, and citations) over the past 5 years, prompted in part by pressure from former Interim President Bob Berdahl, who now has a part-time job with Academic Analytics as a salesman.

Despite this expenditure, UO has never used the data for decisions about merit and promotion, in part because of opposition from the faculty and the faculty union, and in part because of a study by Spike Gildea from Linguistics documenting problems with the accuracy of the AA data. And today the Chronicle has a report on the vote by the faculty at UT-Austin to join Rutgers and Georgetown in opposing use of AA’s simple-minded metrics.

Meanwhile back at UO, VP Shelton is trumpeting the fact that AA has been responsive to complaints about past data quality:

“What we found is that Academic Analytics data is very accurate — it’s always accurate. If there are small errors, they fix them right away,” Shelton says.

Always accurate at measuring what?

Word from the CAS faculty heads meeting yesterday is that UO will not require departments to use the AA data – but that we’ll keep paying $100K, about the salary of one scarce professor, for it. Why? Because some people in Johnson Hall don’t understand another basic economic principle. When you’re in a hole, stop digging:

I forget who got the Nobel Prize for that one.

Here’s a draft of the sort of departmental incentive policies that are now floating around, in response to Shelton’s call:

Keep in mind that even if your department decides to develop a more rational evaluation system for itself, there will be nothing to prevent the Executive Vice Provost of Academic Operations from using the Academic Analytics data to run its own parallel evaluation system.

The Tyranny of Metrics

InsideHigherEd’s interview with Jerry Muller about his new book. Published by the high impact-factor Princeton University Press. One excerpt:

Q: Some colleges, government agencies and businesses promote tools to evaluate faculty productivity — number of papers written, number of citations, etc. What do you make of this use of metrics?

A: Here too, metrics have a place, but only if they are used together with judgment. There are many snares. The quantity of papers tells you nothing about their quality or significance. In some disciplines, especially in the humanities, books are a more important form of scholarly communication, and they don’t get included in such metrics. Citation counts are often distorted, for example by including only journals within a particular discipline, thereby marginalizing works that have a transdisciplinary appeal. And then of course evaluating faculty productivity by numbers of publications creates incentives to publish more articles, on narrower topics, and of marginal significance. In science, it promotes short-termism at the expense of developing long-term research capacity.

More on the $600K Brad Shelton has dropped on Academic Analytics so far here.