
Bad news from Italy for Brad Shelton’s metrics scheme

At this point the whole metrics fiasco is so toxic that (almost) everyone involved wants to just drop it. Yet it is a well-known fact that Johnson Hall never makes a mistake. What a dilemma!

At his recent metrics town hall, after hearing the litany of objections from faculty and heads, Provost Banavar offered the ingenious solution of declaring that any proposal a department submits – even a purely verbal description of faculty research productivity that refuses to categorize journals and presses by quality – will count as “metrics”.

Dilemma resolved!

Meanwhile here’s the news from Italy – thanks to UO Psych department prof Sanjay Srivastava for the tip.

Self-citations as strategic response to the use of metrics for career decisions

Marco Seeber, Mattia Cattaneo, Michele Meoli, Paolo Malighetti

There is limited knowledge on the extent to which scientists may strategically respond to metrics by adopting questionable practices, namely practices that challenge the scientific ethos, and the individual and contextual factors that affect their likelihood. This article aims to fill these gaps by studying the opportunistic use of self-citations, i.e. citations of one’s own work to boost metric scores. Based on sociological and economic literature exploring the factors driving scientists’ behaviour, we develop hypotheses on the predictors of strategic increase in self-citations. We test the hypotheses in the Italian Higher Education system, where promotion to professorial positions is regulated by a national habilitation procedure that considers the number of publications and citations received. The sample includes 886 scientists from four of science’s main disciplinary sectors, employs different metrics approaches, and covers an observation period beginning in 2002 and ending in 2014. We find that the introduction of a regulation that links the possibility of career advancement to the number of citations received is related to a strong and significant increase in self-citations among scientists who can benefit the most from increasing citations, namely assistant professors, associate professors and relatively less cited scientists, and in particular among social scientists. Our findings suggest that while metrics are introduced to spur virtuous behaviours, when not properly designed they favour the usage of questionable practices.
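To make the mechanism the paper describes concrete, here is a minimal sketch of how self-citations can move a citation-based metric. The h-index is used as the example metric and all the citation counts are hypothetical – this is an illustration of the general incentive, not the specific indicators used in the Italian habilitation procedure.

```python
def h_index(citation_counts):
    """h-index: the largest h such that h papers each have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical paper-level citation counts for one author,
# split into citations from others vs. self-citations.
external = [10, 8, 6, 4, 2, 1]
self_cites = [0, 1, 2, 3, 3, 4]
total = [e + s for e, s in zip(external, self_cites)]

print(h_index(external))  # 4 without self-citations
print(h_index(total))     # 5 with self-citations included
```

A modest number of well-placed self-citations is enough to push a paper over the next threshold, which is why metrics that count raw citations (rather than, say, excluding self-citations) create exactly the strategic opportunity the study documents.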

11 Comments

  1. Dog 03/24/2018

    Only the Italians would talk about virtuous behavior …

  2. honest Uncle Bernie 03/24/2018

    Word is that we have the board to thank for the demand for metrics and the ensuing insanity. The same Board that apparently can’t gather simple budget data needed to make informed observations, let alone decisions.

    Any useful metrics are probably available at maybe 5% of the expenditure of effort. Want to know how UO mathematics ranks? Can get a pretty good idea in a minute online. Want to know about the sub-discipline of analysis? Ditto. Over the course of 50 years? A little more will get it. How is UO resourced? We all know that.

    Can anyone bring home the bacon? We know that too, right?

  3. honest Uncle Bernie 03/24/2018

    By the way, has anyone heard metrics on our students? Anecdotes abound of badly lagging performance lately, at least in my circles. Is UO quietly scraping closer to the bottom? It would fit with declining enrollment, especially among foreign, er, international students.

  4. Elliot Berkman 03/24/2018

    Good thing the metrics being discussed at UO are not tied to promotion or pay in any way!

    I think it is important for readers of this blog to know that the scholarly literature on metrics does not support the claims being made here about perverse incentives. It is unclear to me why UO Matters is using easily disproven arguments to scare people about metrics instead of engaging in discussion of the substance of the issue, which would be welcome.

    • uomatters Post author | 03/25/2018

      Hi Elliot –

      I and almost everyone else believe these dept level metrics will inevitably drive individual level promotion and merit decisions. They’ll be cheap, easily available information that the dept will have to collect anyway as part of the “metrics” report. Why wouldn’t the dept and the college and the provost use them? Why wouldn’t the provost use the department’s ranking of journals to determine whether or not to match outside offers, reject or approve a job candidate, etc?

      The misuse of metrics has a long history. For example, when Simon Kuznets developed GDP as a measure of economic activity, he explicitly warned against its use as a welfare measure. Now we’ve got Trump using the rising GDP numbers to tell us he’s Making America Great Again.

      But hey, I’ve been called a skeptic before. In fact, as part of the Econ department’s current metrics scheme, faculty get 1 point each time we get called a skeptic, and 2 points for being called a cynic. So far this year I’m well below the department median on this dimension, and I’m just trying to boost my numbers. Sorry, nothing personal!

      • Dog 03/25/2018

        My department already uses this kind of metric to determine merit raises – it always has.

        The only time the department semi-tried to not give someone tenure, that decision was entirely metrics-based.

      • Elliot Berkman 03/25/2018

        Thanks for clarifying your argument. This is helpful in advancing the discussion. As I understand it, the concern is less about the stated use of the metrics – for understanding how departments contribute to the mission of the university – and more about the slippery slope where they might lead in terms of individual evaluation. It makes sense to consider that, and I have two points that mitigate the concern.

        First, United Academics UO and the Provost recently signed a Memorandum of Understanding (MOU) to prevent these metrics from being used for individual promotion and tenure decisions unless approved by the department and Union through the usual channels for changing the collective bargaining agreement (more here: https://mailchi.mp/uauoregon/important-update-on-metrics-mou). The fact that UAUO and OPAA were easily able to reach this agreement demonstrates good faith on the part of both, and that potential concerns can be addressed through conversation.

        However, that MOU does not address decisions related to outside offers (i.e., retention offers) and approvals of new hire requests. That leads to my second point: OPAA could already be using metrics for those purposes, but to my knowledge it has not. Nothing prevents OPAA from, say, counting pubs or grant dollars or looking up IFs or – heaven forbid – using Academic Analytics to render decisions, but it has not. Instead, it has listened to expert judgment, perhaps alongside available metrics, to render decisions. If anything, we could supplant those crude measures with better, department-informed ones. And past behavior is a good predictor of future behavior.

        Another important point to emphasize is that the departments are nominating these metrics. If a department says it contributes to the research mission of the university through activities A, B, and C, but then requests a new line for someone doing D, it makes sense for the Provost to push back. And if a department cannot make a compelling argument for a candidate who does not fit the usual mold in the department (because, again, there is always room for discussion), then it seems totally reasonable to deny that request. Right now we have the opportunity to think about what A, B, and C are. Presumably we are already evaluating candidates on those attributes in an implicit way – right now is the chance to make those criteria explicit.

        I think the GDP, which you brought up, is a useful analogy. Of course, it can be misused. But no serious economist (and I apologize if I am accusing you of being an unserious economist here!) thinks that we should not calculate GDP and should instead set economic policy based only on expert judgment (which is necessarily informed by metrics, but through an opaque, implicit process). Instead, policy makers such as the Fed Board look at a range of metrics and make a decision on the basis of those metrics together with their expert opinion. They also provide a rationale for their decision that describes how they used the information to make it. This is more or less exactly what OPAA is proposing to do with respect to resource allocation at the department level.

        Right now, they have opinions (whether expert or not is another question), plus metrics about budgets and SCH. It makes all the sense in the world to me to supplement those with broad bandwidth information about departmental contributions to the research mission of the university, which right now really are not formally represented in the decision making process.

        • Dog 03/25/2018

          Is this how we are going to hire Knight Campus faculty?

        • oldtimer 03/25/2018

          Some good points, but the experience with budget model metrics is not encouraging. Once the budget model started paying for students in seats, with little attention to quality, the campus quickly saw a sharp shift to large lecture classes taught by poorly paid NTTFs, and a drop to the bubble on membership in the AAU. Oh yes, and a faculty union in response. Is there any evidence that the current experts understand the subtleties required in the use of metrics, as exemplified, for example, by those prior mistakes?

          • Dog 03/25/2018

            And there could be some small positive net good that comes out of this metrics exercise, as departments might get a bit self-educated through self-reflection.

            Once upon a time we did have external reviews of departments – I am not sure if that practice has continued. While nothing ever became of those reviews in terms of resources, or Deans even paying attention to the dept (Deans treat these things around here as checklist items only), these reviews did point out some areas in which the department was lagging behind (in addition to where it was ahead), and that information was somewhat useful.

  5. Dog 03/24/2018

    Students, students?

    we don’t need no stinkin’ students

    and

    we don’t need no stinkin’ metrics
