
Faculty has wasted $600K on unused metrics data that purport to measure and incentivize excellence in Johnson Hall administrators

Last updated on 02/19/2018

Or do I have this backwards? Come to the March 16 town hall meeting to find out:

Dear Colleagues,

I have now been here for several months and I’ve seen some tremendous activity and enthusiasm among members of the faculty. I am continually impressed with the terrific work underway from a variety of disciplines.

The University of Oregon has an institutional mission of excellence—in teaching, in research and scholarship, and in service. Our university is able to celebrate its strengths in a large variety of academic disciplines including the liberal and fine arts, the physical and social sciences, and the professional programs because of the outstanding contributions from members of our tenure-track and non-tenure-track faculty. To aid in our mission, we are creating new systems and tools, such as a new resource allocation system and the Institutional Hiring Plan for the recruitment of tenure-track faculty members. But how should we measure excellence? How should we decide which areas to target for faculty recruitment? How should we know whether we are continuously improving?

At the UO, we are in the process of developing several types of metrics to help address these questions.

I know that there are a lot of questions about what this means, and I have heard concerns that the metrics will be used inappropriately for things such as “ranking” faculty members or departments. I have also heard rumors that we will be using metrics to establish some sort of threshold at which faculty members could be “cut” if they do not meet that threshold. I want to help allay some concerns and answer some questions. As a former dean and faculty member myself, I understand how questions and even some anxiety can arise when metrics are introduced into a conversation.

Before my arrival, the UO had established a work group of faculty and staff members to make recommendations on metrics. The members devoted a great deal of time and insight to this process, and I’m grateful for their work.

It is also important that we learn from other institutions that have employed—or tried to employ—metrics. Without going into a lot of detail, please be aware that I am committed to studying what has worked and what hasn’t worked at other universities. This not only makes sense but is the responsible thing to do.

It is now important to move forward in our discussion. I am sure not all of your questions will be answered below, but I hope what follows provides some important context. If you have additional questions, do not hesitate to reach out to my office at [email protected]

We have organized our thinking around metrics in two overarching areas—operational and mission. Operational metrics objectively measure student demand and the capacity to accommodate this demand with existing instructors. This will provide initial information on capacity and need. Mission metrics help us understand how well we are collectively contributing to the university’s mission. Are we serving our students? Are we contributing to our professional fields? Are we expanding knowledge? They represent ways to understand our impact, hold ourselves accountable, and assist with allocating resources.

Mission metrics can be disaggregated into three components: undergraduate education, graduate education, and research. Some of these will look at activity and performance specific to the UO (e.g., time to degree, general participation in first-year programs) and others will look at discipline-specific information as articulated by schools and colleges (e.g., citations, publications, awards).

The process of defining local-level mission metrics must start with local units, where the disciplinary experts reside. The provost’s office will coordinate this process with assistance from the deans of the schools and colleges. I appreciate the work being done within individual departments, schools, and colleges to develop these latter metrics. You, not I, know how best to assess quality in your area: how the College of Design assesses performance will differ from how the natural sciences or the law school do, and so forth.

We have the operational metrics in place and are currently in the process of defining the local mission metrics with input from the units. When developed, the metrics will help promote and measure excellence. A thoughtful, data-driven approach to managing investments is critical, especially in a time of constrained resources. There are terrific programs here at the UO, but there are also—as in any massive organization—pockets that may not be of the highest quality. Being able to identify both is critical for strong and effective management. It is imperative that we use not just good data but the right data to inform these decisions.

However, we must of course exercise caution about focusing narrowly on particular indicators (and their movement) without keeping in mind the larger purposes for which they were created. Typically, a unit that performs well across a range of metrics is likely to be excellent. But I recognize and appreciate that there are a number of factors that go into that assessment, and thus I do not intend to have a prescribed set of “if-then” outcomes based simply on information gathered from metrics. While the conversation about metrics continues to unfold, I can emphatically state that these metrics cannot be used for individual personnel decisions unless they are added to the unit-level promotion and tenure and merit raise policies through the shared governance process established by the United Academics collective bargaining agreement.

As we continue our work on the development of these metrics, we welcome your advice and input. The goal is to have a mechanism for the transparent allocation of resources to maximally enhance the excellence of our university.

This is a new approach, and one that is likely to raise questions. I encourage you to share your thoughts, ideas, or questions with my office. You are also invited to an open town hall–style meeting on Friday, March 16, from 11:00 a.m. to noon in 156 Straub Hall. This town hall will be a two-way discussion on the purpose, value, and use of metrics as well as other topics, including the new academic allocation system, the Institutional Hiring Plan, and whatever else is on your mind.

Please feel free to contact my office with any questions you may have. I look forward to seeing you at the forum.

Best Regards,

Jayanth Banavar

Provost and Senior Vice President

4 Comments

  1. Anonymous 02/15/2018

    “I can emphatically state that these metrics cannot be used for individual personnel decisions…”

    The metrics are going to be used to allocate resources to units. But the metrics are based on what the individual faculty members in the unit do: publish books and articles, win awards, win grants. There is literally no way for units to respond to those incentives without passing the incentives to their faculty.

    Evaluation of individual faculty by these metrics may or may not be the intent, but it will predictably and inescapably be the effect.

  2. Elliot Berkman 02/18/2018

    I am sympathetic to the argument that people respond to incentives. Of course we do. But I think this particular slippery-slope argument (that incentives at the unit level will permeate down to individual faculty) has at least two problems.

    First, the procedures for rewarding individual faculty for performance (e.g., P&T, merit raises, post-tenure review) are articulated in the CBA and run by the units themselves. So the people making decisions about unit resources (provost, deans) are different from the people making decisions about faculty resources (review committees, heads).

    Second, quality evaluations are already informing resource decisions. People outside departments are already making decisions about allocations, particularly of faculty lines. I have not yet seen any evidence that the reward of getting more TTF lines or the threat of getting fewer is being felt by individual faculty to the point that they are changing their behavior in any particular way. I don’t think many of us are right now motivated to increase the quality or quantity of our work so that our units might get some goodies next year. (And I don’t actually see that changing too much once we articulate what indices of quality/quantity might be.)

    • uomatters Post author | 02/18/2018

      I don’t understand your comment.

      First: Yes, the people making the evaluations are different, but how will that prevent departments from adopting, in their own promotion and merit decisions for individual faculty, the incentives the deans will now be using? Obviously it’s in their interest to do so. More colleagues = more interesting people to work with.

      Second: While I believe you when you say quality evaluations are already informing resource decisions, it’s not clear which way this runs. UO’s law school, for example, has had many detailed metrics for many years, due to ABA rules. It was slipping on those metrics relative to other law schools. How did this decline inform JH’s resource allocation decisions? JH decided to take $2M per year from CAS and give it to Law. This year the subsidy was cut to $1M – but Law lost $2M, so JH made up the difference and then topped it off with a new faculty line for them.

      Was this the right response to the Law School’s falling metrics? Who knows. But I don’t see how the metrics informed the decision, or how they will inform future decisions, for good or bad. On the other hand I do have an idea of what the administration is proposing to pay for metrics next year – $160K for Academic Analytics, plus who knows how much for administrators to spin the data.

      Let’s call it what it is – a waste of money that will cost us a professor and produce no plausible improvement in JH decision-making.

      • Dog 02/18/2018

        Wasn’t going to bring this up, but whatever:

        An example of the high variation among departments can be seen in a simple metric: take the annual department budget and divide it by the annual number of bachelor’s degrees awarded, then compare that to the average for CAS. (A sketch of the arithmetic is at the end of this comment.)

        Doing this will, I believe, reveal very large variation, and that excessive variation is, I believe, part of the resource issue.

        Some departments are very cheap by this metric (particularly ENVS) while other departments can be expensive.

        Cheap means: relatively small faculty, lots of degrees.

        Expensive means: relatively large faculty, few degrees (in some cases fewer than the number of faculty in the dept.).

        I am not saying this metric should be used; I am saying it illuminates large variation within CAS.

        I am sure this metric has never been published.
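
        To make the arithmetic concrete, here is a minimal sketch of that cost-per-degree comparison in Python. Every budget and degree count below is made up purely for illustration, and “the average for CAS” is read here as total budget over total degrees, which is one reasonable interpretation.

        # Sketch of the cost-per-degree metric described above.
        # All budgets and degree counts are hypothetical.
        departments = {
            # dept: (annual budget in dollars, bachelor's degrees per year)
            "ENVS":   (1_200_000, 120),  # "cheap": small faculty, many degrees
            "Dept A": (4_000_000,  30),  # "expensive": large faculty, few degrees
            "Dept B": (2_400_000,  80),
        }

        # One reading of "the average for CAS": total budget / total degrees.
        cas_average = (sum(b for b, _ in departments.values())
                       / sum(d for _, d in departments.values()))

        # Report each department's cost per degree relative to that average,
        # cheapest first.
        for dept, (budget, degrees) in sorted(
                departments.items(), key=lambda kv: kv[1][0] / kv[1][1]):
            cost = budget / degrees
            print(f"{dept}: ${cost:,.0f} per degree "
                  f"({cost / cas_average:.2f}x the CAS-wide figure)")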
