Press "Enter" to skip to content

Moneyball for professors

InsideHigherEd has a review:

Michael Lewis’s 2003 book, Moneyball — later made into a movie starring Brad Pitt — tells the story of how predictive analytics transformed the Oakland Athletics baseball team and, eventually, baseball itself. Data-based modeling has since transcended sport. It’s used in hiring investment bankers, for example. But is academe really ready for its own “moneyball moment” in terms of personnel decisions?

A group of management professors from the Massachusetts Institute of Technology think so, and they’ve published a new study on a data-driven model they say is more predictive of faculty research success than traditional peer-based tenure reviews. In fact, several of the authors argue in a related essay in MIT Sloan Management Review that it’s “ironic” that “one of the places where predictive analytics hasn’t yet made substantial inroads is in the place of its birth: the halls of academia. Tenure decisions for the scholars of computer science, economics and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools.”

… Bringing predictive analytics to any new industry means “identifying metrics that often have not received a lot of focus” and “how those metrics correlate with a measurable definition of success,” they note. In the case of academics, they say, important metrics are not just related to the impact of scholars’ past research but also details about their research partnerships and how their research complements the existing literature.

Their study, published in Operations Research, suggests that operations research scholars recommended for tenure by the new model had better future research records, on average, than those granted tenure by the tenure committees at top institutions. Defining future success as the volume and impact of a scholar’s future research, the researchers used models based on a concept called “network centrality,” measuring how connected a scholar is within networks related to success: citations, co-authorship and a combination of both. …
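For readers unfamiliar with the term, "network centrality" just means scoring each node by how well connected it is within a graph. The study's actual models aren't reproduced here, but a minimal sketch of the general idea, using the networkx library on made-up co-authorship and citation graphs (the scholar names, edges, and the 50/50 weighting are all assumptions for illustration), might look like this:

```python
# A hypothetical sketch, not the paper's model: score scholars by network
# centrality in toy co-authorship and citation graphs using networkx.
import networkx as nx

# Made-up co-authorship network: an edge means two scholars share a paper.
coauthors = nx.Graph()
coauthors.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"),
])

# Made-up citation network: an edge u -> v means u cites v.
citations = nx.DiGraph()
citations.add_edges_from([
    ("B", "A"), ("C", "A"), ("D", "A"), ("D", "C"), ("E", "D"),
])

# Eigenvector centrality on the co-authorship graph rewards being connected
# to well-connected collaborators.
coauthor_score = nx.eigenvector_centrality(coauthors)

# PageRank on the citation graph is one common stand-in for citation impact.
citation_score = nx.pagerank(citations)

# Combine the two signals with an arbitrary 50/50 weight; the study's actual
# combination of citation and co-authorship networks is not specified here.
combined = {
    s: 0.5 * coauthor_score.get(s, 0.0) + 0.5 * citation_score.get(s, 0.0)
    for s in set(coauthor_score) | set(citation_score)
}

for scholar, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{scholar}: {score:.3f}")
```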

5 Comments

  1. counter 12/20/2016

    A “moneyball” model based solely on predicting future research success leads to more accuracy in predicting future research success than a tenure review, which considers research, teaching, and collegiality. Color me surprised.

  2. Dog 12/20/2016

    My concern here is that volume of output will once again be weighted disproportionately relative to academic impact on both your research field and your teaching.

  3. just different 12/20/2016

    Data-driven analytics is almost always preferable to “professional discretion,” provided the data is used properly. The interpretation of the data needs to take into account people who either get an unfair advantage or are best positioned to game the system. For example, the SAT heavily advantages wealthier kids, in part because it can be gamed by people with the money to pay for SAT prep. Anyone who works in college admissions can tell you that a 90th percentile poor kid is a lot smarter than a 90th percentile rich kid. But without the SAT, it might have been hard to identify the smart poor kid in the first place.

  4. Anonymous 12/20/2016

    “Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling. We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.”

    Smaldino & McElreath (2016). The natural selection of bad science. Royal Society Open Science, 3, 160384.
    http://dx.doi.org/10.1098/rsos.160384
    http://rsos.royalsocietypublishing.org/content/3/9/160384

  5. honest Uncle Bernie 12/20/2016

    Yes, look how well the data-driven experts did in calling the recent election! To say nothing of the primary campaigns before.
