Embrace Your Inner Algorithm


Here's your task: Based on information about individual applicants to an MBA program, you need to predict each applicant's success in the program and in subsequent employment. Specifically, you'll be given basic information — such as the applicant's undergraduate major, GMAT scores, years of work experience and an interview score — and you'll need to assess the applicant's success (relative to other applicants) in terms of GPA in the MBA program and other metrics of achievement. Will the person be in the top quarter of all applicants? In the bottom quarter?

Now, you have a choice. You could either make these predictions yourself, or you could let a sophisticated statistical model make them. The model was designed by thoughtful analysts and based on data from hundreds of past students. It will make predictions based on the same information presented to you: undergraduate major, GMAT scores, years of work experience, and so on. Which do you choose: yourself, or the model?

In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania's Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon "algorithm aversion," where "algorithm" is intended broadly, to encompass — as they write — "any evidence-based forecasting formula or rule."

If algorithm aversion is a real and robust phenomenon, it could have enormous practical implications. Statistical models can outperform people when it comes to predictions in a variety of domains, including academic performance, clinical diagnosis and parole violations. So, if people are systematically biased against the best tools for prediction — and instead favor less-reliable human judgments — that could result in suboptimal decisions with costs for both individuals and society at large.

So, where might algorithm aversion come from?

The authors of the new research — Berkeley Dietvorst, Joseph Simmons and Cade Massey — suggest that algorithm aversion is due, at least in part, to a greater intolerance for errors generated by algorithms than for those generated by humans. Such errors led to a reliable decrease in people's confidence in the algorithm's predictions, but seeing comparable (or more frequent) errors from a human didn't lead to an equivalent drop in confidence in the human predictor. This might point us toward an answer as to why people favor humans over algorithms, but only by raising a more focused question: Why are we more tolerant of human error than of algorithmic error?

Additional data from the studies provide some hints. For example, the researchers found that people tended to think that humans would be better than the algorithm when it came to detecting exceptions, finding underappreciated candidates, learning from mistakes and getting better with practice. In contrast, they thought the algorithm would be better at avoiding obvious mistakes and weighing information consistently. If people assume — rightly or wrongly — that algorithms employ simple, fixed rules, then errors could be taken as evidence that the algorithm is bad, whereas human error could be explained in more nuanced ways, and with the possibility for improvements in future performance.

This research helps establish an important phenomenon with theoretical and practical implications. But is it truly evidence that people have an aversion towards algorithms?

There are a few reasons for caution. First, a systematic preference might not indicate an aversion — a term that implies an affective or emotional component to the decision. I might prefer to eat Thai food over Indian food, for example, without having an aversion to Indian food; I might even prefer Indian food on occasion.

Second, and more fundamentally, I'm left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human "algorithms" have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they're implementing better algorithms than the computer-based alternatives.

So, here's what I want to know. If this research reflects a preference for "human algorithms" over "nonhuman algorithms," what is it that makes an algorithm human? And if we don't conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?


Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo.

