Researchers have found that people trust algorithms over humans
When a problem gets more complicated, trust in AI increases, the research shows.
What do people trust more: an answer they believe was generated by an algorithm, or one from a fellow human? It turns out people are more likely to trust an algorithm's answer than a crowd-sourced evaluation, especially when it comes to tough problems.
A research paper from the University of Georgia found that people placed more trust in an algorithm's answer as the problem presented to them grew more complex. Given that algorithms already shape our daily lives — through online retail, advertising, streaming recommendations, live-event listings, Google Maps directions, "beauty scores," and even matchmaking — this kind of reliance on the technology should concern anyone remotely familiar with artificial intelligence's numerous flaws.
The methodology — In one test, 1,500 participants looked at photos of crowds and decided whether a human's or an algorithm's tally of the people pictured was more likely to be correct. Subsequent images showed larger crowds, and as the number of people in the pictures grew, so did participants' propensity to favor the algorithm.
Another ongoing test involves investigating "the effect of varying the quality of advice and how that relates to algorithmic appreciation." Yet another test involves comparing human appreciation for algorithmic advice "to the advice of a crowd."
With the crowd photos study, researchers noted, "In three pre-registered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased."
Researchers say that this effect of relying on the algorithm over human-generated answers "persisted even after controlling for the quality of the advice, the numeracy, and accuracy of the subjects."
Misplaced trust — As The Next Web points out, AI models still have trouble differentiating between inanimate objects and people in photos. It takes copious amounts of training on millions of images to get a model to do so accurately.
Putting our faith in AI when problems get complex is precisely the sort of behavior big tech is positioned to exploit. And countless examples exist of AI replicating the biases and prejudices of those who create it. If this study shows anything, it's that we shouldn't be so quick to put our faith in algorithms: there are plenty of tasks at which they are still worse than humans, and they tend to inherit the same failings and blind spots as the humans who build them anyway.