AI Bots May Have Defeated CAPTCHA Tests for Good

They’re also getting tougher for humans.

Written by Joanna Thompson
Concept of CAPTCHA — Completely Automated Public Turing test to tell Computers and Humans Apart.

Are you a robot?

It’s a simple question — one that most of us regularly answer in the form of a CAPTCHA, a hacker-averting quiz that asks us to distinguish a squiggly “l” from an “S,” or perhaps select all of the traffic lights from a grid of blurry photos. But as it turns out, this straightforward query is one of the most pressing in all of cybersecurity.

In recent years, sophisticated text- and image-based AI wielded by hackers has sparked an arms race with CAPTCHA programs. Machine learning may even soon render these straightforward Turing tests obsolete — that is, unless they get trickier.

Fancy bots used by hackers could render CAPTCHA tests obsolete.


Today, bot operators looking to sidestep a CAPTCHA can easily find a cheap (or free) “solver” online. “It’s a constantly evolving scenario,” Ting Wang, an AI and cybersecurity researcher at Penn State, tells Inverse.

At the same time, efforts to ramp up CAPTCHAs have made them tougher for humans to crack. In 2014, Google even pitted an algorithm against one of its gnarliest CAPTCHAs. The algorithm passed with flying colors, but only 33 percent of human users were able to solve it.

Cracking CAPTCHAs

CAPTCHA tests arrived in the late ‘90s thanks to hackers.


Ironically enough, hackers essentially created CAPTCHAs in the late 1980s and early ‘90s. Early internet forum users realized that moderator programs monitored words related to certain sensitive topics. To post about said topics anyway, they would trick the bots by replacing specific letters with numbers or symbols (a method that eventually evolved into the jargon known as leetspeak).
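The substitution trick can be sketched as a simple character map — swap letters for look-alike digits so a naive keyword filter misses the word while a human still reads it easily. The particular mapping below is just one common convention, not a fixed standard:

```python
# Illustrative leetspeak substitution: replace letters with
# look-alike digits so simple string-matching filters fail,
# while the word stays readable to a person.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def to_leet(text: str) -> str:
    """Apply the look-alike substitutions to lowercase text."""
    return text.lower().translate(LEET)

print(to_leet("password"))  # p455w0rd
```

A filter scanning for the literal string “password” never sees it — the same mismatch between literal pattern matching and human pattern recognition that distorted-text CAPTCHAs later exploited in reverse.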

By the late ‘90s, computer scientists had realized that these computer-confounding lines of text could help prevent data theft by halting scammer algorithms. In 1997, a group of researchers from Carnegie Mellon University and a team at cybersecurity company Sanctum each independently developed a way to thwart bots using distorted text.

In a feat of acronym acrobatics that would make the folks at NASA (who brought us gems like MESSENGER and NuSTAR) proud, the Carnegie Mellon team named the system “CAPTCHA” for “Completely Automated Public Turing test to tell Computers and Humans Apart.”

CAPTCHAs usually consist of a short line of slightly warped text and/or numbers — but not beyond most people’s recognition. “Basically, the idea is to use a set of tasks that are difficult for computers, but kind of easy for humans,” Wang says.

Humans (supposedly) have no trouble reading these funhouse-mirror symbols. But for many basic bots, they’re unrecognizable. Or at least, they were in the past.

CAPTCHAs are meant to filter out malicious programs that might otherwise spam a website or attempt to steal, say, your credit card information. As a result, they became ubiquitous in the mid-2000s, popping up with virtually every internet purchase.

To beef up online security, computer scientists have come up with various additions to simple text-based tests. Some CAPTCHAs now use visual cues, like picking out traffic lights or distinguishing between pictures of cats and dogs.

Others seek out uniquely human behavioral qualities with CAPTCHAs that unlock depending on how the user interacts with their computer. Whenever you type, for example, there are tiny differences in the amount of time between keystrokes because your fingers have to travel across the keyboard. “We try to model the humanity of that interaction,” Aythami Morales Moreno, a biometrics researcher at the Autonomous University of Madrid in Spain, tells Inverse.

But there’s a problem with designing better CAPTCHAs: They have a built-in ceiling. “If it is too difficult, people give up,” Cengiz Acartürk, a cognition and computer scientist at Jagiellonian University in Kraków, Poland, tells Inverse.

Acartürk and his colleagues conducted a 2021 study in which they scanned the brains of volunteers as they attempted to solve a series of CAPTCHAs. They noticed that the participants were very engaged, as evidenced by the relatively large amounts of oxygen used by their brains — but only up to a point. When they encountered a CAPTCHA that was too tough, the subjects gave up; whatever website they were trying to access wasn’t worth the effort.

A cryptic future

Retina scanning could offer a more secure way to confirm people’s identities.


So, are CAPTCHAs still worth using?

It depends on the degree of security a website or app needs. CAPTCHA “kind of protects against the low-profile bots, but cannot defend against more sophisticated ones,” Wang says. Similar to the volunteers in Acartürk’s study, hackers using such low-level bots might give up and move on if they don’t get past a hard CAPTCHA after a couple of tries. “So, they are still useful in some limited contexts.”

It’s unclear how engineers will upgrade future CAPTCHA systems, but Wang suspects that — in a case of fighting fire with fire — solutions will probably be designed by AI.

Acartürk thinks that CAPTCHAs might become obsolete in the next couple of decades in favor of other authenticating technologies, such as retina scanning and fingerprint verification.

“My feeling is that we will have authentication systems which will be dispersed throughout the environment,” he says. Though, he admits, CAPTCHA could hold on — after all, 20 years ago, Bill Gates predicted that passwords were on their way out.

In the meantime, the cybersecurity company Cloudflare announced a CAPTCHA alternative called Turnstile this past September. Instead of forcing you to point out photos of motorcycles or taxis, Turnstile works by checking your browser for human behavior. And best of all, the process only takes a second — potentially saving hours of our time in the long run.
