CAPTCHAs are an annoying security tool intended to prevent spam by testing website visitors to make sure they're actual people and not automated bots. Sometimes you have to input the characters you see on the screen; other times you're asked to identify objects in a collection of pictures. But as artificial intelligence gets better at completing these tasks, security methods will have to keep up.
Researchers from the security firm F-Secure recently figured out how to use AI to fool Outlook's text-based CAPTCHA into thinking a human solved it.
Those squiggly letters are tough — F-Secure said it pulled off the feat by manually labeling over 1,400 CAPTCHAs for its algorithm to learn from. The AI had trouble distinguishing between lowercase and uppercase letters, however, confusing "I" (uppercase i) with "l" (lowercase L), for example. But the researchers noticed that Outlook's CAPTCHA never uses a lowercase L, and once they told the algorithm never to guess that letter, its accuracy shot up from 16 percent to 47 percent. Funnily enough, some of the algorithm's mistakes traced back to the humans mislabeling letters in the first place, like mistaking "Y" for "V."
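The alphabet trick is simple to picture: if a character never appears in the CAPTCHA, just strike it from the model's candidate set. Here's a minimal sketch of that idea (this is not F-Secure's actual code, and the probability scores are invented for illustration):

```python
# Sketch: excluding a character the CAPTCHA never uses can flip an
# otherwise-ambiguous prediction. Scores below are made-up examples.

def predict_char(scores, excluded=frozenset()):
    """Pick the highest-scoring character, skipping any excluded ones."""
    candidates = {c: p for c, p in scores.items() if c not in excluded}
    return max(candidates, key=candidates.get)

# Hypothetical classifier output for a glyph that is really an uppercase "I":
scores = {"l": 0.48, "I": 0.45, "1": 0.07}

print(predict_char(scores))                  # without the rule, "l" wins
print(predict_char(scores, excluded={"l"}))  # with the rule, "I" wins
```

Ruling out just one confusable character resolves every "I"-versus-"l" tie in the model's favor, which is consistent with the large accuracy jump the researchers reported.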
Once the labeling was done, the researchers taught their algorithm to mimic the typing of an actual human. If it typed in an answer too quickly, Outlook's CAPTCHA system would recognize the behavior as atypical and block it. It's easy enough to tell the software to type its answer at a more lifelike speed, though, so that's what they programmed it to do.
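Slowing the bot down amounts to sending one keystroke at a time with randomized pauses in between, rather than pasting the whole answer at once. A minimal sketch of that behavior, assuming any send-keys-style callback from a browser-automation tool (the delay values here are illustrative, not F-Secure's):

```python
import random
import time

def type_like_a_human(send_key, text, min_delay=0.08, max_delay=0.25):
    """Send characters one at a time with a random pause after each,
    instead of submitting the whole answer instantly."""
    for ch in text:
        send_key(ch)
        time.sleep(random.uniform(min_delay, max_delay))

# Usage with a stand-in sink that just collects keystrokes:
typed = []
type_like_a_human(typed.append, "x7Kp3", min_delay=0.0, max_delay=0.01)
print("".join(typed))  # → x7Kp3
```

The randomized per-character delay is what makes the cadence look human; a fixed interval would itself be a detectable machine signature.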
“We said it last year and we will say it again: text-based CAPTCHAs are just not cutting it anymore,” F-Secure said in a blog post. “We are not saying that CAPTCHAs are useless, they should just not be seen as the silver bullet that stops automated attacks.”
The danger of internet bots — Bots can cause serious harm to our experience on the web by doing things like sending out spam emails en masse, or scooping up scarce concert tickets and Nintendo Switch consoles before real humans even have a chance to type in their credit card details. Companies are constantly playing cat-and-mouse with these automated tools, looking for new ways to block them from ruining the web. But as AI becomes more lifelike, it will be progressively more difficult to differentiate between human and robot visitors — as this demonstration from F-Secure shows.