A robotic security guard destroyed itself this week by taking a long underwater walk in the Washington, D.C., mall it was supposed to patrol. The robot, which has made headlines as being “suicidal” or having “drowned itself,” in fact malfunctioned and tumbled into a fountain.
The egg-shaped security guard was a K5 robot, manufactured by a security company called Knightscope. The K5, which patrols several shopping malls, parking lots, and office complexes, is touted for advanced scanners and video surveillance that can stream panoramic video, read license plates, and identify the IP addresses of individual smartphones.
Facial detection systems and other sensors can alert human operators to a potential problem while the K5 continues to collect and store vast amounts of biometric data. Knightscope advertises that the K5 will also soon be able to detect guns and alert the authorities.
But this isn’t the first time a K5 has run into problems: last summer, one of the six-foot-tall, 400-pound robots failed to detect a toddler, knocking the child over and rolling over his foot without stopping.
It’s not yet known why the K5 failed to notice the big-ass fountain in its path, so people on Twitter have taken to speculating, making some tasteless jokes about the robot committing suicide to escape a boring, thankless job. But one robotics engineer has a reasonable guess as to what happened.
Why Did the Robot Crash Into the Fountain?
“I could imagine it was ‘cliff detectors’ — exactly what they sound like — reflecting off the water’s surface, giving a false positive,” Jeff Masters, a robotics engineer, tells Inverse. Masters says he’s tested “everything imaginable” on robots with similar guidance systems, but wasn’t sure if he had ever “tried to find out if it would ‘see’ water in that situation.”
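Masters’s hypothesis can be sketched in a few lines of code. This is a hypothetical illustration, not Knightscope’s actual software: the function name, expected floor distance, and tolerance are all invented for the example. A downward-facing cliff detector typically checks whether a range reading matches the expected distance to the floor; a specular reflection off still water can return a floor-like reading, so the check passes when it shouldn’t.

```python
# Hypothetical sketch of a downward-facing "cliff detector" check.
# All names and thresholds here are illustrative assumptions.

FLOOR_DISTANCE_M = 0.10   # expected range from sensor to the floor
TOLERANCE_M = 0.03        # readings within this band count as solid ground

def ground_looks_solid(range_reading_m: float) -> bool:
    """Return True if the downward range reading matches the expected floor."""
    return abs(range_reading_m - FLOOR_DISTANCE_M) <= TOLERANCE_M

# Over tile, the sensor sees the floor where it expects it.
assert ground_looks_solid(0.10)

# Over a real drop-off, the beam travels much farther than expected,
# so the check fails and the robot should stop or turn.
assert not ground_looks_solid(0.55)

# Over still water, a specular reflection can bounce the beam straight
# back at a floor-like distance: the check passes (a false positive
# for solid ground) and the robot keeps driving.
reading_over_water = 0.11  # reflection mimics the floor
assert ground_looks_solid(reading_over_water)
```

The failure mode is that the sensor can’t distinguish a hard floor from a reflective liquid surface at the same apparent distance, which is consistent with Masters’s guess.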
Knightscope has yet to comment officially on what went wrong with its security robot. But Stacy Dean Stephens, Vice President of Marketing and Sales, tells Inverse that this was an isolated incident the company is actively investigating, that no one was harmed, and that Knightscope will provide the mall with a new K5 robot for free.
This marks the latest example of artificial intelligence systems failing to reliably make sense of what their cameras are picking up.
Just last week, a team of researchers demonstrated that image-detection algorithms designed for autonomous cars were susceptible to small labeling issues that, for example, could trick a car into thinking a dangerous obstacle was something innocuous.
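The fragility described above can be shown with a toy example. This is not the researchers’ actual method or model; it’s a minimal sketch with an invented linear “detector” and made-up numbers, illustrating how a tiny, targeted nudge to the input can flip a classifier’s decision.

```python
# Toy illustration of an adversarial perturbation against a linear
# "detector". The weights, inputs, and labels are invented for the demo.

def classify(weights, x, bias=0.0):
    """Label the input 'obstacle' if the score is positive, else 'clear road'."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "obstacle" if score > 0 else "clear road"

weights = [0.4, -0.2, 0.3]
x = [0.5, 0.1, -0.4]   # score = 0.20 - 0.02 - 0.12 = 0.06 > 0

assert classify(weights, x) == "obstacle"

# Nudge each input slightly in the direction that lowers the score --
# the gradient-sign trick behind many adversarial attacks.
eps = 0.1
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
# new score = 0.06 - eps * (0.4 + 0.2 + 0.3) = -0.03 < 0

assert classify(weights, x_adv) == "clear road"
```

The perturbation here is small relative to each input, yet it flips a dangerous “obstacle” into “clear road,” which is the same qualitative failure the researchers demonstrated on real car-vision systems.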