We Asked a Facebook A.I. Researcher Whether the App Secretly Records Us

"What crazy people would believe in."

Photo: Unsplash / Tim Bennett

Facebook’s ad recommendations are a little too good sometimes, leading to a popular theory that the social network’s app is using smartphone microphones to secretly listen in on conversations. The rumor is widespread enough to have garnered a Snopes post debunking the idea.

And as Tomas Mikolov, a research scientist with Facebook AI Research, explained in an Inverse interview, the grim reality is that humans are actually quite easy to predict.

“The idea that the Facebook application would be listening to what you are saying to your phone, and trying to infer from it something for personalized ads, just seems to be kind of like some conspiracy theory, and more like what crazy people would believe in,” Mikolov told Inverse Friday during an interview at the Human-Level Artificial Intelligence conference organized by GoodAI in Prague, Czech Republic. However, Mikolov also clarified that he’s “not working on ads, I’m not even working on applied teams, so this is basically my personal opinions rather than the opinions of someone that actually knows the reality, but I’ve been doing my PhD at speech recognition groups, so I actually know how computation-intensive speech recognition is.”

The theory has proven popular, even after Facebook categorically denied it in a June 2016 statement, where it explained that ads are “based on people’s interests and other profile information — not what you’re talking out loud about.” A New Statesman experiment in March failed to alter ads by speaking close to the phone, with the article suggesting the theory was a distraction from bigger Facebook privacy issues. Nonetheless, the theory still regularly crops up on social media and in research studies.

“Let’s assume that you’re a person in Prague, in Czech Republic,” Mikolov said. “It’s around noon. You are not speaking Czech, and now you want to search for something, and the phone will already show you ads for recommendations for some restaurants around you. You’ll think ‘oh, we were just talking about restaurants and going for lunch, so the phone was listening to me, and there must be some A.I. that knows what I’m talking about and that’s why I see these recommendations!’ But maybe no, maybe it’s actually different. There have been thousands of people in the same situation before you, in the same city, with the same background who around noon were looking for places where to go for lunch, and now you get some recommendations right away. Maybe it’s just some large-scale statistics about what other people are doing in the same context…nobody actually needs to listen to what you are saying, so the phone can guess what you want to do.”
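To make Mikolov’s hypothetical concrete, here is a deliberately tiny Python sketch of that kind of context statistic. The cities, hours, and activity log are invented for illustration; a real system would aggregate over vast logs, but the principle is the same: recommend whatever other people in the same place, at the same time of day, most often looked for.

from collections import Counter

# Toy log of (city, hour of day, what the user ended up looking for).
# All entries are invented for this example.
past_activity = [
    ("Prague", 12, "restaurant"),
    ("Prague", 12, "restaurant"),
    ("Prague", 12, "lunch menu"),
    ("Prague", 12, "restaurant"),
    ("Prague", 18, "cinema"),
]

def recommend(city, hour):
    # Count what other users did in the same context and return the most common choice.
    matches = [item for c, h, item in past_activity if c == city and h == hour]
    counts = Counter(matches)
    return counts.most_common(1)[0][0] if counts else "no data"

# A new user in Prague around noon gets a restaurant suggestion purely from
# aggregate behavior; no microphone is involved.
print(recommend("Prague", 12))  # -> "restaurant"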

That doesn’t put Facebook in the clear for how it uses data; it just means that analyzing conversations through the microphone doesn’t make much sense. Even Christopher Wylie, the whistleblower in the Facebook-Cambridge Analytica scandal, told the British House of Commons’ culture committee in March that natural language processing of the sort that could listen to a conversation and provide an intelligent response “would actually be quite hard to scale.”

Facebook could be using the mic in other ways, though. Wylie explained that companies “generally” use the mic for “environmental” purposes, analyzing whether a user is at home, in the workplace, or somewhere else based on the general audio in their surroundings. Facebook also filed a patent in June for a system that would use the mic to identify which show a user is watching, essentially Shazam for TV. However, Facebook quickly issued a statement describing the idea as “speculative in nature,” adding that it “has not been included in any of our products, and never will be.”

However, Mikolov noted that computer systems can sometimes work in ways that give the illusion of real intelligence. He described his work at Google, where he was a research scientist for nearly two years until May 2014, working on changing words into vectors with a technique known as “word2vec.” These number-based representations of concepts allowed the team to produce seemingly intelligent results with little explicit knowledge of language built into the system.

“You can do king minus man plus woman, and this equation gives you the result of a vector that’s very close to queen,” Mikolov said. “It almost seems magical that you can be forming analogies just by basically calculating with the words. Again, even if it looks magical, it’s basically just a property of the data itself. Even if you have a huge amount of data, and you capture some simple statistics in amongst the data, you have a lot of these very fancy patterns emerging that at the first sight seem to require some high-level understanding of the language.”
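That arithmetic is easy to reproduce in miniature. The vectors below are hand-made three-dimensional stand-ins (real word2vec embeddings have hundreds of dimensions and are learned from billions of words), but the king minus man plus woman calculation and the nearest-neighbor lookup work the same way.

import numpy as np

# Hand-made stand-ins for learned word vectors; the numbers are invented so
# the analogy arithmetic is easy to follow.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point in the same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman, then find the closest remaining word in the vocabulary.
target = vectors["king"] - vectors["man"] + vectors["woman"]
closest = max(
    (w for w in vectors if w not in ("king", "man", "woman")),
    key=lambda w: cosine(vectors[w], target),
)
print(closest)  # -> "queen"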

Facebook already has a lot of information to analyze. It knows your likes, dislikes, location, friend circles, online interactions, and more. What’s more, users regularly provide this information willingly; no secret microphone tricks are required.

“If you have billions of likes for some pages from people in the same context then you can probably predict what other people in the same situation would do, and you don’t need to understand much beyond simple statistics that given these features this is the likely outcome,” Mikolov said.
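As a rough sketch of what those “simple statistics” can look like in practice, the toy calculation below uses invented counts of page likes and ad clicks; a conditional click rate falls out of two divisions, and that alone is enough to decide who sees an ad.

# Invented aggregate counts, purely for illustration.
users_who_liked_food_pages = 1_000_000
of_those_who_clicked_restaurant_ad = 180_000
clicks_among_a_random_million_users = 40_000

# Click rate for users sharing the "likes food pages" feature vs. the baseline.
p_click_given_likes = of_those_who_clicked_restaurant_ad / users_who_liked_food_pages  # 0.18
p_click_baseline = clicks_among_a_random_million_users / 1_000_000                     # 0.04

# Users with that feature are 4.5x more likely to click, so they see the ad.
# No understanding of the conversation, and no microphone, is needed.
print(p_click_given_likes / p_click_baseline)  # -> 4.5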

Editor’s Note: The Human-Level Artificial Intelligence conference funded Inverse’s travel and accommodation to cover the event, but the organization has no input over Inverse’s editorial coverage.