During Facebook’s end-of-the-year meeting yesterday, company executives revealed a number of plans for upcoming tech ventures, including a neural sensor designed to translate users’ thoughts into action. And thus the tech behemoth’s quest for world domination continues.
Much of the meeting was dedicated to re-upping Facebook’s commitment to artificial intelligence development, BuzzFeed News reports. The meeting was led by CEO Mark Zuckerberg; discussion topics ranged from Facebook’s many public relations crises to praise of the company’s response to the Black Lives Matter movement.
Topping the list of new technology in development at the company is an AI-powered news-summarization tool called “TLDR” (an acronym for “Too Long, Didn’t Read”). Facebook wants TLDR to offer both text summaries and audio narration.
There’s nothing quite like a Facebook hype-up meeting to remind us just how far Zuckerberg’s company is willing to stretch itself in order to rule every corner of the internet.
AI is everything — Underlying much of yesterday’s meeting was Facebook’s commitment to tools that use artificial intelligence to increase efficiency and “help” users. There’s a video chat tool in the pipeline, for example, that will use AI to fake your face movements in low-data situations.
“We are paving the way for breakthrough new experiences that, without hyperbole, will improve the lives of billions,” said Mike Schroepfer, Facebook’s Chief Technology Officer, during the meeting. We must have different definitions of the word hyperbole.
One tool, still in early development at Facebook, is meant to use neural sensor technology to translate your thoughts into actions. Schroepfer closed his section of the meeting with a very abstract teaser about this tech, never really explaining in detail how it will be used.
Schroepfer also touted Facebook’s AI moderation tools, which he said already detect 95 percent of all hate speech on the platform. He added that the company’s data centers will soon be 10 to 30 times faster.
But…is it? — It’s obvious from this meeting alone that Facebook is really doubling down on AI. Nearly every development in the company’s future relies on the technology, and executives continue to push the message that AI is making Facebook — and the internet as a whole — a safer, better place to be.
As BuzzFeed News notes, though, not all of the company’s employees agree with this all-in approach. Facebook’s plans to continue moving away from human moderation worry some employees. “AI will not save us,” one departing employee said. Another claimed the company was deleting “less than 5 percent” of hate speech posted to Facebook.
We may not have an inside look at Facebook’s top-notch AI moderation tools, but even the public has seen the shortcomings of this tech. Besides the usual problems of bias and racism, Facebook’s AI in particular has struggled time and time again to keep up with moderating dangerous content on the platform.
Folding even more artificial intelligence into Facebook’s arsenal opens the platform to even more headaches. Take that TLDR news tool: do we really trust AI to summarize an article and preserve its nuances? One of the few measures that has helped Twitter slow the spread of misinformation is prompting users to slow down and actually read an article before sharing it — now Facebook wants to do just the opposite.
All this expansion from a company already being investigated for monopolizing the internet. Can’t see any problems here.