It was an off-hand comment, but telling nonetheless: Reed Hastings, CEO of Netflix, joked that he wasn’t sure whether the company’s future would be in making television for robots.
The comment closed out a talk Hastings gave at this year’s Mobile World Congress in Barcelona, which mostly centered on laying out the streaming giant’s future in a changing media ecosystem.
“Over twenty to fifty years, you get into some serious debate over humans,” Hastings said. “I don’t know if you can really talk about entertaining at that point. I’m not sure if in twenty to fifty years we are going to be entertaining you, or entertaining A.I.s.”
What could this cryptic comment mean? Hastings could be referring, literally, to the rise of conscious artificial intelligence robust enough for individual units to start watching, and presumably paying for, Netflix. More likely, however, he was referring to the integration of A.I. into daily life, and to the modern content-targeting strategies that Netflix is already starting to employ.
Hastings also said he believes one of Netflix’s biggest problems is that people feel the service has too much content — and while the more common complaint is that the library of strong content is actually too small, there is a grain of truth to the idea.
Users do struggle to sift through the enormous library and find the gems buried among the masses of third-rate ‘80s comedies and documentaries about UFOs. Historically, Netflix has met this challenge with ever-smarter artificial intelligence that matches users with the content they will actually enjoy.
On the sort of 50-year timeline Hastings considered, this could easily expand to change how we consume media, and become the sort of challenge to intellectual independence that many currently see in Facebook. Once we’re getting a curated-enough stream of ideas, whether those ideas come from streaming television or social media news posts, the process of engaging with the world will have fundamentally changed.
If our consumption occurs within borders that are drawn by algorithmic derivations of our behavior — and, crucially, the behavior others would like us to have — then there are suddenly very different incentives determining what we can consume.
As Hastings implied, at that point there will be a legitimate question as to who is being entertained: the person, or the A.I. followers that track the person’s life and communicate their findings to the world? Are we then tailoring our entertainment to the person, or to an increasingly disconnected, A.I.-born statistical proxy that is supposed to represent them?
On this, Hastings seems unsure himself, but he did make it clear that rather than hanging back out of fear, companies like Netflix will rush headlong toward that future. We’ll just have to figure out whether there’s a problem with it after it has already arrived. The company’s current approach to A.I. pattern-matching is already robust enough that it caught the eye of specialists at NASA; it’s hard to overstate the potential for powerful algorithms to open up entirely new possibilities.
In other words, Hastings was saying that while too much Netflix may already rot your brain, combined with an increasingly A.I.-mediated view of the world, that effect could be about to get a whole lot worse.