Spielberg's ‘A.I. Artificial Intelligence’ Feels Right Despite Bad Science
The Jude Law, Haley Joel Osment vehicle misunderstands technology in a way that remains compelling even though it has no basis in reality.
When Steven Spielberg took over the development of A.I. Artificial Intelligence from Stanley Kubrick back in 1995, artificial intelligence, the technology, was in its infancy. Science fiction, which had spent a half century talking up the humanoid robot, still had more to offer by way of aesthetic guidance than science did. In 2001, the year the movie debuted, the most famous A.I. system was IBM’s Deep Blue, a computer program that played chess. The possibilities for A.I. and robots seemed endless not because of proliferating research, but because we didn’t know enough to understand the logical limits.
Steven Spielberg was making comp-sci fan-fic. He was as innocent as his audience.
Now, 14 years later, we’ve seen much more happening in the field of A.I. IBM created Watson, a program that mercilessly thrashes human players at Jeopardy. Georgia Tech roboticists recently taught a robot to draw conclusions from repeated exposure to certain visual stimuli. A new model of the Nao robot refuses commands in order to protect itself. Bots without bodies worm through the ubiquitous internet. Spike Jonze’s Her won a lot of awards.
We haven’t created the realistic humanoid bots we see in A.I., but we also aren’t bent on trying. As it turns out, the Jude Law tech might be the only advancement that makes sense: The only real reason to give robots bodies like our own would be to sleep with them.
As the movie unfolds and the audience begins to explore the setting through the eyes of David — the new type of A.I. robot who is designed to experience love — we see that these mechanical androids, “mechas,” are ubiquitous. In this dystopian future, which has arrived hot on the heels of tsunamis, intelligent machines do everything.
From a scientific perspective, this is extremely silly. Humanoid robots aren’t particularly good at anything. The form that suits us so well is difficult and pointless to replicate through engineering. If you want a machine to lift objects, cook, build equipment, or work in dangerous places, you need to build it so that it’s fit for the task, and a bot shaped like a person has far too many flaws. That is, unless you’re building something to love.
There’s also the fact that people simply don’t want robots to look like people. The “uncanny valley” is a real issue in robot design: when something artificial looks too much like a human, we are instantly repulsed. You see this in the movie within the first few frames of David’s introduction to Monica. Haley Joel Osment’s performance is severely underrated; he freaks out the viewer simply by playing a robot who’s trying too hard to act human, veering between a robotic vacant stare and a child’s glow of mesmerized curiosity. Monica (played by Frances O’Connor) sums it up when she says in tears, “he’s so real…but he’s not…”
But perhaps the biggest drawback of the movie, at least from a science and technology perspective, is that it depicts a world in which the internet seems never to have existed. In today’s world, the internet is quite possibly the most important tool for developing intelligent programs: it has essentially become the dataset we use to teach those programs about the world around them. That’s how Siri works. It’s how Google works. It’s what Facebook is doing with the algorithms designed to show you more of what you love and less of what you hate. (An episode of Black Mirror, which takes a lot of inspiration from this movie, homes in on the power of social media very effectively.) The future of A.I. won’t simply be physical machines walking among us; it’ll be programs that never need to be physically built at all.
The movie, however, doesn’t feature that sort of A.I., because it’s really about robots. In one scene, David and the eccentric Gigolo Joe travel to Rouge City to ask Dr. Know, a computer program housed in a single location, some simple questions. In what world would someone have to travel to another town to ask a computer program a question? Information wants to spread, not settle down and buy an apartment.
There is, however, plenty that the movie gets right. On the technology front, Professor Hobby discusses how A.I. machines are made through systems that emulate neuron function. This is at the core of what’s now called deep learning, in which scientists try to create a model of the brain inside a computer and give it the capability to become as intelligent as (or even more intelligent than) a human brain.
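To make the neuron-emulation idea concrete, here’s a minimal sketch (in Python; the example is mine, not anything from the film or from a real product) of a single artificial neuron learning the logical OR function from examples. Deep learning is, very roughly, this same gradient-descent idea scaled up to millions of neurons:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One artificial "neuron": a weighted sum of inputs plus a bias,
# squashed through a sigmoid activation.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

# Its "experience": examples of the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Learning: repeatedly nudge the weights to shrink the prediction error.
for _ in range(5000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = (target - out) * out * (1 - out)  # error times sigmoid slope
        w[0] += 0.5 * grad * x[0]
        w[1] += 0.5 * grad * x[1]
        b += 0.5 * grad

learned = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(learned)  # matches OR: [0, 1, 1, 1]
```

No one told the neuron what OR means; the behavior emerges from repeated exposure to examples, which is the point Hobby’s dialogue gestures at.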
David, in A.I., is the first machine to exhibit such intelligence. It’s why, over the course of the movie, he evolves from a tabula rasa into a more wholly formed person, as opposed to someone like Gigolo Joe, who’s preprogrammed to behave and think in a certain way, basically forever. David’s personality is learned, while Joe’s is preprogrammed.
The latter is how most researchers still approach A.I., but there is a growing push toward the “blank slate” design: teaching a robot tasks through experience rather than explicit programming. If you want A.I. that can adapt to new experiences and information, this is the way to do it. As that approach gains traction among A.I. developers, artificial minds like David’s are what we’re more likely to see.
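The contrast between the two designs can be sketched in a few lines of Python (a toy scenario invented for illustration; the action names and reward rule are made up): a “Joe-style” agent follows a fixed rule forever, while a “David-style” agent starts with no preferences and discovers the rewarded behavior purely through trial and error:

```python
import random

random.seed(1)
actions = ["wave", "fetch", "speak"]

# Joe-style: behavior is hard-coded and can never change.
def preprogrammed_agent():
    return "wave"

# David-style: a blank slate, with no action preferred at the start.
value = {a: 0.0 for a in actions}

def reward(action):
    # The hidden "right" behavior, which the learner must discover.
    return 1.0 if action == "fetch" else 0.0

for _ in range(500):
    # Mostly exploit what has been learned; occasionally explore at random.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    # Update a running estimate of how rewarding each action is.
    value[a] += 0.1 * (reward(a) - value[a])

print(preprogrammed_agent())      # always "wave", forever
print(max(value, key=value.get))  # the learner has discovered "fetch"
```

The learner’s final behavior isn’t written anywhere in its code, only in its experience, which is the sense in which David’s personality is learned and Joe’s is not.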
But it’s the cultural ramifications and conflicts that A.I. pins down best of all. The first part of the movie, where David constantly follows Monica, unable to do anything that isn’t a reaction to another human’s actions, emphasizes how A.I. are inextricably tied to humans. They can’t just act on their own, even if they’re intelligent enough to know they exist. Right up to the end, Joe knows how to have sex with women and nothing else. A robot designed purely to provide sexual release (and if predictions hold, there are going to be a lot of them) almost certainly can’t also do construction or teach particle physics. An A.I. has a specific purpose or task at hand, and that’s all it knows.
“They made us too smart, too quick, and too many,” says Joe.
Maybe there is reason to fear the rise of A.I. But it probably has less to do with an actual machine uprising led by Skynet, and more to do with how humans will seek to exploit and abuse A.I. for personal gain, as we witness in the movie. People like Stephen Hawking and Bill Gates have already voiced legitimate concerns about why we need to approach A.I. development carefully, and Elon Musk wants to democratize A.I. as a counterweight to these fears, an impulse that led to the formation of the nonprofit OpenAI. A.I. seems to have anticipated the human proclivity for turning great technologies into tools for twisted desires.
Ultimately, A.I. will probably go down in the canon of sci-fi film as a movie that didn’t quite get there on the robots, but managed some prescience on the subject of interactivity.