Elon Musk's OpenAI Project Has Identified 4 Big Problems for A.I.
Number one: 'Terminator' references.
Most of the problems that could result from increasingly sophisticated artificial intelligence are much more subtle than SkyNet, but that doesn’t mean smarter A.I. comes without risks.
That’s why Elon Musk and other Silicon Valley bigwigs formed OpenAI, a group devoted to solving problems with A.I. before they can cause real damage. Now the group is looking for help on these issues.
“OpenAI is a non-profit artificial intelligence research company,” the group says on its About page. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
A job posting disguised as a blog post titled “Special projects” outlines four problems that have “either very broad implications” or that address “important emerging consequences of AI development.” Its writers — Ilya Sutskever, Dario Amodei, and Sam Altman — invite any strong machine learning experts to “start an effort on one of these problems at OpenAI.” All it takes, they say, is an application. Here are the four big problems facing artificial intelligence:
Finding covert A.I. systems.
What’s worse than a very smart A.I. that can be used for the wrong purpose? A very smart A.I. that’s already being used for the wrong purpose without anyone knowing. “As the number of organizations and resources allocated to AI research increases, the probability increases that an organization will make an undisclosed AI breakthrough and use the system for potentially malicious ends,” the OpenAI team writes in its blog post. “It seems important to detect this. We can imagine a lot of ways to do this — looking at the news, financial markets, online games, etc.”
Building an A.I. that can program.
Some of the best programmers are lazy, so it makes sense that OpenAI wants to create an A.I. capable of writing programs on its own. Better to spend the time making an A.I. smart enough to code anything you might need in the future than to code all those things yourself. This is 2016, for crying out loud, and coders need a break!
Using A.I. for cyber defense.
“An early use of AI will be to break into computer systems,” OpenAI writes. “We’d like AI techniques to defend against sophisticated hackers making heavy use of AI methods.” That sounds good, especially since MIT warned that current A.I. relies on humans to protect against cyberattacks.
Making really complex simulations.
“We’re interested in building a very large simulation with lots of different agents in it that can interact with each other, learn over a long period of time, discover language, and accomplish a rich variety of goals,” the OpenAI team writes.
Oh, come on. Now you guys are just trying to prove Musk is right about us living in a simulation.