
Mark Zuckerberg Announces Ambitious A.I. Program for Europe

Zuck's giving out computing power to A.I. researchers in Europe.


Today in Berlin, Facebook founder Mark Zuckerberg and his chief of artificial intelligence announced the rollout of a new program that will give a major boost to European artificial intelligence researchers.

Soon, 200 Graphics Processing Units (GPUs), the workhorse processors of A.I. research, will be used to further develop A.I. in Europe, and the first 32 are already on their way to a research lab in Berlin.

Zuckerberg sat down with Yann LeCun, the director of Facebook’s artificial intelligence research program, at the Facebook Innovation Hub in Berlin. Martin Ott, Facebook’s managing director for northern, central, and eastern Europe, interviewed the two about global connectivity, A.I., and virtual reality.

Amid the usual responses, Zuck and LeCun dove into the future of A.I.

In LeCun’s words: “We’re hearing about A.I. now because the computer power was not around 20 years ago.” Facebook hopes to accelerate that rate of advancement by putting the necessary computing power into more hands.

The program will roll out in Berlin. From Facebook’s official announcement:

Klaus-Robert Müller at TU Berlin will be the first recipent [sic] of the first donation in this new program. Dr. Müller will receive four GPU servers that will enable his team to make quicker progress in two research areas: image analysis of breast cancer and chemical modeling of molecules.

A.I. research has long been concentrated in powerful, well-established companies, which have the resources and infrastructure necessary to advance the field. Noncorporate research groups are often full of motivated, extraordinary minds, but without the necessary computing power they are unable to put their ideas into practice. Facebook’s program, then, is an effort to redress this imbalance.

Facebook said in a statement it would “work with recipients to ensure they have the software to make use of the servers and send researchers to collaborate with these institutions.”

Deep learning requires showing an A.I. system huge numbers of examples of whatever researchers are attempting to “teach” it. For an A.I. to be able to pick out a photograph’s location or content, for instance, that A.I. needs to have seen a staggering number of photographs already. And for the A.I. to churn through and learn from the requisite number of photographs, the researchers need GPUs. (The same is true for teaching A.I.s to understand spoken and written language, or for teaching self-driving car systems to familiarize themselves with situations they may encounter on the road.)
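To give a rough sense of what that GPU dependence looks like in practice, here is a minimal, hypothetical sketch in Python using the PyTorch library; it is not the code Facebook or its grant recipients actually run, and the tiny model, the fake batch of photographs, and the ten categories are all placeholders:

import torch
import torch.nn as nn

# Run on the GPU when one is available; otherwise fall back to the (much slower) CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A deliberately small convolutional network standing in for a real vision model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),  # ten hypothetical photo categories
).to(device)

# A fake batch of 64 RGB photographs, 224 by 224 pixels each.
images = torch.randn(64, 3, 224, 224, device=device)
logits = model(images)  # one forward pass, executed on the GPU
print(logits.shape)     # torch.Size([64, 10])

A real training run repeats passes like this over millions of photographs, which is why the number of available GPUs effectively sets the pace of the research.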

This is known as “supervised” learning, and, in essence, it’s pattern recognition. If you point out x enough times to an A.I., the A.I. will itself learn to identify x. While this technology has numerous exciting applications, such as teaching an A.I.-equipped camera to identify skin cancer or building a system that filters and interprets brain signals to control prostheses, Zuck and LeCun say that unsupervised learning is the long-term, revolutionary goal.
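To make “point out x enough times” concrete, here is an equally hypothetical supervised-learning loop in the same style: every example arrives with a human-provided label, the model’s guesses are scored against those labels, and the weights are nudged to reduce the error. The data here is random noise, purely to show the mechanics:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 2)  # a toy two-class classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# "Supervision": each of the 256 examples comes with a label (0 or 1) chosen by a person.
inputs = torch.randn(256, 20)
labels = torch.randint(0, 2, (256,))

for step in range(100):
    optimizer.zero_grad()
    predictions = model(inputs)          # the model's current guesses
    loss = loss_fn(predictions, labels)  # how far the guesses are from the labels
    loss.backward()                      # compute gradients of the loss
    optimizer.step()                     # adjust the weights to reduce the loss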

If an A.I. can learn “on its own two feet,” if you will, there will be no stopping it. At minimum, we’re a decade away from achieving this breakthrough: to do so, we first need to understand how the human brain accomplishes unsupervised learning, and researchers are still shooting in the dark about that.

Check out the full interview: