Facebook Revamps Image Algorithm for Super-Fast Translations
Researchers at Facebook just published a new translation algorithm that uses context to translate nine times faster, and more accurately, than conventional systems in use today.
Conventional translation algorithms go one word or phrase at a time, from the beginning of a sentence to the end. To improve translation, researchers teach algorithms to group words together to create context before translation begins.
On Tuesday, researchers at Facebook leveled up, borrowing a technique from image-analysis algorithms to translate sentences in many simultaneous contextual chunks.
It’s something the researchers at the Facebook Artificial Intelligence Research lab call “multi-hop attention,” which gives the impression of a bunch of excited bunnies working together to translate a sentence. The new technique speeds up translation significantly over conventional methods and has the potential to improve accuracy as well. On top of that, it signals the beginning of using neural networks that excel at tasks in one field to improve unrelated tasks. The researchers say they hope it can be used in document summarization, dialog systems, and question answering.
The translation algorithm uses a convolutional neural network, a type of network widely used in computer vision because it can look at an image at many scales simultaneously. The architecture was originally developed by Facebook’s own Yann LeCun.
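To see why that matters for translation, here is a minimal sketch, not Facebook’s actual model, of a single 1-D convolution sliding over a sentence’s word embeddings. Every window of words forms a small contextual chunk, and all positions can be computed independently, unlike a recurrent model that must walk word by word. All sizes and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

sentence_len, embed_dim, hidden_dim, kernel_width = 6, 8, 16, 3

embeddings = rng.normal(size=(sentence_len, embed_dim))        # one row per word
weights = rng.normal(size=(kernel_width, embed_dim, hidden_dim))

# Pad so the output keeps one vector per word.
pad = kernel_width // 2
padded = np.pad(embeddings, ((pad, pad), (0, 0)))

# Each output position sees a 3-word window -- a small "contextual chunk".
# The loop is sequential here for clarity, but every iteration is
# independent, so on real hardware they all run in parallel.
outputs = np.stack([
    np.einsum("kd,kdh->h", padded[i:i + kernel_width], weights)
    for i in range(sentence_len)
])

print(outputs.shape)  # (6, 16): one hidden vector per word
```

A recurrent network, by contrast, cannot start on word five until it has finished word four, which is the sequential bottleneck the convolutional approach removes.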
“In our case, it took some time to figure out all the details to get CNNs to work for translation. We tried several methods to train those systems and it was not obvious from the outset which one would work,” Michael Auli, a researcher at Facebook Artificial Intelligence Research, tells Inverse.
Some researchers have used these kinds of neural networks for translation, but typically for sentence analysis and not word-for-word translation.
Finally, by controlling the way information flowed into the algorithm during training, the team was able to get good translations. The method rests on the idea that knowledge can be built from simple to complex: by shutting down some of the algorithm’s functions early on, Auli and his team gradually built up the system’s speed and skill, teaching it to identify the most important information. Once training was over, the system could translate between French, German, and Romanian nine times faster than the neural networks typically used for translation.
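The article doesn’t spell out the gating mechanism, but one well-known way to control information flow in convolutional networks is a gated linear unit, sketched below with illustrative random inputs (an assumption, not a detail confirmed by the story): half of each layer’s output decides how much of the other half gets through.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(x):
    """Split features in half; gate the first half with the second."""
    a, b = np.split(x, 2, axis=-1)
    return a * sigmoid(b)

layer_out = rng.normal(size=(6, 32))   # e.g. 6 words, 32 features each
gated = glu(layer_out)

print(gated.shape)  # (6, 16): gates near 0 block information, near 1 pass it
```

The gate values lie between 0 and 1, so the network can learn to suppress unimportant features early in training and let more information through as it improves.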
Right now, the translation process only works on sentences, and can’t use context from an entire document to improve its translation, Auli says. He thinks it should be possible to do that, and the team is working to be the first to apply a translation algorithm to an entire document.
“We also foresee that different translation systems could benefit from our multi-hop attention mechanism, which enables the translation system to look at the source sentence multiple times,” says Auli.
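A toy sketch of that idea: the decoder’s state looks back at the encoded source sentence several times, refining its summary on each hop. The shapes, random weights, and additive update rule here are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

src_len, dim, hops = 5, 8, 3
source = rng.normal(size=(src_len, dim))   # encoded source words
state = rng.normal(size=dim)               # current decoder state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for hop in range(hops):
    scores = source @ state                # compare state to every source word
    attn = softmax(scores)                 # attention weights sum to 1
    context = attn @ source                # weighted summary of the source
    state = state + context                # refine the state, then look again

print(state.shape)  # (8,)
```

Each hop can focus on different source words than the last, which is what lets the system re-examine the sentence instead of committing to a single glance.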
And this is just the beginning of using neural networks typically used for visual analysis in translation, Auli says. Algorithms that analyze images by running multiple analyses in parallel are among the next techniques the team intends to try for translation, he says.
Correction, May 12: Auli’s team will look into more than the neural networks typically used for visual analysis in their translation work, as originally reported in the story.