Facebook’s photo recognition system is largely invisible, but it has come on in leaps and bounds since its launch last April. With the newest update, it can even recognize leaps and bounds, literally. The artificial intelligence can scan photos, “see” the objects in them, and write a description; that description can then be read aloud to visually impaired users or used to find the photo through search. On Thursday, the system received a major upgrade that allows it not only to describe the objects in a photo, but also to explain what the people in it are doing.
The A.I. can now describe pictures using 12 new action phrases, such as “people walking,” “people playing instruments,” or “people dancing.” It’s a small set to start, but these improvements will change the way people search through their photos on the service.
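Facebook hasn’t published the internals of its search system, but the basic idea is easy to illustrate: once each photo carries machine-predicted action phrases, a text query reduces to matching against those tags. A minimal sketch, with a made-up index of photos and tags standing in for the real predictions:

```python
# Minimal sketch (not Facebook's implementation): photos indexed by the
# action phrases a classifier predicted for them. A search query is
# matched against those predicted phrases.

PHOTO_TAGS = {
    "img_17.jpg": ["people walking", "outdoor"],
    "img_23.jpg": ["people dancing"],
    "img_31.jpg": ["people playing instruments", "people dancing"],
}

def search_photos(query: str, index=PHOTO_TAGS):
    """Return photo IDs whose predicted tags contain the query phrase."""
    q = query.lower()
    return sorted(p for p, tags in index.items()
                  if any(q in tag for tag in tags))

print(search_photos("people dancing"))  # ['img_23.jpg', 'img_31.jpg']
```

The point of the sketch is that the hard work is in the image understanding; once the tags exist, retrieval itself is ordinary text matching.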
“When you’re thinking back on your favorite memories, it can be hard to remember exactly when something took place and who took the photo to capture the moment,” Joaquin Quiñonero Candela, Facebook’s director of applied machine learning, said in a blog post. “We’ve built a search system that leverages image understanding to sort through this vast amount of information and surface the most relevant photos quickly and easily.”
The company has made the process of teaching its A.I. simpler than ever, with a system called FBLearner Flow. The system enables engineers to make changes easily, without jumping through hoops. As a result, Facebook is now running six times more A.I. experiments per month than it was a year ago.
For the photo software, Facebook built a tool called Lumos that works with FBLearner Flow to quickly train the A.I. to recognize new concepts in photos. This still involves human intervention: people sat down and wrote descriptions for 130,000 Facebook photos that contained people.
The Lumos interface then allowed the engineers to rapidly train the A.I. to recognize a new classification. Among other features, the software shows how likely a photo is to match a given description. The trainers can use previously taught descriptions to speed up the process: for example, teaching the A.I. to recognize people riding horses is easier if it already knows what a horse looks like, because it can filter out all the non-horse photos before anyone has to label them.
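That filtering step can be sketched in a few lines. This is a hypothetical illustration, not Lumos itself: `horse_score` stands in for a previously trained horse classifier, faked here with a lookup table of confidence scores.

```python
# Hypothetical sketch: reusing an existing classifier to narrow the pool
# of candidate training photos for a new class ("people riding horses").
# HORSE_SCORES fakes the confidence output of a pretrained horse classifier.

HORSE_SCORES = {
    "photo_001.jpg": 0.97,  # horse clearly visible
    "photo_002.jpg": 0.08,  # beach scene, no horse
    "photo_003.jpg": 0.85,  # horse in background
    "photo_004.jpg": 0.02,  # indoor portrait
}

def horse_score(photo_id: str) -> float:
    """Stand-in for a pretrained classifier's confidence that a horse is present."""
    return HORSE_SCORES.get(photo_id, 0.0)

def candidate_photos(photo_ids, threshold=0.5):
    """Keep only photos likely to contain a horse, so annotators labeling
    'people riding horses' never waste time on obvious non-horse photos."""
    return [p for p in photo_ids if horse_score(p) >= threshold]

print(candidate_photos(list(HORSE_SCORES)))  # ['photo_001.jpg', 'photo_003.jpg']
```

The design point is leverage: each classifier the system already knows shrinks the human labeling effort needed to teach the next one.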
The list of actions is small to begin with, but it represents a step toward Facebook search working more like the way people actually think when they look for a photo. The future can’t come soon enough.