This Incredible Robot Can Learn From Humans Just by Watching Them

If robotic assistants are one day going to relieve you of your household responsibilities, they'll not only need hands in the first place, they'll also need to be able to learn what to do with them.

Fortunately, a team of researchers at the University of California, Berkeley is already on the case, working to make sure the robots of the future are adept at interpreting visual information and translating it into step-by-step tasks they can complete on their own.

This new sorting-bot was created by co-authors Tianhe Yu and Chelsea Finn, who together published the findings of their experiment back in July. In the paper, they explain how they were able to train a commercially available robot, called PR2, to place household objects in color-coded containers by watching a human do it first. They accomplished this by feeding a neural network footage of Yu putting a peach into a bowl, then prompting PR2 to imitate those actions after that single demonstration.

PR2 learns how to interact and sort objects after just one demonstration.

Tianhe Yu / Chelsea Finn

This is #20 on Inverse’s list of the 20 Ways A.I. Became More Human in 2018.

This is a massive leap toward not only robot butlers, but also general purpose construction, cleaning, and potentially even sports-playing robots. The key to the breakthrough is that instead of having to specially program the bots for each individual task, owners could in theory just show them what to do themselves. It’s the difference between a futuristic Alexa that knows how to fold laundry and a robot that knows how you like your laundry folded.

Enabling robots to recreate the actions carried out by a human was no easy feat, and previous research generally required that a robot be trained by another robot. Human limbs simply don't move like robotic arms do, which makes it difficult for A.I. to track and imitate the motions we use to go about our daily lives.

PR2 learns to place the peach into the red bowl after watching Yu do so.

Berkeley Artificial Intelligence Research / Tianhe Yu / Chelsea Finn

Yu and Finn figured out how to overcome this hurdle by simply having PR2 focus on where the object needed to go, instead of how it needed to move it. In doing so, they helped open the door to robots that can not only clean, but that can be easily taught by non-specialists.
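The goal-focused idea can be illustrated with a toy sketch (this is a loose illustration of the concept, not the authors' actual system, which uses a learned neural network): rather than copying the human's motion frame by frame, the robot extracts only where the object ends up and plans its own path there.

```python
# Toy sketch of goal-focused imitation (illustrative only; the function
# names, the 2-D setup, and the straight-line planner are all assumptions
# made for this example).

def infer_goal(demonstration):
    """Infer the target location as the object's final observed position."""
    return demonstration[-1]

def plan_trajectory(start, goal, steps=5):
    """Plan the robot's own path to the goal via simple linear interpolation,
    ignoring how the human's arm actually moved."""
    return [
        (start[0] + (goal[0] - start[0]) * t / steps,
         start[1] + (goal[1] - start[1]) * t / steps)
        for t in range(steps + 1)
    ]

# One human demo: the object follows a human-specific path into a bowl at (4, 2).
human_demo = [(0.0, 0.0), (1.3, 0.4), (2.1, 1.5), (3.6, 1.9), (4.0, 2.0)]

goal = infer_goal(human_demo)                 # where the object must go
robot_path = plan_trajectory((0.0, 3.0), goal)
print(goal)            # (4.0, 2.0)
print(robot_path[-1])  # robot's gripper ends at the same goal: (4.0, 2.0)
```

The point of the sketch is that the robot's trajectory shares nothing with the human's except its endpoint, which is why differences between human limbs and robotic arms stop mattering.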
