A team of roboticists has taken another step toward the inevitable future in which real-life Transformers move among us.
New research on modular, autonomous robots was published Wednesday that shows how robots can see, think, and decide to transform their shape based on the challenge facing them.
A six-person team published this research paper — “An Integrated System for Perception-Driven Autonomy with Modular Robots” — in the journal Science Robotics. The researchers hail from Cornell University and the University of Pennsylvania.
Here are the key areas of how the robot does what it does, in the words of the researchers.
“A lot of people have seen this in movies, if you’ve seen like Transformers or Big Hero 6, robots that can change their shape,” says Mark Yim, a professor at the University of Pennsylvania, of the modular robots revealed this week. “We’ve had lots of examples of robots that can do things like walking or climbing stairs … but all of those things were done separately. This is the first time that we’ve actually had a system that could do all of this stuff autonomously.”
First, how does this robotic system see the world around it? Here’s researcher Jonathan Daudelin:
We use a 3-D camera mounted on our sensor module to perceive and create a 3-D map of the robot’s environment in real time, and then we have a suite of perception algorithms that use this data to do things such as direct the robot where to explore unknown areas and to characterize the environment in terms of the robot’s abilities.
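To get a feel for what "characterizing the environment in terms of the robot's abilities" might mean in practice, here is a minimal hypothetical sketch (not the authors' actual code, and with made-up thresholds): it labels a 1-D height profile from a depth map as flat ground, a climbable step, or an obstacle.

```python
# Hypothetical sketch: classifying terrain from a 2.5-D height profile.
# Thresholds are illustrative assumptions, not values from the paper.
FLAT_TOLERANCE = 0.02            # meters; smaller rises count as flat
STEP_MIN, STEP_MAX = 0.10, 0.25  # assumed range of climbable step heights

def characterize(cell_heights):
    """Label the transition between each adjacent pair of height cells."""
    labels = []
    for a, b in zip(cell_heights, cell_heights[1:]):
        rise = abs(b - a)
        if rise <= FLAT_TOLERANCE:
            labels.append("flat")
        elif STEP_MIN <= rise <= STEP_MAX:
            labels.append("step")      # e.g. a stair the robot can climb
        else:
            labels.append("obstacle")  # too tall to traverse
    return labels

profile = [0.0, 0.01, 0.16, 0.31, 0.32, 0.90]
print(characterize(profile))
# ['flat', 'step', 'step', 'flat', 'obstacle']
```

A real system would run this kind of classification over a full 3-D map rather than a single profile, but the idea is the same: reduce raw geometry to terrain categories the planner can reason about.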
And how does this robotic proto-Transformer know what shapes to take? Again, here’s Daudelin:
It may recognize stairs or narrow crevices, flat areas, et cetera, and then the high-level planner uses this information to decide which entries from the library, which actions, which robot shapes are required to carry out the tasks given the environmental conditions.
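The planner's job, as described, is to map recognized terrain onto entries in a stored library of shapes and behaviors. A minimal sketch of that lookup, with entirely illustrative shape names rather than the paper's actual library, could look like this:

```python
# Hypothetical sketch of library-based shape selection.
# The terrain and shape names are illustrative, not from the paper.
LIBRARY = {
    "flat":     "car",    # wheeled configuration for driving on floors
    "step":     "snake",  # articulated configuration for climbing
    "crevice":  "snake",
    "obstacle": None,     # no entry: the planner must route around it
}

def plan_shapes(terrain_sequence):
    """Pick a robot shape for each terrain segment, merging repeats."""
    plan = []
    for terrain in terrain_sequence:
        shape = LIBRARY.get(terrain)
        if shape is None:
            raise ValueError(f"no library entry for terrain: {terrain}")
        if not plan or plan[-1] != shape:
            plan.append(shape)  # only reconfigure when the shape changes
    return plan

print(plan_shapes(["flat", "flat", "step", "crevice", "flat"]))
# ['car', 'snake', 'car']
```

Merging repeated shapes matters because reconfiguration is expensive; the robot should only transform when the terrain actually demands a different body.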
So, what’s next for this robot? Researcher Tarik Tosun tells Inverse there could be two situations where it’s used: A disaster zone — a scenario commonly used by roboticists — and the more everyday situation of a typical home, with carpet and hardwood floors and stairs and maybe even a pile of dirty laundry.
“If you’re going into a disaster zone, it might not even be clear what the task is before you actually go in, right? If you’re going into a collapsed building, you don’t know what it looks like on the inside or whether there are people in there that you might want to rescue,” Tosun says.
“So having a robot that is very versatile could be useful in that scenario, because it can go in, assess its surroundings, and then maybe choose to become a snake to go through a small crevasse, or even a shelter to protect people from falling rubble, something like that.”
These robots could become domestic helpers, too, Tosun says:
A slightly less exciting example or domain might just be around people’s houses. If you want to have a small robot operating in someone’s home, our homes and offices and indoor environments are actually pretty complicated. There’s often clutter and lots of different surfaces the robot might need to traverse, so having the ability to, for example, turn into a shape that’s good for climbing stairs when you need to climb stairs, or good at zooming across the floor when the floor is flat, could be very useful in a home as well.
What is something that these robots cannot yet do that they might soon? It comes down to how the robot thinks and how it might become stronger, say the researchers.
Tosun tells Inverse that modular robots are very good at being flexible but are not very strong; they can’t lift very heavy objects. The researchers may combine their modular nature with more powerful lifting robots. The modular robots could also be used to build structures, which would enable them to be used in new capacities, like scaling large structures.
The other interesting area that the modular proto-Transformer could improve would be related to artificial intelligence, or machine learning. Right now, the modular robot has a library of decisions or actions to take stored locally. Here’s Hadas Kress-Gazit, another researcher on the team and associate professor at Cornell:
“A really interesting question would be: Can we automate that in some way?” Kress-Gazit tells Inverse. “So can we use machine learning? Can we use different (atomization) algorithms to be able to create these, or at least a set of candidate shapes and behaviors, that span a larger set of tasks than we can currently do? So that’s kind of an interesting research question that we’re exploring.”