Most artificial intelligence systems can be advanced by adding more: more computing power, more lines of code, more analysis, more neural networks, more machine learning.
And this is great if you have large amounts of power and space at your disposal, like on a car, or a rocket ship, or in a data center. But if you don’t have all that? Then you have to get simple and think creatively. That’s what Johannes Overvelde and his team at AMOLF, a government-funded Dutch physics research institute, did in a new study released this week in Proceedings of the National Academy of Sciences.
With only five lines of code, they created a team of robots that work together toward a common goal, even though each robot has only a single sensor and no way to communicate with the others. The findings have potential implications for everything from self-healing materials to medical nanobots.
“It’s a big social experiment,” Overvelde, an associate professor at AMOLF, tells Inverse. Each robot “is very greedy and wants to do what they want to do.” In a way, it’s a form of the prisoner’s dilemma with robots: if they all cooperate, each robot can achieve its goal, yet they can’t communicate with one another in any meaningful way.
What’s new — The team created some very simple robots, each of which can do just two things:
- sense its own position
- push or pull on an adjacent robot
A disconnected robot pushes and pulls on nothing, so the robots have to work together to accomplish their goal.
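Stripped to its essentials, a unit with those two abilities can be sketched in a few lines. This is an illustrative toy model, not the team's hardware or code; the class, the method names, and the crude equal-and-opposite displacement rule are all assumptions made for the sketch:

```python
import random

class Unit:
    """A minimal model of one robot: it senses only its own position
    and can push or pull on whichever unit it is connected to."""

    def __init__(self, position=0.0):
        self.position = position
        self.neighbor = None  # set when units are physically connected

    def sense(self):
        # the single sensor: the unit's own position along the track
        return self.position

    def actuate(self, force):
        # push (force > 0) or pull (force < 0) on the adjacent unit;
        # with no neighbor, the actuation does nothing
        if self.neighbor is None:
            return 0.0
        # equal and opposite displacement, crudely modeling the coupling
        self.neighbor.position += force
        self.position -= force
        return force

a, b = Unit(0.0), Unit(1.0)
a.actuate(0.5)   # no neighbor yet: nothing happens
a.neighbor = b
a.actuate(0.5)   # now the push displaces both units
```

Run alone, the first `actuate` call moves nothing; once `a` is connected to `b`, the same push displaces both units. That physical coupling is the only channel through which the robots can influence one another.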
“We said to each robot: move as fast as possible in a predefined direction,” Overvelde says. They were told to “continuously perform experiments and try things, and have a higher probability of accepting things that take them in the right direction.” Then a bunch of robots were connected together and sent on their way.
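The paper's abstract describes this trial-and-error loop as a "basic Monte Carlo scheme." Here is a hedged sketch of what one such accept/reject step could look like for a single unit, using a Metropolis-style rule; the function names, step size, and "temperature" are illustrative assumptions, not the authors' actual parameters:

```python
import math
import random

def mc_step(phase, measure_progress, step=0.1, temperature=0.02):
    """One Monte Carlo trial for a single unit (illustrative sketch).

    The unit randomly perturbs its actuation phase, measures how far
    the change moved it, and keeps the change with a probability that
    favors progress in the predefined direction.
    """
    trial = phase + random.uniform(-step, step)
    gain = measure_progress(trial) - measure_progress(phase)
    if gain > 0 or random.random() < math.exp(gain / temperature):
        return trial   # accept: the experiment paid off
    return phase       # reject: keep the old behavior

# toy progress landscape: movement peaks when the phase is 0.3
progress = lambda p: -(p - 0.3) ** 2

phase = 0.9
for _ in range(2000):
    phase = mc_step(phase, progress)
# after many trials, phase drifts toward the peak near 0.3
```

Each unit runs this loop on its own; because the only feedback is the unit's own measured progress, no communication between robots is ever needed.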
“We aimed for simplicity over complexity. Robustness over optimal behavior,” Overvelde says. “It’s possible to make robots that are much faster and better and capable of moving forward, but you have to know the environment.”
From there, it’s up to the robots to find simple ways to adapt to their environments. This focus on simplicity, he says, is important for future development in things like micro-nanobots that simply don’t have the processing power to implement complex behavior.
But the development is about the algorithm, not the robot. The robot “is a bit of a joke. You can build much faster and better robots,” Overvelde says. “But it’s essential to see what the robot will really encounter, the surprises you get when you run experiments in the real world versus simulations where you control the environment.”
Here’s the background — Simple processes can lead to complex behaviors. Think about a flock of birds flying together, for example. “Evolution has resulted in changes to behavior until these birds respond to each other in such a way that generates a higher probability of survival,” Overvelde says. “What we’re trying to do is have a robot that doesn’t know anything and evolves into that flock of birds where you know the rules between them. Robots can embody the evolution that drives that behavior.”
The fun thing about simplicity is that you can tweak it in subtle ways to end up with analogies to all kinds of other situations, like the Prisoner’s Dilemma mentioned above. As a next step, for instance, they could have robots share their current phase with their neighbors with those robots then weighing that additional input in various ways.
“It becomes a bit like a neural network,” Overvelde says. “Connect to the neighbors and you start to have a neural network of robots.”
But the AMOLF team wanted to get away from that. They aimed to stay as simple as possible, Overvelde says, in order to keep the behaviors simple as well, because, “the more complex the behavior, it’s hard to tell in the end what it’s going to do.”
The simple algorithms could be applied to more complex situations too, such as a car steering itself down a lane: so long as the lane markings are clearly visible, even a simple controller can handle the task.
What’s next — Perhaps the most essential part is that the system has no meaningful memory. This isn’t a machine-learning neural network that can be trained through iterative simulations to produce human-esque behavior.
These robots don’t have a model of themselves. They just have a simple task and try to accomplish it without knowing what’s going on in the world. The team intentionally damaged a robot so it could no longer push and pull on its neighbor, expecting it to simply give up. Instead, the robot kept experimenting and determined it could still contribute by actuating its motor, even when that didn’t appear to be helping the cause.
Overvelde says this kind of behavior is seen among living things, including fungi and slime mold, organisms that can solve mazes despite not having a central nervous system. They become “smarter” by cooperating with other cells.
The next step will be applying their algorithm to dedicated hardware, or to materials science systems where chemical processes are doing the work.
“Materials that can adapt to changes in the environment, that’s our main goal,” he says. “How can we put intelligence inside of materials or objects and make them learn how to deal with different environments?”
Abstract: One of the main challenges in robotics is the development of systems that can adapt to their environment and achieve autonomous behavior. Current approaches typically aim to achieve this by increasing the complexity of the centralized controller by, e.g., direct modeling of their behavior, or implementing machine learning. In contrast, we simplify the controller using a decentralized and modular approach, with the aim of finding specific requirements needed for a robust and scalable learning strategy in robots. To achieve this, we conducted experiments and simulations on a specific robotic platform assembled from identical autonomous units that continuously sense their environment and react to it. By letting each unit adapt its behavior independently using a basic Monte Carlo scheme, the assembled system is able to learn and maintain optimal behavior in a dynamic environment as long as its memory is representative of the current environment, even when incurring damage. We show that the physical connection between the units is enough to achieve learning, and no additional communication or centralized information is required. As a result, such a distributed learning approach can be easily scaled to larger assemblies, blurring the boundaries between materials and robots, paving the way for a new class of modular “robotic matter” that can autonomously learn to thrive in dynamic or unfamiliar situations, for example, encountered by soft robots or self-assembled (micro)robots in various environments spanning from the medical realm to space explorations.