
Microsoft’s Autonomous Gliders Might Be Key for Driverless Cars

Training a sailplane is actually not so different from training a car.

On Wednesday, Microsoft published a blog post highlighting the work of one of its A.I. research groups in building and testing autonomous gliders in the Nevada desert. The potential applications for the 16.5-foot, 12.5-pound sailplane are myriad: measuring weather patterns, monitoring agricultural crops or wildlife areas, or providing internet access to areas with sparse connectivity.

But what makes Microsoft’s sailplane project so useful to all autonomous vehicle development is that it provides a relatively cheap platform for testing and training A.I. agents to independently operate a machine. Although the sailplane is just one specific type of vehicle, the algorithm that makes up the A.I. system is designed to physically navigate a machine through three dimensions. There aren’t many factors that separate what an algorithm must do to correctly operate a glider versus a drone or a land vehicle like a car.

Peter Stone, an A.I. researcher and the founder and director of the Learning Agents Research Group (LARG) at the University of Texas at Austin, tells Inverse it’s important to remember A.I. is “not just one thing. It’s a collection of different technologies and parts.”

Stone describes three broad principles that an autonomous vehicle’s algorithms must exhibit in order to operate the machine effectively. And he says two of these capabilities are shared among all kinds of automated vehicles.

The first is “perception”: the ability of an A.I. system to take in sensory information and use it “to build a model of what the current state of the world is.” Like a human, an autonomous vehicle must be able to absorb data about what the surrounding environment looks and feels like, and create an internal understanding of how to properly navigate through that world.

The second is “decision-making”: “once you perceive the environment…then you have to decide what actions to take,” Stone says. An A.I. agent that’s operating something like a glider has to have a plan for how to contend with changing conditions, like weather.

Perception and decision-making are what Stone calls “general purpose” capabilities. “They’re something that can be developed on one vehicle and applied to another.”

It’s the third and final principle that limits how much cross-over appeal an A.I. agent possesses: executing actions. “Once you’ve decided what to do, how to execute that depends on the type of vehicle you’re operating,” he says. The kinds of actions an A.I. agent will undertake are specific to the machine it controls.
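
To make the distinction concrete, here is a minimal Python sketch of the separation Stone describes. The class names and actions are hypothetical illustrations, not anything from Microsoft’s system: perception and decision-making are written once, while the code that actually executes a decision is swapped out per vehicle.

```python
# Hypothetical sketch: perception and planning are general purpose,
# while action execution is specific to the vehicle being operated.

class Perception:
    """Shared layer: turns raw sensor readings into a model of the world."""
    def build_world_model(self, sensor_data: dict) -> dict:
        return {"obstacles": sensor_data.get("obstacles", [])}

class Planner:
    """Shared layer: decides what to do given the perceived state."""
    def decide(self, world_model: dict) -> str:
        return "avoid_obstacle" if world_model["obstacles"] else "continue"

class GliderController:
    """Vehicle-specific layer: a glider executes the decision in the air."""
    def execute(self, decision: str) -> None:
        if decision == "avoid_obstacle":
            print("Glider: banking and climbing to pass over the obstacle")

class CarController:
    """Vehicle-specific layer: the same decision maps to different actuation."""
    def execute(self, decision: str) -> None:
        if decision == "avoid_obstacle":
            print("Car: steering within the lane to drive around the obstacle")

# The same perception and planning code drives either controller.
perception, planner = Perception(), Planner()
for controller in (GliderController(), CarController()):
    state = perception.build_world_model({"obstacles": ["tower"]})
    controller.execute(planner.decide(state))
```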

For example, let’s say a glider and a self-driving car are both traveling along the same path. Both perceive exactly the same environment in basically the same way, and both understand that they must avoid any solid objects in their path. Suddenly, the two vehicles see a large tower blocking most of the road. Both know they must get around the tower in order to proceed.

Here’s where the two vehicles diverge. The glider may choose to fly around the tower, fly above it if it’s able to, or use the moment to leave the path entirely and still reach its destination. The car, which can’t fly, instead must edge along the side of the road to get past the tower, and it basically has to stick to the road for the rest of the journey.

That’s just one of countless scenarios in which an A.I. system will act differently depending on the machine it’s operating. And for that A.I. to be fully optimized to complete its tasks safely and effectively, it must be robustly trained on each platform.

According to Yi Fang, an assistant professor in the department of Electrical and Computer Engineering at NYU Abu Dhabi and NYU Tandon School of Engineering, and director of the NYU Multimedia and Visual Computing Lab, “We can apply the driving experience of a car to a minivan, or even a bus once the difference in their size can be carefully dealt with. But,” he tells Inverse, “we can’t apply driving skills to a glider, or vice-versa, since they operate in very different states, with very different manners. The glider does not, for example, care about street view, while cars can’t control movement up or down.”

“Many people think of A.I. as one single entity that can be sprinkled onto all sorts of things,” says Stone. “And that’s just not true. The algorithms developed to deal with decision making and perception can be generally applied,” but when it comes to actually executing actions, algorithms must be specifically tailored.

Still, parts can be shifted around without much difficulty. For example, “the training algorithm itself can remain the same and be applied to different vehicles,” says Fang.
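
Fang’s point can also be sketched in code. In the hypothetical example below, one generic trial-and-error training loop is reused unchanged; only the toy environments, which stand in for each vehicle’s dynamics and available actions, differ.

```python
# Hypothetical illustration: one training loop, reused across vehicles
# whose environments differ only in their available actions and dynamics.
import random

def train(environment, episodes: int = 100) -> dict:
    """Generic trial-and-error loop; nothing here is glider- or car-specific."""
    policy = {}  # maps observed states to the best action found so far
    for _ in range(episodes):
        state = environment.reset()
        done = False
        while not done:
            action = policy.get(state, random.choice(environment.actions))
            next_state, reward, done = environment.step(action)
            if reward > 0:
                policy[state] = action  # remember actions that worked
            state = next_state
    return policy

class ToyGliderEnv:
    """Vehicle-specific piece: a glider can bank or climb."""
    actions = ["bank_left", "bank_right", "climb"]
    def reset(self): return "start"
    def step(self, action):
        return "done", (1.0 if action == "climb" else 0.0), True

class ToyCarEnv:
    """Vehicle-specific piece: a car can only steer or brake."""
    actions = ["steer_left", "steer_right", "brake"]
    def reset(self): return "start"
    def step(self, action):
        return "done", (1.0 if action == "brake" else 0.0), True

# The same train() function is applied to both platforms unchanged.
glider_policy = train(ToyGliderEnv())
car_policy = train(ToyCarEnv())
```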

Microsoft’s sailplane work could be an incredibly useful way to get autonomous cars trained two-thirds of the way. It’s the remaining third for which on-the-ground testing has no substitute.
