Neural networks — computing systems loosely modeled on the human brain — let scientists and engineers carry out analysis that would take humans ages. They can pore over endless tables of data and spot discrepancies in images that would go unnoticed by people.
They do have one drawback, though: The best neural nets in the game use an unbelievable amount of energy to do their job.
“Some years ago IBM tried to simulate the brain activity of a cat in a supercomputer and they ended up consuming megawatts of power,” Purdue University researcher Abhronil Sengupta tells Inverse. “The biological human brain consumes nowhere near that much. This is not a direct one-to-one comparison [to a neural network], but it should give you an estimate of how power-hungry computing systems are.”
Sengupta and a team of computer scientists at Purdue University and the Institute of Electrical and Electronics Engineers (IEEE) came up with a way to get neural networks to consume way less energy while still doing a kick-ass job. A paper they have posted on the preprint site arXiv explains how they took inspiration from the human brain and built a neural net that consumes roughly one-eleventh the energy of traditional systems.
Their approach makes use of spiking neural networks, or SNNs. Unlike conventional artificial neural networks, these computational systems emulate biological neurons much more closely.
Standard neural nets are made up of thousands of nodes used to make decisions and judgments about the data being presented to them. The output from these nodes depends only on what is currently being presented, while an SNN's output depends on previous stimuli as well. Nodes in an SNN fire only when the stimulus they've accumulated crosses a certain threshold. So instead of constantly passing data to other nodes, SNN nodes only pass on information when they have to.
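That threshold-and-fire behavior can be sketched with a generic leaky integrate-and-fire neuron. To be clear, this is an illustrative toy, not the model from the paper, and all of the constants here are arbitrary choices for the demo:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate stimulus over time; emit a spike (1) only when the
    membrane potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x  # leak old charge, add new input
        if potential >= threshold:
            spikes.append(1)   # fire: pass information downstream
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)   # stay silent: no signaling, no energy spent
    return spikes

print(lif_neuron([0.3, 0.3, 0.6, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```

Note that the neuron stays quiet on most time steps — that sparsity is where the energy savings come from.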
This normally comes at a giant energy cost because most of these systems are made using what's known as complementary metal-oxide-semiconductor technology, or CMOS. That tech makes up all the chips in your laptop and has been used as the building blocks for neural networks. For their study, the group of researchers ditched CMOS tech and built an SNN made completely out of memristors.
Short for "memory resistors," memristors have an electrical resistance that depends on how much electric charge has flowed through them in the past. So unlike CMOS tech, a memristor is able to "remember" what passed through it before, which is exactly what nodes in SNNs need to do.
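A toy model makes that "memory" property concrete: the device's resistance is a function of the total charge that has ever passed through it. The linear interpolation below is a simplified illustration, not the actual devices used in the study, and the resistance values are made-up placeholders:

```python
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0, q_max=1.0):
        self.r_on = r_on      # resistance when fully "on" (ohms)
        self.r_off = r_off    # resistance when fully "off" (ohms)
        self.q_max = q_max    # charge needed to switch fully on (coulombs)
        self.charge = 0.0     # running total of charge passed through

    def apply_charge(self, q):
        # The device "remembers" every pulse by accumulating charge,
        # clamped between empty and fully switched.
        self.charge = min(max(self.charge + q, 0.0), self.q_max)

    @property
    def resistance(self):
        # Resistance slides between r_off and r_on as charge accumulates.
        frac = self.charge / self.q_max
        return self.r_off + (self.r_on - self.r_off) * frac

m = Memristor()
print(m.resistance)   # 16000.0 — no history, fully "off"
m.apply_charge(0.5)
print(m.resistance)   # 8050.0 — past charge has lowered the resistance
```

Unlike a CMOS circuit, which needs extra components to hold state, the state here lives in the device itself.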
The results of the study demonstrated that memristors mimic the biological neuron pretty well. They communicate with each other using spikes, or short bursts of energy, as opposed to a constant flow of power. The memristor-based SNN was slightly less accurate at image classification than its CMOS counterparts, but it used only a fraction of the power standard neural nets would.
Before this study, SNNs were the closest thing to an artificial human brain we had, but the huge amount of power they took to run canceled out some of their benefits. If other scientists are able to replicate these power-saving neural networks, it could allow them to do more with less energy and move them closer to understanding how to replicate the biological brain.