Add this one to your list of artificial intelligence (AI) terms: neuromorphic chips. They're described as chips that model the human brain, and they are what Stanford University professor Kwabena Boahen and other researchers are working on, according to a recent report by EE Times' Rick Merritt.
The promise from these AI researchers is that neuromorphic chips will handle orders of magnitude more computation than today's processors while consuming a fraction of the power. A tall order indeed.
But don't shrug that claim off so easily. Merritt reports that Boahen's latest chip, dubbed "Braindrop," is said to beat Nvidia's Tesla GPUs in energy efficiency. Braindrop also outpaces similar processors from other academic groups, Merritt writes.
Moreover, the EE Times story emphasizes that the Stanford AI researcher is focused on obtaining funding for a next-generation effort "that could do even better, probably using ferroelectric FETs at Globalfoundries."
Still, there is a missing link: a crucial aspect of how the brain works. The researchers are on firmer ground elsewhere; they believe they understand the analog process the brain uses for computing and, according to Merritt's article, the spiking neural network technique it uses to communicate efficiently among neurons.
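To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a textbook model often used in spiking neural network research. The parameter values and the constant input drive are illustrative assumptions on my part, not anything taken from Braindrop or the EE Times article.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron (illustrative only).

    The membrane potential leaks toward v_rest, integrates the input
    current, and emits a spike (a 1) whenever it crosses v_thresh.
    Communication happens only at spike times, which is the property
    that makes spiking schemes attractive for low-power hardware.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration of the input current.
        v += (-(v - v_rest) + i_t) * (dt / tau)
        if v >= v_thresh:
            spikes.append(1)      # fire a spike
            v = v_reset           # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant drive produces a regular spike train.
drive = np.full(1000, 1.5)        # one second of input at dt = 1 ms
spike_train = lif_neuron(drive)
print("spikes emitted:", spike_train.sum())
```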
What's still throwing them off is how the brain learns. That is a fundamental piece of the algorithm, and they don't have it nailed down yet. These researchers have high hopes and are following certain clues, but Merritt points out that they lack a brain-based counterpart to so-called "back-propagation," also known simply as "backprop."
Merritt describes backprop as the heart of the training process. “It’s painfully slow and requires banks of expensive CPUs or GPUs with tons of memory working offline, but it is delivering stellar results on a wide range of pattern recognition problems,” he reports.
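For readers unfamiliar with the term, here is a minimal, self-contained sketch of what backprop does, training a tiny two-layer network on the classic XOR problem in plain NumPy. The network size, learning rate, iteration count, and toy data are my own illustrative choices, not anything described in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, a classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny network: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    delta_out = (out - y) * out * (1 - out)        # error at the output layer
    delta_hid = (delta_out @ W2.T) * h * (1 - h)   # error pushed back to the hidden layer

    # Gradient-descent updates derived from those propagated errors.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(out, 2))   # typically converges toward [[0], [1], [1], [0]]
```

Even on this toy problem, the pattern Merritt describes is visible: learning happens through many repeated passes over the data, with gradients computed and weights updated offline.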
But here again the researchers hit a problem, because backprop, and deep learning more generally, are artificial, the article asserts. Backprop does not use neurons or techniques modeled on the brain, which crunches through supercomputer-like tasks on the equivalent of a 35-watt power source.
Merritt reports that Boahen remains highly optimistic. "There's a huge opportunity in this space," he told Merritt. He added that a considerable number of applications are poorly served by deep neural networks running in the cloud, where batched requests create latency.
Boahen cited bridges as one example. He told Merritt that neuromorphic chips could monitor and analyze vibrations on bridges in real time using just a few microwatts from an energy harvester. Communication would occur only when a human needs to take action.
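A rough sketch of that event-driven pattern appears below: the sensor node processes every sample locally and uses its radio only when something warrants human attention. The threshold value, the sensor readings, and the alert function are purely hypothetical placeholders, not a real device API.

```python
# Hypothetical sketch of event-driven monitoring: analyze every sample
# locally, transmit only on actionable events. All names and numbers here
# are illustrative placeholders.
ALERT_THRESHOLD = 5.0  # arbitrary vibration magnitude chosen for this sketch

def send_alert(magnitude: float) -> None:
    # Stand-in for the rare, power-hungry radio transmission.
    print(f"ALERT: vibration magnitude {magnitude:.1f} exceeds threshold")

def monitor(vibration_samples) -> None:
    for magnitude in vibration_samples:
        # Local, low-power analysis happens on every sample...
        if magnitude > ALERT_THRESHOLD:
            # ...but communication happens only when a human should act.
            send_alert(magnitude)

monitor([0.2, 0.3, 6.1, 0.4])
```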
"We should think about how we can give everything, not just cloud services, a nervous system," Boahen is quoted as saying in the EE Times article.