neural algorithms on neural hardware —

Brains scale better than CPUs. So Intel is building brains

The new Pohoiki Beach builds on the 2017 success of Intel's Loihi NPU.

This is a picture of an Intel Nahuku board, which can contain 8 to 32 Loihi neuromorphic processing units, interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips.
Intel Labs

Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming increasingly prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field-programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.

Intel's Jon Tse demonstrates teaching a single Loihi chip to identify new objects in just a few seconds each.

Where traditional computing works by running numbers through an optimized pipeline, neuromorphic hardware performs calculations using artificial "neurons" that communicate with each other. This is a workflow that's highly specialized for specific applications, much like the natural neurons it mimics in function—so you likely won't replace conventional computers with Pohoiki Beach systems or its descendants, for the same reasons you wouldn't replace a desktop calculator with a human mathematics major.
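The "artificial neurons" described above are spiking models: each unit accumulates input over time, leaks some of its stored potential, and fires a discrete pulse when a threshold is crossed. A minimal leaky integrate-and-fire sketch in Python illustrates the idea (the parameter values are illustrative, not Loihi's actual constants):

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the class of
# spiking model that neuromorphic chips like Loihi implement in silicon.
# Threshold and leak values here are made up for illustration.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input each time step, leak a fraction of the stored
    potential, and emit a spike (1) whenever the threshold is crossed."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # fire
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires,
# producing a regular spike train: 5 spikes over 20 steps here.
train = simulate_lif([0.3] * 20)
```

Information is carried in the timing and rate of these spikes rather than in clocked arithmetic, which is why the hardware is so specialized and so power-efficient for brain-like workloads.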

However, neuromorphic hardware is proving far more efficient than conventional processors or GPUs at the kinds of tasks organic brains excel at. Visual object recognition is perhaps the most widely realized example, but others include playing foosball, adding kinesthetic intelligence to prosthetic limbs, and even interpreting skin touch much as a human or animal would.


Loihi, the chip Pohoiki Beach is built from, contains 130,000 neuron analogs—in hardware terms, roughly half the neural capacity of a fruit fly. Pohoiki Beach scales that up to 8 million neurons—about the neural capacity of a zebrafish. But what's perhaps more interesting than the new system's raw computational power is how well it scales.
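The neuron counts are easy to sanity-check: 64 chips at 130,000 neuron analogs each lands right around the quoted figure. A quick back-of-envelope check in Python (chip and neuron counts are from the article):

```python
# Sanity check on the article's scaling figures: 64 Loihi chips,
# each with 130,000 neuron analogs.
neurons_per_chip = 130_000
chips = 64

total_neurons = neurons_per_chip * chips
print(total_neurons)  # 8,320,000 — roughly the "8 million neurons" quoted
```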

With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware. Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.
Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo
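To put those percentages in perspective, a back-of-envelope comparison helps: at 50 times the network size, a 30 percent power increase versus a 500 percent increase means Loihi's efficiency advantage widens further. This Python sketch uses normalized units; the 30 and 500 percent figures come from the quote above, and everything else is illustrative:

```python
# Back-of-envelope reading of Eliasmith's scaling claim: grow the workload
# 50x and compare how total power grows on each platform. Power is
# normalized to the small-network baseline (hypothetical units).
scale = 50
loihi_power = 1.0 * 1.30      # "only 30 percent more power" at 50x
iot_power = 1.0 * (1 + 5.00)  # "500 percent more power" at 50x

# Power per unit of workload falls sharply for Loihi as the network grows.
loihi_power_per_unit = loihi_power / scale
iot_power_per_unit = iot_power / scale

# At this scale, Loihi's power advantage over the IoT hardware is ~4.6x —
# on top of the 5x advantage quoted at the original network size.
advantage = iot_power / loihi_power
```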

Pohoiki Beach appears to be step two of Intel's process-architecture-optimization development model. Step three, a larger integration of Loihi chips to be called Pohoiki Springs, is scheduled to debut later this year. Neuromorphic design is still in a research phase, but this and similar projects from competitors such as IBM and Samsung should break ground for eventual commoditization and commercial use.

