May 07, 2015 by Bob Yirka
Original link: http://phys.org/news/2015-05-neural-network-chip-built-memristors.html#inlRlv
(Phys.org)—A team of researchers working at the University of California, Santa Barbara (and one from Stony Brook University) has, for the first time, created a neural-network chip built using only memristors. In their paper published in the journal Nature, the team describes how they built the chip and what it can do.
Memristors may sound like something from a sci-fi movie, but they actually exist: they are analog electronic memory devices whose behavior mimics that of biological neurons and synapses. Some believe that human consciousness is, in essence, nothing more than an advanced form of memory retention and processing, and that it is analog, unlike computers, which are of course digital. The idea of the memristor was first proposed by University of California, Berkeley, professor Leon Chua back in 1971, but it was not until 2008 that a team at Hewlett-Packard first built one. Since then, a great deal of research has gone into the technology, but until now, no one had built a neural-network chip based exclusively on memristors.
Until now, most neural networks have been software based. Google, Facebook and IBM, for example, are all working on computer systems running such learning networks, mostly meant to pick faces out of a crowd or to answer questions phrased in natural language. While the gains in such technology have been obvious, the limiting factor is the hardware: as neural networks grow in size and complexity, they begin to tax the abilities of even the fastest computers. The next step, most in the field believe, is to replace transistors with memristors, each of which is able to learn on its own, in ways similar to how neurons in the brain learn when presented with something new. Putting them on a chip would, of course, reduce the overhead needed to run such a network.
The new chip, the team reports, was created using transistor-free metal-oxide memristor crossbars and represents a basic neural network able to perform just one task: learning and recognizing patterns in very simple 3 × 3-pixel black-and-white images. The experimental chip, they add, is an important step towards the creation of larger neural networks that tap the real power of memristors. It also raises the possibility of building computers in lock-step with advances in research into how our neurons work at their most basic level.
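To make the crossbar idea concrete, here is a minimal sketch (in Python, not the authors' code) of the operation a memristor crossbar performs in hardware: input voltages are applied to the rows, each memristor's conductance acts as a synaptic weight, and the current collected on each column is the corresponding weighted sum, by Ohm's law and Kirchhoff's current law. The function name, array sizes and conductance values below are illustrative assumptions.

```python
# Sketch of an ideal memristor crossbar computing weighted sums.
# Each column current is I_j = sum_i V_i * G[i][j] (Ohm's law per device,
# Kirchhoff's current law at each column). Values below are hypothetical.

def crossbar_output_currents(voltages, conductances):
    """Return the column currents of an ideal crossbar.

    voltages     -- list of row input voltages V_i (volts)
    conductances -- matrix G[i][j] of memristor conductances (siemens)
    """
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for j in range(n_cols):
        for i in range(n_rows):
            currents[j] += voltages[i] * conductances[i][j]  # I = V * G per device
    return currents

# Example: a 3-input, 2-output crossbar with illustrative conductance values.
V = [0.1, 0.0, 0.1]                      # pixel values encoded as read voltages
G = [[1e-4, 2e-4],
     [3e-4, 1e-4],
     [2e-4, 5e-5]]
print(crossbar_output_currents(V, G))    # column currents in amperes
```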
More information: Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441
Abstract
Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 10¹⁴ synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current–voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks.
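As a rough software analogue of the network described in the abstract (a sketch under stated assumptions, not the authors' hardware training procedure), the snippet below implements a single-layer perceptron that classifies 3 × 3 black/white images into three classes and trains it with a coarse-grain form of the delta rule, in which weights are nudged by a fixed step in the direction given by the sign of the error, loosely mirroring fixed-amplitude programming pulses. The example patterns, step size and stopping threshold are hypothetical; the paper's actual images represent letters.

```python
# Software sketch: single-layer perceptron with a coarse-grain delta rule,
# classifying 3x3 black/white images into three classes.

import random

N_INPUTS = 9 + 1      # 3x3 pixels plus a bias input
N_CLASSES = 3
STEP = 0.05           # fixed weight increment ("coarse-grain" update)

def outputs(weights, x):
    """Weighted sums for each output neuron (analogue of crossbar column currents)."""
    return [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]

def train(images, labels, epochs=100):
    weights = [[random.uniform(-0.1, 0.1) for _ in range(N_INPUTS)]
               for _ in range(N_CLASSES)]
    for _ in range(epochs):
        for x, label in zip(images, labels):
            x = x + [1.0]                      # append bias input
            target = [1.0 if c == label else -1.0 for c in range(N_CLASSES)]
            y = outputs(weights, x)
            for c in range(N_CLASSES):
                err = target[c] - y[c]
                if abs(err) > 0.1:             # only the sign of the error is used
                    for i in range(N_INPUTS):
                        weights[c][i] += STEP * (1 if err > 0 else -1) * x[i]
    return weights

def classify(weights, image):
    y = outputs(weights, image + [1.0])
    return y.index(max(y))

# Hypothetical 3x3 patterns (1 = black pixel, -1 = white pixel), one per class.
images = [
    [1, 1, 1, -1, -1, -1, -1, -1, -1],   # top row filled
    [-1, -1, -1, 1, 1, 1, -1, -1, -1],   # middle row filled
    [-1, -1, -1, -1, -1, -1, 1, 1, 1],   # bottom row filled
]
labels = [0, 1, 2]

w = train(images, labels)
print([classify(w, img) for img in images])   # expect [0, 1, 2] after training
```

In the chip itself the weighted sums are computed by the crossbar and the weight adjustments correspond to conductance changes programmed into individual memristors in situ; the loop above mirrors only the logic of the training rule, not its physics.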