Selected from arXiv
Compiled by Machine Heart
Contributors: Jane W, Wu Pan
Neuromorphic computing is considered an important direction for future artificial intelligence computing. Recently, researchers from Oak Ridge National Laboratory and the University of Tennessee jointly published an 88-page survey paper that comprehensively reviews the development of neuromorphic computing over the past 35 years and looks ahead to future developments in this field. Machine Heart has summarized and compiled the main parts of this survey paper. For the original paper, please visit: https://arxiv.org/abs/1705.06963
Neuromorphic computing refers to brain-inspired computers, devices, and models that contrast sharply with the conventional von Neumann computer architecture. This biomimetic approach creates highly interconnected synthetic neurons and synapses that can be used both for theoretical modeling in neuroscience and for solving challenging machine learning problems. The promise of this technology is to create systems with learning and adaptation capabilities similar to the brain, but many technical challenges remain: building accurate neural models of the brain, finding materials and technologies to construct devices that support those models, developing programming frameworks that enable such systems to learn automatically, and creating applications with brain-like functionality. The survey traces the origins and development of neuromorphic computing research. It first reviews the motivations and driving factors behind neuromorphic computing over the past 35 years, then maps out the main scope of the field, defined here as neuromorphic models, algorithms and learning methods, hardware and devices, supporting systems, and applications. Finally, it summarizes the main research topics that need to be addressed in the coming years as an outlook for neuromorphic computing. The goal of this work is to provide a detailed review of the development of neuromorphic computing research and to inspire further work by pointing out new research needs.
I. Introduction
Figure 1. Research areas related to neuromorphic computing and their correlations. Involves multiple fields including biology, computation, devices, and materials.
II. Motivation
Figure 2. The number of papers on neuromorphic and neural network hardware over time.
Figure 3. The ten different motivations for developing neuromorphic systems, and the percentage change of each motivation in papers over time. From left to right: real-time performance, parallelism, von Neumann bottleneck, scalability, low power consumption, package size, fault tolerance, faster speed, online learning, neuroscience.
III. Models
One of the key issues in neuromorphic computing is which neural network model to use. The neural network model determines the components that make up the network, how those components operate, and how they interact with each other. For example, common components of artificial neural network models inspired by biological neural networks are neurons and synapses. When defining a neural network model, it is also necessary to define a model for each component (e.g., a neuron model and a synapse model); each component's model determines how that component operates.
How should the right model be chosen? In some cases, the choice may be determined by the application. For example, if the purpose of the device is to simulate the biological brain for neuroscience research at speeds beyond what a traditional von Neumann architecture allows, then a biologically plausible and/or biologically inspired model is necessary. If the application requires a high-accuracy image recognition task, then a neuromorphic system implementing convolutional neural networks may be the best choice. The model itself can also be shaped or constrained by the characteristics and/or limitations of a particular device or material. For example, memristor-based systems (discussed further in Section V-B1) have characteristics that naturally support spike-timing-dependent plasticity (a learning mechanism discussed further in Section IV), which is best suited to spiking neural network models. In many other cases, the right model, or the right level of model complexity, is not entirely clear.
A wide variety of models have been implemented in neuromorphic or neural network hardware systems. These models range from the biologically plausible to the computation-driven; the latter are inspired more by artificial neural network models than by the biological brain. This section discusses the different neuron models, synapse models, and network models used in neuromorphic systems and lists important papers for each type of model.
A. Neuron Models
Figure 4 provides an overview of the types of neuron models implemented in hardware. Neuron models are divided into five categories:
- Biologically-plausible: models that directly simulate the behavior of biological neural systems.
- Biologically-inspired: models that attempt to replicate the behavior of biological neural systems, but not necessarily in a biologically plausible way.
- Neuron + other: neuron models that include additional biologically inspired components not found in other neuromorphic models (such as axons, dendrites, or glial cells).
- Integrate-and-fire: a simpler class of biologically inspired spiking neuron models.
- McCulloch-Pitts: neuron models derived from the McCulloch-Pitts neuron used in most artificial neural network papers. For this model, the output of neuron j follows the equation:

y_j = f( Σ_{i=1}^{N} w_{i,j} · x_i )

where y_j is the output value, f is the activation function, N is the number of inputs to neuron j, w_{i,j} is the weight of the synapse from neuron i to neuron j, and x_i is the output value of neuron i.
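As a minimal sketch (not code from the survey), the McCulloch-Pitts-style neuron above can be written directly from the equation; the function name and default activation are illustrative choices:

```python
import math

def mcculloch_pitts_output(weights, inputs, f=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Output of neuron j: y_j = f(sum_i w_ij * x_i).

    weights: w_ij for each input neuron i; inputs: the x_i values.
    f defaults to a logistic activation; the classic 1943 model uses a step function.
    """
    s = sum(w * x for w, x in zip(weights, inputs))
    return f(s)

# Step activation with threshold theta, as in the original model:
step = lambda s, theta=0.5: 1 if s >= theta else 0
y = mcculloch_pitts_output([0.4, 0.6], [1, 1], f=step)  # 0.4 + 0.6 = 1.0 >= 0.5 -> 1
```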
Figure 4. Hierarchical structure of neuron models implemented in hardware. The size of the box corresponds to the number of implementations of that model, and the color of the box corresponds to the ‘series’ of neuron models, with the series name labeled above or below the box.
Figure 5. Qualitative comparison of neuron models in terms of biological inspiration and model complexity.
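To make the complexity comparison concrete, a leaky integrate-and-fire neuron sits near the simple end of the spiking-model spectrum. The following is an illustrative sketch (parameter values and function name are assumptions, not from the survey):

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=10.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential v decays toward v_rest,
    integrates input current, and emits a spike (then resets) at threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant suprathreshold input drives the neuron to spike periodically.
spike_times = simulate_lif([2.0] * 100)
```

With these parameters the potential relaxes toward 2.0, crosses threshold every few steps, and resets, producing a regular spike train.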
B. Synapse Models
Just as some neuromorphic work focuses specifically on neuron models, those models occasionally include synapse implementations as well; other synapse models in neuromorphic systems are designed independently of any particular neuron model. Synapse models can be divided into two categories: biologically inspired synapse implementations, including synapses for spike-based systems, and synapse implementations for traditional artificial neural networks (e.g., feedforward neural networks). Notably, synapses are often the most abundant elements in neuromorphic systems, or the elements that consume the most components on a given chip. For many hardware implementations, especially those developing novel materials for neuromorphic applications, the focus is on optimizing the synapse implementation, so synapse models tend to be relatively simple unless they explicitly attempt to reproduce biological behavior. For more complex synapse models, a common approach is to incorporate plasticity mechanisms, which cause the strength or weight of a synapse to change over time. Plasticity mechanisms have been found to be related to learning in the biological brain.
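As an example of such a plasticity mechanism, a pair-based spike-timing-dependent plasticity (STDP) rule updates a synaptic weight according to the relative timing of pre- and postsynaptic spikes. This is an illustrative sketch of the general rule; the function name and constants are assumptions, not taken from the survey:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a single pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the synapse is potentiated; if it follows, the synapse is depressed.
    The magnitude decays exponentially with the timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation (LTP)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # depression (LTD)
    return 0.0

# Pre fires 5 ms before post -> weight increases; the reverse decreases it.
dw_ltp = stdp_dw(t_pre=10.0, t_post=15.0)
dw_ltd = stdp_dw(t_pre=15.0, t_post=10.0)
```

This kind of local, timing-driven update is what makes STDP a natural fit for memristive hardware, where conductance changes can depend on the relative timing of voltage pulses.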
C. Network Models
Network models define how different neurons and synapses are interconnected and how they interact. As the previous sections show, a wide variety of neural network models appear in neuromorphic systems.
Figure 6. Different network topologies that may be required for neuromorphic implementations. Determining the level of connectivity required for neuromorphic implementations and then finding appropriate hardware for that level of connectivity is often an important task.
Figure 7. Breakdown of network models in neuromorphic implementations, grouped by overall type and the number of related papers.
D. Summary and Discussion
IV. Algorithms and Learning
Many of the open issues in neuromorphic systems center on algorithms. The choice of neuron, synapse, and network model influences which algorithms can be used, since certain algorithms depend on specific network topologies, neuron models, or other model characteristics. A second issue is whether training or learning should be performed on-chip, or whether the network should be trained off-chip and then transferred onto the neuromorphic system. A third issue is whether algorithms should rely on on-line, unsupervised learning (in which case on-chip training is required), whether off-line, supervised methods suffice, or whether a combination of the two is needed. In the post-Moore's-law era, a key reason neuromorphic systems are a popular complementary architecture is their potential for on-line learning; yet even for mature neuromorphic systems, developing algorithms to program the hardware, whether off-line or on-line, remains challenging. This section focuses primarily on on-chip algorithms, chip-in-the-loop algorithms, and algorithms aimed directly at hardware implementation.
Figure 8. Changes in the model of neuromorphic implementations (number of papers published annually) over time.
Figure 9. Overview of on-chip training/learning algorithms. The size of the box corresponds to the number of papers in that category.
A. Supervised Learning
B. Unsupervised Learning
C. Summary and Discussion
V. Hardware
Figure 10. Hardware implementations of neuromorphic computing. These implementations are relatively basic hardware implementations and do not include more exotic device components discussed in Section V-B.
A. High-level
B. Device-level components
C. Materials of neuromorphic systems
D. Summary and Discussion
VI. Supporting Systems
VII. Applications
Figure 13. Breakdown of applications developed using neuromorphic systems. The size of the box corresponds to the number of applications developed using that neuromorphic system.
VIII. Discussion: The Future of Neuromorphic Computing
Figure 15. Challenges faced by major neuromorphic computing research in different fields.
IX. Summary
Acknowledgments and References (omitted)