*Opening the "Black Box" of Silicon Chip Design in Neuromorphic Computing DOI: http://dx.doi.org/10.5772/intechopen.83832*

Based on the connection pattern and the learning algorithm, ANN methodologies can be classified into various categories, as depicted in **Figure 2**.

The multilayer perceptron (MLP), a representative feedforward neural network (FNN), is composed of hidden layers connected by unidirectional links. The MLP has become the quintessential ANN model due to its ease of implementation [18]. However, the major design challenge of the MLP is that the runtime as well as the training and learning accuracy of the system are strongly affected by the number of neurons and hidden layers. As neural information processing has evolved toward much more sophisticated mixed-signal evaluation, the disadvantages of the MLP are exposed when such a network is deployed for temporal-spatial information processing tasks [19]. Recurrent neural networks (RNNs), which capture temporal-spatial characteristics within their hidden layers, closely mimic the working mechanism of biological neurons and synapses. However, their major design challenge is that all weights within the network need to be trained, which dramatically increases computational complexity. In the early 2000s, reservoir computing emerged as a paradigm that exploits the dynamic behavior of conventional RNNs while simplifying their training mechanism [20]. Within the reservoir layer, synaptic connections are formed by a layer of nonlinear neurons with fixed, untrained weights. In reservoir computing, the complexity of the training process is significantly reduced, since only the output weights need to be trained; thereby, higher computational efficiency can be achieved.
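The MLP structure described above can be illustrated with a minimal sketch: a single hidden layer of tanh neurons followed by a linear readout. The weights and layer sizes here are hypothetical toy values chosen only for demonstration, not taken from any system in this chapter.

```python
import math

def mlp_forward(x, w_hidden, w_out):
    """Single-hidden-layer perceptron: each hidden 'neuron' applies a
    tanh activation to a weighted sum of the inputs (the 'synapses'),
    and the output is a weighted sum of the hidden activations."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)))
              for w in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

# Toy 2-input, 3-hidden-unit network with hand-picked weights.
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
w_out = [1.0, -0.5, 0.25]
y = mlp_forward([1.0, 2.0], w_hidden, w_out)
```

In a trained MLP all of these weights would be adjusted by a learning algorithm such as backpropagation; the runtime and accuracy trade-off mentioned above comes from scaling the number of hidden units and layers.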

Conventional reservoir computing has been extensively developed over the past decade to simplify the training of RNNs and has proven its benefits across multifaceted applications [21–24]; however, the computational accuracy of the system still scales strongly with the number of neurons within the reservoir layer. Such enormous neuron counts significantly hinder hardware implementations of reservoir computing. In [25], it was shown that the computing architecture can exhibit rich dynamic behaviors during operation when delay is introduced into the system. Building on this embedded delay property, the training mechanism and the computing architecture of conventional reservoir computing have conceptually evolved into time-delay reservoir (TDR) computing [26]. In TDR computing, the reservoir layer is built from only one nonlinear neuron with a feedback loop. In this context, time-series input data can be processed through TDR computing by exploiting the feedback signal as a short-term memory; thereby, higher computational efficiency and accuracy can be achieved.
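The single-neuron-with-feedback idea can be sketched as follows: one nonlinear node reads the oldest value in a delay line, and the delay line's contents serve as the reservoir state on which a linear readout would be trained. This is a simplified illustration (it omits the input masking and time multiplexing used in practical TDR systems), and the parameters `delay`, `eta`, and `gamma` are illustrative assumptions.

```python
import math

def tdr_states(inputs, delay=5, eta=0.5, gamma=0.05):
    """Single nonlinear node with a delay-line feedback loop.
    The node's past outputs stored in the delay line act as
    'virtual neurons'; only a linear readout on these states
    would be trained, as in reservoir computing."""
    buffer = [0.0] * delay              # delay line (short-term memory)
    states = []
    for u in inputs:
        # Nonlinear node driven by the delayed feedback and the input.
        x = math.tanh(eta * buffer[0] + gamma * u)
        buffer = buffer[1:] + [x]       # shift the delay line
        states.append(list(buffer))     # snapshot of virtual-neuron states
    return states

# Drive the reservoir with a toy sinusoidal time series.
states = tdr_states([math.sin(0.2 * t) for t in range(50)])
```

Because only the readout weights over these states are trained, the costly full-network training of a conventional RNN is avoided, which is the efficiency gain the text describes.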

**Figure 2.** *Overview of artificial neural networks.*

*Bio-Inspired Technology*

**Figure 1.**
*General architecture of (a) the von Neumann computing system and (b) the neuromorphic computing system.*

As human beings, our brains are capable of analyzing and memorizing sophisticated information with only 20 *W* of energy consumption [4]. Neuromorphic computing, proposed by Dr. Carver Mead in the 1980s, has matured to provide intelligent systems able to mimic the biological processes of mammalian brains through highly parallelized computing architectures; such systems typically model the function of neural networks through very-large-scale integration (VLSI) circuits [5]. Major differences between the von Neumann computing architecture and the neuromorphic computing system are illustrated in **Figure 1**. Recently, artificial neural networks (ANNs) have demonstrated superior performance in many data-intensive applications, including image classification [6–8], handwritten digit recognition [9–11], speech recognition [12, 13], and others. For instance, *TrueNorth*, the neuromorphic chip fabricated by IBM in 2014, is capable of classifying multiple objects within a 240 × 240-pixel video input with merely 65 *mW* of energy consumption. Such a neuromorphic computing system is five orders of magnitude more energy efficient than a von Neumann computing system [14]. *Loihi*, the latest prototype of brain-inspired chip fabricated by Intel in 2017, consumes a mere 1/1000 of the power used by a classic computer [15].


In the endeavor to imitate the nervous system within mammalian brains, ANNs are built by employing electronic circuits to imitate biological neural networks [17]. In general, ANN methodologies adopt the biological behavior of neurons and synapses, the so-called hidden layer, in their architecture. The hidden layer is constituted by multiple "neurons" and "synapses", which carry activation functions that control the propagation of neuron signals.

Most recent hardware implementations of neuromorphic computing systems focus on digital computation because of its advantages in noise immunity [16]. However, real-time data are often recorded in analog format; thereby, power-hungry operations, such as analog-to-digital (A/D) and digital-to-analog (D/A) conversions, are needed to facilitate digital computation. As a result, digital computation incurs high power consumption and a large design area. In this chapter, an overview of ANNs will be discussed in Section 2. Section 3 introduces the spiking information processing technique through the temporal code with the leaky integrate-and-fire neuron. Our fabricated spiking neural network chip, along with its measurement results on chaotic behavior, will be demonstrated in Section 4, followed by the investigation of the 3D-IC implementation technique with memristive synapses in Section 5. Applications to chaotic time-series prediction and image recognition are illustrated in Section 6.
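The leaky integrate-and-fire neuron mentioned for Section 3 can be previewed with a minimal discrete-time sketch, assuming illustrative values for the time constant, threshold, and input current (none of these parameters come from the fabricated chip described later).

```python
def lif_neuron(currents, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire model: the membrane potential v leaks
    toward rest, integrates the input current, and emits a spike when
    it crosses the threshold v_th, after which it is reset."""
    v, spikes = 0.0, []
    for i in currents:
        v += dt * (-v / tau + i)   # leaky integration step
        if v >= v_th:
            spikes.append(True)
            v = v_reset            # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant supra-threshold drive produces a periodic spike train.
spikes = lif_neuron([0.08] * 50)
```

The temporal code referenced above encodes information in the timing of such spikes rather than in analog signal amplitudes, which is what makes the spiking approach attractive for low-power hardware.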


**2. Artificial neural networks**
