**3. Neuron design**

#### **3.1. Neuron models**

In the field of neuroscience, research on biological neurons has continued over the past decade [26–31]. As discussed in Section 2, a neuron consists of four major elements, namely, dendrites, soma, axon, and synapse. Within the nervous system, signals are collected and transmitted to the soma by dendrites. The soma serves as the central processing unit, where a nonlinear transformation is carried out. When the input signal exceeds the threshold level, an output signal is generated, a process known as firing. The output signal is then transmitted along the axon and on to other neurons through the synapse. In a biological neuron, signals take the form of a nerve impulse, namely, an action potential or spike [32].

When the signal from dendrites, also known as the stimulus, does not reach the critical threshold level, the membrane potential leaks out; otherwise, an action potential is generated. After the firing process takes place, the neuron goes through a refractory period, in which it is less likely to fire, and eventually resets to its initial state. This process is known as the firing and resting of a biological neuron, as illustrated in **Figure 6** [31]. Several well-known and representative neuron models are investigated, including the integrate-and-fire (IF) model [26], Fitzhugh-Nagumo (FN) model [28], Hodgkin-Huxley (HH) model [33], and leaky integrate-and-fire (LIF) model [29]. The simplified electronic circuit representations of these neuron models are demonstrated in **Figure 7**.

#### **3.2. Hodgkin-Huxley (HH) and Fitzhugh-Nagumo (FN) neuron model**

Compared to the data extracted from the IF neuron, the HH neuron is found to be biologically meaningful and realistic [34]. The primary goal of the HH neuron is to mimic the electrochemical information transmission of a biological neuron [27]. **Figure 7(c)** demonstrates the simplified electronic circuit model of the HH neuron. The dynamics of the firing potential are described by a fourth-order nonlinear differential equation, which could be simplified as

$$C_{m} \cdot \frac{dV_{m}}{dt} = I_{ex} - \sum g_{i}(h, m^{3}, n^{4}) \cdot I_{i}(E_{i}, V_{m}) \tag{1}$$
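To make Eq. (1) concrete, the sketch below integrates the HH equations with forward Euler. All parameter values (the squid-axon conductances, reversal potentials, and gating-rate expressions) are the textbook defaults and are assumptions here, not values taken from this chapter; the sodium, potassium, and leak currents take the gated form *gi*(*h*, *m*³, *n*⁴)·(*Vm* − *Ei*) summed in Eq. (1).

```python
import numpy as np

def hh_trace(i_ex=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of the Hodgkin-Huxley model, Eq. (1).
    Units: V in mV, t in ms, currents in uA/cm^2, C_m = 1 uF/cm^2."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32              # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        # voltage-dependent opening/closing rates of the gating variables
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        # ionic currents: the g_i(h, m^3, n^4) * (V_m - E_i) terms of Eq. (1)
        i_na = 120.0 * m**3 * h * (v - 50.0)
        i_k = 36.0 * n**4 * (v + 77.0)
        i_l = 0.3 * (v + 54.4)
        v += (i_ex - i_na - i_k - i_l) * dt          # C_m * dV/dt = I_ex - sum(I_i)
        m += (am * (1.0 - m) - bm * m) * dt
        h += (ah * (1.0 - h) - bh * h) * dt
        n += (an * (1.0 - n) - bn * n) * dt
        trace.append(v)
    return np.array(trace)

spike_train = hh_trace()
print(spike_train.max())   # action potentials overshoot 0 mV
```

With the assumed excitation of 10 µA/cm², the model fires tonically, which is the regime sketched in Figure 6.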


**Figure 6.** Action potential of a biological neuron.


The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System

http://dx.doi.org/10.5772/intechopen.78986


Thirdly, the human brain has a powerful unsupervised learning ability, which enables us to learn from our experiences. A well-known learning mechanism, named associative memory, associates different types of signals captured by various sensing organs [22] so that these signals become correlated. Based on the CNCA, a novel architecture, which we name the associative neuromorphic computing architecture (ANCA), is proposed. **Figure 5(c)** illustrates this architecture. In this architecture, original signals captured from the surrounding environment are processed in different regions. After that, the abstracted information is coupled together to construct an associative neural network. The simplified ANCA with two neurons and one synapse has been investigated [25].

30 Advances in Memristor Neural Networks – Modeling and Applications


**Figure 7.** Simplified neuron models of (a) integrate-and-fire, (b) Fitzhugh-Nagumo, (c) Hodgkin-Huxley, and (d) leaky integrate-and-fire.

where *gi* is the conductance parameter for the different ion channels (sodium Na, potassium K, etc.), and *Ii*(*Ei*, *Vm*) is the ion current with its controlling variables as functions of time [33]. Although the HH neuron closely mimics the biological behavior of neurons, its electronic circuit model is not widely used in hardware implementations due to its design complexity, whereas the FN neuron is considered a simplification of the HH neuron, as shown in **Figure 7(b)**. Its mathematical expression could be written as

$$\frac{dV_{m}}{dt} = V_{m} - \frac{V_{m}^{3}}{3} - w + I_{ex} \tag{2}$$

where *w* is the linear recovery variable. Although the FN neuron reduces the four-dimensional set of equations down to a two-dimensional one, the hardware implementation of the FN neuron is still excessively challenging due to the high circuit design complexity inherited from its highly nonlinear behavior.
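As a concrete sketch, Eq. (2) can be integrated numerically. The recovery dynamics are not given in this chapter; the snippet below pairs Eq. (2) with the commonly used linear recovery equation *dw*/*dt* = (*Vm* + *a* − *b*·*w*)/*τ* and the standard parameter values *a* = 0.7, *b* = 0.8, *τ* = 12.5, which are assumptions rather than values from the text.

```python
import numpy as np

def fitzhugh_nagumo(i_ex=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=200.0):
    """Forward-Euler integration of the FN neuron: Eq. (2) for the membrane
    potential plus the standard linear recovery equation for w."""
    steps = int(t_max / dt)
    v, w = -1.0, 1.0                      # initial membrane potential / recovery
    v_trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + i_ex    # Eq. (2)
        dw = (v + a - b * w) / tau        # linear recovery variable w
        v, w = v + dv * dt, w + dw * dt
        v_trace[k] = v
    return v_trace

trace = fitzhugh_nagumo()
print(trace.max(), trace.min())   # sustained relaxation oscillation
```

For the assumed excitation *Iex* = 0.5, the two-variable system settles onto a limit cycle, the FN analogue of repetitive firing.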

#### **3.3. Leaky integrate-and-fire (LIF) neuron model**

The LIF neuron model, as illustrated in **Figure 7(d)**, is constructed based on the traditional IF neuron. Its leakage property mimics the diffusion of ions that occurs through the membrane when equilibrium is not reached in the cell. The dynamics of the firing potential could be expressed as:

$$C_{m} \cdot \frac{dV_{m}}{dt} + I_{leak} = I_{ex} \tag{3}$$

where *Ileak* is the leakage current. Similar to the traditional IF neuron, the membrane potential is initially charged up by the excitation current. An action potential is generated once the membrane potential exceeds the threshold level; otherwise, all charges leak out. After the firing process takes place, the membrane capacitor in the LIF neuron is fully discharged to the resetting state. Hence, the LIF neuron possesses both firing and resting properties, which gives it an adequate resemblance to the biological neuron while remaining relatively easy to implement using analog electronic circuits.

Compared to other neuron models, the LIF neuron plays a major role in the neuron design due to its compact structure, robust performance, and adequate resemblance to the biological behavior of neurons. The simplified analog electronic circuit model of the LIF neuron is demonstrated in **Figure 8**.

In the analog electronic circuit model of the LIF neuron, there are several key parameters that need to be carefully designed; for instance, the excitation current *Iex*, the membrane capacitor *Cm*, the threshold level *Vth*, and the leakage current *Ileak*. In Eq. (3), the membrane potential is controlled by the excitation current and the leakage current. A simple resistor model is adopted to represent this relation; thus, Eq. (3) could be rewritten as

$$I_{ex} = \frac{V_{m}}{R_{leak}} + C_{m} \cdot \frac{dV_{m}}{dt} \tag{4}$$


**Figure 8.** Simplified analog electronic circuit model of the LIF neuron.

**Figure 9.** The diagram of the SIEN.

**Figure 10.** Simplified design scheme of the SIEN.

where *Rleak* defines the weighted resistance of the leakage current. By solving Eq. (4), the expression of the membrane potential could be determined as

$$V_{m} = I_{ex} \cdot R_{leak} \cdot \left(1 - e^{-\frac{t}{R_{leak} \cdot C_{m}}}\right) \tag{5}$$
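A behavioural sketch of Eqs. (3)–(5): the membrane capacitor integrates the excitation current against the leak, fires when *Vth* is crossed, and is fully discharged afterwards. All component values below are illustrative assumptions, not design values from the chapter.

```python
import numpy as np

def lif_neuron(i_ex=2e-9, c_m=1e-12, r_leak=1e8, v_th=0.1, dt=1e-6, t_max=5e-3):
    """Forward-Euler integration of Eq. (4), C_m*dV/dt = I_ex - V/R_leak,
    with firing when V crosses v_th followed by a full discharge to 0 V."""
    v, spikes, trace = 0.0, 0, []
    for _ in range(int(t_max / dt)):
        v += (i_ex - v / r_leak) / c_m * dt   # Eq. (4) rearranged for dV/dt
        if v >= v_th:                         # threshold crossing -> action potential
            spikes += 1
            v = 0.0                           # membrane capacitor fully discharged
        trace.append(v)
    return spikes, np.array(trace)

n_spikes, v_trace = lif_neuron()
print(n_spikes)   # the neuron fires repeatedly because I_ex * R_leak > V_th
```

Between spikes the trace follows the charging curve of Eq. (5); if *Iex*·*Rleak* were below *Vth*, the potential would saturate subthreshold and no spike would ever occur.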

#### **3.4. Signal intensity encoding neuron**

In order to model the input intensity-dependent firing characteristic of neurons [22, 35], the signal intensity encoding neuron (SIEN) is designed, as depicted in **Figure 9** [36].

In this design, the input current is transferred into a voltage signal by a transimpedance amplifier (TIA), such that the oscillating frequency of a current-starved voltage-controlled


oscillator (VCO) can be regulated. The oscillating rate of the VCO is highly dependent upon integrated input stimulus signals. The final stage of the SIEN is formed by the parallel structure of a resistor and a capacitor to model charging and discharging behaviors of biological neurons, as depicted in **Figure 10**, whereas simulation results of the spiking signal are plotted


**Figure 11.** Spiking signals with respect to various stimulus voltage levels.

in **Figure 11**. With higher input signal amplitudes, the firing rate increases correspondingly, which simulates the input intensity-dependent firing characteristic of neurons in real biological systems.
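The intensity-to-rate behaviour can be checked with a minimal rate-coding sketch. This is a stand-in LIF-style integrator, not a model of the actual TIA + current-starved VCO circuit, and every component value is an assumption:

```python
def firing_rate(i_ex, c_m=1e-12, r_leak=1e8, v_th=0.1, dt=1e-6, t_max=1e-2):
    """Spikes per second of a simple leaky integrator for a given excitation
    current -- a stand-in for the SIEN's input-intensity-to-rate encoding."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += (i_ex - v / r_leak) / c_m * dt
        if v >= v_th:                 # fire and reset, as in the LIF model
            spikes += 1
            v = 0.0
    return spikes / t_max

rates = [firing_rate(i) for i in (1.5e-9, 2e-9, 3e-9, 4e-9)]
print(rates)   # monotonically increasing firing rate with stimulus intensity
```

Stronger stimuli charge the membrane to threshold sooner, so the output spike rate grows monotonically with input amplitude, mirroring the trend plotted in Figure 11.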

#### **4. Memristor as synapse**

In the human brain, a synapse is defined as the structure connecting two neurons, as shown in **Figure 12**. When a presynaptic action potential (spiking signal) approaches the synapse, chemical neurotransmitter molecules are released into the synapse. The neurotransmitter diffuses across the synapse from the presynaptic neuron cell to the postsynaptic neuron cell. When the neurotransmitter arrives at the terminal of the postsynaptic cell, a spiking signal is stimulated. The magnitude of the stimulated spiking signal

**Figure 12.** The structure of the synapse [22].

at the postsynaptic cell is highly dependent on the amount of neurotransmitter received. A larger amount of neurotransmitter molecules stimulates a larger-magnitude spiking signal, and vice versa. In general, a large-magnitude spiking signal at the terminal of the presynaptic neuron stimulates the release of more neurotransmitter. However, with repeated stimuli in a short time (~hundreds of milliseconds), the amount of neurotransmitter released into the synapse from the presynaptic neuron reduces gradually, which results in a smaller-magnitude spiking signal in the postsynaptic neuron.

This phenomenon was investigated in Dr. Kandel's research on Aplysia [22]. In the experiments depicted in **Figure 13**, a stimulus was repeatedly applied to the Aplysia's sensory neurons. When the constant stimulus was applied to the sensory neuron multiple times (1, 2, 5, 10, 15), the magnitude of the spiking signal stimulated in the response neuron (L7G) decreased accordingly [22]. This indicates that the previous stimulus captured by the sensory neuron is somehow stored in the neural network system through modification of the connectivity strength between neurons. In Dr. Kandel's experiments, the neural network is relatively small, constructed from only two neurons. The connectivity strength of the synapse is defined as the weight. The weight value can be modified in two directions (strengthened or weakened) by both excitatory and inhibitory stimuli. This feature is called the plasticity of a synapse.
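Kandel's habituation result can be caricatured with a one-line short-term depression rule: each repeated presynaptic spike scales the weight by a constant factor. The depression factor 0.8 is an arbitrary illustrative choice, not a value fitted to the Aplysia data:

```python
def habituate(n_stimuli, w0=1.0, depression=0.8):
    """Toy short-term depression: the postsynaptic response tracks the
    synaptic weight, which shrinks after every repeated stimulus."""
    w, responses = w0, []
    for _ in range(n_stimuli):
        responses.append(w)   # response magnitude at the motor neuron
        w *= depression       # less transmitter available for the next spike
    return responses

r = habituate(15)
print(r[0], r[4], r[14])      # diminishing response, as in Figure 13
```

The geometric decay reproduces the qualitative trend of stimuli 1, 2, 5, 10, and 15 in Figure 13; a real model would also include recovery between widely spaced stimuli.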


In order to physically realize the biological plasticity of a synapse, several features need to be satisfied. Firstly, the device should have only two terminals, used for connecting the presynaptic and postsynaptic neurons, respectively. Secondly, the device should have a signal attenuation capability to mimic the plasticity of a synapse, and this capability should be reversible. All these features make the nanoscale two-terminal memristor, also known as resistive RAM (RRAM), an ideal candidate for the electronic synapse implementation. The resistance of the memristor is reversibly programmable with voltage pulse stimuli applied to its two terminals. When a voltage stimulus is applied, its resistance gradually changes between its low-resistance state (LRS) and high-resistance state (HRS). Typically, the memristor is constructed in the metal-insulator-metal (MIM) configuration, as illustrated in **Figure 14(a)**. The decrease of the memristor's resistance is due to the formation of a conductive filament in the insulator layer. Transmission electron microscopy (TEM) photos of conductive filaments are demonstrated in **Figure 14(b)**. This breakdown phenomenon of the insulator can be recovered by applying

**Figure 13.** A sample of five identical action potential numbers 1, 2, 5, 10, and 15 along with the corresponding motor response signals of diminishing strength recorded at the motor neuron (identified by L7G) (top) [37].

**Figure 14.** Illustration of the switching mechanism of a memristor: (a) switching process and (b) TEM images of the dynamic evolution of conductive filaments [38].


a reversed stimulus at the terminals, which consequently resets the memristor from its LRS to HRS. The physical mechanism of this reset behavior is the deconstruction of the conductive filament as illustrated in **Figure 14(b)**.
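The gradual, reversible LRS/HRS programming described above can be sketched as a bounded conductance update. The exponential-approach rule and all values (step size, conductance bounds) are illustrative assumptions, not a physical filament model:

```python
def apply_pulses(g, n_pulses, polarity, g_lrs=1e-3, g_hrs=1e-6, step=0.1):
    """Each voltage pulse moves the conductance a fraction of the way toward
    the bound: LRS for SET (filament formation), HRS for RESET (dissolution)."""
    target = g_lrs if polarity > 0 else g_hrs
    for _ in range(n_pulses):
        g += step * (target - g)   # gradual, bounded change -> analog weight
    return g

g = apply_pulses(1e-6, 20, +1)     # SET from HRS with 20 positive pulses
print(g)                           # approaches the LRS conductance (1e-3 S)
g = apply_pulses(g, 20, -1)        # RESET with 20 negative pulses
print(g)                           # relaxes back toward HRS
```

The key property for an electronic synapse is that the state change is incremental and bidirectional, so a pulse train can nudge the weight in either direction rather than toggling it between two fixed values.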

In general, the MIM structure of the memristor is fabricated massively in a 2D crossbar structure, as depicted in **Figure 15**. In this structure, memristors are sandwiched between two layers of nanowires. The area of a single cell is 4F<sup>2</sup>, where F is the minimum lithographic feature size dictated by the technology node.

In order to further enhance the device density, the 2D crossbar structure of the memristor can be extended vertically into 3D space. There are two types of 3D RRAM (memristor) structures that can be used as 3D synaptic arrays: horizontal RRAM (H-RRAM) and vertical RRAM (V-RRAM), which are shown, respectively, in **Figure 16**.

In both structures, the effective cell area is 4F<sup>2</sup>/n, where n is the number of stacked layers. The number of critical lithography masks for the H-RRAM structure increases linearly with

**Figure 15.** Two-dimensional crossbar structure of the memristor.


**Figure 16.** 3D RRAM integration structure: (a) horizontal 3D structure and (b) vertical 3D structure.

the increasing number of stacked layers, while the number of masks for V-RRAM is relatively independent of the stacking number. With an increasing number of stacked layers, V-RRAM becomes even more cost-effective [39, 40].
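The footprint arithmetic above is direct; for example, at an assumed F = 20 nm:

```python
def cell_area_nm2(f_nm, n_layers=1):
    """Crossbar cell footprint: 4F^2 in a 2D array, 4F^2/n for an
    n-layer 3D RRAM stack (H-RRAM or V-RRAM)."""
    return 4 * f_nm**2 / n_layers

print(cell_area_nm2(20))       # 2D crossbar at F = 20 nm -> 1600.0 nm^2
print(cell_area_nm2(20, 8))    # 8-layer 3D stack -> 200.0 nm^2 per cell
```

Stacking thus multiplies synaptic density by the layer count without shrinking the lithographic feature size.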

### **5. Memristive three-dimensional neuromorphic computing system**

Recently fabricated neuromorphic chips implement neurons and synapses using traditional 2D CMOS and memory technology. In a 2D placement methodology, a longer signal delivery distance is generally expected because the routing density increases linearly with the number of connections, which inevitably increases die area, power consumption, etc. [41].

To address these limitations of the state-of-the-art neuromorphic chip designs, a novel 3D neuromorphic architecture is proposed to combine 3D-integrated circuit (3D-IC) technology with the memristor as the electronic synapse. Applying 3D integration technology to neuromorphic chips permits vertical routing paths of reduced nanoscale dimension, subsequently diminishing critical path lengths. It also decreases power consumption and shrinks die areas with high-complexity, high-connectivity, and massively parallel signal processing capability.

The benefits of applying 3D integration technology to neuromorphic chip design can be summarized as follows:

**1.** address the 2D neuron routing congestion problem, thereby increasing interconnectivity and scalability of the NC network and reducing the critical-path lengths [42];

**2.** allow numerous 3D interconnections between hardware layers that offer high device interconnection density, low-power density, and broad channel bandwidth using fast and energy-efficient links;

**3.** provide a high-complexity, high-connectivity, and massively parallel-processing circuital system that can accommodate highly demanding computational tasks.
The diagram structure of the proposed 3D neuromorphic computing (3D-NC) architecture is shown in **Figure 17(c)**. The multiple layers of the neural network can be implemented


through this structure. **Figure 17(a)** illustrates multiple layers of the neural network structure, in which the decomposed two layers are marked in a red box. These two layers of the neural network can be implemented through 3D integration technology, which fabricates the memristor layer in the middle, between two neuron layers, as depicted in **Figure 17(b)**. Moreover, with a structure similar to that of **Figure 17(b)**, a large-scale neural network can be implemented by repeatedly extending the two-layer 3D structure in the horizontal direction, as demonstrated in **Figure 17(c)**.

In these structures, the electronic synaptic array implemented with memristors is not in a traditional crossbar structure (**Figure 18(a)**), which suffers from the sneaking path issue. A sneaking path is an undesired current path through adjacent memristor cells, marked by the white arrows in **Figure 18(a)**. In order to eliminate this issue, the horizontal nanowires, which are used for reading/writing access, are physically disconnected in the design. Meanwhile, reading/writing access ports are located on the upper and bottom layers. Without electrical connections between adjacent memristor cells, the sneaking path issue can be fundamentally addressed.
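The severity of the sneaking path can be estimated with a worst-case lumped model: in an unselected *n* × *n* crossbar, the cell being read is shunted by (*n* − 1)² parallel paths, each running through three neighbouring cells. The resistance values below are illustrative assumptions:

```python
def apparent_resistance(r_target, r_neighbor, n):
    """Worst-case read resistance of one cell in an n x n crossbar whose
    other cells all sit at r_neighbor (e.g., all in LRS)."""
    r_sneak = 3 * r_neighbor / (n - 1) ** 2   # (n-1)^2 three-cell sneak paths
    return r_target * r_sneak / (r_target + r_sneak)

r_hrs, r_lrs = 1e6, 1e3
# an HRS cell surrounded by LRS neighbors reads as only a few kOhm, not 1 MOhm
print(apparent_resistance(r_hrs, r_lrs, 2))
```

Even in a 2 × 2 array, the single sneak path makes a high-resistance cell indistinguishable from a low-resistance one, which is why disconnecting the horizontal nanowires (Figure 18(b)) matters. This sketch ignores line resistance and partial-bias schemes.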

Two 3D integration technologies have the potential for implementing the 3D-NC architecture in **Figure 17(c)**: TSV (through-silicon via)-based and monolithic-based 3D integration technologies. 3D integration with TSVs as vertical electrical connections has been studied for many years [19]. For TSV-based 3D integration technology, transistors are initially fabricated on separate wafers by traditional CMOS technologies. After that, the two wafers are bonded together. In general, the capacitance between TSVs is large, which can cause capacitive coupling issues in a high-speed circuit. However, TSVs can be used to implement the capacitance in neuron models, resulting in a further reduction of the chip design area [43–45]. Nevertheless, there are several technical challenges for the TSV-based 3D integration technology. Firstly, wafers need to be thinned to make the metal contact from TSVs for

the bonding process. During these thinning processes, many charges accumulate; these charges can potentially cause electrostatic discharge (ESD) issues that damage chips during bonding. Secondly, bonding the microscale TSVs requires extra effort to align them precisely. To overcome these challenges, another, more aggressive 3D integration technology has been proposed, called monolithic 3D integration. Unlike the TSV-based 3D technology, which uses separate fabrication processes, the monolithic 3D technology integrates different layers of devices on a single wafer, with nanoscale intertier vias serving as vertical connections. Due to the monolithic fabrication procedure, this 3D integration technology fundamentally eliminates the thinning and bonding processes. On the other hand, the main challenge for the monolithic 3D integration technology is the low-temperature fabrication constraint for the upper layers, since a high fabrication temperature in the upper layers would damage the lower-layer transistors previously fabricated. This low-temperature requirement restricts the traditional

The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System
http://dx.doi.org/10.5772/intechopen.78986

| 3D device | FinFET | Epi-like Si UTB | SOI-Si UTB | Poly-Si/Ge FinFET | NWFET | IGZO OSFET |
| --- | --- | --- | --- | --- | --- | --- |
| I<sub>on</sub>/I<sub>off</sub> | >10<sup>7</sup> | >5 × 10<sup>5</sup> | >5 × 10<sup>5</sup> | >10<sup>7</sup> | >10<sup>7</sup> | >10<sup>21</sup> |
| Thermal budget (°C) | <400 | <400 | <400 | <650 | <400 | <500 |

**Table 1.** The emerging transistors with low fabrication temperature [46].
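The TSV discussion in this section notes that the large parasitic capacitance of a TSV can be reused as the capacitance in neuron models. The sketch below estimates a TSV's oxide-liner capacitance with the standard coaxial-capacitor formula and then uses it as the membrane capacitance of a simple leaky integrate-and-fire neuron. All geometry and circuit values are assumed example numbers for illustration, not figures taken from this chapter.

```python
import math

# --- TSV oxide-liner capacitance (coaxial capacitor model) ---
# C = 2*pi*eps_ox*h / ln(r_outer / r_inner); dimensions are assumed.
EPS0 = 8.854e-12           # F/m, vacuum permittivity
K_SIO2 = 3.9               # relative permittivity of the SiO2 liner
r_via = 2.5e-6             # m, via conductor radius (assumed)
t_ox = 0.1e-6              # m, oxide liner thickness (assumed)
height = 50e-6             # m, TSV height (assumed)

c_tsv = 2 * math.pi * EPS0 * K_SIO2 * height / math.log((r_via + t_ox) / r_via)
print(f"TSV liner capacitance ~ {c_tsv * 1e15:.1f} fF")

# --- Use c_tsv as the membrane capacitance of a LIF neuron ---
# dV/dt = (I_in - V/R_leak) / C; fire and reset when V crosses V_TH.
R_LEAK = 1e9               # ohm, membrane leak resistance (assumed)
V_TH = 0.1                 # V, firing threshold (assumed)
DT = 1e-9                  # s, simulation time step
I_IN = 2e-9                # A, constant input current (assumed)

v = 0.0
spikes = 0
for _ in range(200000):    # simulate 200 us of membrane dynamics
    v += DT * (I_IN - v / R_LEAK) / c_tsv
    if v > V_TH:           # threshold crossed: emit spike, reset
        spikes += 1
        v = 0.0
print(f"spikes in 200 us: {spikes}")
```

With these assumed dimensions the liner capacitance comes out in the hundreds of femtofarads, which is the order of magnitude that makes a TSV attractive as a "free" membrane capacitor instead of a dedicated on-chip capacitance.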




**Figure 18.** (a) The traditional crossbar structure of memristors and (b) disconnecting the horizontal connecting nanowire.
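The sneak-path effect of the traditional crossbar in **Figure 18(a)** can be illustrated numerically. The sketch below reads one cell of a 2×2 crossbar with the unselected row and column left floating, so the only parasitic route is the series path through the three neighboring cells; disconnecting the horizontal nanowires, as in **Figure 18(b)**, removes that route. All resistance and voltage values are assumed for illustration and are not taken from this chapter.

```python
# Reading cell R11 of a 2x2 memristor crossbar: V_READ is applied to
# row 1 and the current into column 1 is measured, with the unselected
# row 2 and column 2 floating. The floating lines leave exactly one
# parasitic route, the series "sneak" path R12 -> R22 -> R21.
V_READ = 0.5               # V, read voltage (assumed)
R_ON, R_OFF = 1e4, 1e6     # ohm, low/high memristor states (assumed)

def read_current(r11, r12, r22, r21, isolated):
    """Current at column 1; isolated=True models the physically
    disconnected horizontal nanowires of Figure 18(b)."""
    i_target = V_READ / r11
    if isolated:
        return i_target                      # no parasitic route exists
    i_sneak = V_READ / (r12 + r22 + r21)     # series sneak route
    return i_target + i_sneak

# Worst case: target cell OFF while all three neighbors are ON.
i_crossbar = read_current(R_OFF, R_ON, R_ON, R_ON, isolated=False)
i_isolated = read_current(R_OFF, R_ON, R_ON, R_ON, isolated=True)
print(f"crossbar read: {i_crossbar * 1e6:.2f} uA")  # sneak current dominates
print(f"isolated read: {i_isolated * 1e6:.2f} uA")  # true OFF-state current
```

In this worst case the sneak current is more than an order of magnitude larger than the target cell's own OFF-state current, so the high-resistance state would be misread as low resistance; with the horizontal nanowires disconnected, only the target cell's current is measured.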



through this structure. **Figure 17(a)** illustrates multiple layers of the neural network structure, in which the two decomposed layers are marked in a red box. These two layers of the neural network can be implemented through 3D integration technology, which fabricates the memristor layer in the middle between the two neuron layers, as depicted in **Figure 17(b)**. Moreover, with a structure similar to that of **Figure 17(b)**, a large-scale neural network can be implemented by extending the 3D structure of the two-layer neural network repeatedly in the horizontal direction, as demonstrated in **Figure 17(c)**.

**Figure 17.** 3D neuromorphic computing architecture: (a) deep neural network, (b) 3D structure of two layers of neural network, and (c) 3D structure of multiple layers of neural network.

In these structures, the electronic synaptic array implemented with memristors is not a traditional crossbar structure (**Figure 18(a)**), which suffers from the sneak-path issue. A sneak path is an undesired current path through adjacent memristor cells, marked by the white arrows in **Figure 18(a)**. To eliminate this issue, the horizontal nanowires used for read/write access are physically disconnected in the design, as shown in **Figure 18(b)**, and the read/write access ports are located on the upper and bottom layers. Without electrical connections between adjacent memristor cells, the sneak-path issue is fundamentally resolved.

Two 3D integration technologies have the potential to implement the 3D-NC architecture in **Figure 17(c)**: TSV (through-silicon via)-based and monolithic 3D integration. The 3D integration technology with TSVs as vertical electrical connections has been studied for many years [19]. In TSV-based 3D integration, transistors are first fabricated on separate wafers by traditional CMOS technologies, and the wafers are then bonded together. In general, the capacitance between TSVs is large, which can cause capacitive coupling issues in high-speed circuits; however, this capacitance can be exploited to implement the capacitance in neuron models, further reducing the chip design area [43–45]. Nevertheless, TSV-based 3D integration faces several technical challenges. First, the wafers need to be thinned to make metal contact with the TSVs for the bonding process, and a large amount of charge accumulates during these thinning steps; this charge can cause electrostatic discharge (ESD) damage to the chips during bonding. Second, bonding the microscale TSVs requires extra effort to align them precisely.

To overcome these challenges, a more aggressive 3D integration technology called monolithic 3D integration has been proposed. Unlike TSV-based 3D technology, which uses separate fabrication processes, monolithic 3D technology integrates different layers of devices on a single wafer, with nanoscale intertier vias serving as vertical connections. Owing to this monolithic fabrication procedure, the thinning and bonding processes are fundamentally eliminated. The main challenge of monolithic 3D integration, however, is the low-temperature fabrication constraint for the upper layers, since a high fabrication temperature in the upper layers would damage the previously fabricated lower-layer transistors. This low-temperature requirement rules out the traditional CMOS transistor (fabricated at more than 1000 °C) for the upper-layer circuitry implementation. Fortunately, several low-temperature transistors are potential candidates, such as FinFETs [46], carbon nanotube FETs [47, 48], etc. **Table 1** summarizes state-of-the-art transistors that are fabricated at low temperature and can potentially be employed in monolithic 3D integration [46]. With these emerging technologies, the monolithic 3D-NC with memristors as electronic synapses is becoming the most promising next-generation non-von Neumann computing platform.

#### **6. Conclusion**

Advances in Memristor Neural Networks – Modeling and Applications

The conventional concept of neuromorphic computing is to physically rebuild brain-like neural networks through very-large-scale integration (VLSI) [3]. In this chapter, we introduce the possibility of using an emerging device, the memristor, as an electronic synapse to construct a memristive neural network for a neuromorphic computing system, consequently achieving a much smaller design area and power consumption. We also comprehensively analyze the functions of the biological synapse at the cellular level and explain why the memristor can be considered an electronic synapse. At the architecture level of neuromorphic computing, we introduce three novel architectures that are fundamentally different from the traditional von Neumann architecture in that they distribute the computing units (neurons) and memory units (synapses). The realization of these three neuromorphic computing architectures is potentially a roadmap for implementing a power-efficient artificial intelligence system with self-learning capability.

Furthermore, the memristive neural network is generally implemented with a two-dimensional design method. In this chapter, we introduce and discuss a novel hardware implementation trend that combines memristors with 3D-IC integration technology; such technology has the capability to reduce system power consumption, provide high connectivity, resolve routing congestion issues, and offer massively parallel data processing. Moreover, the design methodology of applying the capacitance formed by through-silicon vias (TSVs) to generate the membrane potential in a 3D neuromorphic computing system is discussed in this chapter.

Moreover, several challenges still hinder the employment of memristors as electronic synapses, e.g., reliability, variability, endurance, etc. Additionally, fabrication techniques for low-temperature transistors (FinFETs, carbon nanotube FETs, etc.), which can be integrated monolithically on the top layers, demand further research effort before the memristive 3D neuromorphic computing system discussed in this chapter can be demonstrated. The proposed novel neuromorphic computing architectures (DNCA, CNCA, and ANCA) are potentially a roadmap toward a self-learning artificial intelligence that can directly learn from the surrounding environment and adapt to it. However, the mathematical foundations of these architectural concepts are still unclear and missing, and they need further investigation in the future.

#### **References**

[2] Yu S, Wu Y, Jeyasingh R, Kuzum D, Wong H-SP. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. IEEE Transactions on Electron Devices. 2011;**58**:2729-2737

[3] Mead C. Neuromorphic electronic systems. Proceedings of the IEEE. 1990;**78**:1629-1636

[4] Ghani A. Neuro-inspired speech recognition based on reservoir computing. Advances in Speech Recognition. InTech; 2010;**2**:7-36

[5] Overton G. Photonic Reservoir Computing–A New Tool for Speech Recognition. https://www.laserfocusworld.com/articles/2014/09/photonic-reservoir-computing-a-new-tool-for-speech-recognition.html

[6] Alalshekmubarak A, Smith LS. On improving the classification capability of reservoir computing for Arabic speech recognition. In: International Conference on Artificial Neural Networks; 2014. pp. 225-232

[7] Verstraeten D, Schrauwen B, Stroobandt D. Reservoir computing with stochastic bitstream neurons. In: Proceedings of the 16th Annual Prorisc Workshop; 2005. pp. 454-459

[8] Jin Y, Zhao Q, Yin H, Yue H. Handwritten numeral recognition utilizing reservoir computing subject to optoelectronic feedback. In: Natural Computation (ICNC), 2015 11th International Conference on; 2015. pp. 1165-1169

[9] Hinaut X, Dominey PF. On-line processing of grammatical structure using reservoir computing. In: International Conference on Artificial Neural Networks; 2012. pp. 596-603

[10] Goudarzi A, Lakin MR, Stefanovic D. Reservoir computing approach to robust computation using unreliable nanoscale networks. In: International Conference on Unconventional Computation and Natural Computation; 2014. pp. 164-176

[11] Jaeger H. Short Term Memory in Echo State Networks. Vol. 5. GMD-Forschungszentrum Informationstechnik. Germany: Schloss Birlinghoven, 53757 Sankt Augustin; 2001

[12] Schrauwen B, Stroobandt D. Using reservoir computing in a decomposition approach for time series prediction. In: ESTSP 2008 European Symposium on Time Series Prediction; 2008. pp. 149-158

[13] An H, Zhou Z, Yi Y. Opportunities and challenges on nanoscale 3D neuromorphic computing system. In: Electromagnetic Compatibility & Signal/Power Integrity (EMCSI), 2017 IEEE International Symposium on; 2017. pp. 416-421

[14] Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, et al. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Frontiers in Neuroscience. April 2015;**9**:141

[15] Benjamin B, Gao P, McQuinn E, Choudhary S, Chandrasekaran AR, Bussat JM, et al. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proceedings of the IEEE. May 2014;**102**:699-716
