**Spike‐Timing‐Dependent Plasticity in Memristors**


DOI: 10.5772/intechopen.69535

Yao Shuai, Xinqiang Pan and Xiangyu Sun

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.69535

#### **Abstract**

The spike‐timing‐dependent plasticity (STDP) characteristic of the memristor will play an important role in the future development of neuromorphic network computing. STDP characteristics have been observed in memristors based on many different kinds of materials. Investigations of how the STDP behavior of memristors is influenced by the device hysteresis characteristic, by the initial conductance of the memristor, and by the waveform of the voltage pulses applied to the memristor as preneuron and postneuron voltage spikes are reviewed.

**Keywords:** Memristor, Spike‐timing‐dependent plasticity

## **1. Introduction**

The state‐of‐the‐art artificial intelligence based on the traditional von Neumann computation paradigm has shown remarkable learning and thinking abilities; for instance, AlphaGo, created by the Google‐owned company DeepMind, recently beat the top Go player Lee Sedol by 4:1 [1]. However, information processing in the digital von Neumann computation paradigm is far less efficient than in the human brain, which is the major bottleneck of this paradigm. Synapses play the key role in learning, thinking, and memorizing for a human being, and there are approximately 10<sup>14</sup> synapses in a human brain [2]. A synapse is formed between two neuron cells [3], and the synapse weight can be precisely tuned by the ion flow through it. It is well known that the adaptation of the weight of the synapse between the two neurons it connects makes biological systems functional [4].

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In order to build up a system that behaves in a much more efficient way, like a human brain, people have never stopped searching for an electrical element that mimics the basic function of a synapse, until "the missing memristor found [5]."

Similar to a biological synapse, a memristor is a two‐terminal device whose conductance can be changed by input pulses or by controlling the charge through it [4, 6]; in this way, a memristor works as an artificial electronic synapse. Electronic synapses based on memristor devices are around three orders of magnitude smaller than a prominent CMOS design [2]; thus, the memristor has great potential for scalability compared with electronic synapses made from traditional complex circuits [7].

Synapses exhibit different kinds of plasticity, which have been realized and investigated in different memristors [8], and the application of memristors with common synaptic plasticity in several kinds of neural networks has also been studied. For instance, HfO2‐based memristors were used in a Hopfield neural network to implement associative memory [9]. A relationship between the resistance of the memristor and the synaptic weight was defined, and the resistances of the memristors were tuned to the target resistances by applying voltage pulses to the memristors as the training process [9]. Prezioso et al. realized pattern classification by using a neural network based on memristors with synaptic plasticity [10]. A 12 × 12 crossbar with a Pt/Al2O3/TiO2−*<sup>x</sup>*/Ti/Pt memristor at each cross point was fabricated, which is illustrated in **Figure 1(a)**. Sixty of the memristors were used to realize the function. The relationship between the synaptic weight and the conductance of the memristors is shown in Eq. (1). The synaptic weight was changed by applying fixed voltage pulses with an amplitude of ±1.3 V to the memristors, and the change of conductance under different voltage pulses is shown in **Figure 1(c)**.

$$W\_{ij} = G\_{ij}^{+} - G\_{ij}^{-} \tag{1}$$
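The differential mapping of Eq. (1) can be sketched in a few lines; the conductance values below are hypothetical, chosen only to illustrate how a pair of positive‐only conductances encodes a signed weight and how a crossbar naturally computes a vector–matrix product via Ohm's and Kirchhoff's laws.

```python
# Sketch of the differential weight scheme W_ij = G+_ij - G-_ij (Eq. 1),
# with hypothetical conductance values in siemens.

def weights_from_conductances(g_plus, g_minus):
    """Signed synaptic weights from pairs of (positive-only) conductances."""
    return [[gp - gm for gp, gm in zip(rp, rm)] for rp, rm in zip(g_plus, g_minus)]

def crossbar_output(weights, v_in):
    """Output currents of a crossbar column: I_j = sum_i V_i * W_ij."""
    n_out = len(weights[0])
    return [sum(v * row[j] for v, row in zip(v_in, weights)) for j in range(n_out)]

# Two inputs, two outputs; conductances in siemens (hypothetical values).
g_plus  = [[50e-6, 30e-6],
           [20e-6, 40e-6]]
g_minus = [[30e-6, 35e-6],
           [20e-6, 10e-6]]

w = weights_from_conductances(g_plus, g_minus)   # signed weights, e.g. w[0][0] = 20 uS
i_out = crossbar_output(w, [0.2, 0.1])           # input voltages in volts
```

Because each weight is the difference of two conductances, negative weights are representable even though every physical conductance is positive, which is why crossbar demonstrations such as Ref. [10] use memristor pairs per synapse.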


**Figure 1.** Memristor crossbar. (a) Integrated 12 × 12 crossbar with an Al2O3/TiO2−*<sup>x</sup>* memristor at each cross point. (b) I–V curve of the memristor. Inset (b): the cross‐sectional structure of the memristor device. (c) Absolute values of the change of the memristor's conductance under voltage pulses (with a width of 500 μs) of two polarities, as a function of the initial conductance, for various pulse amplitudes [10].

## **2. STDP in memristors**


284 Memristor and Memristive Neural Networks


In the common synaptic plasticity mentioned above, the change of the conductance (weight) is related only to a single voltage pulse applied to the memristor. Another kind of synaptic plasticity is spike‐timing‐dependent plasticity (STDP), one of the most important synaptic characteristics. STDP modulates the synapse weight based on the activities of the so‐called pre‐ and postsynaptic neurons [11]. The spikes from the preneuron and the postneuron arrive at the synapse occasionally in opposite directions [7]. In STDP, the change of the synaptic weight is a function of the relative neuron spike timing ∆*t* (∆*t* = *t*<sub>pre</sub> − *t*<sub>post</sub>), where *t*<sub>pre</sub> is the time when the presynaptic neuron spike arrives and *t*<sub>post</sub> is the time when the postsynaptic neuron spike arrives [4]. In a typical STDP behavior, if the postsynaptic neuron spike arrives after the presynaptic neuron spike (∆*t* < 0), the synaptic weight increases; if the postsynaptic neuron spike arrives before the presynaptic neuron spike (∆*t* > 0), the synaptic weight decreases. In an electronic synapse based on a memristor, voltage spikes or pulses are applied to the memristor through the two electrodes, which modulates the conductance of the memristor, and the change of conductance is related to the relative timing of the voltage spikes or pulses. Memristors can realize an STDP function similar to that of biological synaptic systems, as shown in **Figure 2** [4].
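The exponential STDP window described above can be sketched numerically. The sign convention follows the chapter (∆*t* = *t*<sub>pre</sub> − *t*<sub>post</sub>, so ∆*t* < 0 means potentiation); the amplitudes and time constants are illustrative assumptions, not values from any cited device.

```python
import math

def stdp_dw(dt, a_plus=1.0, a_minus=1.0, tau_plus=20e-3, tau_minus=20e-3):
    """Relative weight change for a spike-time difference dt = t_pre - t_post.

    Sign convention follows the chapter: dt < 0 (post after pre) gives
    potentiation, dt > 0 (post before pre) gives depression. Amplitudes and
    time constants are illustrative, not fitted to any cited device.
    """
    if dt <= 0:
        return a_plus * math.exp(dt / tau_plus)    # decays as the spikes separate
    return -a_minus * math.exp(-dt / tau_minus)

dw_pot = stdp_dw(-10e-3)   # post fires 10 ms after pre  -> +exp(-0.5) ≈ +0.61
dw_dep = stdp_dw(+10e-3)   # post fires 10 ms before pre -> -exp(-0.5) ≈ -0.61
```

The two exponential branches reproduce the qualitative shape of Figure 2: large changes near coincidence, vanishing change as the spikes separate.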

**Figure 2.** (a) The relationship between change of the memristor synaptic weight and the relative timing ∆*t* of the neuron spikes. The synaptic change was normalized to the maximum synaptic weight. Inset (a): SEM image of the crossbar structure of memristors. (b) The relationship between the change in excitatory postsynaptic current (EPSC) of rat hippocampal neurons after repetitive correlated spiking (60 pulses at 1 Hz) and relative spike timing. The figure was reconstructed with permission from Ref. [8, 12]. Inset (b) is the phase contrast image of a hippocampal neuron, which was adapted with permission from Ref. [4, 13, 26].

STDP has been intensively investigated in different memristors made from different materials. A memristor is usually composed of two electrodes and a memristive material sandwiched between them. Metals such as Au, Pt, Ag, and Cu, conductive nitrides such as TiN, and conductive oxides such as ITO are usually used as the electrode materials. The memristive materials can be grouped into binary oxides, ternary and more complex oxides, polymers, and other kinds of materials.

The STDP of binary memristive materials such as TiO*<sup>x</sup>* [6], WO*<sup>x</sup>* [3], Al2O3/TiO2 [14], CeO*<sup>x</sup>* [15], TaO*<sup>x</sup>*/Ta2O5 [16], and HfO2 [17, 18] has been investigated very intensively. Seo et al. tested the STDP function of a memristor based on TiO*<sup>x</sup>* and demonstrated the potential of such a memristor as an electronic synapse in a neuromorphic network; the results are shown in **Figure 3**. Matveyev et al. demonstrated the STDP functionality of an HfO2‐based memristor with the structure TiN/HfO2/Pt [17]. The functional relationship between the relative change of the conductance ∆*G* and the spike delay time Δ*t* was obtained from a 4‐nm‐thick, 40 × 40 nm<sup>2</sup> HfO2 device, as shown in **Figure 4**. Tan et al. investigated a memristor with the structure Pt/WO3/Pt; the STDP behavior was demonstrated in this WO3‐based memristor, as illustrated in **Figure 5(b)** [3]. Wang et al. investigated a memristor device of Pt/HfO*<sup>x</sup>*/ZnO*<sup>x</sup>*/TiN. The STDP characteristics of the memristors were measured with voltage pulses with amplitudes V<sup>−</sup>/V<sup>+</sup> = −1.0 V/1.0 V. These voltage pulses were applied to the top and bottom electrodes as presynaptic and postsynaptic spikes. The relationship between the relative change of the synaptic weight and the relative spike timing is illustrated in **Figure 6(b)**, which is basically consistent with the STDP behavior of biological synapses.


Memristors based on ternary and more complex oxides, such as BiFeO3 [19] and InGaZnO [20], were also investigated.

Wang et al. reported that STDP was observed in memristors based on amorphous InGaZnO [20]. As shown in **Figure 7(c, d)**, a pair of voltage spikes with amplitudes V<sup>+</sup>/V<sup>−</sup> = 5 V/−5 V was applied to the two terminals of the memristors with relative timing Δ*t* to test the STDP characteristics. As shown in **Figure 7(e)**, Δ*W* changed with Δ*t*, which is a typical STDP characteristic of biological synapses.

**Figure 3.** STDP synaptic characteristic of the memristor. Inset shows the anti‐STDP synaptic characteristic of the memristor [6].


**Figure 4.** Asymmetric STDP characteristic emulated in crossbar 4‐nm‐thick, 40 × 40 nm<sup>2</sup> HfO2‐based memristors [17].

**Figure 5.** Experimental results of the STDP characteristic of a Pt/WO3/Pt memristor. (a) Current decay after the application of a sequence of positive and negative pulses, measured with a reading voltage of 0.05 V amplitude. The transition from volatile to nonvolatile is indicated in the dotted square. (b) The relationship between the change of the synaptic weight and the relative timing of the prespike and postspike. Inset (b): waveforms of the prespike and postspike [3].


**Figure 6.** Nonlinear transmission characteristics and STDP of the memristor device. (a) Response of a memristor to different pulses; (b) emulation of STDP characteristics of memristor with the structure of Pt/HfO*<sup>x</sup>* /ZnO*<sup>x</sup>* /TiN—the relationship between the relative change of the memristor synaptic weight (Δ*W*) and the relative spike timing (Δ*t*). And the solid line is the exponential fitting curve to the experimental data. The insets (b): schematics of various spikes.

**Figure 7.** Demonstration of STDP characteristics of memristor. (a) The variation of the current with the interval of voltage pulses. (b) The formation and decay of spike‐induced EPSC. (c and d) The preneuron spike and postneuron spike applied on the memristor for STDP. (e) The relationship between the relative change of the memristor synaptic weight (Δ*W*) and the relative spike timing (Δ*t*). The exponential fitting results for the experimental data are illustrated by the solid lines in the graph.


STDP behavior has also been observed in polymers such as poly(3,4‐ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) [21], EV(ClO4)2/BTPA‐F [22], and so on. Li et al. imitated the STDP of an Ag/PEDOT:PSS/Ta structure [23]. A pair of temporally correlated voltage pulses with amplitudes V<sup>+</sup>/V<sup>−</sup> = 2 V/−2 V was used as the presynaptic and postsynaptic spikes and applied to the memristors. The change of the synaptic weight related to the precise timing between pre‐ and postsynaptic spikes is shown in **Figure 8(c)**.

In addition, investigations on the STDP of memristors based on other kinds of materials, such as Si/Ag mixtures [4] and polycrystalline CH3NH3PbI3 [24], have also been conducted.

Some factors in the STDP measurements can change certain characteristics of the STDP; for example, the waveform of the voltage spikes used to imitate the presynaptic and postsynaptic neuron spikes influences the STDP behavior significantly. It has been reported that the STDP function can be strongly influenced by the shape of the input voltage spikes [25]. The shape of the voltage spike generated by the presynaptic neuron is the same as that generated by the postsynaptic neuron. Zamarreño‐Ramos et al. investigated the influence of the shape of the voltage spikes (spk(*t*)) on the STDP learning function ξ(∆*T*); the results are shown in **Figure 9**. The results reveal that voltage spikes with a narrow, short positive pulse of large amplitude and a longer, slowly decaying negative tail are needed in order to obtain an STDP function similar to the behavior of biological synapses [25].
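How such a spike shape produces an STDP window can be sketched with a toy model, under simplifying assumptions: each neuron emits the same spike (a short, tall positive pulse followed by a long, slowly decaying negative tail), and the memristor only changes state while the net voltage across it exceeds a threshold. All amplitudes, durations, and the threshold below are illustrative, not taken from Ref. [25].

```python
import math

DT = 1e-4        # integration time step, s
T_TAIL = 20e-3   # duration of the negative tail, s

def spike(t):
    """Spike waveform: +1.5 V for 1 ms, then a -0.5 V tail decaying over T_TAIL."""
    if 0.0 <= t < 1e-3:
        return 1.5
    if 1e-3 <= t < 1e-3 + T_TAIL:
        return -0.5 * math.exp(-(t - 1e-3) / (T_TAIL / 3))
    return 0.0

def weight_change(dt, v_th=1.6):
    """STDP learning function for dt = t_pre - t_post (chapter convention).

    Pre fires at t = 0 and post at t = -dt; the device sees v = post - pre and
    (assumed polarity) potentiates above +v_th and depresses below -v_th.
    A single spike never switches the device, since its 1.5 V peak < v_th.
    """
    dw = 0.0
    for i in range(int(0.1 / DT)):
        t = -0.05 + i * DT
        v = spike(t + dt) - spike(t)          # post(t) - pre(t)
        if abs(v) > v_th:
            dw += (v - math.copysign(v_th, v)) * DT
    return dw

dw_pot = weight_change(-2e-3)   # post 2 ms after pre  -> potentiation (> 0)
dw_dep = weight_change(+2e-3)   # post 2 ms before pre -> depression  (< 0)
dw_far = weight_change(30e-3)   # no overlap -> no change (0.0)
```

Only when one spike's tall pulse overlaps the other's opposite‐sign tail does the net voltage cross the threshold, which is exactly why the narrow high pulse plus slow negative tail is needed for a biologically shaped window.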


**Figure 8.** Simulation of STDP. (a) EPSC. The preneuron spike was V<sup>+</sup> /V<sup>−</sup> = 2 V/−2 V. The current value gradually decayed back to zero within 50 ms after the spike. A pair of temporally correlated pulses with amplitudes V+ /V<sup>−</sup> = 2 V/−2 V was applied to the TE and BE as preneuron spikes and postneuron spikes, respectively. (b) Δ*t* is the interval between the beginning of the preneuron spikes and the beginning of the postneuron spikes. (c) STDP characteristics. The relationship between the change of synaptic weight and Δ*t* defined in (b).

**Figure 9.** Illustration of the influence of the shape of the voltage‐spike waveform on the STDP learning function ξ(∆*T*). X1 is the spike waveform applied on the memristor, and X2 is the resulting STDP learning function, where X goes from A to H [25].

Cederström et al. investigated the role that the hysteresis characteristic of a memristor device plays in the operation of its STDP function. The hysteresis characteristics of memristors based on BiFeO3, Ag/Si, TiO2, and chalcogenide (PCM) were compared. STDP characteristics were simulated with models of the different memristors, and the results are shown in **Figure 10**. The influence of the switching characteristic on the operating region used for STDP was discussed: a smooth switching characteristic leads to a much wider operating region, and a steep switching characteristic leads to a much narrower one [26].
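Why switching steepness sets the usable operating region can be illustrated with a generic model; the sigmoid switching rate below is an assumption for illustration, not any of the SPICE models compared in Ref. [26], and all parameters are hypothetical.

```python
import math

def dg_per_pulse(v, v_th=1.0, steepness=10.0, dg_max=1.0):
    """Conductance change for one pulse of amplitude v: a sigmoid around v_th.

    Large `steepness` gives abrupt, binary-like switching; small `steepness`
    gives a smooth, gradual characteristic. All parameters are illustrative.
    """
    return dg_max / (1.0 + math.exp(-steepness * (v - v_th)))

def operation_region(steepness, lo=0.5, hi=1.5, n=1000):
    """Width of the voltage range where the response is graded (10%-90% of dg_max)."""
    vs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    graded = [v for v in vs if 0.1 < dg_per_pulse(v, steepness=steepness) < 0.9]
    return (max(graded) - min(graded)) if graded else 0.0

smooth_width = operation_region(steepness=5.0)    # smooth switching -> wide region
steep_width  = operation_region(steepness=50.0)   # steep switching -> narrow region
```

A wider graded region means more distinct, timing‐dependent intermediate weights are reachable, matching the qualitative conclusion quoted above.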


Du et al. reported that the learning time constant can be adjusted by changing the duration of the voltage spikes. The scheme of the voltage spikes is shown in **Figure 11**, and the pulse width (*t*p) is one of their parameters. The range of the delay time ∆*t* in which the normalized current is larger than 50% is called the learning window. As shown in **Figure 12**, the learning window decreases from 25 ms to 125 μs as the pulse width (*t*p) decreases from 10 ms to 50 μs. In addition, the energy consumption of the memristors was also discussed in this work; the authors showed that the energy consumption of the Au/BFO/Pt/Ti memristor is 4.7 pJ. A method to reduce the energy consumption was proposed and tested, and the results indicate that by decreasing the pulse width (*t*p), the energy consumption can be reduced to 4.5 pJ.
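The reported learning-window scaling is consistent with a window proportional to the pulse width (both quoted endpoints give a factor of 2.5). The decay model below, with the normalized current falling off exponentially with |∆*t*| on a timescale set by *t*p, is an illustrative assumption; only the proportionality reflects the measurements quoted above.

```python
import math

# Sketch of the learning-window scaling reported by Du et al.: both quoted
# endpoints (25 ms window at t_p = 10 ms; 125 us window at t_p = 50 us) imply
# window ≈ 2.5 * t_p. The exponential decay model is an assumed illustration.

K = 1.803  # chosen so that 2 * K * ln(2) ≈ 2.5

def normalized_current(dt, t_p, k=K):
    """Normalized post-pairing current vs. spike delay dt (illustrative model)."""
    return math.exp(-abs(dt) / (k * t_p))

def learning_window(t_p, k=K):
    """Range of dt where normalized_current > 50%: width = 2 * k * ln(2) * t_p."""
    return 2 * k * math.log(2) * t_p

window_long  = learning_window(10e-3)   # ≈ 25 ms  for 10 ms pulses
window_short = learning_window(50e-6)   # ≈ 125 us for 50 us pulses
```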

Xiao et al. reported the STDP characteristics of a memristor with the structure Au/polycrystalline CH3NH3PbI3/ITO/PEDOT:PSS. Different waveforms were used as the presynaptic and postsynaptic neuron voltage spikes, shown in **Figure 13(b–e)**. Four different kinds of STDP characteristics, namely the asymmetric Hebbian rule, asymmetric anti‐Hebbian rule, symmetric Hebbian rule, and symmetric anti‐Hebbian rule, were obtained corresponding to the four different waveforms applied to the memristor, as shown in **Figure 13(f–i)**, and the four kinds of STDP behaviors were fitted by different equations [24].
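The four rule shapes can be sketched with exponential fitting forms of the kind given in Eqs. (2) and (3); the amplitudes, time constants, and offsets below are illustrative, not the fitted values from Ref. [24].

```python
import math

def asymmetric(dt, a_pos=1.0, a_neg=-1.0, tau=10e-3, w0=0.0):
    """Asymmetric rule: dW = A * exp(-|dt|/tau) + W0, with the branch amplitude
    set by the sign of dt; swapping a_pos/a_neg turns Hebbian into anti-Hebbian."""
    a = a_pos if dt < 0 else a_neg
    return a * math.exp(-abs(dt) / tau) + w0

def symmetric(dt, a=1.0, tau=10e-3, w0=-0.5):
    """Symmetric rule: dW = A * exp(-dt^2/tau^2) + W0; a negative A gives the
    anti-Hebbian variant (depression near coincidence)."""
    return a * math.exp(-(dt * dt) / (tau * tau)) + w0

# Chapter convention dt = t_pre - t_post.
dw_heb_pot   = asymmetric(-5e-3)   # > 0: potentiation for dt < 0
dw_heb_dep   = asymmetric(+5e-3)   # < 0: depression for dt > 0
dw_sym_near  = symmetric(0.0)      # > 0: potentiation near coincidence
dw_sym_far   = symmetric(50e-3)    # < 0: depression at large |dt|
```

The asymmetric form depends on the sign of ∆*t*, while the symmetric form depends only on |∆*t*|, which is what distinguishes the Hebbian/anti‐Hebbian pairs in Figure 13(f–i).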

**Figure 10.** STDP simulations implemented with SPICE models; for each ∆*t*, a sequence of 60 pulses was used to change the conductance. The waveforms used were adapted (a) for the TiO2 device model and (b) for our BFO device model [26].


**Figure 11.** Schematic of the waveforms for memristor initialization, single pairing STDP, and memory consolidation. (A) A pre‐post spike order is used for long‐term potentiation (LTP). (B) A post‐pre spike order is used for long‐term depression (LTD) [19].

$$\Delta W = A \exp\left(- \frac{\Delta t}{\tau}\right) + W\_{0} \tag{2}$$

$$\Delta W = A \exp\left(- \frac{\Delta t^{2}}{\tau^{2}}\right) + W\_{0} \tag{3}$$

Prezioso et al. investigated the STDP characteristics of the memristor with the structure Pt/Al2O3/TiO2−*<sup>x</sup>*/Ti/Pt [14]. Three pairs of preneuron and postneuron spikes with different waveforms, shown in **Figure 14(a–c)**, were applied on the memristor. Three different STDP behaviors were observed, which are illustrated in **Figure 14(g–i)**. The results demonstrate the dependence of the STDP window on the waveform of the preneuron and postneuron spikes. The influence of the initial conductance (*G*<sup>0</sup>) on the STDP behavior was also investigated. In this set of tests, the waveform shown in **Figure 14(a)** was used. The STDP functions for different initial conductances *G*<sup>0</sup> = 25, 50, 75, and 100 μS were measured and compared. The results shown in **Figure 15** indicate the influence of the saturation of the memristors' switching dynamics on the STDP property. Every memristor has its own dynamic range of conductance: when *G*<sup>0</sup> is close to its maximum value, the increase of the conductance is very low, and when *G*<sup>0</sup> is close to its minimum value, the decrease of the conductance is very low [14].

**Figure 14.** Experimental results for STDP characteristics. (a–c) The shapes of presynaptic and postsynaptic voltage pulses, marked by black and red lines, respectively. (d–f) The time maxima and minima of the net voltage applied to the memristor, as functions of the time interval Δ*t* between the pre‐ and postsynaptic pulses. (g–i) STDP characteristics of the memristors: the relationship between the changes of the memristor's conductance and Δ*t*. The initial memristor conductance *G*<sup>0</sup> was always set to about 33 μS in all the experiments mentioned above [14].

**Figure 12.** STDP characteristics of a BFO‐based memristor with single pairing pulse width (A) *t*p = 10 ms, (B) *t*p = 1 ms, (C) *t*p = 500 μs, and (D) *t*p = 50 μs; measurement waiting time *t*w = 10,000 ms, pulse amplitude *V*p = 3.0 V, reading pulse amplitude *V*r = +2.0 V, and reading pulse width *t*r = 100 ms. The memristor was preset in HRS and LRS with a writing pulse amplitude of *V*w = −8.0 V and *V*w = +8.0 V, respectively [19].
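The learning window can be read off such curves numerically: it is the width of the Δ*t* range over which the normalized response stays above 50%. A small sketch, assuming an exponential decay whose time constant scales linearly with the pulse width (a simplifying assumption for illustration, not the fitted model from [19]):

```python
import math

def learning_window(amplitude, tau):
    """Width of the |dt| range where amplitude * exp(-|dt| / tau) > 0.5,
    i.e. 2 * tau * ln(2 * amplitude) when amplitude > 0.5."""
    if amplitude <= 0.5:
        return 0.0
    return 2.0 * tau * math.log(2.0 * amplitude)

# Assume tau is proportional to the pulse width t_p (illustrative only):
for t_p in (10e-3, 1e-3, 500e-6, 50e-6):
    tau = 1.5 * t_p
    print(f"t_p = {t_p:.0e} s -> window ~ {learning_window(1.0, tau):.1e} s")
```

Under this assumption the window shrinks in proportion to the pulse width, which is the qualitative behavior shown in Figure 12.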

**Figure 13.** STDP characteristics of the memristor: (a) schematic of a biological synapse. The voltage spikes for the (b) asymmetric Hebbian rule, (c) asymmetric anti‐Hebbian rule, (d) symmetric Hebbian rule, and (e) symmetric anti‐Hebbian rule. (f–i) The current change upon applying the corresponding voltage spikes. The conductance of the synaptic device was read with a reading pulse amplitude of −0.75 V before and after applying the voltage spikes, with an interval of 3 s [24].


$$
\Delta W = A \exp\left(-\frac{\Delta t}{\tau}\right) + W\_0 \tag{2}
$$

$$
\Delta W = A \exp\left(-\frac{\Delta t^2}{\tau^2}\right) + W\_0 \tag{3}
$$
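Equations (2) and (3) can be written directly as fitting functions. The handling of the two branches of the asymmetric rule (separate amplitudes for Δ*t* > 0 and Δ*t* < 0) and all parameter values below are illustrative assumptions, since the source gives only the functional forms.

```python
import math

def asymmetric_stdp(dt, a_plus, a_minus, tau, w0=0.0):
    """Asymmetric (anti-)Hebbian fit, Eq. (2): exponential decay in |dt|,
    with separate amplitudes for the two branches (assumed convention)."""
    a = a_plus if dt >= 0 else a_minus
    return a * math.exp(-abs(dt) / tau) + w0

def symmetric_stdp(dt, a, tau, w0=0.0):
    """Symmetric (anti-)Hebbian fit, Eq. (3): Gaussian-like dependence
    on dt, identical for pre-post and post-pre spike ordering."""
    return a * math.exp(-(dt ** 2) / tau ** 2) + w0

# Illustrative parameters: potentiation for dt > 0, depression for dt < 0
print(asymmetric_stdp(+10e-3, a_plus=0.8, a_minus=-0.6, tau=20e-3))
print(asymmetric_stdp(-10e-3, a_plus=0.8, a_minus=-0.6, tau=20e-3))
print(symmetric_stdp(0.0, a=0.5, tau=20e-3))
```

Flipping the signs of the amplitudes turns the Hebbian variants into their anti-Hebbian counterparts.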

Prezioso et al. investigated the STDP characteristics of the memristor with the structure of Pt/Al2O3/TiO2−*x*/Ti/Pt. Three pairs of preneuron and postneuron spikes with different waveforms, which are shown in **Figure 14(a–c)**, were applied to the memristor. Three different STDP behaviors were observed, which are illustrated in **Figure 14(g–i)**. The results demonstrate the dependence of the STDP window on the waveforms of the preneuron and postneuron spikes. The influence of the initial conductance (*G*<sub>0</sub>) on the STDP behavior was also investigated. In this set of tests, the waveform shown in **Figure 14(a)** was used. The STDP functions for different initial conductances *G*<sub>0</sub> = 25, 50, 75, and 100 μS were measured and compared. The results shown in **Figure 15** indicate the influence of the saturation of the memristors' switching dynamics on the STDP behavior. Every memristor has its own dynamic range of conductance: when *G*<sub>0</sub> is close to its maximum value, the conductance can only increase slightly, and when *G*<sub>0</sub> is close to its minimum value, the conductance can only decrease slightly [14].
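The saturation effect described here can be illustrated with a toy soft-bounds update rule, in which the requested conductance change is scaled by the remaining headroom in the dynamic range. The rule and its parameters are an illustrative assumption, not the self-adaptive model fitted in [14].

```python
def bounded_update(g, dg, g_min=25e-6, g_max=100e-6):
    """Scale a requested conductance change dg by the remaining headroom,
    so updates saturate near the ends of the dynamic range (soft bounds)."""
    if dg >= 0:
        return g + dg * (g_max - g) / (g_max - g_min)  # little room to increase near g_max
    return g + dg * (g - g_min) / (g_max - g_min)      # little room to decrease near g_min

low, high = 30e-6, 95e-6
# Near the top of the range, the same requested increase yields a smaller change:
assert bounded_update(high, 10e-6) - high < bounded_update(low, 10e-6) - low
```

This reproduces the qualitative observation above: potentiation weakens as *G*0 approaches its maximum, and depression weakens as *G*0 approaches its minimum.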

**Figure 12.** STDP characteristics of a BFO‐based memristor with single pairing pulse width (A) *t*

r

**Figure 13.** STDP characteristics of memristor: (a) schematics of a biological synapse. The voltage spikes for (b) asymmetric Hebbian rule, (c) asymmetric anti‐Hebbian rule, (d) symmetric Hebbian rule, and (e) symmetric anti‐Hebbian rule. (f‐i) The current change with the applying of corresponding voltage spikes. The conductance of the synaptic device was read with a reading pulse amplitude of −0.75V before and after the applying of the voltage spikes with the interval of 3 s [24].

(C) *t*p = 500 μs, and (D) *t*p = 50 μs, measurement waiting time *t*

292 Memristor and Memristive Neural Networks

= +2.0 V, and reading pulse width *t*

pulse amplitude of *V*w = −8.0 V and *V*w = +8.0 V, respectively [19].

amplitude *V*<sup>r</sup>

<sup>p</sup> = 10 ms, (B) *t*

w = 10,000 ms, pulse amplitude *V*p = 3.0 V, reading pulse

= 100 ms. The memristor was preset in HRS and LRS with a writing

<sup>p</sup> = 1 ms,

**Figure 14.** Experimental results for STDP characteristics. (a–c) The shapes of the presynaptic and postsynaptic voltage pulses, marked by black and red lines, respectively. (d–f) The time maxima and minima of the net voltage applied to the memristor, as functions of the time interval Δ*t* between the pre‐ and postsynaptic pulses. (g–i) STDP characteristics of the memristors: the relationship between the change of the memristor's conductance and Δ*t*. The initial memristor conductance *G*<sub>0</sub> was always set to about 33 μS in all the experiments mentioned above [14].

## **References**

[1] Gibney E. Artificial intelligence: What Google's winning Go algorithm will do next. Nature. 2016;**531**:284‐285

[2] Snider GS. Spike‐timing‐dependent learning in memristive nanodevices. In: 2008 IEEE International Symposium on Nanoscale Architectures. Anaheim, CA. 2008. pp. 85‐92

[3] Tan ZH, Yang R, Terabe K, Yin XB, Zhang XD, Guo X. Synaptic metaplasticity realized in oxide memristive devices. Advanced Materials. 2016;**28**:377‐384

[4] Jo SH, Chang T, Ebong I, Bhadviya BB, Mazumder P, Lu W. Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters. 2010;**10**:1297‐1301

[5] Chua LO. Memristor—the missing circuit element. IEEE Transactions on Circuit Theory. 1971;**18**:507‐519

[6] Seo K, Kim I, Jung S, Jo M, Park S, Park J, et al. Analog memory and spike‐timing‐dependent plasticity characteristics of a nanoscale titanium oxide bilayer resistive switching device. Nanotechnology. 2011;**22**:254023

[7] Yu SM, Wu Y, Jeyasingh R, Kuzum DG, Wong HSP. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. IEEE Transactions on Electron Devices. 2011;**58**:2729‐2737

[8] Jang JW, Park S, Jeong YH, Hwang H. ReRAM‐based synaptic device for neuromorphic computing. In: 2014 IEEE International Symposium on Circuits and Systems (ISCAS). Melbourne, Australia. 2014. pp. 1054‐1057

[9] Hu SG, Liu Y, Liu Z, Chen TP, Wang JJ, Yu Q, et al. Associative memory realized by a reconfigurable memristive Hopfield neural network. Nature Communications. 2015;**6**:7522

[10] Prezioso M, Merrikh‐Bayat F, Hoskins BD, Adam GC, Likharev KK, Strukov DB. Training and operation of an integrated neuromorphic network based on metal‐oxide memristors. Nature. 2015;**521**:61‐64

[11] Mostafa H, Khiat A, Serb A, Mayr CG, Indiveri G, Prodromakis T. Implementation of a spike‐based perceptron learning rule using TiO2−*x* memristors. Frontiers in Neuroscience. 2015;**9**:357

[12] Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience. 1998;**18**:10464‐10472

[13] Kaech S, Banker G. Culturing hippocampal neurons. Nature Protocols. 2006;**1**:2406‐2415

[14] Prezioso M, Merrikh Bayat F, Hoskins B, Likharev K, Strukov D. Self‐adaptive spike‐time‐dependent plasticity of metal‐oxide memristors. Scientific Reports. 2016;**6**:21331

[15] Hsieh C‐C, Roy A, Chang Y‐F, Shahrjerdi D, Banerjee SK. A sub‐1‐volt analog metal oxide memristive‐based synaptic device with large conductance change for energy‐efficient spike‐based computing systems. Applied Physics Letters. 2016;**109**:223501

[16] Kim S, Choi S, Lu W. Comprehensive physical model of dynamic resistive switching in an oxide memristor. ACS Nano. 2014;**8**:2369‐2376

[17] Matveyev Y, Kirtaev R, Fetisova A, Zakharchenko S, Negrov D, Zenkevich A. Crossbar nanoscale HfO2‐based electronic synapses. Nanoscale Research Letters. 2016;**11**:147

[18] Covi E, Brivio S, Serb A, Prodromakis T, Fanciulli M, Spiga S. HfO2‐based memristors for neuromorphic applications. In: IEEE International Symposium on Circuits and Systems. Montreal, Canada. 2016. pp. 393‐396

[19] Du N, Kiani M, Mayr CG, You T, Burger D, Skorupa I, et al. Single pairing spike‐timing dependent plasticity in BiFeO3 memristors with a time window of 25 ms to 125 μs. Frontiers in Neuroscience. 2015;**9**:227

[20] Wang ZQ, Xu HY, Li XH, Yu H, Liu YC, Zhu XJ. Synaptic learning and memory functions achieved using oxygen ion migration/diffusion in an amorphous InGaZnO memristor. Advanced Functional Materials. 2012;**22**:2759‐2765

[21] Chen Y, Liu G, Wang C, Zhang WB, Li RW, Wang LX. Polymer memristor for information storage and neuromorphic applications. Materials Horizons. 2014;**1**:489‐506

[22] Liu G, Wang C, Zhang W, Pan L, Zhang C, Yang X, et al. Organic biomimicking memristor for information storage and processing applications. Advanced Electronic Materials. 2016;**2**:1500298

[23] Li S, Zeng F, Chen C, Liu H, Tang G, Gao S, et al. Synaptic plasticity and learning behaviours mimicked through Ag interface movement in an Ag/conducting polymer/Ta memristive system. Journal of Materials Chemistry C. 2013;**1**:5292

[24] Xiao Z, Huang J. Energy‐efficient hybrid perovskite memristors and synaptic devices. Advanced Electronic Materials. 2016;**2**:1600100

[25] Zamarreno‐Ramos C, Camunas‐Mesa LA, Perez‐Carrasco JA, Masquelier T, Serrano‐Gotarredona T, Linares‐Barranco B. On spike‐timing‐dependent‐plasticity, memristive devices, and building a self‐learning visual cortex. Frontiers in Neuroscience. 2011;**5**:26

[26] Cederstrom L, Starket P, Mayr C, Shuai Y, Schmidt H, Schuffny R. A model based comparison of BiFeO3 device applicability in neuromorphic hardware. In: 2013 IEEE International Symposium on Circuits and Systems (ISCAS). Beijing, China. 2013. pp. 2323‐2326

**Figure 15.** The experimentally measured STDP window functions for several initial values *G*<sub>0</sub> = 25, 50, 75, and 100 μS, together with the results of fitting with equations (dash‐dot lines) [14].

## **3. Conclusions**

In summary, STDP characteristics have been observed in memristors based on different kinds of materials, which makes memristors promising for bio‐inspired neuromorphic applications. Great efforts have also been made to investigate the factors that influence the STDP characteristics, such as the device hysteresis characteristic, the initial conductance, and the waveform of the voltage pulses applied to the memristor as the preneuron and postneuron voltage spikes. Different kinds of waveforms were used, and different kinds of STDP characteristics were observed.

#### **Acknowledgements**

This work was supported by the National Natural Science Foundation of China (No. 51402044, 51602039 and U1435208).

#### **Author details**

Yao Shuai\*, Xinqiang Pan and Xiangyu Sun

\*Address all correspondence to: yshuai@uestc.edu.cn

State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu, China




**Chapter 14**

**Neural Network-Based Analog-to-Digital Converters**

DOI: 10.5772/intechopen.73038

Aigerim Tankimanova and Alex Pappachen James

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.73038

> © 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this chapter, we present an overview of the recent advances in analog-to-digital converter (ADC) neural networks. Biological neural networks exhibit natural binarization reflected by the neurosynaptic processes. This natural analog-to-binary conversion ability of neurons can be modeled to emulate analog-to-digital conversion using a set of nonlinear circuit elements and existing artificial neural network models. Since one neuron consumes on average only about half a nanowatt of power during processing, neurons can perform highly energy-efficient operations, including pattern recognition. Analog-to-digital conversion itself is an example of simple pattern recognition, where the input analog signal can be represented as one of the 2<sup>N</sup> different patterns for N bits. The classical configuration of a neural network-based ADC is the Hopfield neural network ADC. Improved designs, such as the modified Hopfield network ADC, the T-model neural ADC, and the multilevel-neuron-based neural ADC, will be discussed. In addition, the latest architecture designs of neural ADCs, such as the hybrid complementary metal-oxide semiconductor (CMOS)-memristor Hopfield ADC, are covered at the end of this chapter.

Keywords: neural networks, analog-to-digital converters, Hopfield network

## 1. Introduction

This chapter presents a review of the advancements in the application of neural network (NN) systems in analog-to-digital converter (ADC) design. Analog-to-digital (A/D) conversion is an essential part of all microelectronic systems design that serves as a link between analog sensors and digital-processing circuitry [1]. The dominant period of ADC design development came with the maturity of complementary metal-oxide semiconductor (CMOS) technologies [1]. At present, there is a huge variety of high-speed and high-resolution ADCs based on the most





advanced CMOS processes that are applicable for different applications [1]. In fact, even though the ADC design field is mature, the complexity of constructing a properly operating ADC system that fits a given application is still high. In conventional CMOS ADCs, a number of appropriately designed analog circuits, such as switches, operational amplifiers, voltage converters, and so on, are required [2]. However, with modern advancements in computational systems and processing applications, the demand for faster processing and more flexible architectures that can perform a variety of tasks in the most efficient manner has increased. Artificial neural network (ANN) technology is a well-known candidate that can meet such demands in high-performance A/D conversion, as it divides the task between a number of simple processing elements (neurons) [3]. Further, neurons can perform highly energy-efficient pattern-recognition operations; in particular, one neuron consumes on average only about half a nanowatt of power during processing [4].



Since the early twentieth century, scientists and engineers have been trying to explain how human brain functions, and a number of models that are aimed to mimic some features of the biological neural networks were proposed. The work presented by McCulloch and Pitts [5] is one of the first examples of mathematical modeling of ANN that is based on a two-state neuron model. The information processing that is performed in biological neural networks incorporates memorization, learning, classification, and so on and is performed by natural binarization mechanism reflected by the neurosynaptic processes. Brain associative property that is used in processing information is being discussed widely since the 1960s and later. Based on the works presented by [5–8], Hopfield proposed a neural network model that incorporates associative memory property. The idea that he presented is actually a model of content addressable memory (CAM) that can be implemented in hardware [9–11]. He discovered that such a type of network has collective computational properties so that it can be used in solving different optimization problems [9–11].

One of the applications of such CAM-based neural network (NN) that was introduced by Hopfield and Tank includes solving simple optimization problem such as analog-to-digital (A/D) conversion, where the dynamics of the system is described by an energy function (or cost function) [9]. The main concept behind the proper operation of the Hopfield NN is based on minimization of the energy function, so that when the minimum value is achieved, the network reaches its stable state [9–12]. In general, A/D conversion can be classified as an example of simple pattern recognition where input analog signal can be presented in one of the 2<sup>N</sup> different patterns for N bits. In Hopfield NN-based ADC, these digital patterns are stored as a memory and are retrieved when the network reaches stable state after conversion period [11].

The NN model proposed by Hopfield represents a network of interconnected processing units (neurons) connected through a symmetric connection matrix with zero diagonal elements [9–11, 13]. The interconnection nodes between neurons can be viewed as synaptic strength values. The strength of each synapse is represented by the conductance value at each node. The network dynamics is governed by the behaviour of energy function, E, so that when the energy function is of the minimum value, the network reaches stable state and gives digital output [9–11, 13].
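The energy-minimization dynamics can be illustrated with a tiny discrete Hopfield network. The weights below are arbitrary symmetric values with a zero diagonal, chosen only to show that asynchronous threshold updates never increase the energy; they are not taken from any ADC design in this chapter.

```python
def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w[i][j] * s[i] * s[j]."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def update(w, s, i):
    """Asynchronous update of neuron i: threshold on its weighted input."""
    h = sum(w[i][j] * s[j] for j in range(len(s)))
    s[i] = 1 if h >= 0 else -1

# Symmetric weight matrix with zero diagonal, as the model requires.
w = [[0, 1, -2],
     [1, 0, 1],
     [-2, 1, 0]]
s = [1, 1, 1]
e0 = energy(w, s)
for i in range(3):
    update(w, s, i)
assert energy(w, s) <= e0  # energy never increases under these updates
```

The guarantee that each update cannot raise E is exactly why the symmetric, zero-diagonal structure matters: it makes E a quantity the network monotonically descends until it settles in a stable state.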

Therefore, in Section 2, a comprehensive discussion on Hopfield NN in general and the ADC based on the Hopfield NN design is presented. The section addresses such topics as the theory of the Hopfield NN, the description of how to construct an ADC structure and the problems that appear in the Hopfield NN ADC. In Section 3, a review of different designs based on original Hopfield ADC such as modified Hopfield neural ADC, NN-based ADCs with nonsymmetrical weight matrix, NN-based ADC with multilevel neurons and level-shifted neural ADC is presented. In Section 4, recent CMOS-memristor-based ADC architectures are reviewed. The last section summarizes and gives a conclusion for this chapter.

## 2. Hopfield neural network ADC

#### 2.1. The Hopfield ADC theory

advanced CMOS processes that are applicable for different applications [1]. In fact, even though the ADC design field is mature, the complexity of the construction of properly operating ADC system that fits certain applications is still high. In conventional CMOS ADCs, a number of appropriately designed analog circuitries, such as switches, operational amplifiers, voltage converters, and so on, are required [2]. However, with modern advancements in computational systems and processing applications, the demand for faster processing and more flexible architectures that can perform a variety of tasks in the most efficient manner has increased. Artificial neural network (ANN) technology is a well-known candidate that can resolve such demands in high-performance A/D conversion as it divides the task between a number of simple processing elements (neurons) [3]. Further, neurons can perform highly energy-efficient operations of pattern recognition, in particular, one neuron during processing consumes on average only

Since the early twentieth century, scientists and engineers have been trying to explain how human brain functions, and a number of models that are aimed to mimic some features of the biological neural networks were proposed. The work presented by McCulloch and Pitts [5] is one of the first examples of mathematical modeling of ANN that is based on a two-state neuron model. The information processing that is performed in biological neural networks incorporates memorization, learning, classification, and so on and is performed by natural binarization mechanism reflected by the neurosynaptic processes. Brain associative property that is used in processing information is being discussed widely since the 1960s and later. Based on the works presented by [5–8], Hopfield proposed a neural network model that incorporates associative memory property. The idea that he presented is actually a model of content addressable memory (CAM) that can be implemented in hardware [9–11]. He discovered that such a type of network has collective computational properties so that it can be used

One of the applications of such CAM-based neural network (NN) that was introduced by Hopfield and Tank includes solving simple optimization problem such as analog-to-digital (A/D) conversion, where the dynamics of the system is described by an energy function (or cost function) [9]. The main concept behind the proper operation of the Hopfield NN is based on minimization of the energy function, so that when the minimum value is achieved, the network reaches its stable state [9–12]. In general, A/D conversion can be classified as an example of simple pattern recognition where input analog signal can be presented in one of the 2<sup>N</sup> different patterns for N bits. In Hopfield NN-based ADC, these digital patterns are stored as a memory

The NN model proposed by Hopfield represents a network of interconnected processing units (neurons) connected through a symmetric connection matrix with zero diagonal elements [9–11, 13]. The interconnection nodes between neurons can be viewed as synaptic strength values. The strength of each synapse is represented by the conductance value at each node. The network dynamics is governed by the behaviour of energy function, E, so that when the energy function is of the minimum value, the network reaches stable state and gives digital output [9–11, 13].

Therefore, in Section 2, a comprehensive discussion on Hopfield NN in general and the ADC based on the Hopfield NN design is presented. The section addresses such topics as the theory

and are retrieved when the network reaches stable state after conversion period [11].

about half nanowatts of power [4].

298 Memristor and Memristive Neural Networks

in solving different optimization problems [9–11].

In his early works, Hopfield introduced the ideas behind the emergent collective computational properties of highly interconnected associative networks [9, 10]. The neural network models that were presented earlier were of Perceptron type and were implemented by feedforward architecture [13]. By contrast, Hopfield presented a different type of architecture with fully interconnected neurons, where each neuron translates its output to the inputs of the remaining neurons through feedback connections [9, 10]. The strength of each feedback connection is represented by its weight (or synapse). In a later work, Hopfield and Tank [11] presented methods of how the network can be applied in solving optimization problems, such as A/D conversion, signal decomposition and linear programming.

One of the earliest works on artificial neural networks (ANNs) by McCulloch and Pitts [5] described a two-state (on-state and off-state) stochastic neuron model that simplifies biological neural function to simple logical operation. However, this model was not applicable for analog processing as it did not have the continuous behaviour as of biological neurons [10, 11, 13]. Hopfield proposed the NN model with continuous neuron response [10, 11, 13], which has computation properties of the stochastic model [9] that can be implemented in hardware. Continuous neuron response in Hopfield NN can be interpreted as an analogy of graded dependence of firing rate produced by the soma of biological neuron as the input signal to the neuron membrane [10, 11, 13] without considering action potential signal details. Two states in neuron model are considered as '0' for not firing state and '1' for firing at a maximum rate [10, 11, 13]. The graded response function that describes such dependence is neuron's activation function gi ð Þ ui that is represented by monotonically increasing sigmoid function (Eq. (1))

$$g\_i(u\_i) = \frac{1}{1 + \exp\left(-u\_i\right)}\tag{1}$$

where $u_i$ is the input voltage to the neuron, so the neuron's output signal is $V_i = g_i(u_i)$.

The neuron output $V_i$ can be either logic high or logic low depending on whether the effective input voltage to the neuron, $u_i$, is higher or lower than the neuron threshold, as can be observed from Figure 1.
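As a minimal numerical sketch of this thresholding behaviour, the activation of Eq. (1) can be written as follows; the `gain` parameter is an illustrative addition (not part of the chapter's simplified form) that steepens the transition around the zero threshold:

```python
import math

def g(u, gain=5.0):
    """Sigmoid activation of Eq. (1). A large gain approximates the
    hard comparator behaviour of Figure 1 (illustrative assumption)."""
    return 1.0 / (1.0 + math.exp(-gain * u))
```

For inputs well above the zero threshold the output saturates near logic '1', and well below it near logic '0'.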

The hardware implementation of the 4-bit Hopfield NN ADC proposed in [11] is shown in Figure 2. As described in [11], at each analog input level the network creates an energy function surface that consists of local minima states with one global minimum for this particular analog input. The global minimum for each input level represents the correct digital representation of the input signal [11]. The dynamics of the system can be viewed as a flow in the energy state space that tends to minimize E, so that when the network reaches a minimum it stops the searching process [10, 13]. When the ADC network arrives at an energy minimum state, it should produce the output code that best represents the corresponding analog input. Thus, the E function is a Lyapunov stability function of the system [10]. Proper operation of the Hopfield ADC is achieved when the voltage level of the output code is equal to the value of the analog input, Eq. (2).


Neural Network-Based Analog-to-Digital Converters http://dx.doi.org/10.5772/intechopen.73038 301

Figure 1. Neuron sigmoidal transfer function.

Figure 2. 4-bit Hopfield neural network ADC.


$$V\_{In} = \sum\_{i=0}^{N-1} 2^i V\_i \tag{2}$$

The ADC network consists of four neurons that are interconnected by a synaptic weight matrix. The network dynamics is highly dependent on the values of the synaptic matrix elements. This dependency was analysed by Hopfield [9, 10], where it is deduced that for the system to reach a stable state two conditions should be maintained: (1) the synaptic weight matrix must be symmetric, $T_{ij} = T_{ji}$, and (2) the diagonal synaptic weights that correspond to feedbacks from neurons to their own inputs should be equal to zero, $T_{ii} = 0$. Under these conditions, as shown in [9–11, 13], the Hopfield neural network converges to a stable state. The energy function for the Hopfield network with symmetric weight matrix is given by Eq. (3)




$$E = -\frac{1}{2} \sum\_{i,j} T\_{ij} V\_i V\_j + \sum\_i \frac{1}{R\_i} \int\_0^{V\_i} g\_i^{-1}(V) \,dV - \sum\_i I\_i V\_i \tag{3}$$

where the term $g_i^{-1}(V)$ is equal to the neuron input potential $u_i$ and $R_i$ is the neuron input resistance [10, 13].

The NN proposed by Hopfield has features that correlate with biological NNs, and so it represents a simplified analogy of them. The system dynamics can be described by a first-order differential equation for the rate of change of the $i$th neuron input potential, Eq. (4). The capacitance $C$ present at the neuron input is a circuit representation of the neuron cell membrane capacitance, while the term $T_{In_i} + T_{R_i} + \sum_j T_{ij} = 1/R_i$, in which $R_i$ can be viewed as the neuron cell transmembrane resistance [6]

$$C\frac{du\_i}{dt} = \sum\_j T\_{ij} V\_j - \left(T\_{In\_i} + T\_{R\_i} + \sum\_j T\_{ij}\right) u\_i + T\_{In\_i} V\_{In} + T\_{R\_i} V\_{ref} \tag{4}$$

From Eq. (4), it is seen that the $i$th neuron is charged by integrating the current flowing into it, with a charging RC time constant [10, 13]. The current that flows into the neuron consists of three components formed prior to the neuron input: the postsynaptic current $T_{ij}V_j$ from neuron $j$, the analog input current $T_{In_i}V_{In}$ and the constant reference current $T_{R_i}V_{ref}$ [10, 13].

The ADC operation can also be described by the energy function shown in Eq. (5) [11]. The first term of Eq. (5) is the squared difference between the analog input voltage and the corresponding digital output voltage. As previously assumed, the value of the analog input voltage should be close to the voltage level of the corresponding output code, see Eq. (2). If for a particular $V_{In}$ the output code $V_3V_2V_1V_0$ is the most correct digital representation, the first term of Eq. (5) should be equal to zero [11]. The second term in Eq. (5) is added to ensure that the digital output voltages $V_i$ take the logic values '0' and '1' [11]

$$E = \frac{1}{2} \left( V\_{In} - \sum\_{i=0}^{N-1} 2^i V\_i \right)^2 - \frac{1}{2}\sum\_{i=0}^{N-1} 2^{2i} \left[ V\_i (V\_i - 1) \right] \tag{5}$$

After expanding and rearranging the above equation, the expression shown in Eq. (6) is obtained. From Eq. (6), the expressions for calculating the synaptic weights can be obtained, Eq. (7)

$$E = -\frac{1}{2} \sum\_{\substack{i,j = 0 \\ i \neq j}}^{N-1} \left( -2^{i+j} \right) V\_i V\_j - \sum\_{i=0}^{N-1} \left( -2^{(2i-1)} + 2^i V\_{In} \right) V\_i \tag{6}$$

$$T\_{ij} = -2^{(i+j)} \qquad T\_{ref\_i} = -2^{(2i-1)} \qquad T\_{In\_i} = 2^i \tag{7}$$


Therefore, four-bit Hopfield NN ADC can be designed by using Eqs. (2)–(7).

#### 2.2. The local minima problem

As already discussed, the stability of the Hopfield NN is achieved when the energy function is at a minimum in the state space, and the dynamics of the system moves towards decreasing the energy function. Thus, for the Hopfield NN the energy state space will have multiple local minima, each of which is able to stabilize the system dynamics. In theory, the ADC structure proposed by Tank and Hopfield [11], which is based on the Hopfield NN with a symmetric weight matrix, has to retrieve the correct digital response for the analog input voltage by means of the energy minima states assigned to each correct digital output. However, in practice, this concept does not work as expected: the local minima states corrupt the correct operation of the network [14–19].

In the original work [11], it was proposed to implement the neurons with CMOS operational amplifiers. The results obtained exhibit non-ideal ADC behaviour with incorrect output states (Figure 3).

Figure 3. Hopfield NN ADC transfer characteristics with digital errors.

It is found that after each A/D conversion cycle, the threshold voltage of each neuron circuit differs from the pre-set value of $u_{th} = 0$ V [11], such that the response of the comparators exhibits an offset. The authors in [11] proposed that this behaviour, caused by the hysteresis of the CMOS neurons, is, in addition to the local minima states, a dominant contributor to the wrong network response. The hysteresis change in thresholds makes the system stabilize at the local minimum state located closest to the network's present energy state at the moment of conversion [14–19]. One solution to this problem is to reset the neuron state to the initial threshold value after each conversion [11]. However, the main disadvantage of this method is that it requires more power.

Alternatively, in the works [11–16], the authors proposed to change the Hopfield ADC network architecture itself in order to eliminate the local minima states, which cause errors in the ADC outputs. Different methods of eliminating the local minima problem are proposed in [14–19] and are discussed in more detail in Section 3.

## 3. Hopfield neural network-based ADCs


The design presented by Hopfield and Tank is the first example of an ADC task implementation with neural networks. This idea became very popular later, as it appeared very simple compared to conventional designs and, moreover, it opened up the possibility to explore its phenomenological computational abilities, which is a good contribution to science and engineering in itself.

As previously described, the existence of local minima in the dynamics of the original Hopfield network ADC design corrupts its digital output by generating spurious states. This problem was addressed by several works that presented ways of eliminating the local minima states by changing the structure of the ADC network [14–19]. In the following subsections, two methods that are claimed to eliminate the problem of local minima of the energy function are presented.

### 3.1. Eliminating the local minima problem of Hopfield ADC

#### 3.1.1. Modified Hopfield architecture with correction currents

In the study by Lee and Sheu [14], the authors analysed the stability of the output codes of the Hopfield network ADC in terms of the overlap of input currents between two adjacent output codes, which is defined as GAP. According to Lee and Sheu [14], in order to avoid local minima states, this parameter should be greater than or equal to zero. Thus, it was deduced that in order to eliminate the current overlap condition, correction currents can be applied back to the inputs of the Hopfield network through an additional set of conductance weights [14].

The schematic diagram of the modified Hopfield network ADC is shown in Figure 4. The correction currents are generated by inverting amplifiers in order to compensate the overlap and to maintain the system dynamics converging to a stable state. Eq. (8) describes the dynamics of the network in a stable state with an applied correction current $I_{iC}$.

$$T\_i u\_i = \sum\_{\substack{j = 0 \\ j \neq i}}^{N-1} T\_{ij} V\_j + I\_i + I\_{iC} \tag{8}$$


The energy function of the modified Hopfield network is obtained by adding an additional term that represents the correction currents, Eq. (9). The correcting energy eliminates the local minima states and gives the network one global minimum energy state [14].

$$E\_C = -\frac{1}{2} \sum\_{\substack{i,j=0 \\ i \neq j}}^{N-1} T\_{ij} V\_i V\_j - \sum\_{i=0}^{N-1} I\_i V\_i - \sum\_{i=1}^{N-1} I\_{iC} V\_i \tag{9}$$

According to [14], certain conditions should be followed while selecting the correction currents and conductance values. The first condition is to avoid the state in which the GAP$_C$ parameter is less than zero, so that two adjacent codes cannot be stable simultaneously. The second condition states that the network dynamics must be sustained in the operation that minimizes the energy function of the system. The last condition is to keep the input current range appropriate for the global minimum value. For a detailed description of the method, please refer to [14].
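The corrected energy of Eq. (9) can be evaluated directly for a candidate output state; the weight matrix and current values used below are illustrative toy numbers, not taken from [14]:

```python
import numpy as np

def corrected_energy(T, I, I_c, V):
    """Corrected energy E_C of Eq. (9). Assumes T_ii = 0, so the i != j
    restriction of the quadratic term holds automatically; the
    correction-current sum starts at i = 1, as in Eq. (9)."""
    T, I, I_c, V = (np.asarray(a, dtype=float) for a in (T, I, I_c, V))
    return -0.5 * V @ T @ V - I @ V - I_c[1:] @ V[1:]
```

Comparing `corrected_energy` across candidate codes shows which state the corrected network would favour: the state with the lowest value is the (now unique) global minimum.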

#### 3.1.2. Non-symmetric Hopfield architecture

This type of architecture based on the Hopfield network is built with a non-symmetric connection weight matrix, which is another approach aimed at solving the local minima problem. In the designs of [15–19], the properties of triangular connections are analysed. In [19], the authors prove that with a triangular interconnection matrix the network operates without spurious states and that this type of architecture can be a good alternative to the original Hopfield design. A similar network type was analysed by Sun et al. [18], where it is proven that the local minima problem can be mitigated by using this architecture. Owing to the structure of the model [18], a learning component can be applied to the network, making this type of architecture advantageous over the original one.

In Figure 5, the non-symmetric T-model ADC is shown. The input current at each row represents the current flowing from the external analog input source and from the reference.

This section has presented an overview of designs of Hopfield-type neural network ADCs that solve the problem of the local minima of the energy function, which create digital errors at the output of the ADC. We introduced a brief explanation of two methods for eliminating the local minima.

### 3.2. Hopfield ADC with multilevel neurons

An interesting alternative design is proposed in [20, 21], in which the authors focus on implementing analog neurons with multiple states. The design, named the Multilevel Neural Network, is applied to the original Hopfield neural network ADC by replacing the conventional two-state sigmoidal neurons with multiple-state (or multiple-threshold) neurons [21]. The motivation behind this idea is to create a type of neural ADC with better resolution but with the same (or even smaller) number of synaptic weights as in the original Hopfield ADC design [18]. This method reduces the complexity of the weight matrix and makes it easier to implement an ADC with improved resolution in hardware [21].

The schematic diagram of the ADC proposed in [21] is shown in Figure 6. Although a distinct alternative neural network-based ADC design, it still does not solve the problem of the local minima states of the Hopfield associative network. In [21], the authors considered this case and proposed to resolve the local minima by the additional correction current method [14].

The multilevel neuron dynamics is described by the block diagram shown in Figure 7 [21]. In the original Hopfield ADC, the continuous neuron model dynamics is described by the first-order differential equation (Eq. (4)), and the two-state neuron activation function is expressed by Eq. (1). The neuron output is then equal to $V_i = g_i(u_i)$, and it can take two states, logic high and logic low (refer to Section 2). In the multilevel neuron model, the two-state activation function is replaced with a multiple-state nonlinearity block (Eq. (10)) (Figures 6 and 7)

$$V\_i = M\_i(u\_i) \tag{10}$$

Figure 4. Schematic representation of modified Hopfield network ADC.

Figure 5. Non-symmetric T-model ADC.

The nonlinearity function $M(u)$ shown in Eq. (10) is described as a sum of monotonically nondecreasing step functions $f_j(\cdot)$ with different threshold values $\theta_j$, where the state of the


Figure 6. Multilevel neural ADC.

Figure 7. Multilevel neuron block diagram.

Figure 8. Sigmoidal multilevel nonlinearity.


neuron changes. Each step function is multiplied by a positive coefficient that can be seen as an offset parameter $b_j$. The sigmoidal multilevel nonlinearity can be observed in Figure 8

$$M(u) = \sum\_{j=0}^{l-1} b\_j f\_j(u - \theta\_j) \tag{11}$$
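A sketch of the multilevel nonlinearity of Eq. (11), assuming unit-height steps $b_j = 1$ and evenly spaced thresholds $\theta_j = j$ (both illustrative choices, not prescribed in [21]):

```python
import numpy as np

def multilevel_activation(u, l=4, b=None, theta=None):
    """M(u) = sum_j b_j * f_j(u - theta_j), Eq. (11), with f_j a unit step.
    Default b_j = 1 and theta_j = j are illustrative assumptions."""
    b = np.ones(l) if b is None else np.asarray(b, dtype=float)
    theta = np.arange(l) if theta is None else np.asarray(theta, dtype=float)
    return float(np.sum(b[u >= theta]))   # sum the heights of all fired steps
```

The output climbs one level each time the input crosses a threshold, giving the staircase response of Figure 8.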

The neuron dynamics can be expressed by the block diagram shown in Figure 7. The term $X_i(u_i) = G_i u_i$ translates information about the current state to its own input, so that when the input current value $I_i$ is higher than $X_i(u_i)$, the state of the neuron is increased. In this design, the additional $G_i$ value is present as a diagonal element in the weight matrix [18]. Therefore, the system is described by Eq. (12)

$$C\_i\frac{du\_i}{dt} = \sum\_{j=0}^{n-1} T\_{ij}V\_j - G\_i u\_i + I\_i \tag{12}$$

$$u\_i = M^{-1}(V\_i) \tag{13}$$

The energy function for the multilevel ADC architecture can also be found from the square-of-difference expression, Eq. (14). The number of levels in the multilevel nonlinearity block of the neuron is $m = 0, 1, 2, \ldots, l-1$, where $l$ represents the base of conversion [21]. The system tends to find the correct digital representation with base $l$ of the analog input signal at the minimum energy function value [21]. After expanding Eq. (14), Eq. (15) is obtained, which gives the synaptic weight values of the network, Eq. (16)

$$E = \frac{1}{2} \left( V\_{In} - \sum\_{i=0}^{N-1} l^i V\_i \right)^2 \tag{14}$$

$$E = -\frac{1}{2} \sum\_{\substack{i,j=0 \\ i \neq j}}^{N-1} \left( -l^{i+j} \right) V\_i V\_j - \sum\_{i=0}^{N-1} l^i V\_{In} V\_i + \frac{1}{2} \sum\_{i=0}^{N-1} l^{2i} V\_i^2 \tag{15}$$

$$T\_{ij} = -l^{i+j} \tag{16}$$

Hopfield ADC blocks that operate in parallel. Each successive 2-bit ADC block receives input signal that is DC-shifted to some small positive voltage level. The design parameters can be


The preliminary results of the level-shifted neural ADC for a 16-quantization-level ADC are presented in [23]. As the design consists of multiple 2-bit Hopfield ADCs operating in parallel, the number of output bits in the digital code is larger compared to the conventional Hopfield ADC. Therefore, it is proposed to use a feedforward neural network encoder so that the digital output will be of a 4-bit format and also to reduce the error in computation due to the local

Since memristor, the fourth fundamental circuit element [24], was discovered by HP Labs in 2008 [25], the device is receiving very high attention as it has a potential to emulate the functionality of biological synapses. During the past decade, many scientists have shown a variety of methods of memristor application in hardware design of ANN systems. For instance, in [26] the hybrid CMOS-memristor Hopfield network-based associative memory is demonstrated. While in the work conducted by Guo et al. [27], the CMOS-memristor hybrid architecture is applied in the design of 4-bit Hopfield neural ADC. Figure 10 reflects the

The CMOS-memristor hybrid Hopfield ADC [27] consists of memristor-based weight matrix and sigmoidal CMOS neurons. The advantage of implementing constant synapses (in Hopfield NN for ADC design synaptic weights a preset and kept unchanged [11]) with memristors is that being a nanoscale device, memristors consume much less power [27]. Moreover, they significantly reduce the on-chip area compared to CMOS-based synaptic weights [27]. In their work, Guo et al. [27] demonstrated the simulation of the proposed system

The tuning of memristors is performed by applying either voltage or current pulses with gradually changing amplitude (and/or width) continuously until the device reaches a desired resistance state [27]. In order to sustain the pre-programmed resistances in memristive weight matrix, the network-operating region (analog input and neuron maximum output voltage) was scaled down so as to prevent any resistance state fluctuations in memristors [27]. The CMOS-memristor hybrid ADC applied resetting the neuron states technique similar to that

Another type of CMOS-memristor hybrid neural ADC is a T-model neural ADC architecture proposed by [2]. In the design by Wang et al. [2], the additional least mean square (LMS) training algorithm is applied in order to optimize the system operation to certain conditions. The LMS algorithm that was used in [2] allows flexibility to ADC in terms of voltage operation region. The training algorithm is implemented by means of digital training block connected to the T-model weight matrix. The works presented in [2, 27] introduce architectures of neural

adjusted depending on the application of the ADC.

4. CMOS/memristor hybrid network-based ADC

and also successfully implemented their circuit in hardware.

demonstrated in [11] for reduction of the effects of the local minima states.

minima and circuit nonidealities.

schematic of the system proposed in [27].

The ADC with multilevel neurons also suffers from the local minima problem, which the authors of [21] solve by applying a technique similar to that proposed in [14] and described in the previous subsection. Another method of eliminating incorrect output responses for the multilevel neuron-based ADC was presented in [19], where a parallel hardware-annealing technique was introduced.

#### 3.3. Hopfield neural network-based level-shifted ADC

In the previous subsections, we discussed various architectures that are modified versions of the Hopfield ADC, such as the ADC with correction current, the ADC with nonsymmetric weight matrix and the ADC with multilevel neurons. All these designs are based on the original Hopfield ADC structure. In this subsection, however, we discuss a type of Hopfield-based ADC that differs from the earlier architectures. The level-shifted neural ADC [23] is a new type of architecture constructed with multiple 2-bit Hopfield ADCs and voltage level shifters (Figure 9). The ADC design proposed by Hopfield and Tank [11] produces a 4-bit digital output, which is of limited practical use in modern technologies. In order to increase the number of neurons in the Hopfield NN ADC [11], the corresponding scaling of input and output voltages should be made according to Eq. (2). Therefore, if the goal is to increase the resolution by increasing the number of neurons of the Hopfield ADC, the binary output voltage values from the neurons will be reduced. Furthermore, the resolution change will require appropriate scaling of the weight matrix. These two problems were addressed in [20–22], where methods that solve them were presented. The level-shifted neural ADC is another method that can solve the resolution improvement issue of the Hopfield NN ADC.

The operation principle of the proposed level-shifted neural ADC [23] is simpler than that of the designs in [14–22]. As mentioned, the design consists of multiple 2-bit Hopfield ADC blocks that operate in parallel. Each successive 2-bit ADC block receives an input signal that is DC-shifted to a small positive voltage level. The design parameters can be adjusted depending on the application of the ADC.

Figure 9. Level-shifted neural ADC.

Preliminary results of the level-shifted neural ADC for a 16-quantization-level ADC are presented in [23]. As the design consists of multiple 2-bit Hopfield ADCs operating in parallel, the number of output bits in the digital code is larger than in the conventional Hopfield ADC. Therefore, it is proposed to use a feedforward neural network encoder, both to bring the digital output to a 4-bit format and to reduce the computation error due to the local minima and circuit nonidealities.
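The level-shifted principle can be sketched behaviorally. The toy model below assumes ideal 2-bit quantizers, a unit input range, evenly spaced DC shifts, and a trivial encoder that picks the highest block the input has reached; none of these are circuit parameters taken from [23].

```python
def level_shifted_adc(x, blocks=4, bits=2):
    """Toy behavioral model of the level-shifted idea: each 2-bit block sees a
    DC-shifted copy of the input in [0, 1), and an encoder combines the block
    outputs into one 4-bit code. Idealized illustration only, not the circuit
    of the published design."""
    levels = 2 ** bits                 # 4 levels per 2-bit block
    total = blocks * levels            # 16 quantization levels overall
    span = 1.0 / blocks                # input sub-range covered by each block
    codes = [min(max(int((x - k * span) * total), 0), levels - 1)
             for k in range(blocks)]   # ideal 2-bit quantizer per shifted copy
    k = max(i for i in range(blocks) if x >= i * span)  # highest active block
    return levels * k + codes[k]       # simple encoder to one 4-bit code

for x in (0.05, 0.30, 0.62, 0.99):
    print(x, level_shifted_adc(x))     # matches int(16 * x) for x in [0, 1)
```

Each shifted block only has to resolve its own quarter of the range with 2-bit precision, which is how the parallel arrangement reaches 16 levels without scaling a single Hopfield network up.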

## 4. CMOS/memristor hybrid network-based ADC


Since the memristor, the fourth fundamental circuit element [24], was discovered by HP Labs in 2008 [25], the device has received very high attention, as it has the potential to emulate the functionality of biological synapses. During the past decade, many scientists have demonstrated a variety of ways to apply memristors in the hardware design of ANN systems. For instance, in [26] a hybrid CMOS-memristor Hopfield network-based associative memory is demonstrated, while in the work conducted by Guo et al. [27], a CMOS-memristor hybrid architecture is applied in the design of a 4-bit Hopfield neural ADC. Figure 10 reflects the schematic of the system proposed in [27].

The CMOS-memristor hybrid Hopfield ADC [27] consists of a memristor-based weight matrix and sigmoidal CMOS neurons. The advantage of implementing constant synapses (in the Hopfield NN for ADC design, synaptic weights are preset and kept unchanged [11]) with memristors is that, being nanoscale devices, memristors consume much less power [27]. Moreover, they significantly reduce the on-chip area compared to CMOS-based synaptic weights [27]. In their work, Guo et al. [27] demonstrated the proposed system in simulation and also successfully implemented their circuit in hardware.

The tuning of memristors is performed by applying either voltage or current pulses with gradually changing amplitude (and/or width) until the device reaches a desired resistance state [27]. In order to sustain the pre-programmed resistances in the memristive weight matrix, the network-operating region (analog input and maximum neuron output voltage) was scaled down so as to prevent any resistance state fluctuations in the memristors [27]. The CMOS-memristor hybrid ADC applied a neuron-state resetting technique similar to that demonstrated in [11] to reduce the effects of the local minima states.
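The pulse-tuning procedure can be sketched as a simple feedback loop. The resistance bounds, per-pulse change and tolerance below are illustrative assumptions, not device parameters from [27]; real memristors respond nonlinearly, which is why the pulse amplitude and width are ramped in practice.

```python
def tune_memristor(target_R, R_init=10e3, R_on=1e3, R_off=100e3,
                   dR_per_pulse=500.0, tol=200.0, max_pulses=1000):
    """Iterative pulse-programming sketch: apply SET or RESET pulses until the
    device resistance falls within a tolerance band around the target.
    A linear resistance change per pulse is assumed purely for illustration."""
    R = R_init
    for pulse in range(max_pulses):
        if abs(R - target_R) <= tol:
            return R, pulse               # programmed within tolerance
        if R > target_R:                  # SET pulse lowers resistance
            R = max(R_on, R - dR_per_pulse)
        else:                             # RESET pulse raises resistance
            R = min(R_off, R + dR_per_pulse)
    return R, max_pulses                  # gave up: report where we ended

R, n = tune_memristor(target_R=25e3)
print(R, n)                               # -> 25000.0 30
```

The read-verify-pulse loop is the essential idea: programming stops as soon as the measured state is close enough to the target, which is what lets a whole weight matrix be written device by device.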

Another type of CMOS-memristor hybrid neural ADC is the T-model neural ADC architecture proposed in [2]. In the design by Wang et al. [2], an additional least mean square (LMS) training algorithm is applied in order to optimize the system operation to certain conditions. The LMS algorithm used in [2] gives the ADC flexibility in terms of its voltage operating region. The training algorithm is implemented by means of a digital training block connected to the T-model weight matrix. The works presented in [2, 27] introduce architectures of neural


ADC that utilize memristors as synaptic weight elements. They demonstrate that the low power consumption of memristive devices can be exploited in the Hopfield NN ADC design. However, the Hopfield network still requires additional circuitry to eliminate the local minima-related errors.

Figure 10. CMOS-memristor hybrid neural ADC.
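The spirit of the LMS-trained T-model approach of [2] can be illustrated with a generic LMS adaptation loop. The weight dimension, learning rate and training signals below are assumptions for illustration; the cited work performs a comparable update with a digital training block acting on the T-model weight matrix, not this exact loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic LMS sketch: adapt weights w so that y = w . x tracks a desired
# response d. This is not the circuit of the cited design, only the update
# rule it relies on.
n_taps, mu = 4, 0.05
w_true = np.array([1.0, -2.0, 0.5, 3.0])   # "unknown" target weights (assumed)
w = np.zeros(n_taps)
for _ in range(2000):
    x = rng.standard_normal(n_taps)        # training input sample
    d = w_true @ x                         # desired output
    e = d - w @ x                          # output error
    w += mu * e * x                        # LMS update: w <- w + mu * e * x
print(np.round(w, 3))                      # converges close to w_true
```

The point of such a loop in an ADC context is that the conversion weights need not be fixed at design time: the training block can re-adapt them to a different voltage operating region.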

## 5. Discussion

The Hopfield network-based ADCs represent a compact approach to the implementation of the analog-to-digital conversion task. However, when implementing the model in hardware, multiple circuit nonidealities create errors in the digital output that must be corrected. For instance, as discussed previously, the offset response (hysteresis) of the comparators after each conversion creates conditions for the network to develop incorrect patterns. A possible solution for eliminating the offset is to reset the comparators periodically after each conversion to the initial 0-V threshold state, as already mentioned [11]. However, this method is not preferable in terms of circuit implementation, as such a circuit requires more power. Another problem, as discussed previously, is the local minima behaviour of the Hopfield network, which creates spurious states so that the output does not correspond to the desired response. The existence of local minima of the network was deduced by circuit analysis techniques in [14], and it was proposed to add a feedback current that balances the network and creates a single energy minimum for the whole system dynamics.

Thus, the Hopfield NN-based ADC examples discussed in this chapter are still not adopted into practical use. Even though the local minima problem was mitigated, there is not much analysis on resolution improvement. In [20–22], 8 bits of resolution was achieved by means of a multilevel neuron structure. However, the ADC structure becomes much more complex, since it incorporates multilevel nonlinearity blocks in each neuron and also uses the correction current technique of [14]. Therefore, in order to achieve the best possible performance from designs such as Hopfield network-based ADCs, the complexity of the system components must be increased, and many parameters, such as circuit mismatches and offsets, must be taken into account, since they can affect the output significantly. In addition, the analog structure of Hopfield network-based ADCs limits resolution improvement and thus makes these designs difficult to implement and to make compatible with conventional ADCs.

An alternative ADC structure based on the Neural Engineering Framework (NEF) was demonstrated in [28], where it is proposed to shift as much of the system as possible into the digital domain: only the front end of the ADC incorporates a feedforward-type neural network encoder that passes the signal to analog neurons, and the rest of the processing is done in digital form. Since the design uses a large population of neurons at the input, the system is robust to the failure of some of them. Moreover, the stability issue no longer arises in this type of architecture, as the neural network used in the design is purely feedforward. The NEF ADC is flexible and scalable, as it mostly consists of digital circuitry, and it can therefore be adapted to various system requirements and technologies. However, the unresolved issue of the design is the very high power consumption of the network [28].

## 6. Conclusion


This chapter presents a review of existing neural network-based ADC designs. A/D conversion is an essential process in microelectronic systems that creates a connection between analog systems (e.g., sensors) and digital-processing circuitry [1]. With modern advancements in submicron CMOS technologies, the variety of high-speed and high-resolution ADCs used in different applications has increased [1]. Yet, despite the maturity of the field, the complexity of building an ADC has not been reduced. Moreover, for applications that require higher performance and flexibility, the resources of conventional ADC architectures may not be enough. Artificial intelligence is being considered as a way to tackle such high requirements on speed and performance, and A/D conversion is not excluded from the list of operations that can be performed by means of ANNs.

In classical works [9, 10], Hopfield proposed a mathematical CAM model that consists of a group of interconnected two-state neurons that exhibit collective computational behaviour. He further presented a design that can solve optimization problems [11]. The A/D conversion in his work [11] was treated as a simple optimization problem in which the value of an energy function that describes the dynamics of the ADC system is minimized. He presented a 4-bit NN ADC architecture that can be implemented in hardware [11]. However, the ADC architecture has an intrinsic imperfection due to multiple local minima of the energy function that create digital errors in the output of the ADC [14–19].

To solve the local minima problem, several methods were proposed in [14–19]. As discussed in Section 3, there are two main methods of eliminating the local minima states and obtaining one global minimum. In the modified Hopfield ADC design, it is proposed to apply correction currents back to the input of each neuron in order to reduce the overlapping current occurring between adjacent output codes [14]. This method eliminates local minima and creates one global minimum towards which the network flow is attracted [14]. Another interesting method that also reduces the effects of local minima is the neural ADC with a non-symmetrical weight matrix connection [15–19]. ADC architectures with a nonsymmetrical weight matrix do not create multiple energy minima states; as a result, such networks are also attracted to a global minimum energy state [15–19].

The multilevel neural ADC architecture [21] is based on the original Hopfield ADC structure but with a modified neuron model. The authors in [21] proposed a multiple-state neuron implementation aimed at improving the resolution of the ADC. A similar goal, to improve resolution, was pursued by the level-shifted neural ADC architecture [23], which is built with multiple Hopfield ADC blocks and voltage level shifters.

In addition to the CMOS-based neural ADC structures presented in Section 3, examples of CMOS-memristor-based neural ADC architectures [2, 27] are discussed in Section 4. The memristor device is a promising technology that is aimed at expanding the capabilities of traditional CMOS-based systems. The application of memristors in neuromorphic circuits and the development of new memristor-based architectures are currently being widely discussed. In [2, 27], a traditional implementation of the neural ADC architecture was modified by the addition of memristors. The demonstrated results in [2, 27] have shown that there is potential in the application of memristors in CMOS-based systems, as memristors consume less power and save on-chip area, which makes the memristor-based neural ADC an attractive alternative to the traditional NN-based ADC designs discussed previously. To sum up, a general overview of the NN-based ADC design area is presented in this chapter.

## Author details

Aigerim Tankimanova\* and Alex Pappachen James

\*Address all correspondence to: atankimanova@nu.edu.kz

Nazarbayev University, Astana, Kazakhstan

## References

[1] Van de Plassche RJ. CMOS Integrated Analog-to-Digital and Digital-to-Analog Converters. 2nd ed. Netherlands: Kluwer Academic Publishers; 2003. 588 p

[2] Wang W, You Z, Liu P, Kuang J. An adaptive neural network A/D converter based on CMOS/memristor hybrid design. IEICE Electronics Express. 2014;11(24):1-6

[3] Yang H, Sarpeshkar R. A bio-inspired ultra-energy-efficient analog-to-digital converter for biomedical applications. IEEE Transactions on Circuits and Systems I: Regular Papers. 2006;53(11):2349-2356

[4] Aiello L. Brains and guts in human evolution: The expensive tissue hypothesis. Brazilian Journal of Genetics. 1997;20(1):141-148

[5] McCulloch WS, Pitts WH. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics. 1943;5:115-133

[6] Cooper L. A possible organization of animal memory and learning. In: Lundquist B, Lundquist S, editors. Proceedings of the Nobel Symposium on Collective Properties of Physical Systems. New York: Academic Press; 1973. pp. 252-264

[7] Warren S, Pitt M, Pitt W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics. 1943;5(4):115-133

[8] Longuet-Higgins HC. The non-local storage of temporal information. Proceedings of the Royal Society of London B: Biological Sciences. 1968;171(1024):327-334

[9] Hopfield J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America. 1982;79(8):2554-2558

[10] Hopfield J. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America. 1984;81(Biophysics):3088-3092. DOI: 10.1073/pnas.81.10.3088

[11] Tank D, Hopfield J. Simple "neural" optimization networks: An A/D converter, signal decision circuit, and linear programming circuit. IEEE Transactions on Circuits and Systems. 1986;33(5):533-541

[12] Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review. 1958;65(6):386-408

[13] Hopfield J, Tank D. Computing with neural circuits: A model. Science, New Series. 1986;233(4764):625-633

[14] Lee B, Sheu B. Modified Hopfield neural networks for retrieving the optimal solution. IEEE Transactions on Neural Networks. 1991;2(1):137-142

[15] Chande V, Poonacha P. On neural networks for analog to digital conversion. IEEE Transactions on Neural Networks. 1995;6(5):1269-1274

[16] Avitabile G, Manetti S. Some structures for neural based A/D conversion. Electronic Letters. 1990;26(18):1516-1517

[17] Gray D, Michel A, Porod W. Application of neural networks to sorting problems. 27th IEEE Conference on Decision and Control; 7-9 Dec; IEEE; 1988. pp. 350-351

[18] Sun CL, Tang Z, Ishizuka O, Matsumoto H. Synthesis and implementation of T-model neural-based A/D converter. In: IEEE International Symposium on Circuits and Systems; 10-13 May; IEEE; 1992. pp. 1573-1576

[19] Avitabile G, Forti M, Manetti S, Marini M. On a class of nonsymmetrical neural networks with application to ADC. IEEE Transactions on Circuits and Systems. 1991;38(2):202-209

[20] Yuh J, Newcomb R. Circuits for multi-level neuron nonlinearities. In: International Joint Conference on Neural Networks; 7–11 June; IEEE; 1992. pp. 27-32

[21] Yuh J, Newcomb R. A multilevel neural network for A/D conversion. IEEE Transactions on Neural Networks. 1993;4(3):470-483

[22] Bang SH, Chen O, Chang J, Sheu B. Paralleled hardware annealing in multilevel Hopfield neural networks for optimal solutions. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 1995;42(1):46-49

[23] Tankimanova A, Kumar Maan A, James A. Level-shifted neural encoded analog-to-digital converter. In: 24th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2017)

[24] Kumar Maan A, Ai JD, James AP. A survey of memristive threshold logic circuits. IEEE Transactions on Neural Networks and Learning Systems. 2016;PP(99):1-13

[25] Strukov DB, Snider GS, Stewart DR, Williams RS. The missing memristor found. Nature. 2008;453(7191):80-83

[26] Hu SG, Liu Y, Liu Z, Chen TP, Wang JJ, Yu Q, Deng LJ, Yin Y, Hosaka S. Associative memory realized by a reconfigurable memristive Hopfield neural network. Nature Communications. 2015;6:7522

[27] Guo X, Merrikh-Bayat F, Gao L, Hoskins DB, Alibart F, Linares-Barranco B, Theogarajan L, Teuscher C, Strukov DB. Modelling and experimental demonstration of a Hopfield network analog-to-digital converter with hybrid CMOS/memristor circuits. Frontiers in Neuroscience. 2015;9:488

[28] Mayr CG, Partzsch J, Noack M, Schuffny R. Configurable analog-digital conversion using the neural engineering framework. Frontiers of Neuroscience. 2014;8:201

## *Edited by Alex Pappachen James*

This book covers a range of models, circuits and systems built with memristor devices and networks in applications to neural networks. It is divided into three parts: (1) Devices, (2) Models and (3) Applications. The resistive switching property is an important aspect of memristors, and several designs exploiting it are discussed in this book, such as metal oxide/organic semiconductor nonvolatile memories, nanoscale switching and degradation of resistive random access memory, and graphene oxide-based memristors. The modelling of memristors is required to ensure that the devices can be put to use and improve emerging applications. In this book, various memristor models are discussed, from a mathematical framework to implementations in SPICE and Verilog, which will be useful for practitioners and researchers to get a grounding in the topic. The applications of the memristor models in various neuromorphic networks are discussed, covering various neural network models, implementations in A/D converters and hierarchical temporal memories.

Memristor and Memristive Neural Networks
