**3.2 Neural codes**

A neural code characterizes how raw analog signals are converted into neural responses. In general, there are two distinct classes of neural codes. One class converts analog signals into a spike train in which only the number of spikes matters, known as the rate code. The other class converts analog signals into a temporal response structure [36] in which the time intervals matter, known as the temporal code.

**Figure 5** demonstrates the major differences between the rate code and the temporal code. In the rate code, analog signals are encoded into the firing rate within a sampling period, as shown in **Figure 5a**. In terms of implementation complexity, the rate encoding scheme is easier to realize in electronic circuits than the temporal encoding scheme; however, small variations of the analog signal in the temporal response structure are neglected, which makes the rate code inherently ambiguous in real-time computation [36].

**Figure 5.**

*Neural codes in (a) rate code, (b) time-to-first-spike latency code, and (c) inter-spike-interval temporal code.*

In [37], researchers discovered that neural information depends not only on the spatial structure but also on the temporal structure. The time-to-first-spike (TTFS) latency code [38–40] is one of the simplest temporal encoding schemes. As demonstrated in **Figure 5b**, in a TTFS latency code, an analog signal is encoded into the time interval between the start of the sampling period and the generated spike. However, the encoding error can be large if the system behaves abnormally.
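The two schemes of **Figure 5a** and **5b** can be contrasted with a minimal sketch. The function names, window length, and maximum rate below are illustrative assumptions, not parameters of the actual design:

```python
def rate_encode(x, t_window=1.0, max_rate=100.0):
    """Rate code (Figure 5a): map a normalized analog value x in [0, 1]
    to a spike count within one sampling window; only the count matters."""
    return round(x * max_rate * t_window)

def ttfs_encode(x, t_window=1.0):
    """TTFS latency code (Figure 5b): map x in [0, 1] to the delay of a
    single spike from the start of the window; stronger inputs fire earlier."""
    return (1.0 - x) * t_window

print(rate_encode(0.42))  # 42 spikes in the window
print(ttfs_encode(0.42))  # a single spike near t = 0.58
```

Note how the rate code discards the fine temporal placement of spikes, which is exactly the ambiguity discussed above, while the TTFS code reduces the whole window to one latency value.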

The inter-spike-interval (ISI) code is another branch of the temporal code, in which the encoded analog signal depends on the internal time correlation between spikes [41, 42], as illustrated in **Figure 5c**. In general, an ISI temporal encoder converts the analog signal into several inter-spike intervals, allowing each spike to serve as the reference frame for the others. The ISI code is therefore capable of carrying more information within a sampling period than the TTFS latency code.
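A small sketch makes the capacity argument concrete: within one sampling window, a train of *k* spikes yields *k* − 1 consecutive intervals, i.e., multiple encoded values, where a TTFS code yields only one. The spike times below are hypothetical:

```python
def isi_decode(spike_times):
    """ISI code (Figure 5c): information is carried by the intervals
    between consecutive spikes, so each spike is the reference frame
    for the next one."""
    return [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

# One window containing 4 spikes yields 3 intervals (3 encoded values).
intervals = isi_decode([0.10, 0.25, 0.55, 0.70])
print(intervals)
```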

**Figure 6a** shows the simplified functional diagram of the ISI temporal encoder. The encoder employs an iterative architecture in which each LIF neuron operates in a separate clock period. The signal regulation layer is built from a current mirror array that duplicates the input excitation current for each LIF neuron; the neuron pool, together with the signal integration layer, achieves the iterative characteristic. Our ISI temporal encoder chip was fabricated in a standard GlobalFoundries (GF) 180 nm CMOS technology, as depicted in **Figure 6b**.

**Figure 6.**

*(a) Simplified function diagram of ISI temporal encoder and (b) die photo of our fabricated ISI temporal encoder chip [32].*

The number of spikes in the ISI code discussed in [32] is directly proportional to the number of neurons. Even though this linear correlation is desirable, its hardware implementation is far more challenging. On the other hand, an exponential relation increases the number of spikes, and thus the information carried, even with the same number of neurons. With the iterative structured ISI temporal encoder, the number of generated spikes, *SN*, is determined by the number of neurons and can be written as

$$S_N = 2^N - 1, \tag{4}$$

where *N* defines the total number of neurons.
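Eq. (4) is easy to check numerically. The sketch below contrasts a linear scheme, in which the spike count is proportional to the neuron count, with the exponential relation of the iterative encoder (function names are illustrative):

```python
def spikes_linear(n):
    """Linear scheme: spike count proportional to the neuron count."""
    return n

def spikes_iterative(n):
    """Iterative ISI encoder, Eq. (4): S_N = 2^N - 1."""
    return 2**n - 1

for n in (2, 4, 8):
    print(n, spikes_linear(n), spikes_iterative(n))
# With 8 neurons the iterative encoder generates 255 spikes; a linear
# scheme would need 255 neurons to produce the same count.
```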


*Opening the "Black Box" of Silicon Chip Design in Neuromorphic Computing*

From Eq. (4), it can be observed that, even with the same number of neurons, the ISI temporal encoder produces more spikes than the design in [35] and can thereby carry more information. The iterative structure also greatly reduces the power consumption, since fewer neurons are needed to produce the same number of spikes.

In this iterative design, the ISI temporal encoder samples the original analog signal without A/D or D/A conversion and converts it into several inter-spike intervals. The expression for the inter-spike interval can be simplified as

$$D_i = \frac{A_i}{I_{ex} - I_{leak}}, \tag{5}$$

where *Ai* = *Cm* ∙ *Vm*. In the IC implementation, the membrane capacitance is fixed; thus, *Vi* is a constant, and the variable *Ai*, expressed in terms of the excitation current, can be defined as

$$A_i = \beta \cdot A_{N-1} = \beta^2 \cdot A_{N-2} = \dots = \beta^{N-1} \cdot A_1, \tag{6}$$

where β is an arbitrary design parameter.

The general expression for each inter-spike interval, as demonstrated in **Figure 7**, can be written as

$$D_{2^{N-1}-1} = \frac{1}{A_N} \cdot \frac{V_{N-1}}{\beta^{N-1}}, \tag{7}$$

$$D_{2^{N-1}-2} = \frac{1}{A_1} \cdot \left(\frac{V_{N-2}}{\beta^{N-2}} - \frac{V_{N-1}}{\beta^{N-1}}\right), \tag{8}$$

⋮

$$D_{2^{N-1}} = \frac{1}{A_1} \cdot \left(\frac{V_1}{\beta^1} - \frac{V_2}{\beta^2} - \frac{V_3}{\beta^3} - \dots - \frac{V_{N-3}}{\beta^{N-3}} - \frac{V_{N-2}}{\beta^{N-2}} - \frac{V_{N-1}}{\beta^{N-1}}\right). \tag{9}$$

**Figure 7.**

*ISI temporal spike train with N LIF neurons.*
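Eqs. (5) and (6) can be evaluated numerically with a minimal sketch. All values below (*A*1 = 1 pC, β = 2, *Iex* = 10 nA, *Ileak* = 2 nA) are hypothetical illustrations, not parameters of the fabricated chip:

```python
def interval(a_i, i_ex, i_leak):
    """Eq. (5): D_i = A_i / (I_ex - I_leak)."""
    return a_i / (i_ex - i_leak)

def charge_ladder(a_1, beta, n):
    """Eq. (6): A_k = beta^(k-1) * A_1 for k = 1..N, so each neuron's
    integrated charge is a beta-scaled copy of the previous one's."""
    return [a_1 * beta**(k - 1) for k in range(1, n + 1)]

# Hypothetical values: A_1 = 1 pC, beta = 2, I_ex = 10 nA, I_leak = 2 nA.
ladder = charge_ladder(1e-12, 2.0, 4)            # [1, 2, 4, 8] pC
ds = [interval(a, 10e-9, 2e-9) for a in ladder]  # intervals also scale by beta
print(ds)
```

Because the same net current *Iex* − *Ileak* drives every neuron, the intervals inherit the β-geometric spacing of the charges, which is what lets *N* neurons span 2*<sup>N</sup>* − 1 distinguishable spike positions.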

**4. CMOS nervous system design**


With respect to the analog design of neural codes, our spiking neural network chip adopts the ISI temporal encoding scheme as its pre-signal-processing module and a reservoir computing module with a delay topology as its processing element. Our spiking neural network, named the analog delayed feedback reservoir (DFR) system, can be considered a simplification of conventional reservoir computing.


*DOI: http://dx.doi.org/10.5772/intechopen.83832*





*Bio-Inspired Technology*


