**2. Neural encoding schemes**

As mentioned in Section 1, neural encoding schemes represent the different ways input signals are converted to spike signals in spiking neural networks. Researchers have put much effort into developing encoding schemes that exploit different properties of spike trains in SNNs [24]. The most straightforward, and thus the earliest, encoding scheme is rate encoding [21]. It uses the number of spikes in a spike train to represent the input information. **Figure 1(a)** shows that the input stimulus is transferred to the firing rate during the sampling window. Therefore, as long as their spike counts are the same, two different spike trains still stand for the same input signal. The simplicity of this code has led to its widespread use in current applications. For instance, the Intel Loihi chip, which is a neuromorphic

#### **Figure 1.**

*Examples of encoding scheme. (a) Presentation of rate encoding. Input stimulus is transferred to the firing rate in the encoding window. (b) Presentation of TTFS encoding. Input stimulus is converted to the time difference between the onset of the window and the first spike. (c) Presentation of ISI encoding. Input information is transferred to the time intervals of spikes.*

research test chip designed by Intel Labs, uses an asynchronous SNN to implement adaptive, self-modifying, event-driven, fine-grained parallel computations for efficient learning and inference. It utilizes rate encoding in its neural network and has been evaluated in many applications, such as adaptive robot arm control and drone motor control [25, 26]. It consumes less than 1 watt of power while maintaining good operating speed. Moreover, the Tianjic chip also implements rate encoders in its neural network and achieves high accuracy in pattern recognition applications [27, 28]. However, this simple encoding scheme has its disadvantages. First, rate encoding has a lower data capacity than other encoding schemes, since it only utilizes the number of spikes in a spike train and ignores its temporal pattern. Second, the low data capacity also leads to low robustness against noise and errors: since one spike train represents a single input value, any corruption of the spike train leads to an inaccurate result.
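The rate code described above can be sketched in a few lines. This is a minimal illustration, not any chip's actual encoder: it assumes a scalar stimulus normalized to [0, 1] and a Bernoulli (stochastic) spike-generation model, so only the expected spike count over the window carries information.

```python
import numpy as np

def rate_encode(stimulus, window_steps=100, seed=0):
    """Encode a scalar stimulus in [0, 1] as a binary spike train.

    Each time step fires with probability equal to the stimulus, so the
    expected spike count over the window encodes the input value; the
    exact spike timings are irrelevant, only the count matters.
    """
    rng = np.random.default_rng(seed)
    p = np.clip(stimulus, 0.0, 1.0)
    return (rng.random(window_steps) < p).astype(int)

train = rate_encode(0.3, window_steps=1000)
# Decoding is simply the mean firing rate over the sampling window.
decoded = train.mean()
```

Note that two runs with different seeds produce different spike trains, yet both decode to approximately the same stimulus, which is exactly the equivalence-up-to-spike-count property discussed above.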

To overcome these drawbacks, other encoding schemes that exploit additional properties of spike trains have been proposed for the spike encoding process. Temporal patterns, that is, the precise timings of spikes within the spike train, are the most commonly used property [29]. A large category of neural codes, called temporal encoding, therefore employs both the spike count and the temporal pattern of the spike train to represent the stimulus. Among these temporal codes, three are the most widely used: the time-to-first-spike (TTFS) code, the inter-spike interval (ISI) code, and the phase-of-firing code.

Time-to-first-spike encoding, also known as latency encoding, is the most basic temporal encoding scheme [22, 30]. As the name suggests, TTFS encoding converts the input information into the time difference between the onset of the sampling window and the first spike. Since only the first spike carries information, for energy efficiency there is normally only one spike in a TTFS-encoded spike train, as demonstrated in **Figure 1(b)**. Since the onset of sampling windows is often defined by

#### *Spiking Neural Encoding and Hardware Implementations for Neuromorphic Computing DOI: http://dx.doi.org/10.5772/intechopen.113050*

external references, the precision of the encoding process depends heavily on the accuracy of the external signals. Any variation in the external source can affect the performance of the encoder [31]. Another shortcoming of the TTFS encoding scheme is its low robustness: because only one spike is effective, a single mistake in a TTFS-encoded spike train can cause a large error in the final output of the encoding process. Thus, the TTFS encoder is not robust against even minor noise or errors.
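The latency mapping can be sketched as follows. This is a hedged illustration under an assumed linear mapping (stronger stimulus, earlier spike); actual TTFS encoders may use other monotonic functions:

```python
import numpy as np

def ttfs_encode(stimulus, window=1.0):
    """Map a stimulus in [0, 1] to the latency of a single spike.

    Stronger inputs fire earlier: latency = (1 - stimulus) * window,
    measured from the onset of the sampling window, which acts as the
    external reference the text describes.
    """
    s = np.clip(stimulus, 0.0, 1.0)
    return (1.0 - s) * window

def ttfs_decode(latency, window=1.0):
    """Invert the linear latency mapping."""
    return 1.0 - latency / window

t_spike = ttfs_encode(0.8)   # strong input -> early spike
recovered = ttfs_decode(t_spike)
```

The fragility noted above is visible here: any jitter added to the single `t_spike` value translates directly into decoding error, since there are no other spikes to average over.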

Another neural code, the ISI code, was proposed to overcome the disadvantages of the TTFS code. As demonstrated in **Figure 1(c)**, instead of being converted to the time difference between the onset and the first spike, the input stimuli are converted to the time intervals between spikes [15]. Unlike latency encoding, ISI encoding uses the spikes as an internal reference frame for each other, thus avoiding dependence on external references. As discussed in the previous paragraph, one main drawback of latency encoding is its relatively low data capacity. Since the ISI code has multiple spikes in one sampling window, it can transfer more information than latency encoding. There are two types of ISI encoders, the parallel structure and the iteration structure, introduced in Zhao et al. [30]. The parallel encoder, the simpler type, conveys information faster but produces fewer spikes in one encoding window. In contrast, the iteration encoder generates more spikes in the sampling window but also takes more time. For both structures, the spike number relates to the number of neurons in the encoder. The parallel structure has the linear relation:

$$N_S = N,\tag{1}$$

where *NS* and *N* are the number of spikes in one sampling window and the number of neurons in the encoder, respectively. The iteration structure has an exponential relation:

$$N_S = 2^{N-1}.\tag{2}$$

From Eqs. 1 and 2, we can see that the iteration structure produces more spikes whenever the encoder has more than two neurons. Thus, when high data capacity and high robustness are required, the iteration encoder is a promising candidate [17].
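Eqs. 1 and 2 can be checked directly. The short sketch below simply tabulates both relations to show where the iteration structure overtakes the parallel one:

```python
def parallel_spikes(n_neurons):
    """Spike count per sampling window for the parallel structure (Eq. 1)."""
    return n_neurons

def iteration_spikes(n_neurons):
    """Spike count per sampling window for the iteration structure (Eq. 2)."""
    return 2 ** (n_neurons - 1)

# Tabulate both relations: the counts are equal for N = 1 and N = 2,
# and the iteration structure pulls ahead for every N > 2.
table = [(n, parallel_spikes(n), iteration_spikes(n)) for n in range(1, 6)]
```

For example, `table` contains `(3, 3, 4)` and `(5, 5, 16)`, matching the claim that the exponential relation dominates beyond two neurons.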

Besides relying on the number of spikes and the intervals between them, information can also be conveyed as a relative position on an internal reference frame. This internal reference frame is the subthreshold membrane oscillation (SMO). The SMO can replace the external reference frame and thus overcomes the precision issue. Moreover, with the help of the SMO, the phase-of-firing encoding scheme can be implemented in neuromorphic computing systems [32–34]. In phase encoding, the input signal is transferred to a phase of the SMO; when the SMO reaches this phase, one spike is fired. The mathematical model of the SMO can be written as follows:

$$\mathrm{SMO}_i = A \cdot \cos(\omega t + \phi_i),\tag{3}$$

where *A*, *ω*, and *ϕi* denote the amplitude of the SMO, the angular velocity of the signal, and the starting phase of the SMO signal of the *i*th channel, respectively.
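A minimal phase encoder built on Eq. 3 might look like the following. The linear stimulus-to-phase mapping `theta = 2*pi*stimulus` is an assumption for illustration, not a mapping taken from the cited works:

```python
import numpy as np

def phase_encode(stimulus, omega=2 * np.pi, phi=0.0):
    """Map a stimulus in [0, 1] to a firing time defined by the SMO phase.

    The target phase is theta = 2*pi*stimulus (an assumed linear mapping).
    The spike fires at the first time t >= 0 at which the oscillation
    A*cos(omega*t + phi) of Eq. 3 reaches phase theta.
    """
    theta = 2 * np.pi * np.clip(stimulus, 0.0, 1.0)
    # Solve omega*t + phi = theta (mod 2*pi) for the earliest t >= 0.
    return ((theta - phi) % (2 * np.pi)) / omega

# With omega = 2*pi (one oscillation per unit time) and phi = 0,
# a stimulus of 0.25 fires a quarter period into the window.
t_quarter = phase_encode(0.25)
```

Because the spike time is referenced to the oscillation itself rather than to an external clock edge, the decoder only needs access to the same SMO, which is the precision advantage the text describes.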

To further improve the performance of neuromorphic computing systems, another encoding mechanism has been investigated. First found in biological neural systems, multiplexing encoding schemes combine multiple neural codes, especially ones operating on different time scales, to achieve higher data capacity [23]. Each constituent encoding scheme carries a certain amount of information. For instance, in ISI-phase encoding, the ISI encoding scheme and the phase-of-firing encoding

#### **Figure 2.**

*Information in codes for different noise levels. The blue line indicates the information carried in the rate code at different noise levels. The orange line represents the information level of the temporal encoding scheme. Similarly, the yellow line indicates the information in the rate and phase-of-firing multiplexed encoding scheme, and the purple line shows the information carried in the temporal and phase-of-firing multiplexed code.*

scheme carry their own information. After the multiplexing process, the two parts of information are combined and transferred within one sampling time window. Therefore, with multiplexing encoding, the same amount of information can be conveyed within a shorter sampling window, which increases the data transmission rate [35].

Multiplexing encoding schemes are also more robust than the others. Experiments have been conducted to quantify the data density of different neural codes under different levels of input sensory noise [23], as shown in **Figure 2**. The figure shows that although the information carried by all encoding schemes decreases as the noise level increases, the multiplexing encoding schemes always retain the highest data density. Moreover, temporal encoding retains more information than rate encoding; accordingly, temporal-phase encoding also has a higher data capacity than rate-phase encoding. These results show that multiplexing encoding is more robust in noisy environments, and its high data capacity at all noise levels helps transfer data from noisy inputs to spike signals.

Multiplexing encoding requires two separate steps to transfer the input signals to multiplexed spikes [36]. The first is the encoding process, which transfers the analog inputs to spikes under the chosen code. For example, if rate encoding is utilized, the encoded spikes take the form of spike trains with different numbers of spikes. For TTFS encoding, the encoded output is normally a single spike. As for the ISI code, the outputs are spike trains with the same number of spikes but different temporal patterns.

After the encoding process, the spikes need to be shifted to fit the phase encoding mechanism [37, 38]. This step is called the gamma alignment step. In it, the already generated spikes are shifted to the next immediate local maximum of the SMO. The relationship between the original spikes and the shifted spikes can be expressed as


$$P'_{\tau} = P_t,\tag{4}$$

where *t* is the timing of the original spike and *τ* is the next immediate local maximum.
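The gamma alignment of Eq. 4 amounts to rounding each spike time up to the next peak of the oscillation. For the SMO of Eq. 3, maxima occur where *ωt* + *ϕ* is a multiple of 2*π*, which gives the closed-form sketch below (an illustration of the alignment rule, not the hardware implementation of [37, 38]):

```python
import numpy as np

def next_smo_maximum(t, omega=2 * np.pi, phi=0.0):
    """First local maximum of A*cos(omega*t + phi) at or after time t.

    Maxima occur where omega*t + phi = 2*pi*k, so we round the current
    oscillation phase up to the next multiple of 2*pi (Eq. 4).
    """
    k = np.ceil((omega * t + phi) / (2 * np.pi))
    return (2 * np.pi * k - phi) / omega

def gamma_align(spike_times, omega=2 * np.pi, phi=0.0):
    """Shift every encoded spike onto its next SMO peak."""
    return [next_smo_maximum(t, omega, phi) for t in spike_times]

# With omega = 2*pi and phi = 0, the SMO peaks at t = 0, 1, 2, ...
aligned = gamma_align([0.1, 0.6, 1.3])
```

Note that two spikes falling within the same oscillation period (here 0.1 and 0.6) align to the same peak, so the SMO period sets the temporal resolution of the aligned code.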

As depicted in **Figure 1(d)**, the TTFS-encoded spikes are processed by the gamma alignment step and become TTFS-phase encoded spikes. In this figure, the spikes are divided into four channels, each with its own corresponding SMO. The SMOs have the same amplitude and angular velocity, and their phases follow the relationship

$$\phi_i = \phi_0 + (i-1)\frac{2\pi}{N},\tag{5}$$

where *i* indexes the *i*th channel and *N* is the total number of channels. As for the ISI-phase encoding scheme, since only one channel exists in **Figure 1(e)**, the ISI-coded spikes are shifted to their immediately following local maxima of the same SMO [39].
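The evenly spaced channel phases of Eq. 5 can be generated directly. This small sketch simply evaluates the formula for an assumed four-channel configuration like the one in the figure:

```python
import numpy as np

def channel_phases(n_channels, phi0=0.0):
    """Starting phase of each channel's SMO per Eq. 5:
    phi_i = phi_0 + (i - 1) * 2*pi / N, for i = 1..N.
    """
    return [phi0 + (i - 1) * 2 * np.pi / n_channels
            for i in range(1, n_channels + 1)]

# Four channels are spaced a quarter period apart: 0, pi/2, pi, 3*pi/2.
phases = channel_phases(4)
```

Spacing the channel oscillations evenly over one period means the channels' peaks interleave in time, so each channel's gamma-aligned spikes occupy a distinct phase slot of the shared sampling window.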
