**1. Introduction**

Since the late 1980s, researchers have paid attention to neuromorphic computing [1]. By mimicking biological neural systems with software or hardware implementations, the data capacity and power efficiency of computing systems can be greatly improved. As conventional von Neumann architectures struggle to keep pace with growing application requirements, the self-training mechanism of neuromorphic computing structures has attracted increasing attention from both industry and academia. For instance, owing to their inherently parallel computation, neuromorphic computing systems can process data-intensive tasks such as speech processing and image recognition more efficiently [2–4].

Moreover, comparisons between conventional architectures and neuromorphic chips have shown that neuromorphic systems also consume less power than traditional von Neumann structures, thanks to their parallel, distributed, and event-driven processing. For example, the IBM TrueNorth chip consumes less than 3 watts when running recognition applications [5], a great improvement over conventional central processing units (CPUs) and graphics processing units (GPUs). Although a GPU has more small specialized cores than a CPU, it still consumes tens or even hundreds of watts for the same tasks. Therefore, neuromorphic computing systems, especially those realized as application-specific integrated circuits (ASICs), have demonstrated superiority in both learning capability and power efficiency.

Among all types of artificial neural networks (ANNs), one special type is the spiking neural network (SNN) [6–9]. Inspired by signal transmission in biological neural networks, researchers realized that information can be transmitted in the form of spikes in neural systems [10]. A neuron's function is to receive stimuli and output impulses. It consists of four main parts: dendrites, soma, axon, and synapses. A dendrite receives the stimulus and transmits it to the soma. The soma is the central computing unit of the biological neuron and generates an output signal when the accumulated input exceeds a threshold voltage. Each neuron has a specific threshold voltage; when the threshold is crossed, the soma fires a spike down the axon, which carries the output signal to the synapse. Finally, the impulse is conveyed through the synapse to subsequent neurons. With this property, neurons in the network stay silent unless triggered by incoming spikes, which greatly reduces operating power consumption.

To convert information into spikes, several encoding schemes have been investigated over the past few decades [11, 12]. These schemes fall into two main types: rate encoding and temporal encoding [13]. In rate encoding, information is represented by the number of spikes in one spike train [14]; the spike rate is the number of spikes within one encoding window, so the larger the input, the higher the spike rate. This scheme is straightforward to implement and is one of the most commonly used. Temporal encoding, on the other hand, exploits not only the number of spikes in the encoding window but also the temporal structure of the spike train [15]. Depending on which temporal property is used, temporal encoding can be further divided into several types, such as Time-to-First-Spike (TTFS) [16], Interspike Interval (ISI) [17], and phase-of-firing encoding [18].
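As a concrete illustration of the two families, the sketch below encodes a normalized scalar input both as a rate code (spike count within a fixed window) and as a Time-to-First-Spike code (a stronger input fires earlier). The window length `T` and the evenly spaced spike placement are illustrative assumptions, not values from this chapter.

```python
# Toy rate and Time-to-First-Spike (TTFS) encoders for a normalized
# input x in [0, 1]. The window length is an assumed parameter.
T = 10  # number of time steps in one encoding window (assumed)

def rate_encode(x, T=T):
    """Rate code: spread round(x*T) spikes evenly across the window.
    A larger input yields a higher spike count."""
    n_spikes = round(x * T)
    train = [0] * T
    for i in range(n_spikes):
        train[i * T // max(n_spikes, 1)] = 1
    return train

def ttfs_encode(x, T=T):
    """TTFS code: emit a single spike whose latency decreases as x
    grows; x == 0 produces no spike at all."""
    train = [0] * T
    if x > 0:
        train[min(round((1 - x) * T), T - 1)] = 1
    return train

print(sum(rate_encode(0.8)))      # 8 spikes: high input -> high rate
print(ttfs_encode(0.8).index(1))  # 2: high input -> short latency
```

Note how the rate code needs the whole window to be counted, whereas the TTFS code conveys the value with a single, early spike, which is one reason temporal codes can react faster.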

With the advancement of neuroscience, researchers have identified a special encoding scheme in biological neural systems that integrates multiple encoding schemes operating on different time scales [19]; this is called multiplexing encoding [20]. For instance, ISI encoding can be combined with phase encoding to form multiplexed ISI-phase encoding. Compared with any single encoding scheme, multiplexing encoding offers several advantages, including higher data capacity and higher robustness, especially in noisy environments. The advantages and disadvantages of these encoding schemes are summarized in **Table 1**. For example, rate encoding is easier to implement than the other schemes but has lower data capacity, while temporal encoding schemes offer higher data capacity than rate encoding but lower robustness than multiplexing encoding.
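The multiplexing idea can be sketched with a toy ISI-phase encoder: one value is carried on the slow time scale (the interval between spikes, in whole reference cycles) while a second value rides on the fast time scale (the spike's phase offset within each cycle). The cycle length, value ranges, and spike count below are assumptions chosen for illustration, not the chapter's actual encoder design.

```python
# Toy multiplexing ISI-phase encoder: isi_value sets the interspike
# interval (slow time scale), phase_value sets each spike's phase
# within a reference cycle (fast time scale). Parameters are assumed.
CYCLE = 8  # time steps per reference cycle (assumed)

def isi_phase_encode(isi_value, phase_value, n_spikes=4):
    """isi_value (in cycles, >= 1) spaces the spikes; phase_value
    (0..CYCLE-1) shifts every spike within its cycle."""
    return [k * isi_value * CYCLE + phase_value for k in range(n_spikes)]

def isi_phase_decode(times):
    """Recover both multiplexed values from the spike times."""
    phase_value = times[0] % CYCLE
    isi_value = (times[1] - times[0]) // CYCLE
    return isi_value, phase_value

t = isi_phase_encode(3, 5)
print(t)                    # [5, 29, 53, 77]
print(isi_phase_decode(t))  # (3, 5)
```

Because the two values live on separate time scales, noise that jitters the phase leaves the interval code readable and vice versa, which is one intuition behind the robustness of multiplexed codes.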

*Spiking Neural Encoding and Hardware Implementations for Neuromorphic Computing DOI: http://dx.doi.org/10.5772/intechopen.113050*


#### **Table 1.**

*Advantages and disadvantages of different encoding schemes.*

A literature review of these encoding schemes has also been carried out, providing a concise overview of key studies and advancements in the area of spiking neural encoding.

In Rolls and Treves [21], the authors carried out a quantitative analysis of information in neural encoding. They observed the presence of firing-rate encoding within short time windows and found that, quantitatively, more information is carried by the rate encoding scheme than by temporal encoding. Under a rate code, neurons were also found to be able to compute synaptically weighted sums of their inputs for training purposes.

In Auge et al. [22], the authors summarized the theoretical foundations as well as the applications of encoding schemes, covering both rate encoding and temporal encoding. They concluded that rate encoding is highly robust because it does not rely on the precise firing times of spikes. They also noted that temporal codes have been shown to offer higher information capacity, faster reaction times, and higher transmission speeds.

In Kayser et al. [23], researchers verified the hypothesis that different codes might be employed concurrently to provide complementary stimulus information. By quantifying the information encoded in the auditory cortex of animals, they found that multiplexing these codes together yields a much higher information level. Moreover, the authors found that multiplexed codes that include the phase-of-firing code are highly robust to sensory noise added to the stimulus.

In this chapter, a deeper discussion of the encoding schemes mentioned above is presented in Section 2. Section 3 discusses the ASIC implementations of these encoding schemes and their simulation results. Lastly, Section 4 presents the training results of these encoding schemes on some common datasets, together with the hardware testbench of the multiplexing temporal encoder.
