## **8. Neuromorphic algorithms and applications**

### **8.1 Neuromorphic algorithms**

Algorithms for neuromorphic implementations frequently address how to define an SNN for a given application. The many algorithmic strategies for neuromorphic computing systems can be divided into two main groups: (1) algorithms for training or learning an SNN that will be deployed on a neuromorphic computer, and (2) non-machine-learning algorithms in which SNNs are hand-crafted to complete a specific task. It is important to note that training and learning algorithms here refer to the mechanisms for optimising the network's parameters, chiefly its synaptic weights.

Learning in a spiking neural network is difficult. Traditional artificial neural networks have had great success with backpropagation-based gradient descent; training SNNs, however, is challenging because spike events are nondifferentiable. Driven by the interest in deep learning, a significant amount of research has gone into creating learning algorithms suitable for multilayer SNNs. The four main methods for training SNNs are unsupervised learning, supervised learning, conversion from trained ANNs, and evolutionary algorithms. The following subsections provide a brief overview of them.

#### *8.1.1 Unsupervised learning*

Learning without preexisting labels is referred to as unsupervised learning. Unsupervised learning in SNNs is based on the Hebbian rule, which entails adapting the network's synaptic connections to the data received by its neurons [252]. Hebb's rule is applied through the spike-timing-dependent plasticity (STDP) algorithm. STDP, a phenomenon that has been observed in the brain, describes how the relative timing of presynaptic and postsynaptic spikes affects the efficacy of a synapse. In this context, a spike arriving at the neuron's synapse is referred to as a presynaptic spike, while a spike emitted by the neuron itself is known as a postsynaptic spike [253]. The idea behind STDP's mechanism is that the synapses most likely to have caused the neuron to fire should be reinforced, while synapses that did not participate, or that contributed negatively, should be weakened [254].

STDP is commonly employed as the learning method for unsupervised learning in SNNs. Under STDP, a synaptic weight is reinforced if the presynaptic neuron fires just before the postsynaptic neuron; conversely, the weight is diminished if the presynaptic spike follows the postsynaptic spike within a brief period of time [255]. The most commonly observed STDP rule is described by Eqs. (1) and (2):

$$
\Delta w = \begin{cases}
+A_{+} \exp\left(\dfrac{-\Delta t}{\tau}\right), & \text{if } \Delta t > 0 \\[4pt]
-A_{-} \exp\left(\dfrac{\Delta t}{\tau}\right), & \text{if } \Delta t \leq 0
\end{cases}
\tag{1}
$$

$$
\Delta t = t_{\text{post}} - t_{\text{pre}} \tag{2}
$$

where *w* is the synaptic weight, *τ* is the time constant, and *A*+ and *A*− are constant parameters indicating the strength of potentiation and depression, respectively.
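
As a concrete illustration, the Python sketch below implements Eqs. (1) and (2) directly. The values of `A_PLUS`, `A_MINUS`, and `TAU` are placeholders chosen for illustration, not parameters taken from any cited work.

```python
import numpy as np

# Placeholder constants for illustration; real values are tuned per model.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation/depression strengths
TAU = 20.0                      # time constant in ms

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Pair-based STDP weight change following Eqs. (1) and (2)."""
    delta_t = t_post - t_pre                      # Eq. (2)
    if delta_t > 0:
        return A_PLUS * np.exp(-delta_t / TAU)    # pre before post: strengthen
    else:
        return -A_MINUS * np.exp(delta_t / TAU)   # post before pre: weaken

# A presynaptic spike 5 ms before the postsynaptic spike potentiates,
# the reverse ordering depresses.
print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # ~ +0.0078
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # ~ -0.0093
```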

Significant research has been devoted to training SNNs with STDP in recent years. Qu et al. [256] created two hardware-friendly techniques, lateral inhibition and homeostasis, which reduce the number of inhibitory connections and thereby lower the hardware overhead. On the MNIST data set, an STDP rule was used to modify the synapse weights between the input and learning layers, obtaining 92% recognition accuracy. Xu et al. [255] put forward a hybrid learning system called the deep CovDenseSNN, which combines the biological plausibility of SNNs with the feature extraction of CNNs. They updated the parameters of their deep CovDenseSNN model, which is suitable for implementation on neuromorphic hardware, using an unsupervised STDP learning method. Other STDP-based learning techniques include supervised learning and reinforcement learning [257].
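
The two mechanisms named above can be stated compactly. Below is a minimal sketch assuming an adaptive-threshold form of homeostasis and winner-take-all lateral inhibition, which is one common realization of these ideas rather than the specific circuits of Qu et al. [256]:

```python
import numpy as np

def wta_step(membrane_v, thresholds, theta_plus=0.05, decay=0.999):
    """One timestep of winner-take-all lateral inhibition with
    adaptive-threshold homeostasis (illustrative parameter values).

    membrane_v : membrane potentials of the excitatory neurons
    thresholds : per-neuron adaptive firing thresholds
    """
    spikes = np.zeros_like(membrane_v, dtype=bool)
    above = membrane_v - thresholds
    if np.any(above > 0):
        winner = int(np.argmax(above))     # lateral inhibition: only the
        spikes[winner] = True              # strongest neuron fires this step
        membrane_v[:] = 0.0                # the winner inhibits (resets) all
        thresholds[winner] += theta_plus   # homeostasis: firing raises the
                                           # winner's own threshold...
    thresholds *= decay                    # ...and all thresholds slowly decay,
                                           # so every neuron stays active over time
    return spikes
```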

Lee et al. [258] put forward a semisupervised method to train a convolutional SNN with several hidden layers. The training approach consists of two steps: the network weights are first initialised by unsupervised learning (namely, SSTDP), and the supervised gradient-descent backpropagation (BP) algorithm then fine-tunes the synaptic weights. This pretraining yielded 99.28% accuracy on the MNIST database, quicker training, and greater generalisation. Tavanaei et al. [259] created an innovative technique for training multilayer spiking convolutional neural networks (SCNNs). The training process includes both an unsupervised component (a novel STDP learning scheme for feature extraction) and a supervised component (a supervised learning scheme to train the spiking CNNs (ConvNets)).

#### *8.1.2 Supervised learning*

The SpikeProp algorithm by Bohte et al. [260] was one of the first to train SNNs using backpropagation of errors. With a three-layer design, this approach has been applied successfully to classification problems. Spike train SpikeProp (ST-SpikeProp), a more recent and sophisticated variant of SpikeProp, trains single-layer SNNs using the output layer's weight update rule [261]. Wu et al. [262] proposed the spatiotemporal backpropagation (STBP) technique, which combines the timing-dependent temporal domain with the layer-by-layer spatial domain, to address the nondifferentiability of SNNs. Supervised learning with temporal coding has been shown to significantly reduce the energy consumption of SNNs. Mostafa [263] created a direct training method based on a temporal coding scheme and backpropagation of errors; however, the preprocessing technique is not general, and the network lacks convolutional layers. Zhou et al. [264] improved on Mostafa's work by adding convolutional layers to the SNN, creating a new kernel operation, and suggesting a new method for preprocessing the input data. Their SCNN attained good recognition accuracy with fewer trainable parameters. Stromatias et al. [265] developed a supervised technique for training a classifier with the stochastic gradient descent (SGD) algorithm and then converting it to an SNN. Zheng and Mazumder [266] suggested backpropagation-based learning for training SNNs; their learning approach can be implemented on neuromorphic hardware.
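
The nondifferentiability these methods work around can be made concrete. A common device in this family of approaches, including STBP-style training, is to keep the hard spike threshold in the forward pass but substitute a smooth surrogate derivative in the backward pass. The PyTorch sketch below illustrates the idea; the rectangular surrogate and its width are illustrative choices, not the exact function from any cited paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate
    derivative in the backward pass so gradients can flow."""

    @staticmethod
    def forward(ctx, v_minus_threshold):
        ctx.save_for_backward(v_minus_threshold)
        return (v_minus_threshold > 0).float()   # spike = step function

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pretend the step function has slope 1 inside a window of
        # width 1 around the threshold, and slope 0 elsewhere.
        surrogate = (v.abs() < 0.5).float()
        return grad_output * surrogate

spike = SurrogateSpike.apply

# Gradients now propagate through the spike nonlinearity.
v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()
print(v.grad)   # nonzero wherever |v| < 0.5
```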

#### *8.1.3 Conversion from trained ANN*

The third method converts an offline-trained ANN to an SNN so that the converted network can benefit from an established, fully trained ANN model. This method is frequently referred to as "spike conversion" or "spike transcoding." Converting an ANN to an SNN has several advantages. First, simulating the precise spike dynamics of a big network can be computationally expensive, especially when precise spike timings and high firing rates are needed. Conversion therefore enables the application of SNNs to challenging benchmark tasks that demand massive networks, such as ImageNet or CIFAR-10, with minimal accuracy loss compared to the original ANNs [267, 268]. Second, the very effective training methods created for ANNs, as well as many state-of-the-art deep networks for classification tasks, can be reused for conversion to SNNs; the optimization can be carried out on the ANN side, which makes it possible to use state-of-the-art optimization techniques and GPUs for training [269]. The conversion method's primary drawback is that it cannot support on-chip learning. Additionally, many particularities of SNNs that are absent from the equivalent ANNs cannot be taken into account during training. Because of this, the SNNs' inference performance is frequently inferior to that of the original ANNs [270].

An extensive amount of research has been done on converting an ANN to an SNN with good performance on the MNIST data set. Diehl et al. [269] proposed a method for transforming an ANN into an SNN with minimal performance loss; on the MNIST database, a recognition rate of 98.64% was attained. In another work, Rueckauer et al. [271] transformed a continuous-valued deep CNN into an accurate spiking equivalent. This network achieves a 99.44% recognition rate on the MNIST data set and incorporates common operations such as softmax, max-pooling, batch normalisation, biases, and inception modules. Xu et al. [272] put forward a conversion approach suitable for mapping onto neuromorphic hardware; they demonstrated a threshold-rescaling strategy to lessen the conversion loss and attained a maximum accuracy of 99.17% on the MNIST data set. To transform CNNs into spiking CNNs, Xu et al. [255] developed an effective and hardware-friendly conversion rule: an "n-scaling" weight mapping method that delivers high-accuracy, low-latency classification on the MNIST data set. Wang et al. [273] suggested a weights-thresholds balancing conversion method that uses less memory and delivers excellent recognition accuracy on MNIST. Rather than following existing conversion strategies, which concentrate on approximating the artificial neurons' activation values with the spiking neurons' firing rates, they concentrated on the relationship between the weights and thresholds of the spiking neurons throughout the conversion process.
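
To make the weights-thresholds relationship concrete: in rate-based conversion, each layer's weights (or, equivalently, its firing threshold) are rescaled so that integrate-and-fire neurons can represent the ANN's ReLU activations as firing rates without saturating. The sketch below shows data-based layer-wise weight normalization in the spirit of Diehl et al. [269]; the function and variable names are illustrative, not from the cited work.

```python
import numpy as np

def normalize_weights(weights, activations):
    """Layer-wise data-based weight normalization for ANN-to-SNN conversion.

    weights     : list of weight matrices of a trained ReLU network
    activations : list of each layer's activations on a calibration set
    Rescales the weights so every layer's maximum activation becomes 1,
    letting IF neurons with threshold 1 encode activations as firing rates.
    """
    normalized = []
    prev_scale = 1.0
    for W, act in zip(weights, activations):
        scale = act.max()                   # largest activation observed in
                                            # this layer on calibration data
        # Undo the previous layer's rescaling, then apply this layer's, so
        # the network's function is preserved up to a global rate scale.
        normalized.append(W * prev_scale / scale)
        prev_scale = scale
    return normalized
```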
