#### **5. SNN simulation tools and hardware accelerators**

There are several spiking neural network (SNN) simulation tools available that support biologically realistic neuron models for large-scale networks. Some of the popular ones are described below.

Brian [48] is a free, open-source simulator for spiking neural networks. It runs on several different platforms and is implemented in Python, making it extensible and easy to use.
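The simplicity Brian aims for can be seen in a minimal leaky integrate-and-fire sketch like the one below. This is an illustrative example only; the equations, constants, and population size are our own choices, not taken from the chapter.

```python
# Minimal Brian 2 sketch: 100 LIF neurons driven above threshold by a
# constant input, with spikes recorded. Constants are illustrative.
from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

tau = 10 * ms          # membrane time constant (assumed value)
v_rest = -65 * mV      # resting potential (assumed value)

# Leaky integrate-and-fire dynamics with a constant per-neuron drive I
eqs = '''
dv/dt = (v_rest - v + I) / tau : volt
I : volt
'''
group = NeuronGroup(100, eqs,
                    threshold='v > -50*mV',
                    reset='v = v_rest',
                    method='euler')
group.v = v_rest
group.I = 20 * mV      # drive strong enough to elicit spiking

spikes = SpikeMonitor(group)
run(100 * ms)          # simulate 100 ms of biological time
print(spikes.num_spikes, 'spikes recorded')
```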

NEST [49] is another simulator, focused on the dynamics, size, and structure of neural systems both large and small. It is not intended for modeling the intricate biological details of individual neurons.

NEURON [50] is a simulation environment best suited to modeling individual neurons and networks of neurons. It is popular among neuroscientists for its ability to handle complex models in a computationally efficient manner. Unlike the simulators above, NEURON can capture the morphological details of a neuron and is used to validate theoretical models against experimental data.

The above tools are commonly used to model biologically realistic neurons, but each has its own interface and low-level semantics. An effort to smooth out these differences is PyNN [51], a simulator-independent API package written in Python. The PyNN framework provides API support for modeling SNNs at a high level of abstraction, covering all aspects of neuron modeling and SNN representation, including populations of neurons, connections, layers, etc. Alongside this high-level abstraction, it also allows programming at a low level, such as adjusting individual parameters of neurons and synapses. For convenience, PyNN provides a set of library implementations of neurons, synapses, STDP models, etc., along with easy interfaces for common connectivity patterns among neurons such as all-to-all, small-world, and random distance-dependent connectivity. Because the APIs are simulator independent, code is portable across the supported simulation tools and neuromorphic hardware platforms, and it is relatively straightforward to add support for a custom simulation tool. PyNN officially supports the Brian, NEST, and NEURON simulators, and it is also supported on the SpiNNaker [52] and BrainScaleS-2 [53] neuromorphic hardware systems. Several more simulation tools work with PyNN as well; a minimal usage sketch follows.
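As a rough illustration of this portability, the sketch below builds a small two-population network through PyNN's high-level API. The same script can target NEST, NEURON, or Brian by changing only the import line; the backend, population sizes, and weights here are our own illustrative choices, not taken from the chapter.

```python
# Hedged PyNN sketch: Poisson inputs all-to-all connected to LIF neurons.
# Assumes PyNN and the NEST backend are installed.
import pyNN.nest as sim   # swap for pyNN.neuron or pyNN.brian2 to retarget

sim.setup(timestep=0.1)   # simulation timestep in ms

# Populations: a Poisson spike-source layer and a conductance-based LIF layer
inputs = sim.Population(100, sim.SpikeSourcePoisson(rate=50.0))
neurons = sim.Population(10, sim.IF_cond_exp())

# All-to-all static connectivity between the two populations
proj = sim.Projection(inputs, neurons,
                      sim.AllToAllConnector(),
                      synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

neurons.record('spikes')
sim.run(1000.0)            # run for 1000 ms

data = neurons.get_data()  # Neo Block containing the recorded spike trains
sim.end()
```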

Cypress [54] is a C++-based SNN simulation library that provides a C++ wrapper around the PyNN APIs, extending PyNN's multi-platform reach to C++ programs. It is also capable of executing networks remotely on neuromorphic compute platforms.

BrainScaleS-2 [53] is a mixed-signal accelerated neuromorphic system with an analog neural core and digital connectivity, along with an embedded SIMD microprocessor. It efficiently emulates neurons, synapses, and plasticity models, and this hardware-based system can evaluate models up to ten thousand times faster than real time.

SpiNNaker [52] is another neuromorphic system, custom built from digital multicore ARM processors. The SpiNNaker system (NM-MC-1) consists of custom chips, each with eighteen cores sharing 128 MB of local RAM; the overall system scales to more than a million cores.

Apart from the above tools and platforms, there are many custom SNN tools for conveniently modeling SNNs for machine learning purposes. ANNarchy (Artificial Neural Networks architect) [55] is a custom simulator for evaluating SNNs. It is implemented in C++, with acceleration support via OpenMP and CUDA, while network definitions are written through a Python interface.

NeuCube [6] is a development environment for the creation of brain-like artificial intelligence. Its computational architecture is suited to modeling SNN applications across several domains, and it supports recent neural network models for AI. It also supports the PyNN interface, extending its versatility. The tool can run on CPU, GPU, and SpiNNaker platforms, and a cloud version is available as well.

TrueNorth [56] is another neuromorphic platform capable of evaluating SNNs faster than real time and at very low power. Its designers demonstrate state-of-the-art neural networks running on hardware scaling up to 64 million neurons and 16 billion synapses, with the full system consuming only 70 W, of which just 15 W is consumed by the neuromorphic hardware components. The hardware supports inference only; learning is performed off chip.


Loihi [57] is among the latest offerings in neuromorphic SNN hardware. Its architecture does away with the crossbar design prevalent in most previous neuromorphic implementations, lending it a greater degree of flexibility. Loihi is also capable of on-chip learning, a significant advantage for online learning of synapses.

Other simulators capable of modeling software-based models, as well as models for custom neuromorphic hardware, are presented in [20, 58–60]. This is still an active field of research with several more accelerator-based simulators available, so the reader is encouraged to explore further. Neuromorphic hardware built on more exotic devices such as memristors and phase-change memories is also an active area of research; since such devices have yet to reach mainstream use, they are only mentioned here.

#### **6. Case studies**

In this section, a few case studies are presented to reinforce the concepts discussed in this chapter. The topics covered include STDP learning dynamics, probabilistic graphical models as SNNs, SNNs with BP-STDP-based learning, and SNNs on neuromorphic hardware.

#### **6.1 STDP learning dynamics**

An SNN is trained in [38] to classify handwritten digits from the MNIST dataset using the STDP-based learning rules Exp, Q2PS, and 2P presented in Section 4.2. The authors build a three-layer SNN as shown in **Figure 10**. Since MNIST images are 28x28 pixels, the input layer contains 784 neurons, one per image pixel. The second (hidden) layer contains neurons that learn the features of the input images; the number of neurons in this layer is varied over different trials to evaluate the effectiveness of the learning rule. Finally, the third layer consists of 10 neurons for classifying the input, one neuron per class. The input layer encodes pixel intensities as varying firing rates in the range 0–300 Hz. Each input neuron is fully connected to the hidden-layer neurons, and similarly each hidden-layer neuron is fully connected to the output/classification-layer neurons. All synapses in this network are plastic, with soft WTA connectivity implemented between the input-layer and hidden-layer neurons to let different neurons pick up shared features, while hard WTA connectivity exists between the hidden layer and the classification layer.
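The rate coding of the input layer can be sketched as follows: each pixel intensity is mapped linearly to a firing rate in 0–300 Hz and spikes are drawn from a Poisson-like process. This numpy-only illustration uses parameter names and a presentation duration of our own choosing, not values from [38].

```python
# Hedged sketch of rate encoding: pixel intensity -> Poisson spike train.
import numpy as np

def encode_image(pixels, max_rate=300.0, duration=0.35, dt=1e-3, rng=None):
    """pixels: flat array of intensities in [0, 255] (784 values for MNIST).
    Returns a (timesteps, n_pixels) boolean spike raster."""
    rng = rng or np.random.default_rng()
    rates = (pixels / 255.0) * max_rate   # one firing rate per pixel, in Hz
    p_spike = rates * dt                  # Bernoulli spike probability per step
    steps = int(duration / dt)
    return rng.random((steps, pixels.size)) < p_spike

# Example: a random "image"; each pixel fires at most ~rate*duration spikes,
# i.e. about 300 Hz * 0.35 s = 105 spikes for a fully bright pixel.
img = np.random.randint(0, 256, 784)
raster = encode_image(img)
print(raster.shape, raster.sum(axis=0).max())
```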


**Figure 10.**
*MNIST SNN architecture showing connectivity, input, learnt features, labels, and t-SNE visualizations, along with accuracy results [38].*


A qualitative analysis of the learning rule is given by the t-distributed stochastic neighbor embedding (t-SNE) [61] visualizations in **Figure 10**. The t-SNE algorithm maps high-dimensional data points lying on different but related low-dimensional manifolds into lower dimensions by capturing the local structure present in the high-dimensional data. The input-layer firing-rate visualization shows the clustering of digit classes in two dimensions based on the raw pixel data, which has 784 dimensions. Similarly, the second visualization feeds the firing rates derived from the learnt hidden-layer features, with 100 dimensions, to the t-SNE algorithm. It can be clearly seen that the STDP rule produces tight clustering of the input space as projected onto the feature space; the classification layer further groups these features into their respective classes. Networks with different numbers of hidden-layer neurons are experimented with, and the results are shown at the bottom right of **Figure 10**. The robustness of the learning method is also demonstrated by experiments yielding similar accuracies with additive white Gaussian noise and with the use of the NWTA network.
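A visualization of this kind can be reproduced with scikit-learn's t-SNE implementation, as in the hedged sketch below. The firing-rate matrix and labels here are random placeholders standing in for the recorded input-layer (784-D) or hidden-layer (100-D) rates.

```python
# Hedged sketch: embed firing-rate vectors in 2-D with t-SNE and color
# the points by digit class. Data below are placeholders, not from [38].
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rates = np.random.rand(500, 100)         # placeholder hidden-layer rates
labels = np.random.randint(0, 10, 500)   # placeholder digit labels

embedding = TSNE(n_components=2, perplexity=30).fit_transform(rates)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap='tab10', s=5)
plt.colorbar(label='digit class')
plt.show()
```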

#### **6.2 Probabilistic graphical models as SNNs**

An inference network based on a probabilistic graphical model for sentence construction is created using Bayesian neurons. It consists of lexicons representing words and phrases, where each lexicon is a WTA sub-network.

The network consists of two functional sections: a word sub-network and a phrase sub-network. Each symbol neuron in the word sub-network represents a possible word occurrence, and each symbol neuron in the phrase sub-network represents a possible pair of co-occurring words. The synapses between symbol neurons represent the log conditional probabilities of words and phrases co-occurring. The network is initialized with the same intrinsic potential across all symbol neurons, resulting in the same initial firing rate. Based on the synaptic weights, strongly connected neurons resonate and enhance each other while laterally inhibiting the other symbol neurons within their lexicon's WTA network. The winning neurons proportionally excite symbol neurons across different lexicons. In this manner the network settles on a steady-state firing rate that represents contextually correct behavior. From each lexicon of the word sub-network, the symbol neuron with the highest firing rate is picked, together representing a grammatically correct, semantically meaningful sentence. The WTA connections in this network perform a soft WTA action, thereby facilitating the retention of contextual information. **Figure 11** (a) shows the network topology. For the experiments, random document images are picked and fuzzy character recognition is performed. Due to the fuzzy nature of recognition, each character position results in several possible matches; hence multiple matches for each word position are possible, as described in [62]. An example lexicon set is [{we, wo, fe, fo, ne, no, ns, us} {must, musk, oust, onst, ahab, bust, chat} {now, noa, non, new, how, hew, hen, heu} {find, rind, tina} {the, fac, fro, kho} {other, ether}]. After evaluating the lexicons, the SNN settles on the grammatically correct sentence [we must now find the other], as seen in **Figure 11** (b). A simplified rate-based sketch of this settling process is given below.
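The sketch below is a heavily simplified, rate-based caricature of the dynamics described above, not the spiking implementation of [62]: symbols in each lexicon receive excitation from other lexicons through log-co-occurrence weights, a soft WTA keeps each lexicon's rates normalized, and the winner per lexicon is read out after convergence. All data structures and constants are our own illustrative assumptions.

```python
# Hedged, rate-based sketch of lexicon settling with soft WTA.
import numpy as np

def settle(lexicons, weights, iters=50, sharpness=2.0):
    """lexicons: list of symbol lists; weights[(i,a),(j,b)]: log probability
    of symbol a in lexicon i co-occurring with symbol b in lexicon j."""
    rates = [np.ones(len(lex)) / len(lex) for lex in lexicons]  # uniform start
    for _ in range(iters):
        for i, lex in enumerate(lexicons):
            drive = np.zeros(len(lex))
            for a in range(len(lex)):
                # excitation from the current rates of all other lexicons;
                # missing pairs default to a large negative log probability
                drive[a] = sum(rates[j][b] * weights.get(((i, a), (j, b)), -10.0)
                               for j in range(len(lexicons)) if j != i
                               for b in range(len(lexicons[j])))
            # soft WTA: exponentiate and renormalize within the lexicon
            e = np.exp(sharpness * (drive - drive.max()))
            rates[i] = e / e.sum()
    # read out the highest-rate symbol in each lexicon
    return [lex[int(np.argmax(r))] for lex, r in zip(lexicons, rates)]

# Toy usage: two lexicons where "we" and "must" co-occur strongly
lex = [['we', 'wo'], ['must', 'musk']]
w = {((0, 0), (1, 0)): -0.1, ((1, 0), (0, 0)): -0.1}
print(settle(lex, w))   # -> ['we', 'must']
```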

**Figure 11.**
*(a) Sentence confabulation network, (b) confabulation results spike plot [62].*

#### **6.3 SNN with Backpropagation-STDP based learning**

Using the learning rule presented in Section 4.4, the authors of [45] train SNNs to evaluate the BP-STDP rule on the XOR problem, the Iris dataset, and the MNIST dataset. They show that the network can model the linearly inseparable XOR problem using an SNN with 2 input, 20 hidden, and 2 output neurons. For the Iris dataset they create an SNN with 4 input, 30 hidden, and 3 output neurons; with this network they achieve 96% accuracy, comparable to the 96.7% of an ANN trained with traditional backpropagation. The SNN for the MNIST dataset consists of 784 input neurons, 100 to 1500 hidden neurons, and 10 output neurons, and with this network they achieve 97.2% classification accuracy.

#### **6.4 SNNs on Neuromorphic hardware**

Deep networks achieve high accuracy in recognition tasks and in some cases outperform humans. The Eedn framework proposed in [63] enables SNNs to be trained using backpropagation with batch normalization [64] and implemented on TrueNorth neuromorphic hardware. Eedn-trained networks achieve state-of-the-art accuracy across eight standard vision and speech datasets. In this implementation, inference on hardware runs at up to 2600 frames/s, faster than real time, while consuming at most 275 mW across the reported experiments. The network uses low-precision ternary weights (-1, 0, and +1) for its synapses. A binary activation function with an approximate derivative is modeled to enable backpropagation, and a hysteresis parameter is introduced in the weight-update rule to avoid rapid oscillation of weights during learning. The input images are transduced by applying 12 different convolutional filter operators with binary outputs, giving a 12-channel input to the network, as shown in **Figure 12**. A hedged sketch of these training tricks follows.
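The sketch below illustrates the two tricks just described: a binary activation whose gradient is replaced by an approximate derivative (a straight-through-style estimator), and ternarizing latent full-precision weights into {-1, 0, +1} with a hysteresis band so that weights near a threshold do not flip back and forth between updates. The thresholds and band widths are illustrative assumptions, not values from [63].

```python
# Hedged sketch of Eedn-style binary activations and ternary weights
# with hysteresis. Constants are illustrative, not from [63].
import numpy as np

def binary_activation(x):
    return (x > 0).astype(np.float32)          # forward pass: hard step

def binary_activation_grad(x, width=1.0):
    # approximate derivative: let gradients pass only near the step
    return (np.abs(x) < width).astype(np.float32)

def ternarize_with_hysteresis(w_real, w_tern_prev, thresh=0.5, band=0.1):
    """w_real: latent full-precision weights; w_tern_prev: previous ternary
    values. A weight changes state only once it leaves the hysteresis band
    around +/-thresh, suppressing rapid oscillations during learning."""
    w_tern = w_tern_prev.copy()
    w_tern[w_real > thresh + band] = 1.0
    w_tern[w_real < -thresh - band] = -1.0
    w_tern[np.abs(w_real) < thresh - band] = 0.0
    return w_tern
```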

**Figure 12.**
*Example image from CIFAR10 (column 1) and the corresponding output of 12 typical transduction filters (columns 2–13) [63].*

Experiments were performed on eight datasets using five different network sizes spanning several TrueNorth chips. The results of these experiments are summarized in **Figure 13**.

**Figure 13.**
*Accuracy of different sized networks on eight datasets. For comparison, accuracy of state-of-the-art unconstrained approaches is shown as bold horizontal lines [63].*
