**6.4 SNNs on Neuromorphic Hardware**

Deep networks achieve high accuracy in recognition tasks and in some cases outperform humans. The Eedn framework, proposed in [63], enables SNNs to be trained using backpropagation with batch normalization [64] and then deployed on TrueNorth neuromorphic hardware. Eedn-trained networks achieve near state-of-the-art accuracy across eight standard vision and speech datasets. In this implementation, inference on hardware runs at up to 2600 frames/s, faster than real time, while consuming at most 275 mW across the reported experiments. The network uses low-precision ternary synaptic weights (+1, 0, and −1). A binary activation function with an approximate derivative is used so that backpropagation can pass gradients through the non-differentiable spiking nonlinearity. A hysteresis parameter is introduced in the weight update rule to prevent rapid oscillation of weights during learning. Input images are transduced by applying 12 different convolutional filter operators with binary outputs, producing a 12-channel input to the network, as shown in **Figure 12**.
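The three constraints described above can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: the function names, the box-shaped approximate derivative, and the hysteresis margin `h` are all assumptions chosen to mirror the ideas in [63] (real-valued "shadow" weights are kept for learning while the deployed network sees only their ternary projection).

```python
import numpy as np

def binary_activation(x):
    # A neuron emits a spike (1) when its input is non-negative, else 0.
    return (x >= 0).astype(np.float32)

def approx_derivative(x, width=1.0):
    # The step activation has zero derivative almost everywhere; a common
    # surrogate (assumed here) is a box of the given width around zero,
    # letting gradients flow during backpropagation.
    return (np.abs(x) <= width).astype(np.float32)

def ternarize(w_real):
    # Project real-valued shadow weights onto the ternary set {-1, 0, +1}.
    return np.clip(np.round(w_real), -1.0, 1.0)

def ternarize_with_hysteresis(w_real, w_tern_prev, h=0.1):
    # Hypothetical hysteresis rule: re-ternarize only when the shadow weight
    # has moved past its previous ternary value by more than 0.5 + h,
    # damping rapid 0 <-> +/-1 flips between successive updates.
    changed = np.abs(w_real - w_tern_prev) > 0.5 + h
    return np.where(changed, ternarize(w_real), w_tern_prev)
```

For example, with `h = 0.1` a shadow weight of 0.55 whose previous ternary value was 0 stays at 0 (it has not crossed the widened rounding point), while a shadow weight of 0.7 flips to +1; without the margin, a weight hovering near 0.5 would oscillate between 0 and +1 on every update.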

Experiments were performed on the eight datasets using five different network sizes, the largest spanning several TrueNorth chips. The results of the experiments are summarized in **Figure 13**.

#### **Figure 12.**

*Example image from CIFAR10 (column 1) and the corresponding output of 12 typical transduction filters (columns 2–13) [63].*

#### **Figure 13.**

*Accuracy of different-sized networks on eight datasets. For comparison, the accuracy of state-of-the-art unconstrained approaches is shown as bold horizontal lines [63].*
