**5. Conclusions**

In this chapter, we discussed different encoding schemes and the advantages and disadvantages of each spiking neural code. Rate encoding is straightforward but has low data capacity; temporal codes have higher data capacity but are not robust against noise; and multiplexing encoding schemes offer both high data capacity and high robustness, but at the cost of greater complexity and thus greater power and area. We also explained the mechanisms of these encoding schemes: input signals are converted into different properties of the spike trains within sampling windows. For instance, in ISI encoding the signals are mapped to the time intervals between spikes, while in rate encoding the inputs are mapped to the spike count. To make these encoding schemes usable in analog neural circuit systems, we introduced their circuit implementations along with the mathematical models of the neural codes. To the best of our knowledge, the ISI, TTFS-phase, and ISI-phase encoders proposed by our group are the first IC implementations of these codes. We have also built neural networks with different encoders to compare their performance on commonly used image classification datasets. For fairness, we compared the performance of our group's multiplexing encoders with that of state-of-the-art works. On MNIST, the multiplexing encoder achieves 10.78% higher accuracy; on CIFAR-10, the ISI-phase encoder classifies images 6.4% more accurately; and on SVHN, the ISI-phase encoder achieves 11.4% higher accuracy than other works. These comparison results show that although multiplexing encoding may require more power and area, it has the potential to achieve better training performance for the whole system.
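The contrast between the two mechanisms described above can be sketched in a few lines of Python. This is an illustrative model only: the window length, maximum firing rate, and interval bounds are assumed values, not parameters from this chapter's circuit implementations.

```python
import numpy as np

def rate_encode(x, window=1.0, max_rate=100.0, rng=None):
    """Rate coding: the input intensity x in [0, 1] sets the *number*
    of spikes in the sampling window. Here a Poisson draw around
    x * max_rate * window stands in for a noisy spiking neuron
    (hypothetical parameter choices)."""
    rng = rng or np.random.default_rng(0)
    n_spikes = rng.poisson(x * max_rate * window)
    # Spike timing carries no information in a pure rate code,
    # so spikes are simply placed uniformly in the window.
    return np.sort(rng.uniform(0.0, window, n_spikes))

def isi_encode(values, t_min=0.002, t_max=0.05):
    """ISI coding: each input value in [0, 1] sets the *interval*
    to the next spike; larger values give shorter intervals. One
    window can therefore carry several analog values, which is
    the higher data capacity of temporal codes."""
    times, t = [], 0.0
    for v in values:
        t += t_max - v * (t_max - t_min)  # interval shrinks as v grows
        times.append(t)
    return np.array(times)
```

Note that a timing jitter that barely changes the spike count of the rate code directly corrupts the decoded values of the ISI code, which is the robustness trade-off discussed above.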

As for future work, our group is going to implement spiking neural networks with the above-mentioned encoding schemes in the Neural Simulation Tool (NEST) simulator [51]. With such a simulator, more detailed and more realistic simulations of SNNs can be carried out, providing more convincing evidence that multiplexing encoding schemes achieve higher data capacity and robustness than rate or temporal codes alone. Furthermore, we will investigate various training algorithms for SNNs, including STDP, spike-based backpropagation, and ANN-SNN conversion. We will examine how the different encoding schemes interact with these training algorithms and identify the combination most suitable for the multiplexing encoder. The hardware implementation difficulties of the training algorithms will also be considered as part of the tradeoff.
