**9. The gap between expectations and reality**

Most deep learning workloads today are processed on CPUs, GPUs, and FPGAs, none of which is optimised for neuromorphic processing. Chips like Intel's Loihi were created exclusively for that purpose. As a result, Loihi was able to achieve the same results with a much smaller energy profile, as demonstrated by Applied Brain Research (ABR). The next generation of compact devices that need AI capabilities will depend heavily on this efficiency.

Many experts predict that commercial applications will begin to appear in earnest over the next three to five years, and that will be only the beginning. Samsung, for instance, announced in 2019 that it would grow its neural processing unit (NPU) business tenfold by 2030, from 200 employees to 2,000. At the time, Samsung predicted that the market for neuromorphic chips would expand by 52 percent a year through 2023.

Developing common workloads and benchmarking approaches will be one of the coming challenges for the neuromorphic space. Benchmarking tools like 3DMark and SPECint have long helped technology adopters match products to their needs. There are, as yet, no such benchmarks for neuromorphic systems, although Mike Davies of Intel Labs has proposed a spiking benchmark suite dubbed SpikeMark. Dmitri Nikonov and Ian Young, researchers at Intel, outline a number of guidelines and techniques for neuromorphic benchmarking in a technical paper titled "Benchmarking Physical Performance of Neural Inference Circuits."

There is still no practical benchmarking tool on the market, though Intel Labs Day 2020 in early December brought some significant advances. Intel, for instance, compared Loihi against its Core i7-9300K on Sudoku-solver problems and showed how much faster Loihi's search was.

Researchers solved Latin squares with a remarkable reduction in power consumption and saw a similar 100× speed gain. Perhaps the most significant finding was how different processor types fared against Loihi on specific workloads.
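To make the Latin-squares workload concrete: a Latin square is an n×n grid in which each of n symbols appears exactly once in every row and every column, and solving one is a constraint-satisfaction search of the kind Loihi accelerated. As a minimal illustration (a conventional-CPU sketch, not Intel's benchmark code), here is the validity constraint such a solver must satisfy:

```python
def is_latin_square(grid):
    """Return True if grid is an n×n Latin square over the symbols 1..n,
    i.e. every row and every column contains each symbol exactly once."""
    n = len(grid)
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all(set(col) == symbols for col in zip(*grid))
    return rows_ok and cols_ok

print(is_latin_square([[1, 2, 3],
                       [2, 3, 1],
                       [3, 1, 2]]))   # a valid 3×3 Latin square
```

The search over all candidate grids explodes combinatorially with n, which is why a massively parallel constraint solver can show such dramatic speed and energy gains over sequential search on a CPU.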

Loihi was pitted against IBM's TrueNorth neuromorphic chip as well as conventional computers. Deep feedforward neural networks (DNNs) clearly underperform on neuromorphic hardware like Loihi: in a DNN, data moves in one direction, from input to output. Recurrent neural networks (RNNs) use feedback loops and behave more dynamically, making them more similar to how the brain functions, and on RNN workloads Loihi excels. As Intel stated: "The more bio-inspired properties we find in these networks, typically, the better the results are."
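The structural difference between the two network types can be shown in a few lines. In this sketch (toy NumPy code with made-up dimensions, not Loihi or Lava API code), a feedforward layer maps input straight to output, while a recurrent layer feeds its own previous state back into each update, giving it dynamics over time:

```python
import numpy as np

rng = np.random.default_rng(0)

def feedforward_step(x, W):
    # DNN: output depends on the current input alone; no state is carried.
    return np.tanh(W @ x)

def recurrent_step(x, h, W_in, W_rec):
    # RNN: the previous hidden state h feeds back into the update,
    # so the output depends on the whole input history.
    return np.tanh(W_in @ x + W_rec @ h)

# toy dimensions: 4 inputs, 3 units
W = rng.normal(size=(3, 4))
W_in = rng.normal(size=(3, 4))
W_rec = rng.normal(size=(3, 3))

x = rng.normal(size=4)
y = feedforward_step(x, W)      # one stateless pass

h = np.zeros(3)
for _ in range(5):              # the state evolves across steps
    h = recurrent_step(x, h, W_in, W_rec)
```

The feedback term `W_rec @ h` is what makes recurrent networks "more dynamic" in the sense described above, and it is that stateful, time-evolving behaviour that maps well onto spiking hardware.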

The examples above can be considered early benchmarks. They are an important first step toward a widely adopted tool that runs typical industry workloads. New applications and use cases will emerge, and the gaps in testing will be filled; developers will keep rolling out benchmarks and applications in response to the most pressing needs.

Research and development in neuromorphic computing is still ongoing, and it is becoming ever clearer which applications suit it best. For those tasks, neuromorphic computers will be far faster and more energy-efficient than any conventional option available today. CPU and GPU computing will not go away; neuromorphic computing will simply coexist with them, performing its tasks more effectively than anything we have seen before.
