*Neuromorphic Computing between Reality and Future Needs DOI: http://dx.doi.org/10.5772/intechopen.110097*

computers. Von Neumann computers consist of distinct CPUs and memory units, with the latter housing both data and instructions. In a neuromorphic computer, by contrast, the neurons and synapses handle both processing and memory. Whereas von Neumann computers define programmes through explicit sequences of instructions, neuromorphic computers define them through the characteristics and structure of the neural network. In addition, while von Neumann computers encode information as numerical values represented in binary, neuromorphic computers receive spikes as input, where the time at which a spike occurs, its magnitude, and its shape can all be used to encode numerical information. Binary values can be converted into spikes and vice versa, but the precise way to perform this conversion is still an active area of study in neuromorphic computing.
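One common conversion scheme of the kind mentioned above is rate coding, in which a numerical value is represented by the frequency of spikes over a time window. The sketch below is purely illustrative (the function names and parameters are our own, not taken from any particular neuromorphic platform):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, n_steps=1000):
    """Encode a value in [0, 1] as a spike train: at each time step,
    a spike occurs with probability equal to the value."""
    return rng.random(n_steps) < value

def rate_decode(spikes):
    """Recover the value as the observed firing rate."""
    return spikes.mean()

spikes = rate_encode(0.3)
print(rate_decode(spikes))  # close to 0.3 (stochastic encoding)
```

Note the trade-off this scheme exposes: longer spike trains recover the value more precisely but cost more time and energy, which is one reason the best binary-to-spike conversion remains an open question.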

These are some fundamental operational differences between neuromorphic and von Neumann computers:

- **Architecture**: distinct CPUs and memory units versus neurons and synapses that co-locate processing and memory.
- **Program definition**: explicit sequences of instructions versus the characteristics and structure of a neural network.
- **Information encoding**: binary numerical values versus spikes, whose timing, magnitude, and shape carry the information.

The literature makes extensive note of the characteristics of neuromorphic computers and the reasons for adopting them. Their extraordinarily low power consumption is one of their most appealing qualities: they frequently consume orders of magnitude less power than conventional computing systems. Because they are massively parallel and event-driven, only a small fraction of the system is typically active at any given moment, with the remainder idle. Energy efficiency alone is a compelling incentive to examine neuromorphic computers, given the rising energy cost of computing and the growing number of applications (such as edge computing) that operate under energy limits. Furthermore, neuromorphic computers provide a suitable platform for many of today's artificial intelligence and machine learning applications, since they natively implement neural network-style processing. They also hold potential for exploiting their inherent computational capabilities to carry out a wide range of other forms of computing.
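The event-driven operation described above can be sketched in software: work is done only when a spike arrives at a neuron, and every neuron not receiving an event stays idle, which is the source of the energy savings. This is a minimal, idealised leaky integrate-and-fire sketch under our own simplifying assumptions (fixed synaptic delay, exponential decay), not the behaviour of any specific chip:

```python
import heapq

class LIFNeuron:
    """Leaky integrate-and-fire neuron updated only when an event arrives."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay
        self.potential = 0.0
        self.last_update = 0.0

    def receive(self, t, weight):
        # Apply decay for the elapsed idle time, then integrate the spike.
        self.potential *= self.decay ** (t - self.last_update)
        self.last_update = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

def run(events, neurons, synapses, horizon=10.0, delay=1.0):
    """events: heap of (time, neuron_id, weight);
    synapses: neuron_id -> list of (target_id, weight)."""
    fired = []
    while events:
        t, nid, w = heapq.heappop(events)
        if t > horizon:
            break
        # Only the neuron addressed by this event is touched at all.
        if neurons[nid].receive(t, w):
            fired.append((t, nid))
            for target, tw in synapses.get(nid, []):
                heapq.heappush(events, (t + delay, target, tw))
    return fired

# Example: neuron 0 fires immediately and drives neuron 1 one time unit later.
neurons = {0: LIFNeuron(), 1: LIFNeuron()}
synapses = {0: [(1, 1.5)]}
fired = run([(0.0, 0, 1.2)], neurons, synapses)
print(fired)  # [(0.0, 0), (1.0, 1)]
```

In a network of thousands of neurons, only those on the path of a spike perform any computation here; a conventional clock-driven simulation would instead update every neuron at every time step.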

Although each of these qualities of neuromorphic computers is modelled after traits of the brain and has been prioritised in recent years, it is unclear whether these are the only parts of biological brains that are significant for computation. For instance, while neurons and synapses have been chosen as the main computational units of neuromorphic computers, many additional types of neural components, such as glial cells, may also prove beneficial for computation. Furthermore, neurons and synapses have proved a useful level of abstraction for neuromorphic computers, but it is still unclear whether they are the best one.

In contrast to some of the upcoming computing technologies, the research community already has access to various physical realisations of neuromorphic hardware. Numerous large-scale neuromorphic computers have been created with differing methodologies and objectives. SpiNNaker and BrainScaleS were created with funding from the European Union's Human Brain Project to facilitate large-scale neuroscience simulations. The online-learning digital spiking neuromorphic processor (ODIN) has been proposed as an improved digital platform supporting slightly more complicated neuron models. The Tianjic chip, which supports both neuromorphic spiking neural networks and conventional artificial neural networks for different problem types, is one of the neuromorphic platforms aiming at broader computation for wider classes of applications [13]. Both industry and academia are interested in neuromorphic systems: industrial examples include IBM's TrueNorth [14] and Intel's Loihi [15], while academic efforts include DYNAPs [16], Neurogrid [17], IFAT [18], and BrainScaleS-2 [19]. Neuromorphic hardware such as BrainScaleS-2, which runs at considerably faster timescales than biology, has also been shown to be useful for optimising learning-to-learn scenarios (situations in which an optimisation method is used to specify how learning occurs) for spiking neural networks [20].

All of the large-scale neuromorphic computers mentioned above are silicon-based and implemented using conventional complementary metal oxide semiconductor (CMOS) technology [21–23]. However, a great deal of research in the neuromorphic community aims to create new types of materials and devices for neuromorphic implementations, such as phase-change, ferroelectric, and non-filamentary devices, topological insulators, and channel-doped biomembranes. Memristors are frequently utilised in the literature as the fundamental resistive-memory device for collocating processing and memory [24, 25], although other types of devices, such as optoelectronic devices, have also been employed to create neuromorphic computers [26]. Neuromorphic computers can thus be implemented using a variety of devices and materials, each with its own operating properties, including speed of operation, energy consumption, and biological likeness, so the hardware can be tailored to a particular application.
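The way a memristive device collocates processing and memory can be made concrete with an idealised crossbar model: synaptic weights are stored as device conductances, and applying input voltages along the rows yields output currents along the columns by Ohm's and Kirchhoff's laws, so the memory array itself performs a matrix-vector multiplication. The numbers below are illustrative, not measurements from any real device:

```python
import numpy as np

# Conductance matrix G: 3 input rows x 2 output columns.
# Each entry plays the role of a stored synaptic weight.
G = np.array([[0.2, 0.5],
              [0.7, 0.1],
              [0.4, 0.9]])

# Input voltages applied to the rows.
V = np.array([1.0, 0.5, 0.0])

# Column currents: each output is a weighted sum computed where
# the weights are stored, with no separate fetch from memory.
I = G.T @ V
print(I)  # two weighted sums, both 0.55 (up to float rounding)
```

In a physical crossbar this multiply-accumulate happens in a single analogue step, which is why resistive memory is attractive for the synaptic layers of neuromorphic systems.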

The majority of current research in neuromorphic computing focuses on the aforementioned hardware systems, devices, and materials; however, to best utilise neuromorphic computers in the future, exploit their distinctive computational properties, and influence their hardware design, attention must also be given to algorithms and applications. From this vantage point, we give an overview of the state of the art in neuromorphic algorithms and applications and offer a look ahead at the possibilities for the development of neuromorphic computing in computer science and computational science. It is important to note that a wide variety of technologies have all been referred to as neuromorphic computing.

### **3.2 Neuromorphic computing and artificial general intelligence (AGI)**

Artificial general intelligence (AGI) is the term used to describe AI that demonstrates intelligence comparable to that of humans; one could call it the holy grail of all AI. Machines have not yet attained that degree of intelligence and might never do so. However, neuromorphic computing presents brand-new opportunities for advancing in that direction.

For example, the Human Brain Project, which features the neuromorphic supercomputer SpiNNaker, aims to produce a functioning simulation of the human brain and is one of many active research projects interested in AGI.

The criteria for determining whether a machine has achieved AGI are debated, but they commonly include whether the machine can reason and make judgements under uncertainty; whether it can plan, learn, communicate in natural language, and represent knowledge, including common-sense knowledge; and whether it can integrate these skills in pursuit of a common goal.

The ability to imagine, subjective experience, and self-awareness are occasionally added to the list. Two further proposed techniques for verifying AGI are the well-known Turing Test and the Robot College Student Test, in which a machine enrols in classes and earns a degree just as a human would.

If a machine ever attained human-level intelligence, there would be disagreements over how it should be treated ethically and legally; some contend that it ought to be regarded under the law as a nonhuman animal. These debates have been going on for years, in part because we still do not fully understand consciousness as a whole.
