*Artificial Intelligence - Latest Advances, New Paradigms and Novel Applications*

We integrate fluid intelligence *G<sup>f</sup>* into crystallized intelligence *G<sub>c</sub>* through a meta-learning (learning to learn) approach. Inductive and deductive reasoning are generally considered to be the hallmark narrow-ability indicators of *G<sup>f</sup>*; in our study, however, we do not consider this hallmark ability of *G<sup>f</sup>* [7].

**8.1 Meta-learning: learning to learn fast**

Meta-learning, also known as "learning to learn", intends to design models that can learn new skills or adapt to new environments rapidly with a few training examples. There are three common approaches: (1) learn an efficient distance metric (metric-based); (2) use a (recurrent) network with external or internal memory (model-based); (3) optimize the model parameters explicitly for fast learning (optimization-based).

We expect a good meta-learning model to adapt or generalize well to new tasks and new environments that have never been encountered during training time. The adaptation process, essentially a mini learning session, happens during test time but with limited exposure to the new task configurations. Eventually, the adapted model can complete new tasks. This is why meta-learning is also known as learning to learn.

**8.2 Define the meta-learning problem**

A good meta-learning model should be trained over a variety of learning tasks and optimized for the best performance on a distribution of tasks, including potentially unseen tasks. Each task is associated with a dataset $\mathcal{D}$ containing both feature vectors and true labels. The optimal model parameters are:

$$\theta^{*} = \arg\min_{\theta} \; \mathbb{E}_{\mathcal{D} \sim p(\mathcal{D})} \big[ \mathcal{L}_{\theta}(\mathcal{D}) \big] \tag{3}$$

It looks very similar to a normal learning task, but *one dataset* is considered as *one data sample*.

*Few-shot classification* is an instantiation of meta-learning in the field of supervised learning. The dataset $\mathcal{D}$ is often split into two parts: a support set $S$ for learning and a prediction set $B$ for training or testing, $\mathcal{D} = \langle S, B \rangle$. Often we consider a *K*-shot *N*-class classification task: the support set contains $K$ labeled examples for each of the $N$ classes.

**Figure 4** shows an example of 4-shot 2-class image classification.

**Figure 4.**
*An example of 4-shot 2-class image classification (image thumbnails are from Pinterest).*
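As a concrete illustration of the episode structure just described, the sketch below samples a *K*-shot *N*-class episode from a toy dataset and classifies query points with a simple metric-based rule (nearest class prototype). All names and the toy data are illustrative assumptions, not code from this chapter.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_classes=2, k_shot=4, n_query=1, seed=0):
    """Sample one K-shot N-class episode: a support set S and a prediction set B.

    `dataset` is a list of (feature_vector, label) pairs.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in dataset:
        by_label[y].append(x)
    classes = rng.sample(sorted(by_label), n_classes)   # pick N classes
    support, query = [], []
    for c in classes:
        shots = rng.sample(by_label[c], k_shot + n_query)
        support += [(x, c) for x in shots[:k_shot]]     # K labelled shots per class
        query += [(x, c) for x in shots[k_shot:]]       # held-out prediction examples
    return support, query

def prototype_predict(support, x):
    """Metric-based approach: assign x to the class with the nearest mean prototype."""
    grouped = defaultdict(list)
    for xi, yi in support:
        grouped[yi].append(xi)
    def mean(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    prototypes = {c: mean(vs) for c, vs in grouped.items()}
    return min(prototypes, key=lambda c: dist2(prototypes[c], x))

# Toy 4-shot 2-class setup, mirroring Figure 4: five examples per class.
data = ([([0.0 + 0.1 * i, 0.0], "cat") for i in range(5)] +
        [([10.0 + 0.1 * i, 10.0], "dog") for i in range(5)])
S, B = sample_episode(data, n_classes=2, k_shot=4, n_query=1, seed=1)
```

Here one episode plays the role of one "data sample" in Eq. (3): the outer loop of a meta-learner would repeatedly sample such episodes and optimize for performance on their prediction sets.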

*Quest for I (Intelligence) in AI (Artificial Intelligence): A Non-Elusive Attempt*
*DOI: http://dx.doi.org/10.5772/intechopen.96324*

**10. Paradigm shift from Von Neumann computing to neuromorphic computing**

**Figure 6.** *Structure of a simple neuron.*

So far, we have approximately modeled the psychometric model of human intelligence and implemented it in a Von Neumann computer system. Now we seek a new type of computing device that goes beyond Moore's law and the Von Neumann architecture. This new type of computer can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and do so with the energy efficiency of the human brain [9].

Inspired by the working mechanism of the nervous system, the development of computing systems has led to a novel, non-traditional computing architecture: the neuromorphic computing system. Neuromorphic computing was proposed by Carver Mead in the 1980s to mimic mammalian neurobiology using very-large-scale integration (VLSI) circuits. In order to physically realize the biological plasticity of a synapse, the neuromorphic computing architecture is combined with memristors acting as electronic synapses.
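The plasticity such an electronic synapse must realize can be caricatured in a few lines: the device's conductance (the synaptic weight) is nudged up or down depending on the relative timing of pre- and post-synaptic spikes. The following is a toy spike-timing-dependent plasticity (STDP) rule, not a model of any particular memristor; all constants are illustrative.

```python
import math

def stdp_update(weight, t_pre, t_post,
                a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: if the pre-synaptic spike arrives before the
    post-synaptic spike (causal pairing), the weight grows; otherwise
    it shrinks. Spike times are in milliseconds; the result is clipped
    to the device's conductance range [w_min, w_max]."""
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)    # potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)   # depression
    return min(w_max, max(w_min, weight + dw))
```

Causal pairings strengthen the synapse and anti-causal pairings weaken it, which is one way a gradual conductance change in a memristive device can encode learning.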

Although the fundamental functions of the brain are still under investigation, two main elements, the neuron and the synapse, are well studied at the cellular level. The structure of a simple neuron is shown in **Figure 6**.

Each neuron has four main parts, whose functionalities are summarized in **Figure 7**.

Several well-known neuron models have been investigated, such as the Integrate-and-Fire (IF) model, the FitzHugh-Nagumo (FN) model, the Hodgkin-Huxley (HH) model, the Leaky Integrate-and-Fire (LIF) model, etc.
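Of these, the LIF model is the simplest to simulate: the membrane potential leaks toward a resting value, integrates incoming current, and emits a spike (then resets) when it crosses a threshold. A minimal forward-Euler sketch, with illustrative constants rather than parameters from this chapter:

```python
def simulate_lif(current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire: tau_m * dV/dt = -(V - v_rest) + R_m * I.
    `current` is a sequence of input currents (one per time step, dt in ms);
    returns the membrane-potential trace and the indices of spike times."""
    v, trace, spikes = v_rest, [], []
    for i, I in enumerate(current):
        v += dt / tau_m * (-(v - v_rest) + r_m * I)  # forward-Euler step
        if v >= v_thresh:        # threshold crossing: spike, then reset
            spikes.append(i)
            v = v_reset
        trace.append(v)
    return trace, spikes
```

With a sufficiently strong constant input the neuron fires periodically; with a weak input the potential settles below threshold and no spikes occur.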

**11. Concluding remarks**

In our non-elusive attempt to search for I (intelligence) in AI (Artificial Intelligence), in the first part of this chapter we crudely approximate the Cattell–Horn–Carroll (C–H–C) theory of intelligence (see **Figure 3**) through a deep meta-learning approach in which we integrate fluid intelligence *G<sup>f</sup>* into crystallized intelligence *G<sub>c</sub>*. Thus problem-solving in unknown environments and a robust, task-specific learning mechanism are combined. During this process of approximating the C–H–C theory of intelligence, we realize that with the present state of the art of Artificial Intelligence we can never reach the top-level g-factor/mood (see **Figures 2** and **3**); hence the approximation is crude. Although the g-factor as proposed by Spearman is not well defined, mood, which is an alternative conjecture to the g-factor, is basically a biological phenomenon that occurs inside the brain to generate sufficient mental (neural) energy, under a favorable state of mind, to perform lower-level cognitive activities. Thus, in the three-stratum C–H–C theory of intelligence, we set aside the debates on the g-factor and assume that such mental (neural) energy already exists due to this biological phenomenon, i.e. mood, so that the lower-level cognitive activities can be performed. Hence we adopt the deep meta-learning approach to crudely approximate the C–H–C theory of intelligence and thereby mimic human intelligence in Artificial Intelligence (AI).

In the second part of this chapter, we consider a paradigm shift from the Von Neumann architecture to neuromorphic computing. It is clear that an entirely new way of thinking about algorithm development is required for neuromorphic computing to break out of the Von Neumann way of thinking. To develop new learning methods with the characteristics of biological brains, we need to learn from cutting-edge research in neuroscience and, as part of this process, build a theoretical understanding of "intelligence"; without such theoretical underpinnings, we cannot implement truly intelligent neuromorphic systems. One of the key features that likely enables biological brains to learn quickly from limited examples or trials is the structural organization they have acquired through evolution, which is then customized through the learning process. A neuromorphic system may therefore include a long-term, off-line training or learning component that creates gross network structures or modules, which are then refined and tuned by a shorter-term, on-line training or learning component. The goal of a neuromorphic computer should not be to emulate the brain; we should instead take inspiration from biology but not limit ourselves to particular models or algorithms.

From the above study we understand that the present state of the art of artificial intelligence algorithms, implemented on Von Neumann computers, cannot model the top-level factor (g-factor/mood) of the three-stratum C–H–C theory of intelligence. Instead, with some assumptions about the top-level factor (g-factor/mood), the present AI approach can realize some lower-level cognitive activities of fluid intelligence and crystallized intelligence. Thus the attempt to mimic human intelligence with conventional AI algorithms is not as successful as we expect it to be through generalization of learning algorithms. On the other hand, an alternative computing tool, the neuromorphic computing device, may attempt to adopt brain functioning for mimicking human intelligence, provided the plasticity of synaptic activity can be realized through electronic devices. Under the present scenario we should move towards native/natural intelligence (NI), which is organic/biological and essentially based on the biological model of the human brain. We should explore this new field and no longer think of artificial intelligence as machines, robots and software code; rather, we should think of biological artifacts. Thus, in future, we should welcome biological AI, or BIO-AI.


**Figure 7.** *Neuron design.*


**References**

[1] A.M. Turing, Computing Machinery and Intelligence, Mind, vol. LIX, no. 236, October 1950.

[2] Claude Shannon, A Chess-Playing Machine, Scientific American, vol. 182, no. 2, February 1950.

[3] S. Legg, A Collection of Definitions of Intelligence, Proceedings of the 2007 Conference on Advances in Artificial General Intelligence.

[4] José Hernández-Orallo, Evaluation in Artificial Intelligence: From Task-Oriented to Ability-Oriented Measurement, Artificial Intelligence Review, 48(3):397–447, 2017.

[5] McGrew, K.S., The Cattell–Horn–Carroll Theory of Cognitive Abilities: Past, Present and Future, in D.P. Flanagan, J.L. Genshaft & P.L. Harrison (Eds.), Contemporary Intellectual Assessment: Theories, Tests and Issues (pp. 136–182), New York: Guilford.

[6] Schmidhuber, J., Deep Learning in Neural Networks: An Overview, Neural Networks, 61:85–117, January 2015.

[7] Chelsea Finn's BAIR blog on "Learning to Learn".

[8] M. Huisman, J.N. van Rijn, A. Plaat, A Survey of Deep Meta-Learning, arXiv:2010.03522v1 [cs.LG], 7 Oct 2020.

[9] Hongyu An, Kangjun Bai and Yang Yi, The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System, http://dx.doi.org/10.5772/intechopen.78986.
