#### **1. Introduction**

In this chapter, we look for the I (Intelligence) in AI (Artificial Intelligence). Even today, the term "Intelligence" is not well quantified for machine implementation. The Oxford dictionary defines it as:

#### "*The ability to acquire and apply knowledge and skills.*"

We take rationality to be the human intelligence used in our day-to-day activities for planning, problem-solving, reasoning and so on. With the tremendous growth and development of human civilization, different branches of science and technology have developed. Artificial Intelligence is one such branch, which tries to mimic human intelligence through programs implemented on human-made machines (computers).


In 2007, Legg and Hutter provided a survey of definitions of Artificial Intelligence/Intelligence with methods of evaluation. A decade later, in 2017, José Hernández-Orallo reported an extensive survey on evaluation methods. In this chapter, we describe AI as an attempt to imitate human intelligence in algorithmic form [1, 2].


Normally, the rational behavior of an individual indicates his or her basic element of intelligence. Aristotle held the belief that man is a rational animal, but a growing body of research suggests otherwise. From ancient times, philosophers have proposed theories of human rationality. There are, however, many definitions of rationality, and these change over time. For Plato and Aristotle, man has both a rational and an irrational soul in different proportions. According to Bertrand Russell, "Man is a rational animal. So at least we have been told. Throughout a long life, I have searched diligently for evidence in favour of this statement. So far, I have not had the good fortune to come across it." The term rationality has a handful of interpretations.

With the gradual growth of science and technology, people try to adopt sophisticated computing facilities, which may be seen as an attempt to substitute for complex mental computation in the particular situation at hand. Thus life becomes smarter and faster in facing the different challenges of the universe. If we look back at the history of computing facilities for intelligent decision making, we observe the following:

In the year 1942, physicist John Mauchly proposed ENIAC (Electronic Numerical Integrator and Computer). The ENIAC project was completed in 1945. It was the first operational electronic computer in the USA, developed for Army Ordnance to compute ballistic firing tables during World War II.

In the year 1950, Alan Turing, the British mathematician and logician who broke the German Enigma code during World War II, proposed "The Imitation Game", a gaming problem posing the very fundamental question "Can machines think?"; it was an informal announcement of Artificial Intelligence. The question raised by Turing was not essentially concerned with an abstract activity like playing chess [3, 4].

In 1950, Claude Shannon published a landmark paper on computer chess and rang the bell of the computer era. At that instant ENIAC was a newborn baby, but visionary people like Shannon and Alan Turing could already see the tremendous potential of computer science and technology. During that period computers were mainly used for ballistic calculations for missiles, whereas games appeared to be a natural application for a computer that average people could appreciate. The first working checkers program was published in 1952. Chess programs followed shortly after that. Arthur Samuel published a strong checkers-playing program based on machine learning concepts. Samuel used a signature table together with an improved book-learning procedure, a superior approach compared to the earlier one. Alpha-beta pruning and several forms of forward pruning were used to control the growth of the search tree and allow the program to look ahead to a much greater depth than it otherwise could. Though it could not outplay checkers masters, the program's playing ability was highly appreciated.
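Samuel's original program is not reproduced in this chapter; as a minimal sketch of the alpha-beta pruning idea mentioned above, the following Python fragment cuts off branches that cannot change the minimax value (the toy game tree and the evaluation and successor callbacks are our illustrative assumptions):

```python
import math

def alpha_beta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Minimax with alpha-beta pruning.

    `evaluate(node)` scores a position and `children(node)` lists its
    successors; both are assumed callbacks supplied by the game.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent would never allow this line,
                break           # so the remaining siblings are pruned
        return value
    value = math.inf
    for child in kids:
        value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                      True, evaluate, children))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy game tree: interior nodes are lists, leaves are numeric scores.
tree = [[3, 5], [2, [9, 7]], [1, 4]]
score = alpha_beta(tree, depth=4, alpha=-math.inf, beta=math.inf,
                   maximizing=True,
                   evaluate=lambda n: n if isinstance(n, (int, float)) else 0,
                   children=lambda n: n if isinstance(n, list) else [])
print(score)  # 3; the subtree [9, 7] is never examined
```

Samuel's forward pruning and signature tables go beyond this sketch; the cut-off condition `alpha >= beta` is the part that lets a program look ahead to a much deeper depth within a fixed budget.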

The early efforts of Alan Turing, Claude Shannon, Arthur Samuel, Allen Newell, Herbert Simon and others generated tremendous impetus in researching computer performance at games, which could be a testbed for ultimate "intelligence" generated artificially (through a computer program) to "exhibit" human-level "intelligence". In the year 1955, J. McCarthy, Marvin Minsky, N. Rochester and C.E. Shannon proposed to study "Artificial Intelligence" during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The basic objective of the study was to proceed on the basis of the conjecture that every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it.


Their basic ambition was to build a machine that could deal with problems essentially reserved for humans. In those early days of "Artificial Intelligence" (AI), in 1958, Herbert A. Simon and Allen Newell published a paper titled 'Heuristic problem solving: the next advance in operations research'. Simon had presented the content of that paper at the banquet of the Twelfth National Meeting of the Operations Research Society of America, Pittsburgh, Pennsylvania, November 14, 1957. He brought the term "heuristic" into practice. At the time it appeared to be an over-optimistic prediction, but its impact is still far-reaching. To establish the need for heuristics in real-life problem solving, he precisely categorized two types of problems: well-structured problems and ill-structured problems. Well-structured problems can be solved explicitly by known existing computational techniques, whereas ill-structured problems are not well-structured. For instance, first, the variables are not numerical but symbolic or verbal (linguistic); second, the truth status is vague and multivalued instead of precise and two-valued; third, in many practical problems, in a time-critical situation, variables are not directly measurable (observable), and for most practical problems computational algorithms are not available under such circumstances. Heuristics can play a significant role in resolving some of the above-mentioned ill-structured problems. A "heuristic" is essentially domain-specific information that can roughly quantify the perception and/or intelligence of an individual by estimating the intuition, experience and common sense brought to any judgemental decision process that cannot be reduced to a systematic computational routine. Heuristics are an added advantage for solving ill-structured practical problems associated with several environmental uncertainties; under environmental uncertainty, in judgemental procedures, hunches and even wild random guesses are considered heuristics. A heuristic function may help find a feasible or reasonable (not necessarily optimal) solution to an ill-structured practical problem. Though the necessity for randomness is not proven, there is much evidence in its favor, as stated by Craik's model.
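Simon gave no algorithm at this point; as a minimal sketch of how a heuristic function guides search toward a feasible (not necessarily optimal) solution, consider greedy best-first search, where the toy map and the distance estimates below are our illustrative assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the node whose heuristic h(n) looks closest to the goal.

    Returns a feasible path or None; optimality is not guaranteed, which
    matches the 'feasible, not necessarily optimal' role of heuristics.
    """
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical map and straight-line-distance estimates to the goal G.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"]}
h = {"A": 5, "B": 3, "C": 4, "D": 2, "G": 0}
print(greedy_best_first(graph, h, "A", "G"))  # ['A', 'B', 'D', 'G']
```

The heuristic values here stand in for the domain-specific intuition, experience and common sense described above: they are estimates, not guarantees, which is exactly why the returned path is only feasible rather than provably optimal.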

In May 1997, when the chess machine DEEP BLUE defeated world chess champion Garry Kasparov in an exhibition match, it was an indirect, silent reply, "YES", to the very fundamental question "Can machines think?" raised by Alan Turing in the year 1950. Of course, the thought process of the DEEP BLUE machine is not comparable to that of a human being, but the DEEP BLUE machine definitely and very efficiently imitated the thoughtful mind of a world chess champion. Thus, game-playing became a Rosetta Stone of Artificial Intelligence (AI).

Programming computers to play games is definitely a step towards understanding the methods that may be employed for machine implementation of human intelligent behavior. We still have much to learn from the study of games, and these newer techniques may in future be applied to real-life situations to imitate human intelligence. But the basic question remains: how do humans become so intelligent?

In this chapter, however, we try to explore the cognitive abilities of human beings through psychometric models of human intelligence. We observe that the present state of the art of artificial intelligence can mimic human intelligence only in a crude sense of approximation. At present, AI cannot reach the top level of the three strata of the Cattell-Horn-Carroll (CHC) theory of intelligence; AI can model only a few lower-level activities of fluid intelligence and crystallized intelligence. The present state of the art of artificial intelligence is implemented through von Neumann computing. To break out of the von Neumann way of thinking, we also explore the possibility of neuromorphic computing. To develop new learning methods with the characteristics of the biological brain, it is necessary to learn from cutting-edge research in neuroscience. As a part of this process there should be a theoretical understanding of "intelligence". Without that theoretical underpinning, we cannot implement intelligence through neuromorphic computing. Under the present scenario of understanding "intelligence" and mimicking human intelligence in an artificial manner, we should move further towards an understanding of native/natural intelligence (NI), which is organic/biological and essentially based on a biological model of the human brain.


#### **2. Evaluation of human intelligence: a brief exposure**

Research in the fields of psychology, cognitive science, anthropology, and biology cultivates a sophisticated study of how human intelligence evolved. Understanding the brains of living humans and great apes, and the intellectual abilities they support, enables us to assess what is unique about human intelligence and what we share with our primate relatives. Examining the habitats and skeletons of our ancestors gives cues as to the environmental, social and anatomical factors that both constrain and enable the evolution of human intelligence.

Many methods are used to assess human intelligence and its evolution. These include (i) behavioral measures, which may involve naturalistic observation or analyzing responses in laboratory experiments; (ii) artifactual measures, which involve analysis of tools, art and so forth; and (iii) anatomical/neurological measures, which involve studies of the brain and cranium. Ideally, all three would converge upon a unified picture of how human intelligence evolved. However, this is not always the case, and indeed the assessment of human intelligence still faces several challenges.

#### **3. Models of human intelligence**

Basically, there are four important models of human intelligence:

i. Psychometric model

ii. Cognitive model

iii. Cognitive and contextual model

iv. Biological model.
In this chapter, we consider the first three models, which essentially deal with crystallized intelligence, fluid intelligence and a combination of the two. We try to approximate, even if crudely, the above features of intelligence through deep learning, meta-learning and deep meta-learning approaches. We try to adopt, in a very crude way, the three strata of the Cattell-Horn-Carroll (CHC) theory of intelligence [5].

#### **3.1 Psychometric model**

The psychometric model is based on composite abilities measured by mental tests. This model can be quantified.

One of the earliest psychometric models came from the British psychologist Charles E. Spearman (1863–1945), who published his first major article on intelligence in 1904 and later set out his theory in The Abilities of Man: Their Nature and Measurement (1927).


Spearman did not know exactly what the general factor was, but he proposed in 1927 that it might be something like "mental energy."

The debate between Spearman and Thurstone has remained unresolved.

The American psychologist John B. Carroll, in 1993, proposed a "three-stratum" psychometric model of intelligence that expanded upon existing theories of intelligence. The third stratum consists solely of the general factor, *g*, as identified by Spearman. It might seem self-evident that the factor at the top would be the general factor, but it is not, since there is no guarantee that there is any general factor at all. Though there are long-standing debates on *g* (the general factor), in this chapter we discuss this particular issue, based on some conjecture, in Section 4.
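The chapter gives no numerical example of how a general factor is extracted; as a minimal sketch, assuming an invented correlation matrix with a positive manifold (all tests correlating positively), the loadings on the first principal factor play the role of *g*:

```python
import numpy as np

# Hypothetical correlation matrix for four mental tests (invented numbers
# exhibiting a "positive manifold": every test correlates positively).
R = np.array([
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.58, 0.52],
    [0.55, 0.58, 1.00, 0.56],
    [0.50, 0.52, 0.56, 1.00],
])

# Eigen-decomposition; the leading eigenvector gives each test's loading
# on the first common factor, the classical stand-in for Spearman's g.
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
g_loadings *= np.sign(g_loadings.sum())       # fix the arbitrary sign

print("g loadings per test:", np.round(g_loadings, 3))
print("variance explained by g:", round(eigvals[-1] / eigvals.sum(), 3))
```

That a single factor explains a large share of the variance is a property of the data, not a logical necessity, which is precisely why the placement of *g* at the top of the hierarchy remains debated.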

#### **3.2 Cognitive models**


Underlying most cognitive approaches to intelligence is the assumption that intelligence comprises mental representations (such as propositions or images) of information and processes that can operate on such representations.

Other cognitive psychologists have studied human intelligence by constructing computer models of human cognition.

#### **3.3 Cognitive-contextual models**

Cognitive-contextual theories deal with the way that cognitive processes operate in various settings. Two of the major theories of this type are that of the American psychologist Howard Gardner and that of Robert Sternberg.

#### **4. Putative test of intelligence**

The term putative is commonly used to describe an entity or a concept that is generally accepted or inferred even without direct proof of it; it denotes something like an inference or a supposition. There are several examples of putative tests of intelligence, such as picture completion, picture arrangement, block design, object assembly, etc.

#### **4.1 Culture-fair test**

A 'culture-fair' or culture-related test makes minimal use of language and does not ask for specific facts. Even on culture-fair tests, Euro-American and African-American children differ, because culture can influence a child's familiarity with the entire testing situation.

Cattell argued that the observed variation among individuals in their scores on any intelligence test can be regarded as depending on:

G: variation in the innate gene endowment.

dG: variations in environmentally-produced development of general ability.

C: variations in the closeness of the individual's cultural training and experiences to the cultural medium in which tests are expressed.

t: variations in familiarity with tests and test situations generally.

f: fluctuations in the underlying capacity.

fr: fluctuations in the effective expression or application of the ability through strength and direction of volition.

s: specific abilities.

e: chance errors of measurement.

In describing the G term in this expression, Cattell had reference to a culture-fair concept of intelligence:

This being the case, *a combination of dG and C* would constitute a manifest general ability, *crystallized intelligence,* which might, if there was any validity to the notion of culture-fair tests, be distinguished from G, *fluid intelligence.*
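To make this decomposition concrete, here is a minimal simulation sketch; treating the components as independent additive contributions, with Gf read as G and Gc as the combination of dG and C, is entirely our illustrative assumption (as are the variance values):

```python
import random

random.seed(0)

def simulated_test_score():
    """One person's observed score as an additive mix of Cattell's components.

    The additive form and the spread assigned to each component are
    illustrative assumptions, not values given by Cattell.
    """
    G  = random.gauss(0, 1.0)   # innate gene endowment
    dG = random.gauss(0, 0.5)   # environmentally-produced development
    C  = random.gauss(0, 0.5)   # closeness to the test's cultural medium
    t  = random.gauss(0, 0.2)   # familiarity with tests generally
    f  = random.gauss(0, 0.2)   # fluctuations in underlying capacity
    fr = random.gauss(0, 0.2)   # fluctuations via strength of volition
    s  = random.gauss(0, 0.3)   # specific abilities
    e  = random.gauss(0, 0.1)   # chance measurement error
    Gf = G                      # fluid side of the score
    Gc = dG + C                 # crystallized side of the score
    return Gf + Gc + t + f + fr + s + e

print([round(simulated_test_score(), 2) for _ in range(5)])
```

The point of the sketch is only that an observed score confounds the G term with culturally produced components; a culture-fair test attempts to shrink the C and t contributions.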

Later, Cattell made these ideas more explicit. He said that general ability is of two kinds: (i) fluid ability, which manifests itself in perception of new situations, and (ii) crystallized ability, which manifests itself in known situations.

He argued that the two abilities should show different patterns of developmental change.

#### **4.2 Definitions of fluid and crystallized intelligence**

*Fluid intelligence (Gf)* involves concepts and can be obtained from experiences and opportunities that are afforded to the vast majority.

Thus, *Gf* involves learning and is a product of acculturation, *but it does not result primarily from differential opportunities in learning or from highly intensive acculturation, such as is promoted through educational programs, which, in one way or another, exclude substantial numbers of individuals.*

The mathematical model which would best represent the lawful combination of the above (and probably many other) factors might be highly complex, but in general form the theoretical terms can be represented as follows:

$$G_f = f(H, M, I, L_1, T_1, O_1) \tag{1}$$


where *Gf* represents a performance involving fluid intelligence almost exclusively, *f* represents a function, *H* refers to a hereditary component, *M* to the maturation rate, *I* to injury, *L*1 to learning, *T*1 to the time over which these factors have operated, and *O*1 indicates the extent to which each of these factors has interacted optimally with the others and with environmental circumstances.

*Crystallized intelligence (Gc) is an outgrowth of Gf.* In the early years of development, and under certain other conditions, *the two may be so highly related and cooperative as to be virtually indistinguishable.* But over the course of development, when a properly broad view is taken, they may be seen to become separated, by virtue of the fact that manifested intelligence is produced by a large number of factors which operate largely independently of those seen as accounting for basic intellectual potential. In general these can be classified as factors promoting intensification of acculturation.

$$G_c = f(G_{f1}, C, E, P, R, L_2, T_2, O_2) \tag{2}$$

where *Gc* represents a performance involving crystallized intelligence to a high degree, *C* refers to opportunities and encouragements (chances), *E* to ergs and sentiments (motive traits), *P* to non-intellectual personality traits (temperament), *R* to a factor of long-term memory, *L*2 to the degree of intensive learning distinct from that which is provided for most people, *T*2 to the time over which these factors have operated, *O*2 to the extent to which the combination of factors and developmental stages was optimal for the development of *Gc*, and *Gf*1 refers to the level of fluid intelligence that operated over this period.

*Thus a performance which is said to characterize crystallized intelligence is also seen to contain at least a trace of fluid intelligence, so that to some extent this Gc measure can be said to be confounded with the measure of Gf.*
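Equations (1) and (2) assert only a functional dependence; as a minimal sketch, assuming (purely for illustration) weighted linear forms for both functions, the confounding of *Gc* with *Gf* becomes directly visible in code:

```python
def G_f(H, M, I, L1, T1, O1):
    """Toy linear instantiation of Eq. (1); the weights are invented."""
    return O1 * (0.6 * H + 0.2 * M - 0.3 * I + 0.2 * L1) * T1

def G_c(Gf1, C, E, P, R, L2, T2, O2):
    """Toy linear instantiation of Eq. (2); the weights are invented.

    The Gf1 argument is the reason a crystallized performance carries
    a trace of fluid intelligence: Gc is computed from Gf's output.
    """
    return O2 * (0.4 * Gf1 + 0.2 * C + 0.1 * E + 0.1 * P
                 + 0.1 * R + 0.3 * L2) * T2

gf = G_f(H=1.0, M=0.8, I=0.1, L1=0.5, T1=1.0, O1=1.0)
gc = G_c(Gf1=gf, C=0.7, E=0.5, P=0.4, R=0.6, L2=0.9, T2=1.0, O2=1.0)
print(round(gf, 3), round(gc, 3))
```

Any change to the inputs of `G_f` propagates into `G_c` through the `Gf1` argument, which is the confounding the text describes; the linear separation hypothesized below is what would allow the two components to be disentangled statistically.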


*Practically, it must be recognized that the learning component in Gf is not completely devoid of exclusive and intensive acculturation, so that it, too, is to some extent confounded with Gc.* But the essential hypothesis of this study is that the functions of the equations for *Gf* and *Gc* can be separated as distinct linear components in performances on a wide sampling of putative tests of intelligence (see **Figure 1**).

#### **4.3 The Cattell-Horn-Carroll (CHC) theory of cognitive abilities**

The Cattell-Horn-Carroll (CHC) theory of cognitive abilities is the most comprehensive and empirically supported psychometric theory of the structure of cognitive abilities to date. A simplified version of the Cattell-Horn-Carroll model of the structure of abilities is shown in **Figure 2**.

**Figure 1.** *Performance study between Gc and Gf.*

**Figure 2.** *Representation of the Cattell-Horn-Carroll three-stratum theory.*
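Figure 2 itself is not reproduced here; as a minimal sketch of the three-stratum structure it depicts, the dictionary below encodes stratum III (*g*) over a few stratum II broad abilities, each with example stratum I narrow abilities (the ability names follow common CHC usage and are our assumption, not a transcription of the figure):

```python
# Three-stratum hierarchy: stratum III (g) -> stratum II (broad abilities)
# -> stratum I (narrow abilities). Names follow common CHC usage.
chc = {
    "g": {                                      # stratum III: general factor
        "Gf (fluid reasoning)":        ["induction", "sequential reasoning"],
        "Gc (crystallized knowledge)": ["language development", "general information"],
        "Gsm (short-term memory)":     ["memory span", "working memory"],
        "Gv (visual processing)":      ["visualization", "spatial relations"],
        "Gs (processing speed)":       ["perceptual speed", "number facility"],
    }
}

def walk(tree, depth=0):
    """Print the hierarchy, one stratum per indentation level."""
    for name, sub in tree.items():
        print("  " * depth + name)
        if isinstance(sub, dict):
            walk(sub, depth + 1)
        else:
            for narrow in sub:
                print("  " * (depth + 1) + narrow)

walk(chc)
```

This hierarchical view is also why the chapter's earlier claim is plausible that current AI reaches, at best, a few lower-stratum activities of fluid and crystallized intelligence rather than the general factor at the top.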

