**2. The essence of cognitive solving of problems on the basis of probabilistic modeling**

This section explains the definitions and interpretations that help in understanding the proposed models and the results of modeling complex systems in different application areas.

An AIS itself can be considered as an interested system (for example, a dispatching intellectual center) or as part of another, more comprehensive interested system (for example, functionally focused robots in safety systems). The current information is processed in real time for performing the set or expected functions of the interested system. To meet system requirements, the solutions of the considered problems 1 (planning the possibilities of functions performance) and 2 (robot route optimization) are initiated along with the solutions of other problems.

*Probabilistic Methods for Cognitive Solving of Some Problems in Artificial Intelligence Systems*

*DOI: http://dx.doi.org/10.5772/intechopen.89168*


The cognitive solving of problems includes the improvement, accumulation, analysis, and use of appearing knowledge; see **Figure 2**. Possible uncertainties for the given period (from the initial time point t1 to a future moment tx) may be considered by using the proposed probabilistic modeling, prediction, and optimization.

The solutions for problems 1 and 2 are estimated by the probability of "success" and/or "failure" (risk of "failure") during a given prognostic time period. Thus, the prognostic period should be defined so as to be in time to recover capabilities (which can be lost) or to carry out the preventive action (with which the initiation of solving the problem is connected). Such behavior means operation in real time.

In each real case of modeling, the term "success" should be defined in terms of an admissible condition of the interested system to operate for its purpose. The term "failure" means "unsuccess." Generally, a "success" of interested system operation during the given time period means an admissible degree of integrity. Accordingly, a "failure" of the interested system during the given time period means an inadmissible degree of integrity at least once within this period. System (or system element) integrity is defined as such a system (system element) state in which the system (system element) purposes are achieved with the required quality and/or safety. The risk of "failure" is understood as a probabilistic measure of "failure" considering consequences (according to ISO Guide 73).

**Figure 2.** *The essence of cognitive solving of problems.*

Note. For example, the interested system is a dangerous manufacturing object. The object structure includes an AIS, which monitors events and conditions in and/or around its manufacture. Equipment parameters (temperature, pressure, and so forth), which should be within norm limits, are traced. The "failure" of interested system operation may mean an incident or accident on the object.

Generally, from the point of view of formalization, for each estimated variant (for problem 1 or 2), the interested system is logically decomposed into compound subsystems; see **Figures 3** and **4**. Each subsystem is a set of components (elements and/or other subsystems): for problem 1, this set covers the components participating in functions performance; for problem 2, the set covers the compound parts of a possible route of the robot in space. The complete set of these components formally characterizes a variant of the decomposed system for solving problem 1 or 2. The analysis and optimization are carried out on the complete set of all compared possible variants.

The interpretation of such decomposition is the following:

The subsystem of serially connected elements provides functions performance with an admissible level of integrity (quality and/or safety) at a given time if:

• "AND" the 1st component, …, "AND" the last component in the subsystem provide an admissible level of integrity (quality and/or safety) at the given time (for problem 1);

• "AND" the 1st compound part of the route, …, "AND" the last compound part of the route are overcome successfully by the robot at the given time (for problem 2).

The subsystem of parallel connected elements provides functions performance with an admissible level of integrity (quality and/or safety) at a given time if:

• "OR" the 1st component, …, "OR" the last component in the subsystem provide an admissible level of integrity (quality and/or safety) at the given time (for problem 1);

• "OR" the 1st compound part of the route, …, "OR" the last compound part of the route are overcome successfully by the robot at the given time (for problem 2).
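Assuming independent components, the "AND"/"OR" interpretation above maps directly onto the standard product rules of reliability theory. The sketch below illustrates this; the function names and probability values are invented for the example, not taken from the chapter:

```python
from math import prod

def serial_integrity(p_components):
    """'AND' connection: every component must keep admissible integrity."""
    return prod(p_components)

def parallel_integrity(p_components):
    """'OR' connection: at least one component must keep admissible integrity."""
    return 1.0 - prod(1.0 - p for p in p_components)

# Illustrative variant: a route of three serial parts, the middle part
# backed by two redundant (parallel) alternatives; probabilities are made up.
redundant_part = parallel_integrity([0.90, 0.85])
variant_success = serial_integrity([0.99, redundant_part, 0.95])
```

Evaluating each compared variant this way yields the probability of "success" on which the analysis and optimization over all variants can be carried out.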

**Figure 3.** *Variant of system decomposition.*

**Figure 4.** *Variant of subsystem decomposition.*


*Probability, Combinatorics and Control*


Each component after system decomposition is presented as a "black box." For each "black box," various probabilistic models can be applied for calculations and for building the required probability distribution function (PDF) of the time between successive deviations from an established norm. A norm is connected with the definitions of "success" and "failure"; it may also be connected with a precondition of "failure" (to prevent "failure"; see Example 2). The focus on describing processes makes it possible to use only time characteristics (mean time or frequency of events) and the dimensionless or cost characteristics peculiar to various applications.
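As a minimal illustration of the "black box" idea, the sketch below estimates the mean time between deviations from a hypothetical event log and, assuming (as a simplification) an exponential PDF, evaluates the chance of the next deviation within a chosen horizon. All names and numbers here are illustrative, not data from the chapter:

```python
from math import exp

def mean_time_between(timestamps):
    """Mean gap between successive deviations from the established norm."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

def prob_deviation_within(t, mean_time):
    """PDF of time to the next deviation: F(t) = 1 - exp(-t / mean_time)."""
    return 1.0 - exp(-t / mean_time)

events = [0.0, 110.0, 190.0, 310.0, 400.0]  # logged deviation times, hours
T = mean_time_between(events)               # estimated mean time between deviations
risk_24h = prob_deviation_within(24.0, T)   # chance of a deviation within a day
```

Any other fitted PDF could be substituted for the exponential one; only the time characteristics of the monitored process are needed.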

The appropriately calculated probabilities of "success" and/or "failure" (risk of "failure"), compared with real events during the prediction periods, represent the knowledge of admissibility borders for probabilities of "success" and acceptability borders for risks of "failure." The process of cognitive solving of problems 1 and 2 means not only the formation and use of this knowledge for the interested system, but also the estimation of the quality of monitored and used information (including the definition of input for continuous modeling).

The proposed analytical models ("The model of functions performance by a complex system in conditions of unreliability of its components," "The models complex of calls processing for the different dispatcher technologies," "The model of entering into system current data concerning new objects of application domain," "The model of information gathering," "The model of information analysis," "The models complex of dangerous influences on a protected system," and "The models complex of an authorized access to system resources") allow estimating the probability of "success" and the risks of losing the quality of intellectual operation during a given prognostic period, considering consequences; see **Table 1**. The required limits on probability measures are recommended as produced knowledge of the best AIS practice (estimated on dozens of practical estimations for various application areas).

The next probabilistic model is devoted to estimating the probability of "success" and the risk of "failure" at a high meta-level. It is based on studying the general AIS technology of periodical diagnostics of system integrity. Some general technologies were researched for "The models complex of dangerous influences on a protected system"; see **Table 1**. Here, the general case for AIS is presented.

For a system element allowing prediction of the risks of losing its integrity during a given prognostic period, the following general AIS technology of providing system integrity is studied.

The technology is based on the periodical diagnostics of system integrity (without continuous monitoring between diagnostics). Diagnostics are carried out to detect the occurrence of danger sources from threats in a system, or the consequences of negative influences (for example, these may be destabilizing factors at a dangerous enterprise). Lost system integrity can be detected only as a result of diagnostics, after which system recovery is started. A dangerous influence on the system acts step by step: at first, a danger source occurs in the system; then, after its activation, a loss of integrity may follow; see **Figure 6**. The occurrence time is a random value that can be distributed by the PDF of time between neighboring occurrences of danger, Ω_occur(t) = *P*(τ_occurrence ≤ t) = 1 − exp(−t/T_occur), where T_occur is the mean time and σ = 1/T_occur is the frequency. The activation time is also a random value, which can be distributed by the PDF of the activation time of an occurred danger, Ω_activ(t) = *P*(τ_activation ≤ t) = 1 − exp(−t/T_activ), where T_activ is the mean time. System integrity cannot be lost before an occurred danger source is activated. A threat is considered to be realized only after a danger source has activated and influenced the system.
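Under these exponential assumptions, the two stages (occurrence, then activation) can be composed. The sketch below implements Ω_occur(t) and, for illustration, the textbook closed form for the probability that both occurrence and activation have completed by time t (the hypoexponential CDF, valid when T_occur ≠ T_activ); this composition is a standard result and not a formula quoted from the chapter:

```python
from math import exp

def omega_occur(t, T_occur):
    """Omega_occur(t) = P(tau_occurrence <= t) = 1 - exp(-t / T_occur)."""
    return 1.0 - exp(-t / T_occur)

def threat_realized_by(t, T_occur, T_activ):
    """P(occurrence followed by activation has completed by time t).

    Closed-form CDF of the sum of two independent exponential delays
    (hypoexponential distribution); requires T_occur != T_activ.
    """
    l1, l2 = 1.0 / T_occur, 1.0 / T_activ
    return 1.0 - (l2 * exp(-l1 * t) - l1 * exp(-l2 * t)) / (l2 - l1)
```

Because integrity cannot be lost before activation, `threat_realized_by` (not `omega_occur` alone) bounds the probability that the threat is realized within the period.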

It is supposed that the used diagnostic tools allow providing system integrity recovery after revealing the occurrence of danger sources or the consequences of influences. Thus, the probability (*P*) of providing system integrity within the given prognostic period can be estimated.
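The described technology (periodic diagnostics, recovery after detection, loss of integrity only if an occurred source activates before the next diagnostic) can be sanity-checked with a small Monte Carlo sketch. The renewal assumptions and parameter names below are simplifications for illustration, not the chapter's analytical model:

```python
import random

def provides_integrity(T_req, T_diag, T_occur, T_activ, rng):
    """One trial: does the system keep integrity over the period [0, T_req]?

    Danger sources occur with exponential inter-occurrence time (mean
    T_occur); an occurred source activates after an exponential delay
    (mean T_activ); a diagnostic every T_diag detects and removes an
    occurred source before it can do harm.
    """
    t = 0.0
    while t < T_req:
        occ = t + rng.expovariate(1.0 / T_occur)    # danger source occurs
        if occ >= T_req:
            return True                             # no occurrence in period
        act = occ + rng.expovariate(1.0 / T_activ)  # moment it would activate
        next_diag = (occ // T_diag + 1.0) * T_diag  # first diagnostic after occ
        if act <= min(next_diag, T_req):
            return False                            # activated before detection
        t = next_diag                               # detected and removed
    return True

def prob_integrity(T_req, T_diag, T_occur, T_activ, n=20000, seed=1):
    """Estimate P(providing system integrity within the given period)."""
    rng = random.Random(seed)
    return sum(provides_integrity(T_req, T_diag, T_occur, T_activ, rng)
               for _ in range(n)) / n
```

As expected, more frequent diagnostics (smaller T_diag) raise the estimated probability of providing system integrity, matching the qualitative behavior of the model.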


**Figure 5.** *Quality of used information (abstraction).*

