Probability and Statistical Methods

## **Chapter 4**

## Probabilistic Predictive Modelling for Complex System Risk Assessments

*Andrey Kostogryzov, Nikolay Makhutov, Andrey Nistratov and Georgy Reznikov*

## **Abstract**

Risk assessment is described as the action of estimating the probability distribution functions of possible successes or failures of a system during a given prediction period. Typical probabilistic predictive models and methods for solving risk prediction problems are described, and their classification is given. Priority development directions for risk prediction in standard system processes and their implementation procedures are proposed. The reported examples demonstrate the effects and interpretation of the predictive results obtained. Notes: 1. A system is a combination of interacting elements organized to achieve one or more stated purposes (according to ISO/IEC/IEEE 15288 "Systems and software engineering—System life cycle processes"). 2. Risk is defined as the effect of uncertainty on objectives considering consequences. An effect is a deviation from the expected — positive and/or negative (according to ISO Guide 73).

**Keywords:** prediction, method, model, probability, risk

## **1. Introduction**

Systems are subject to various risks throughout their life cycles, despite successful design and effective operation. That is why mathematics and system performance prediction have been closely interrelated since ancient times. There is no doubt that the design and maintenance of the world-famous wonders, which astonish modern man, relied on prediction: their preservation was based on predictive methods using the mathematical approaches existing at the time. With the advent of probability theory, this relationship has become even closer. Currently, various classical mathematical and probabilistic methods are often used to solve complex engineering problems.

If for the layman probability is still associated with divination on daisies, then for specialists these methods have long become powerful tools for predicting success or failure, proactive management, and achieving the desired effects. Risk predictive assessments are practiced in various industrial sectors, for example, fuel and energy,

pharmaceutical, mining, metallurgical, chemical, communication and information, dispatch centers, etc. [1–32]. Hundreds of universities and other scientific organizations are involved in probabilistic research activities connected to risk prediction. By now it is possible to clearly trace the activities chain in a predictive approach: "From uncertainties formalization to probabilistic modelling", "From probabilistic modelling to reasonable control", "From reasonable control to achievable effects" and "From achievable effects to sustainable harmony". It means that predictive probabilistic concepts meet the main analytical challenges in the eternal aspiration to go from "uncertainties formalization" to "sustainable harmony", see **Figure 1**.

Thousands of mathematicians are currently involved in risk prediction R&D activities. It is unfortunately impossible to mention all the running developments. This chapter will focus on:


**Figure 1.** *The eternal aspirations: "From uncertainties formalization to sustainable harmony."*
*Probabilistic Predictive Modelling for Complex System Risk Assessments. DOI: http://dx.doi.org/10.5772/intechopen.106869*


## **2. Goals and objectives**

In general, risk prediction is associated with the achievement of pragmatic goals and solving the analytical problems of systems rational concept (conceptual design), development, utilization, and support. Pragmatic system goals may be:


In turn, the following objectives require risk predictive capabilities:


exploration and development of mineral deposits and their extraction; and technologies for preventing and eliminating natural and technogenic hazards), etc.

A review of the numerous existing mathematical approaches allows us to make the following generalization: the main goals of applying probabilistic prediction are connected with (see **Figure 2**):


The enlarged classification of methods using probabilistic risk predictive models (including the proposed models) is presented in **Table 1**. These methods are used for solving various objectives during the system life cycle.

#### **Figure 2.**

*Generalization of goals and objectives throughout the system's life cycle that require risk probabilistic-predictive models.*



**Table 1.**

*The enlarged classification of methods, using risk probabilistic-predictive models.*

## **3. Conceptual probabilistic-predictive approach**

The solution of problems in the system life cycle [6–8, 9, 14] is considered by the example of a complex system, designated as (S-N-T)-system and covering: social sphere S (person, society, state and world community); natural sphere N (earth and space); techno-sphere T (man-made infrastructures and life support facilities). In general, solving problems using a probabilistic-predictive approach includes:


When planning and implementing these actions, the following should be taken into account:



The random time variable *τ* considered in the predicted risk *R*(*τ*, *t*) simultaneously takes into account the probability *P*(*τ*, *t*) of the threats' occurrence and activation, and also the associated damages *U*(*τ*, *t*). For example, the random time variable *τ* may be defined as the time between successive losses of element integrity (see details in Sections 4 and 5). Here the prediction period *t* (which in general is also subject to justification) depends on the basic measures designed to characterize the uncertainties and complexity of the (S-N-T)-system, and on the conditions for solving the analytical problems.

The sources of risks regarding the (S-N-T)-system have been and remain: humans (the human factor); nature with its own range of threats; and the techno-sphere with its inherent hazards. They are the determinants of reliability (including aging and degradation of technical means), of other quality measures (including the quality of the information used), and of the safety and efficiency of the system. This makes it possible to define risk as a functional:

$$R(\tau,t) = F\{P(\tau,t),\ U(\tau,t)\} = F\{R_S(\tau,t),\ R_N(\tau,t),\ R_T(\tau,t)\}.$$

In practice, risks are estimated by the dimensionless probability of an elementary event during a period of time, compared with the possible damage; by the expectation of damage (the possible damage multiplied by its probability); by the frequency of damage; etc. In turn, the magnitude of damages can be estimated in economic (financial) indicators, areas of contamination, losses in case of accidents, etc.

For example, formalization of such limitations may be presented as follows:

$$R(\tau,t) \le R_{adm}(\tau,t), \quad R_{adm}(\tau,t) > 0.$$

Then a safety margin *S*(*τ*, *t*) for the (S-N-T)-system can be expressed in terms of risks: *S*(*τ*, *t*) = *Radm*(*τ*, *t*) − *R*(*τ*, *t*). Safety is maintained if and only if *S*(*τ*, *t*) ≥ 0.
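As a minimal sketch (with hypothetical threshold values), the safety-margin formalization above can be expressed as:

```python
def safety_margin(r_adm: float, r: float) -> float:
    """S(τ,t) = R_adm(τ,t) - R(τ,t); safety is maintained iff S >= 0."""
    return r_adm - r

# hypothetical values: admissible risk 0.05, predicted risk 0.02
s = safety_margin(0.05, 0.02)
safe = s >= 0.0
```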

The quality, safety and sustainable development of the (S-N-T)-system must be kept within the acceptable risk zones. Thus, it is necessary to implement a set of actions, with the expected economic costs, to reduce risks to an acceptable level.

Examples of the applicability of this approach are proven in many industrial sectors, such as nuclear, thermal and hydraulic power plants; the largest installations of oil and gas chemistry; the unique space station; aviation, sea and land transport; large-scale offshore energy resources development facilities [7], etc.

## **4. The essence of probabilistic concepts**

The risk predictive approaches used by system analysts are based on classical probability theory. Generally, a probabilistic space (*Ω*, *B*, *P*) should be created per system (see for example [1–6, 9–14]), where: *Ω* is a finite space of elementary events; *B* is a class of subspaces of the *Ω*-space with the properties of a *σ*-algebra; *P* is a probability measure on the space of elementary events *Ω*. Because *Ω* = {*ωk*} is finite, it is enough to establish a correspondence *ωk* → *pk* = *P*(*ωk*), in which *pk* ≥ 0 and Σ*k pk* = 1. Briefly, the initial formulas in mathematical form for the original models (which are used in practice) are given in **Appendices A** and **B**.

Note. For some cases of a limited space of elementary events, see Section 6. The results of modelling relate only to the introduced elementary events and their specific interpretation; the results of probabilistic prediction cannot describe exact future events (place, time and other detailed characteristics).

The following interconnected concepts 1–7 are proposed for probabilistic predictive modelling.

Concept 1 concerns the probability distribution function (PDF) *P*(*τ* ≤ *t*) (see for example [1–6, 9–14], etc.) for a continuous random time variable *τ*. *P*(*τ* ≤ *t*) is a non-decreasing function *P*(*t*) whose value at a given point *t* ≥ 0 can be interpreted as the probability that the value of the random variable *τ* is less than or equal to the given time *t*. Regarding risk prediction, the given time *t* indicates the prediction period. Additionally, *P*(*t*) = 0 for *t* ≤ 0, and *P*(*t*) → 1 for *t* → ∞. From a decision-making standpoint, the problem is to determine the probability of system "success" and/or "unsuccess" during the given prediction period *Treq* (for example, a risk of "failure" considering consequences). This probability is the value at the point *t* = *Treq*, and the PDF is to be built for modelling the system's operational states over time.

Concept 2. The processes connected with data processing should provide the required system operational quality (because the system performs its functions by logical reasoning based on data processing). The corresponding probabilistic methods should be appropriate for assessing the quality of the used information [6–8, 9–14, 28–31].

Concept 3. The PDF should establish an analytical dependence between the input parameters that allows solving the direct and inverse problems necessary for the rational management of the system operation. For example, the PDF *P*(*t*) describing the random time *τ* between successive losses of integrity of a simple system element may be an analytical exponential approximation, i.e. *P*(*t*) = 1 − *exp*(−*λt*), where *λ* is the frequency of failures (losses of element integrity per unit of time). At the same time, the frequency of failures may be considered as a sum of frequencies of different types of failures arising from various specific failure causes—for example, failure from equipment *λ*1, from natural threats *λ*2, or from the "human factor" *λ*3, and so on. For this use case, the PDF may be presented as *P*(*t*) = 1 − *exp*[−(*λ*1 + *λ*2 + *λ*3 + … )*t*], if and only if all the implied failures are independent. Then, if the PDF *P*(*t*) is built in dependence on different parameters and an admissible probability level for acceptable risk is given, the inverse problem may be solved analytically—see also Section 7.
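The exponential approximation with summed failure frequencies can be sketched as follows (the rates are illustrative assumptions, not data from the chapter):

```python
import math

def failure_pdf(t: float, rates: list) -> float:
    """P(t) = 1 - exp(-(λ1 + λ2 + ...) t): probability of at least one
    loss of element integrity within time t, assuming all failure
    causes are independent with constant frequencies."""
    if t <= 0:
        return 0.0
    return 1.0 - math.exp(-sum(rates) * t)

# illustrative frequencies (per year): equipment, natural threats, human factor
lam_equipment, lam_nature, lam_human = 0.02, 0.005, 0.01
risk_over_10_years = failure_pdf(10.0, [lam_equipment, lam_nature, lam_human])
```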

Notes. 1. System integrity is defined as the system state in which the system purposes are achieved with the required quality. 2. For the rationale of the exponential approximation choice in practice, see for example [6, 9, 14, 28–31].

Concept 4. Acceptable adequacy must be ensured. This means considering the several essential system parameters on which the "success" or "failure" of the system operation depends. For example, today the common way of risk prediction is based on only one parameter, the frequency of failures *λ*. For this case, the exponential PDF can be used—see **Figure 3**. But the required acceptable adequacy is not always proven.

For the exponential approximation, the frequency of failures *λ* is connected with the hypothesis: "No failures during the given time with a probability less than the given admissible probability *Padm* > 0". This is always the case if the failure frequency is constant over time. For this case, the given prediction time must be no more than *treq* = 1/*λadm*, where *λadm* = −*ln*(1 − *Padm*)/*treq*. That may often not be an accurate engineering estimation, because many system capabilities and operating conditions are ignored [9, 14]. In **Figure 3**, this case is explained on the timeline.
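The inverse problem for the exponential case, solving 1 − *exp*(−*λ·treq*) ≤ *Padm* for the admissible failure frequency, can be sketched as follows (the numeric values are hypothetical):

```python
import math

def admissible_rate(p_adm: float, t_req: float) -> float:
    """λ_adm = -ln(1 - P_adm) / t_req: the largest constant failure
    frequency for which the exponential risk over t_req stays within P_adm."""
    return -math.log(1.0 - p_adm) / t_req

# hypothetical: admissible risk of 5% over a 1000-hour prediction period
lam_adm = admissible_rate(0.05, 1000.0)
```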


**Figure 3.**

*Probabilistic risk, approximated by a more adequate PDF P(t), in comparison with the existing representation by an exponential PDF (both connected with the same λ), and the admissible risk, represented by an exponential PDF connected with λadm.*

For different approaches and discussions devoted to adequacy, see for example the work in [33]. There, the diagnostic approach to evaluating predictive performance is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. After calibration, one obtains an assessment and ranking of the probabilistic predictions of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In [34], the approach is illustrated by examples connected with "human factors". For specific systems, the topic of improving the adequacy of prediction will always remain relevant.

Concept 5. Since a complex system includes subsystems and/or components (system elements), the probabilistic approach must allow the generation of probabilistic predictive models to predict the system's operational performance and its dependence on different uncertainty conditions. In general, predictive models must consider system complexity, the diagnostics of system integrity, the monitoring of the diagnostics, the recovery from loss of integrity of every system component, and the quality of the used information. The adequate PDF must be the output of the probabilistic-predictive models (see also **Appendix A**).

Concept 6. The input for the probabilistic-predictive models must be based on real and other possible data (subjective data, metadata, etc.), considering the system operational specifications and the supporting actions. These may also be hypothetical data for research purposes.

Concept 7. The specific problems of optimization must be solved considering risk prediction results (including optimization in real time). The given time for prediction should be defined so as to fall within real system operation time, to allow taking rational proactive actions.

## **5. The description of some original probabilistic models**

For modelling modern and future systems, taking into account their specifications, it makes sense to distinguish between the intellectual part, where uncertainties are associated with information gathering, processing and production for decision-making, and the technical part, where there is no significant dependence on the high quality of the current information.

### **5.1 About system operational information quality**

The created models [6–8, 9–14, 28–31] help to implement concepts 1 and 2. In general, operational information quality is connected with the requirements for reliably and timely producing complete, valid and/or confidential information, if needed. The gathered information is used according to the specific system purpose. The abstract view of such quality is illustrated in **Figure 4**.

The proposed probabilistic predictive models to assess information quality are described in **Appendix A**. The models cover the predictive measures according to the abstract information quality metrics in **Figure 4**. They may be applied for solving problems connected with decision-making on the basis of information gathering, processing and production.

### **5.2 About "black box" formalization to predict "failure" risks**

The models below help to implement concepts 1, 3 and 4 [6, 9, 14–31]. In general, successful system operation is connected with counteractions against various system integrity loss hazards (of social, natural and technogenic origins) throughout the system operation timeline. Two general technologies formalized to predict "failure" risks are considered. Both technologies are briefly described below.

Technology 1 is based on a policy of periodic diagnostics of system integrity. It is carried out to detect system functional abnormalities or degradations that may result in a loss of system integrity. The loss of system integrity can be detected only as a result of diagnostics. A dangerous influence on the system acts logically step-by-step: first, a danger source penetrates the system, and then, after its activation, it begins to influence the system. System integrity cannot be lost before the penetrated danger source is activated. A danger is considered realized only after the danger source has influenced the system.

Notes: 1. For example, for new steel structures, the time before the appearance of critical erosion from rust can be considered as the source penetration time; the activation time is the time before unacceptable structural damage occurs due to this rust. 2. Regarding the degradation of a technical system, the time of danger source penetration tends to zero. 3. For special research cases of cyberattacks, the term "loss of integrity" may be logically replaced by the term "functional abnormalities".

Technology 2, in addition to technology 1, implies that system integrity is monitored between diagnostics by an operator. An operator may be a man or a special artificial

**Figure 4.**

*The example of abstract information quality in the system.*

**Figure 5.**

*Some accident events for technology 2; left—successful (correct) operation, right—a loss of integrity during the given time Treq.*

intelligence system, a support system, or their combination. The operator repairs the system after having detected a loss-of-integrity hazard—see **Figure 5**. Accordingly, under the model assumption of faultless operator action, the active hazard is fully neutralized. Penetration is only possible if the operator makes an error. A dangerous influence occurs if the danger is activated before the next diagnostic; otherwise, the source will be detected and neutralized during the next diagnostic.

The probability of successful operation within a given period of time, i.e. the probability of "success" (*P*), may be estimated using the models presented in **Appendix B**. The risk of losing integrity (*R*) is the complement to 1 of the probability of successful operation, i.e. *R* = 1 − *P*, considering consequences. Damage from the consequences for the given period is taken into account as an additional characteristic of the calculated probability.

### **5.3 Algorithm to generate probabilistic models for complex system**

The algorithm helps to implement concepts 1 and 5 for complex systems with parallel or serial structure [9–31], under the assumption of independence of the random variables. Let us consider the elementary structure of two independent parallel or serial elements. Let the PDF of the time between losses of the *i*-th element's integrity be *Bi*(*t*) = *P*(*τi* ≤ *t*); then the time between successive integrity losses is determined as follows:

1. for a system composed of serial independent elements, it is equal to the minimum of the two times *τi*: the failure of the 1st or the 2nd element. The PDF *Bsys*(*t*) is defined by the expression

$$B_{sys}(t) = 1 - [1 - B_1(t)] \cdot [1 - B_2(t)];\tag{1}$$

**Figure 6.**

*An example of a complex system integrating two serial complex structures, which also are complex subsystems (abstraction).*

2. for a system composed of parallel independent elements, it is equal to the maximum of the two times *τi*, i.e. the system goes into the state of integrity loss only when both elements lose integrity. The PDF *Bsys*(*t*) is

$$B_{sys}(t) = B_1(t) \cdot B_2(t).\tag{2}$$

Applying expressions (1) and (2), the PDF of the time interval between successive losses of integrity can be built for any complex system with a parallel and/or serial structure and their combinations. An example of a complex system integrating two serial complex subsystems is presented in **Figure 6**; see also Examples 2–4. For this system the following interpretation of elementary events is used: the complex system integrating the serial components "structure 1" and "structure 2" is in the state of "successful operation" during a given period *Treq* if during this period component "structure 1" "AND" component "structure 2" are in the state of "successful operation". Note that both components are in their turn complex subsystems, including subsystems and components as well.
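A minimal sketch of this composition algorithm, applying expressions (1) and (2) to a Figure 6 style structure (the element failure frequencies are hypothetical):

```python
import math
from typing import Callable

PDF = Callable[[float], float]  # B(t) = P(τ <= t)

def serial(b1: PDF, b2: PDF) -> PDF:
    """Expression (1): the system fails when either element fails (min of times)."""
    return lambda t: 1.0 - (1.0 - b1(t)) * (1.0 - b2(t))

def parallel(b1: PDF, b2: PDF) -> PDF:
    """Expression (2): the system fails only when both elements fail (max of times)."""
    return lambda t: b1(t) * b2(t)

def exp_pdf(lam: float) -> PDF:
    """Exponential approximation for a simple element (Concept 3)."""
    return lambda t: 1.0 - math.exp(-lam * t) if t > 0 else 0.0

# abstraction: two serial subsystems, each built from two parallel elements
structure_1 = parallel(exp_pdf(0.10), exp_pdf(0.20))
structure_2 = parallel(exp_pdf(0.05), exp_pdf(0.30))
system = serial(structure_1, structure_2)
risk_at_Treq = system(5.0)  # risk of integrity loss within Treq = 5 time units
```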

## **6. Risks prediction for standard processes**

### **6.1 About standard processes**

All actions in the timeline may be characterized as the performance of some system processes. The main system processes according to ISO/IEC/IEEE 15288 "Systems and software engineering—System life cycle processes" include 30 standard processes: agreement processes (acquisition and supply); organizational project-enabling processes (life cycle model management, infrastructure management, portfolio management, human resource management, quality management and knowledge management); technical management processes (project planning, project assessment and control, decision management, risk management, configuration management, information management, measurement and quality assurance); and technical processes (business or mission analysis, stakeholder needs and requirements definition, system requirements definition, architecture definition, design definition, system analysis, implementation, integration, verification, transition, validation, operation, maintenance and disposal).

The focus on standard processes is justified by the fact that the life cycle of any complex system is woven from a variety of standard processes deployed in time, for which possible purposes, outcomes and typical actions are defined. Considering that, for many critical systems, the potential damage and the costs of eliminating the consequences under heterogeneous threats can exceed the costs of preventive measures by an order of magnitude, it is necessary to find effective solutions to counter threats and to ensure effective risk management for each of the processes performed. Despite many works on risk management for different areas, the problems of this chapter continue to be relevant (of course, in practice newly developed processes may be considered, not only from the ISO/IEC/IEEE 15288 standpoint).

## **6.2 The example about input for probabilistic modelling**

The proposed practical way of forming the input helps to implement concept 6 for any monitored system (including real-time systems). For each critical parameter (for which prognostic estimations are needed to take proactive actions), ranges of acceptable conditions can be established. The traced conditions of the monitored parameters are data on a timeline. For example, the following ranges of possible condition values may be established for each traced critical parameter: "Working range within the norm", "Out of working range, but within the norm" and "Abnormality". If the parameter ranges of acceptable conditions are not established in explicit form, then for modelling purposes they may be implied and expressed in the form of average time values. These time values are

#### **Figure 7.**

*An example of universal ranges for data about events and conditions. Note. In general case, the ranges may be established by subjective mode if a reasonable and objective one is impossible.*

used as input for the probabilistic models (see **Appendices A** and **B**). For example, for a coal mine, some of the many dozens of heterogeneous parameters may be compression, temperature, etc. They may be interpreted similarly by the light signals "green", "yellow" and "red" [18, 25, 28–31]—see **Figure 7** and the following Example 1.
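The mapping of a monitored parameter to the universal ranges of **Figure 7** can be sketched as follows (the threshold values are assumptions for illustration only):

```python
def classify(value: float, working: tuple, norm: tuple) -> str:
    """Map a traced parameter condition to the "green"/"yellow"/"red" ranges:
    working = (lo, hi) of the working range within the norm,
    norm = (lo, hi) of the admissible norm."""
    if working[0] <= value <= working[1]:
        return "green"   # working range within the norm
    if norm[0] <= value <= norm[1]:
        return "yellow"  # out of working range, but within the norm
    return "red"         # abnormality

# hypothetical coal-mine temperature thresholds, degrees C
state = classify(27.0, working=(10.0, 25.0), norm=(5.0, 30.0))
```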

#### **6.3 The considerations**

For the estimation of the reliability of standard process performance, there are two cases for estimating the probabilistic measure: the case of observed repeatability and the case of assumed repeatability of the random events influencing reliability, without considering additional specific threats (for example, threats to operational information quality). For the estimation of the probabilistic measure, repeatability of threat activation is assumed. The assumption of independence between the events connected with the reliability of standard process performance and the activations of additional specific threats (for example, threats to information security) is also used.

#### **6.4 The case of the observed repeatability**

The inputs for the calculations use statistical data according to some observed repeatability. For a standard process, the reliability of process performance and the expected results in time are required. Failure to perform the necessary actions of the process is a threat of possible damage. From different points of view, all varieties of the standard process can be divided into *K* groups, *K* ≥ 1 (if necessary). Based on the use of statistical data, the probability *R*act *k*(*Tk*) of failure to perform the actions of the process for the *k*-th group within a given time *Tk* is proposed to be calculated by the formula:

$$R\_{\text{act }k}(T\_k) = G\_{\text{failure }k}(T\_k) / G\_k(T\_k),\tag{3}$$

where *G*failure *k*(*Tk*) and *Gk*(*Tk*) are, accordingly, the number of cases of failure to perform the necessary actions of the process and the total quantity of necessary actions from the *k*-th group to be performed within the given time *Tk*.

The probability *R*rel(*T*) of failure to perform the necessary actions of a standard process, without considering additional specific threat activations (for example, threats to operational information quality), is proposed to be estimated for the option when only those cases are taken into account for which the actions were not performed properly (they are the real cause of the failure):

$$R_{\text{rel}}(T) = 1 - \sum\_{k=1}^{K} W_k\,[1 - R_{\text{act }k}(T_k)]\, Ind(\alpha_k) \Big/ \sum\_{k=1}^{K} W_k,\tag{4}$$

where *T* is the specified total time for the process performance for the entire set of actions from the different groups, including all particular values *Tk* and taking into account their overlaps; *Wk* is the number of actions taken into account from the *k*-th group for multiple performances of the process.

For the *k*-th group, the requirement to perform the process actions is taken into account using the indicator function *Ind*(*αk*):

$$Ind(\alpha) = \begin{cases} 1, & \text{if the specified requirements and conditions are met, i.e. } \alpha \text{ is true,} \\ 0, & \text{otherwise, i.e. if the condition } \alpha \text{ is false.} \end{cases}\tag{5}$$

The condition *α* used in the indicator function is determined by the analysis of the different specific conditions proper to the process. It allows taking into account the consequences associated with the failure of the process—see (3) and (4). The condition *αk* means the set of conditions for all process actions required from the *k*-th group.
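Formulas (3)–(5) can be sketched together as follows (the group counts and weights are hypothetical):

```python
def r_act(failures: int, total: int) -> float:
    """Formula (3): observed probability of failing to perform process actions."""
    return failures / total

def ind(alpha: bool) -> int:
    """Formula (5): indicator of the requirements/conditions being met."""
    return 1 if alpha else 0

def r_rel(groups) -> float:
    """Formula (4): groups is a list of (W_k, R_act_k, alpha_k) tuples."""
    num = sum(w * (1.0 - r) * ind(alpha) for w, r, alpha in groups)
    den = sum(w for w, _, _ in groups)
    return 1.0 - num / den

# hypothetical observed data for K = 2 groups of process actions
groups = [(10, r_act(1, 50), True), (5, r_act(2, 40), True)]
reliability_risk = r_rel(groups)
```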

## **6.5 The case of the assumed repeatability**

The models from Section 5 and **Appendices A** and **B** may be recommended; they do not exhaust the whole set of possible probabilistic models.

## **6.6 About estimation of generalized measure**

The generalized probability *R*gener(*T*) of failure to perform a standard process, considering additional specific threats *R*add(*T*), for the given period *T* may be calculated by the formula:

$$R\_{\text{gener}}(T) = \mathbf{1} - [\mathbf{1} - R\_{\text{rel}}(T)] \cdot [\mathbf{1} - R\_{\text{add}}(T)].\tag{6}$$

Here the probabilistic measure *R*gener(*T*) of failure to perform a reliable process considering specific threats is estimated according to the propositions of Section 5, subsections 6.1–6.5 and **Appendices A** and **B**, considering the possible consequences.
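Formula (6), under the independence assumption stated above, can be sketched as (the risk values are hypothetical):

```python
def r_gener(r_rel: float, r_add: float) -> float:
    """Formula (6): generalized probability of process failure, combining the
    reliability risk with additional specific threats (independence assumed)."""
    return 1.0 - (1.0 - r_rel) * (1.0 - r_add)

# hypothetical: reliability risk 3%, information-security threat risk 2%
total_risk = r_gener(0.03, 0.02)
```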

### **6.7 Approach for risks integration from different processes**

The integral risk of violation of the acceptable performance of a set of standard processes is proposed to be evaluated depending on the real or hypothetical initial data characterizing each process (see subsections 6.1–6.4) and on the possible scenarios of using each process during the given prediction period. The prediction period can cover any period during the system life cycle, including at least one complete process of each of the types involved in the specified set of standard processes in the scenario under consideration. An example of the standard processes performed on the time axis is shown in **Figure 8**. In general, the scenario of using standard processes i1, i2, … , ik is proposed to be determined by the real or hypothetical frequency of these

**Figure 8.**

*An example of standard processes performed on the time axis.*

processes (taking into account the typical actions performed at the same time that affect the time characteristics of the implementation of the processes). This approach allows us to take into account situations when one process can be part of the work performed within other standard processes in the system life cycle and, if necessary, can include other processes.

The integral risk *R*Σ(*Tstated*) of violation of the acceptable performance of the set of standard processes for the given prediction period *Tstated* is proposed to be estimated by the formula

$$R_{\Sigma}(T_{stated}) = 1 - \sum\_{i=1}^{I} \lambda_i \{1 - R_i(T_{stated\ i}) \cdot Ind(\alpha_i)\} \Big/ \sum\_{i=1}^{I} \lambda_i,\tag{7}$$

where *λi* is the expected frequency of execution of standard processes of the *i*-th type during the prediction period. If the duration of the executed process can go beyond the prediction period (which depends on the actions performed and their time characteristics), this frequency can be a fractional number characterizing the number of executions of each type of process, greater than one;

*Tstated i* is the expected period, specified in the source data for modelling, for the acceptable execution of a standard process of type *i*;

*Tstated* is the given prediction period, which covers the duration of all the specified periods *Tstated i* of each of the standard processes involved in the scenario. The assumption about a partially completed process that starts at the end of the prediction period and does not finish (if the total number of processes of each type is greater than one) can be satisfied by setting a fractional value of *λi*.

At the same time, the criterion for meeting the requirements and conditions (*αi*) for each type of process, including the requirements for acceptable risks and damages, is set using the indicator function (5).

Note. The expression (6) is a special case of expression (7).
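A minimal sketch of formula (7), following its printed form, with hypothetical frequencies and per-process risks:

```python
def r_integral(processes) -> float:
    """Formula (7): processes is a list of (lambda_i, R_i, alpha_i), where
    lambda_i is the expected execution frequency, R_i the per-process risk for
    its stated period, and alpha_i the indicator condition of formula (5)."""
    num = sum(lam * (1.0 - r * (1 if alpha else 0)) for lam, r, alpha in processes)
    den = sum(lam for lam, _, _ in processes)
    return 1.0 - num / den

# hypothetical scenario: three process types over the prediction period
scenario = [(4.0, 0.02, True), (2.0, 0.05, True), (0.5, 0.10, True)]
integral_risk = r_integral(scenario)
```

When every indicator equals 1, the result reduces to the frequency-weighted average of the per-process risks, consistent with (6) being a special case.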

The models and methods proposed in Section 6 are applicable for solving practical problems related to risk prediction and to the justification of effective proactive measures to reduce risks or keep them within acceptable limits.

## **7. Optimization problem statements for rational proactive actions**

The proposed optimization problem statements for rational actions help to implement concept 7. The point is that the metrics calculated in Sections 5 and 6 and in the models from **Appendixes A** and **B** depend on many parameters. The values of some parameters may be given and often varied within the system life cycle. These values may be specified or selected to achieve pragmatic goals and to solve the different analytical problems of rational system concept, development, utilization and support (described in Section 2). They impact the values of the estimated probabilistic risks. This means that many optimization problems may be solved by rational proactive actions connected with providing rational values of these parameters. For example, such optimization parameters may be the duration of the prediction period, the parameters impacting information quality (see **Appendix A**), the system structure and, for compound components, the time between the end of one diagnostic and the beginning of the next and the diagnostic time (see **Appendix B**), etc.

The proposed concepts 2–6 may be supported by the following typical optimization problem statements for various systems [9, 14, 28–31]:

1. On the stages of system conceptual design, development, production and support: system parameters, software, technical and control measures (described by a set *Q* of parameters, which may be optimized) are the most rational for the given prediction period if the minimum of expenses *Z(Q)* can be reached

$$Z(\mathbf{Q}\_{rational}) = \min\_{\text{parameters of } \mathbf{Q}} Z(\mathbf{Q}), \tag{8}$$


2. On the utilization stage:

• System parameters, software, technical and control measures (*Q*) are the most rational for the given period of system operation if the maximum of the probability of successful operation can be reached

$$P\_{quality}(\mathbf{Q}\_{rational}) = \max\_{\text{parameters of } \mathbf{Q}} P\_{quality}(\mathbf{Q}),\tag{9}$$


• or the most rational if the minimum of the risk *R(Q)* can be reached

$$R(\mathbf{Q}\_{rational}) = \min\_{\text{parameters of } \mathbf{Q}} R(\mathbf{Q}), \tag{10}$$

a. with limitations on the quality *Pquality*(*Q*) ≥ *Padm* and expenses *C*(*Q*) ≤ *Cadm*, and under other operation or maintenance conditions; or


These statements may be transformed into other problem statements of expense minimization under different limitations.

In the system life cycle, there may be a combination of these formal statements. Note: other applicable variants of optimization are also possible.
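The problem statements (8)-(10) can be illustrated by a small search over candidate parameter values. The risk and cost models below are hypothetical stand-ins (the chapter's calibrated models live in the Appendices); only the structure of statement (10), risk minimization under an expense limitation, is being demonstrated.

```python
import math

def risk(t_betw, horizon=365.0, lam=0.01):
    # Hypothetical model: the chance that an undetected danger activates
    # within one diagnostic interval grows faster than linearly with the
    # interval length t_betw (days between diagnostics).
    p_interval = 1.0 - math.exp(-(lam * t_betw) ** 2)
    n = horizon / t_betw
    return 1.0 - (1.0 - p_interval) ** n

def cost(t_betw, horizon=365.0, cost_per_diag=10.0):
    return cost_per_diag * horizon / t_betw  # more diagnostics, more expense

def optimize(candidates, c_adm):
    """Statement (10): minimize risk over Q subject to C(Q) <= C_adm."""
    feasible = [q for q in candidates if cost(q) <= c_adm]
    return min(feasible, key=risk) if feasible else None

candidates = [1.0, 7.0, 30.0, 90.0]  # daily, weekly, monthly, quarterly checks
q_rational = optimize(candidates, c_adm=1000.0)  # weekly wins: daily is too costly
```

The same skeleton covers statements (8) and (9) by swapping the objective (expenses or success probability) and the limitation set.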

## **8. Examples**

The applications of the proposed approach cover: the analysis of the reliability of complex systems built from unreliable components; the estimation of the expected reliability and safety for complex constructions and intelligent manufacturing; the modelling of robotic and automated systems operating in cosmic space; the optimization of a centralized heat supply system; the analysis of the possibilities to keep "organism integrity" by continuous monitoring; the risk analysis during long-time grain storage; the control of timeliness, completeness and validity of used information; the comparison between information security processes in networks; resources management and quality prediction for information systems operation; the estimation of the human factor; the research of mutual monitoring of operators' actions for transport systems; rational dispatching of a sequence of heterogeneous repair works; the analysis of sea oil and gas systems' vulnerabilities in conditions of different threats; the development of recommendations to reduce risks for important land use planning (including the Arctic region); the rationale of preventive measures by using "smart systems"; etc.—see [9, 14–31]. Here the examples are intended to demonstrate some sectorial applications of probabilistic risk prediction.

## **8.1 Example 1 of predicting the mean residual time before the next parameter abnormality**

The example demonstrates the system's capabilities on the basis of solving the inverse problem with the models described in subsection 5.2 and **Appendix B**. The research results are applied to justify actions in real time for coal companies.

The conditions of the parameters traced by a dispatcher intelligence centre are data about the condition before and after the current moment of time. Scientifically justified predictions, however, always open new possibilities for risk prevention. Using the predicted residual time, the responsible staff (mechanics, technologists, engineers, etc.) can determine the admissible time for a rational change of the system's operational regime to prevent negative events after an expected parameter abnormality. For monitored critical parameters, a probabilistic approach to predicting the mean residual time before the next parameter abnormality is proposed.

For every monitored parameter of a subsystem (element), the ranges of possible values of conditions are established—see **Figures 7** and **9**. The condition "Abnormality" means system (element) integrity loss (it may simply mean "system failure", which also includes "functional failure"). Possible cross-border propagation of abnormalities is prevented through the prediction of the residual time on the basis of the data about parameter condition fluctuations. The information quality is also estimated and provided (by using the models from **Appendix A**).

The predicted residual time *Tresid* is the solution *t*0 of the following equation:

$$R\left(T\_{penetr}, t, T\_{betw}, T\_{diag}, T\_{req}\right) = R\_{adm}\left(T\_{req}\right) \tag{11}$$

concerning the unknown parameter t, i.e. Tresid = t0.

Here R(Tpenetr, t, Tbetw, Tdiag, Treq) is the risk to lose integrity, calculated according to the model of **Appendix B**. Tpenetr is the probabilistic expectation of the PDF Ωpenetr(τ), defined by the statistical parameter of the transition from "green" to "yellow"—see **Figures 7** and **9**. The other parameters Tbetw and Tdiag in (11) are known—see **Appendix B**. The example explains a method to rationally estimate the value of the prediction time Treq.
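Under the assumption that the risk is monotonically non-decreasing in the residual-time argument t, equation (11) can be solved by simple bisection. The closed-form risk curve below is a hypothetical stand-in for the Appendix B model, used only to make the sketch self-contained.

```python
import math

def residual_time(risk_fn, r_adm, t_lo=0.0, t_hi=1e4, tol=1e-6):
    """Solve risk_fn(t) = r_adm for t, as in equation (11); risk_fn must be
    monotonically non-decreasing on [t_lo, t_hi]."""
    if risk_fn(t_hi) < r_adm:
        return t_hi  # the risk stays admissible over the whole search range
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if risk_fn(mid) < r_adm:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

# Hypothetical monotone curve standing in for R(Tpenetr, t, Tbetw, Tdiag, Treq).
risk_fn = lambda t: 1.0 - math.exp(-t / 100.0)
t_resid = residual_time(risk_fn, r_adm=0.05)  # about 5.13 time units
```

Any monotone risk model evaluated numerically, including the Appendix B calculation, can be passed in as `risk_fn`.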

The properties of the function R(Tpenetr, t, Tbetw, Tdiag, Treq) are the following:


**Figure 9.** *Example of predicted residual time before the next parameter abnormality.*

The predicted mean residual time before the next abnormality is equal to "x" with the confidence level of the admissible risk R(Tpenetr, x, Tbetw, Tdiag, x). See details in [18].

The proposed ideas, probabilistic methods, models and justified normative requirements are implemented in Russia at the level of a national standard for system engineering—see, for example, GOST R 58494-2019 regarding the multifunctional safety systems of coal mines (in the part concerning a remote monitoring system for critical industrial systems).

## **8.2 Examples related to socio-economic and pharmaceutical safety in a region**

Examples 2–4 below demonstrate some analytical capabilities of the proposed approach for the infrastructure management process related to socio-economic and pharmaceutical safety in a region of Russia. It concerns some problems in the creation and application of an enterprise (S-N-T)-system manufacturing pharmaceuticals, denoted further as (S-N-T)-ESMP. Let the purposes of the (S-N-T)-ESMP be to solve the following tasks:


In relation to the mentioned tasks, which allow achieving the demonstration goals of the examples, the application of the methodological approach illustrates prediction at the probability level of: the risk of failure to reliably perform the system infrastructure management process without consideration of specific abnormal impacts (see Example 2); the risk of unacceptable damage because of abnormal impacts (see Example 3); the integral risk of failure to reliably perform the system infrastructure management process considering specific abnormal impacts (see Example 4). Assuming the commensurability of possible damages, a system analysis using probabilistic risk measures is carried out in the examples.

*Probabilistic Predictive Modelling for Complex System Risk Assessments DOI: http://dx.doi.org/10.5772/intechopen.106869*

**Figure 10.**

*The abstract complex structure for modelling (example 2).*

Taking into account possible damages, the objectives of risk prediction are formulated as follows. In the conditions of existing uncertainty, carry out: quantitative prediction of the risks of failure to reliably perform the system infrastructure management process without consideration of specific abnormal impacts; quantitative prediction of the risks of unacceptable damage because of abnormal impacts on the (S-N-T)-ESMP (both piecemeal for each type of infrastructure task and for the entire set of tasks); identification of critical factors affecting risks; determination of the period within which guarantees of risk retention within admissible limits are maintained; and quantitative prediction of the integral risk of failure to reliably perform the system infrastructure management process considering specific abnormal impacts.

Example 2. Here, the infrastructure model without consideration of specific abnormal impacts is focused on a set of output results and assets for solving tasks of the 1st, 2nd and 3rd types—see the system structure in **Figure 10**. The following interpretation is applicable: during the given prediction period, the modeled complex structure is in the elementary state "the integrity of the infrastructure is maintained" if the implementation of the system infrastructure management process is reliable for solving the tasks "AND" for socio-economic, "AND" for production and transport, "AND" for information and communication infrastructure. Many possible threats affecting the reliability of output results for each of the structural elements have been identified. Without delving into the numerous technical aspects of setting and solving the tasks of developing socio-economic, production and transport, and information and communication infrastructure in a region, **Table 2** reflects hypothetical averaged input data for research by the models (see Sections 5 and 6 and **Appendix B**, considering application of the **Appendix A** models).


#### **Table 2.**

*Input for probabilistic modelling (example 2).*

For modelling, a period from 1 to 4 years was chosen because it is a typical period for short- and medium-term plans of an infrastructure project. Analysis of the calculation results showed that, in probabilistic terms, the risk of failure to reliably perform the system infrastructure management process without consideration of specific abnormal impacts for 2 years will be 0.282 in total for all elements (see **Figure 11**). In turn, for 1 year the risk will not fall below 0.150 (see **Figure 12**), and for 4 years with weekly diagnostics the probabilities of "success" and "failure" will be almost equal (0.51 vs. 0.49). In practice, such a level of risk is inadmissible, i.e. it is necessary to identify the critical factors (affecting risks) and effective ways to reduce risks.

Additional calculations have shown that one of the critical factors is the parameter "time between the end of diagnostics and the beginning of the next diagnostics" (which can also be called "Mean Time Before Diagnosis, MTBD", because the "diagnostics time of element integrity" is much less—see **Table 2**) for the 1st and 2nd elements (Tbetw). Due to a management decision, expressed in changing the frequency of diagnostics from weekly to daily, and the adoption of appropriate measures to counter threats, with other conditions unchanged, it is possible to reduce risks several times. It is enough to compare the risks in **Figures 11** and **13**: about a 2.1-fold reduction in risk has been achieved in total for all elements. That is, due to the most simply implemented

**Figure 11.**

*The risks of failure to reliable perform system infrastructure management process without consideration of specific abnormal impacts on elements 1–3 for 2 years (for weekly diagnostics).*

#### **Figure 12.**

*The dependence of total risk of failure to reliable perform system infrastructure management process without consideration of specific abnormal impacts from duration of prediction period (for weekly diagnostics).*


#### **Figure 13.**

*The risks of failure to reliable perform system infrastructure management process without consideration of specific abnormal impacts on elements 1–3 for 2 years (for daily diagnostics).*

#### **Figure 14.**

*The dependence of total risk of failure to reliable perform system infrastructure management process without consideration of specific abnormal impacts from duration of prediction period (for daily diagnostics).*

organizational measures related to the introduction of more frequent diagnostics of work on the development of socio-economic and production and transport infrastructure, a significant risk reduction is achievable. This finding is a result of the models used. Despite the high value of this logical finding under the example conditions, frequent diagnostics generates higher running costs and a lower service supply capacity: diagnostics costs money and time. This should be considered in other optimization problems (see Section 7).

For 1 year, the risk of failure of the infrastructure management process without considering specific abnormal impacts will be about 0.08 (see **Figure 14**). In turn, as a result of the analysis of the dependence of risk on the prediction period (from 1 to 4 years), it is additionally revealed that, under the conditions of the example with daily diagnostics, the risk level of 0.10 will not be exceeded for 1.3 years. Accordingly, for the infrastructure management process during development, focusing on an admissible risk at the level of 0.10 guarantees, in the conditions of the example, risk prevention within the admissible limits for about 1.3 years. Recommended measures to reduce risks are to increase the stability of the mechanical properties of the critical areas of structures, to carry out preventive and repair maintenance in a timely manner, to perform statistical analysis of emergency situations, and to predict critical unacceptable values of the parameters inherent in the unacceptable risks.
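A conclusion of the "risk of 0.10 is not exceeded for 1.3 years" kind can be reproduced mechanically: given sampled points of an increasing total-risk curve (as in Figure 12 or 14), the guaranteed period for an admissible risk level follows by linear interpolation. The sample values below are hypothetical, not read off the chapter's figures.

```python
def guaranteed_period(periods, risks, r_adm):
    """Interpolate a sampled, increasing risk curve linearly to find the
    longest prediction period whose risk does not exceed r_adm.
    Returns None if even the shortest sampled period exceeds r_adm."""
    points = list(zip(periods, risks))
    for (t0, r0), (t1, r1) in zip(points, points[1:]):
        if r0 <= r_adm <= r1:
            return t0 + (t1 - t0) * (r_adm - r0) / (r1 - r0)
    return periods[-1] if risks[-1] <= r_adm else None

# Hypothetical yearly samples in the spirit of Figure 14 (daily diagnostics).
periods = [1.0, 2.0, 3.0, 4.0]
risks = [0.08, 0.14, 0.21, 0.28]
print(guaranteed_period(periods, risks, 0.10))  # about 1.33 years
```

In practice the samples would come from the Appendix B risk model evaluated on a grid of prediction periods.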


Example 3. In contrast to Example 2, the model of specific abnormal impacts covers a set of actions related to the maintenance of buildings and constructions (element 1), ensuring the operation of engineering and technical systems (element 2), ensuring the operation of engineering networks (element 3), and solving the development problems of socio-economic infrastructure (element 4), production and transport infrastructure (element 5) and information and communication infrastructure (element 6)—see the model in **Figure 15**. The following interpretation is applicable: during the given prediction period, the modeled complex structure is in the elementary state "the integrity of the system in case of abnormal impacts is maintained" if all the system elements taken into account during the entire prediction period are in the state "the integrity of the system element in case of specific abnormal impacts is maintained".

**Figure 15.** *The abstract complex structure for modelling specific abnormal impacts.*


#### **Table 3.**

*Input for probabilistic modelling (example 3).*

Without delving into the numerous technical, engineering and functional aspects of the (S-N-T)-ESMP, **Table 3** reflects hypothetical averaged input data for research by the models described in Sections 5–7, **Appendices A** and **B** and [1–7]. Input values for element 1 consider additional factors leading to degradation and destruction of techno-sphere systems (seismic, wind, snow, corrosion and other natural impacts). For element 6, proper impacts may come from the "human factor" and/or from "cyber threats". For elements 2–5, the input values have the usual explanation.

Analysis of the results showed that, in probabilistic terms, the risk of unacceptable damage due to specific abnormal impacts for 2 years will be about 0.219 in total for all elements (see **Figure 16**). In turn, for the prediction for 4 years with daily monitoring of the state of the engineering infrastructure of the (S-N-T)-ESMP (i.e. elements 1, 2, 3), the risk of unacceptable damage from specific impacts for all elements 1–6 will be about 0.39, and for the prediction for 1 year this probability is about 0.12 (see **Figure 17**). In general, the results are comparable with the results of Example 2 (**Figures 18** and **19**).

Moreover, due to the management decision expressed in changing the frequency of diagnostics from daily to once every 8 hours, it is possible to reduce the total risk from 0.219 to 0.091 (see **Figure 18**). Owing to diagnostics every 8 hours, the admissible risk level of 0.10 will not be exceeded for about 2.3 years (see **Figure 19**).

Example 4. In continuation of Examples 2 and 3, the integral probability *R*Γ(*T*) of failure of the infrastructure management process considering specific system abnormal impacts for the prediction period *T* = 1 year is calculated using the recommendations of Section 6. It depends on the probabilities *R*rel(*T*) and *R*add(*T*)—see formula (6). Considering that *R*rel(1 year) = 0.08 and *R*add(1 year) = 0.05,

#### **Figure 16.**

*The risks of unacceptable damage because of specific abnormal impacts on elements 1–6 for 2 years (for daily diagnostics).*

#### **Figure 17.**

*The dependence of total risk of unacceptable damage because of specific abnormal impacts on elements 1–6 from duration of prediction period (for daily diagnostics).*

**Figure 18.**

*The risks of unacceptable damage because of abnormal impacts on elements 1–6 for 2 years (for diagnostics every 8 hours).*

$$R\_{\Gamma}(1\ \text{year}) = 1 - (1 - 0.08) \cdot (1 - 0.05) \approx 0.126$$

Interpretation: the integral risk for the prediction period of 1 year is about 0.126 considering possible damage. In general, such a risk is considered elevated. It can be considered acceptable only in exceptional cases when there are no real possibilities of any counteraction to threats. As measures to improve the process, additional control systems for damaging natural factors, emergency protection systems for


#### **Figure 19.**

*The dependence of total risk of unacceptable damage because of abnormal impacts on elements 1–6 from duration of prediction period (for diagnostics every 8 hours).*

techno-sphere systems, operators and personnel under extreme natural hazards, and measures to increase safety against specific system threats (the sources of specific abnormal impacts) can be used. Since such opportunities are far from being exhausted, an additional search for measures to reduce the integral risk is necessary. Decision-making on ways to reduce risks may be quantitatively justified using the proposed models and methods.
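The integral-risk combination used in Example 4, formula (6) generalized to any number of independent loss mechanisms, can be sketched as follows; the two input risks are the Example 4 values.

```python
def combine_risks(component_risks):
    """Integral risk of several independent loss mechanisms, as in formula (6):
    R = 1 - prod(1 - R_k), i.e. success requires every mechanism to be avoided."""
    p_success = 1.0
    for r in component_risks:
        p_success *= (1.0 - r)
    return 1.0 - p_success

# Example 4 data: R_rel(1 year) = 0.08, R_add(1 year) = 0.05.
print(round(combine_risks([0.08, 0.05]), 3))  # 0.126
```

The independence assumption mirrors the multiplicative form of formula (6); correlated threats would require a joint model instead.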

## **8.3 What about the possible pragmatic effects?**

In general, pragmatic effects are connected with achieving pragmatic goals (see Section 2). They may characterize the efficiency of the implementation of a state and/or corporate strategy in the economy, the effects of improving the safety and sustainability of a region's development, of ensuring the protection of the population and territories from natural and man-made hazards, etc. For example, the authors of this chapter took part in the creation of the Complex for supporting technogenic safety in the systems of oil and gas transportation and distribution, and have been awarded for it by the award of the Government of the Russian Federation in the field of science and technology. Through the Complex's intellectual means and remote-sensing technology, it is possible to detect vibrations, fire, flood, unauthorized access and hurricanes; to recognize, identify and predict the development of extremely hazardous situations; and to make decisions in real time. The applications of this Complex to 200 objects in several regions of Russia over a period of 5 years have already provided an economy of about 8.5 billion roubles (reached at the expense of effective risk prediction and process optimization [7]).

## **9. About directions for development**

It is proposed to focus scientific and technical efforts on the meta-level of system engineering, which allows, by means of universal probabilistic models, setting and analytically solving the problems of rational development and efficient operation of complex systems of various functionalities and purposes.

The proposed prioritization of development directions for prediction is: (1) focusing scientific and technical efforts on achieving the goals of ensuring the required safety, quality, balanced effects, sustainable operation and development of complex systems; (2) providing capabilities for predicting and rationally managing risks in standard processes of the system life cycle, improving and accumulating knowledge, and discovering patterns; (3) expanding the functionality of the created models, methods, software, technological and methodological solutions (for predicting and rationally managing risks) to all spheres of human activity, with cross-application of knowledge bases; (4) transforming the existing approach to the creation and use of models and methods into an artificial intelligence technology to support logical decision-making (based on proactive research with traceability of logic from the idea to the achieved effect).

The proposed steps to implement these directions are: 1st step: from pragmatic filtering of information → to promising ideas and purposeful conceptions; 2nd step: from promising ideas and purposeful conceptions → to the formalization of uncertainties; 3rd step: from the formalization of uncertainties → to the knowledge of patterns and logical solutions; 4th step: from the knowledge of patterns and logical solutions → to rational risk management; 5th step: from rational risk management → to achieving the required safety, quality, balanced effects and sustainable operation and development.

The expected results will be equally understood at the level of probabilistic risk predictions, identically interpreted, and comparable; the traceability of the effectiveness of scientific and technical system efforts from the conceptions to the results obtained will also be ensured. The purposeful aspiration "From uncertainties formalization to sustainable harmony" (see Section 1) may be really supported.

## **10. Conclusion**

On the basis of generalizations of goals and objectives throughout the system's life cycle and of existing approaches to risk prediction, the main goals of applying probabilistic methods are proposed. The goals of the probabilistic concepts of risk prediction are connected with: an analysis of opportunities, achievable quality, safety and efficiency; a rationale for achieving desirable characteristics and acceptable conditions; an optimization of systems and processes; and finding and developing new ideas and concepts.

The enlarged classification of probabilistic methods for solving various objectives is explained for all stages of the system life cycle: concept, development, production, utilization, support and retirement.

The conceptual approach proposed for risk prediction covers the social sphere (person, society, state and world community), the natural sphere (Earth and space) and the techno-sphere (man-made infrastructures and life support facilities).

The essence of the proposed probabilistic concepts of risk prediction for a system is described at the level of the probability distribution function. The described methods of risk prediction for complex systems include probabilistic models, methods for risk prediction and integration, optimization methods for rational actions, and examples of solving the problems of system analysis and rational proactive actions in uncertain conditions. The achievable practical effects are explained.

The prioritization of development directions for risk prediction in standard system processes and targeted steps for their implementation are proposed. They support the purposeful aspiration "From uncertainties formalization to sustainable harmony" in application to the life cycle of various systems.

## **Appendix A. The recommended models to predict information system operation quality**

The proposed models are presented in **Table A.1**.

## **Appendix B. The models to predict risks for "Black box"**

B.1. The model for technology 1 ("Black box")—see 5.2, [9, 14, 15]. Input:

*Ωpenetr*(*t*) is the PDF of the time between neighboring influences for penetrating a danger source;

*Ωactiv*(*t*) is the PDF of the activation time up to an "accident event";

*Tbetw* is the time between the end of one diagnostic and the beginning of the next diagnostic;

*Tdiag* is the diagnostic time.

Evaluated measures: risk to lose system integrity (*R*), probability of providing system integrity (*P*); *R* = 1 − *P* considering consequences.

Variant 1—*Treq* < *Tbetw* + *Tdiag*:

$$P\_{(1)}\left(T\_{req}\right) = 1 - \Omega\_{penetr} \* \Omega\_{activ}\left(T\_{req}\right). \tag{12}$$

Variant 2—the assigned period *Treq* is more than or equal to the established period between neighboring diagnostics, *Treq* ≥ *Tbetw* + *Tdiag*:

measure a)

$$P\_{(2)}(T\_{req}) = \frac{N\left(T\_{betw} + T\_{diag}\right)}{T\_{req}} \cdot P\_{(1)}^{N}\left(T\_{betw} + T\_{diag}\right) + \frac{T\_{rmn}}{T\_{req}} \cdot P\_{(1)}(T\_{rmn}),\tag{13}$$

where *N* = ⌊*Treq* / (*Tbetw* + *Tdiag*)⌋ is the integer part and

$$T\_{rmn} = T\_{req} - N\left(T\_{betw} + T\_{diag}\right);$$

measure b)

$$P\_{(2)}\left(T\_{req}\right) = P\_{(1)}^{N}\left(T\_{betw} + T\_{diag}\right) \cdot P\_{(1)}(T\_{rmn}),\tag{14}$$

where the probability of success within the given time, *P*(1)(*Treq*), is defined by (12).

B.2. The model for technology 2 ("Black box")—see 5.2, [9, 14, 15]. Input:

In addition to the input for technology 1: *A*(*t*) is the PDF of the time from the last finish of diagnostic time up to the first operator error.

Evaluated measures:

Risk to lose system integrity (*R*). Probability of providing system integrity (*P*). *R* = 1 − *P* considering consequences.






**Table A.1.**

*The proposed models (for details, see [7–9, 14, 15]).*


For variant 1, *Treq* < *Tbetw* + *Tdiag*: see (A.11). For variant 2, *Treq* ≥ *Tbetw* + *Tdiag*: see (A.12), (A.13), and similarly (B.2) and (B.3). Evaluated measures: risk to lose system integrity (*R*), probability of providing system integrity (*P*).
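As a hedged numerical sketch of the technology-1 model, the code below evaluates P(1) by formula (12), computing the convolution Ω_penetr ∗ Ω_activ by simple quadrature, and P(2) by measure b), formula (14). The exponential PDFs and their rate parameters are illustrative assumptions, not the chapter's calibrated inputs.

```python
import math

def cdf_sum_expon(t, lam_p, lam_a, steps=2000):
    """CDF of the sum of penetration and activation times,
    (Omega_penetr * Omega_activ)(t), by midpoint-rule quadrature;
    exponential PDFs with rates lam_p, lam_a are an assumption."""
    if t <= 0.0:
        return 0.0
    h = t / steps
    acc = 0.0
    for k in range(steps):
        tau = (k + 0.5) * h
        f_p = lam_p * math.exp(-lam_p * tau)        # penetration density
        F_a = 1.0 - math.exp(-lam_a * (t - tau))    # activation CDF
        acc += f_p * F_a * h
    return acc

def p1(t_req, lam_p, lam_a):
    """Formula (12): probability of providing integrity, variant 1."""
    return 1.0 - cdf_sum_expon(t_req, lam_p, lam_a)

def p2_measure_b(t_req, t_betw, t_diag, lam_p, lam_a):
    """Formula (14), measure b): N whole diagnostic cycles plus a remainder."""
    cycle = t_betw + t_diag
    n = int(t_req // cycle)
    t_rmn = t_req - n * cycle
    return p1(cycle, lam_p, lam_a) ** n * p1(t_rmn, lam_p, lam_a)
```

The risk measure then follows as R = 1 − P, and any other penetration or activation PDFs can be substituted in the quadrature.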

## **Acknowledgements**

The authors would like to thank the Russian Academy of Sciences for supporting the work presented in this Chapter.

The authors are much obliged to Dr. Mohamed Eid, President of ESReDA (European Safety, Reliability & Data Association), for meaningful advice regarding this chapter.

The authors also thank Ms. Olga Kostogryzova for the artistic design of some figures.

## **Author details**

Andrey Kostogryzov<sup>1</sup> \*, Nikolay Makhutov<sup>2</sup> , Andrey Nistratov<sup>1</sup> and Georgy Reznikov<sup>3</sup>

1 Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, Moscow, Russia

2 The A.A. Blagonravov Institute for Machine Sciences of the Russian Academy of Sciences, Moscow, Russia

3 Company "Regional Engineering Consulting Firm" Ltd., Samara, Russia

\*Address all correspondence to: akostogr@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Feller W. An Introduction to Probability Theory and Its Applications. New York: John Wiley & Sons; 1971;**2**

[2] Martin J. System Analysis for Data Transmission. Englewood Cliffs; New Jersey: Prentice Hall, Inc; 1972

[3] Gnedenko BV et al. Priority Queueing Systems. Moscow: MSU; 1973

[4] Kleinrock L. Queueing Systems, V.2: Computer Applications. New York: John Wiley & Sons; 1976

[5] Matweev VF, Ushakov VG. Queuing Systems. Moscow: MSU; 1984

[6] Kostogryzov AI, Petuhov AV, Scherbina AM. Foundations of Evaluation, Providing and Increasing Output Information Quality for Automatized System. Moscow: Armament. Policy. Conversion; 1994

[7] Security of Russia. Legal, Social& Economic and Scientific&Engineering Aspects. The Scientific Foundations of Technogenic Safety. Under the editorship of Makhutov N.A. Moscow: Znanie. Volumes 1–63; 1998-2021

[8] Kostogryzov AI. Software Tools Complex for Evaluation of Information Systems Operation Quality (CEISOQ). In: Proceedings of the 34-th Annual Event of the Government Electronics and Information Association (GEIA), Engineering and Technical Management Symposium. Dallas; 2000. pp. 63-70

[9] Kostogryzov A, Nistratov G. Standardization, Probabilistic Modelling, Rational Management and Certification in the Field of System and Software Engineering (80 Standards, 100 Probabilistic Models, 35 Software Tools, More than 50 Practical Examples). Moscow: Armament. Policy. Conversion; 2005

[10] Zio En. An Introduction to the Basics of Reliability and Risk Analysis. Singapore: World Scientific Publishing Co.Pte. Ltd; 2006

[11] Makhutov NA. Strength and Safety. Fundamental and Applied Research. Novosibirsk: Nauka; 2008

[12] Kolowrocki K, Soszynska-Budny J. Reliability and Safety of Complex Technical Systems and Processes. London: Springer-Verlag; 2011

[13] Eid M, Rosato V. Critical Infrastructure Disruption Scenarios Analyses via Simulation. Managing the Complexity of Critical Infrastructures. A Modelling and Simulation Approach. New York: Springer Open; 2016:43-62

[14] Kostogryzov AI, Stepanov PV. Innovative Management of Quality and Risks in Systems Life Cycle. Moscow: Armament. Policy. Conversion; 2008

[15] Kostogryzov A, Nistratov G, Nistratov A. Some Applicable Methods to Analyze and Optimize System Processes in Quality Management. Total Quality Management and Six Sigma. Rijeka, Croatia: InTech; 2012:127-196

[16] Kostogryzov A, Nistratov G, Nistratov A. The innovative probability models and software technologies of risks prediction for systems operating in various fields. International Journal of Engineering and Innovative Technology (IJEIT). 2013;**3**(3):146-155

[17] Grigoriev L, Guseinov C, Kershenbaum V, Kostogryzov A. The methodological approach, based on the risks analysis and optimization, to research variants for developing hydrocarbon deposits of Arctic regions.

Journal of Polish Safety and Reliability Association. 2014;**5**:1-2

[18] Artemyev V, Kostogryzov A, Rudenko J, Kurpatov O, Nistratov G, Nistratov A. Probabilistic methods of estimating the mean residual time before the next parameters abnormalities for monitored critical systems. In: Proceedings of the 2nd International Conference on System Reliability and Safety (ICSRS). Milan, Italy; 2017. pp. 368-373

[19] Kostogryzov A, Stepanov P, Nistratov A, Atakishchev O. About Probabilistic Risks Analysis During Longtime Grain Storage. Proceedings of the 2nd International Conference on the Social Science and Teaching Research (ACSS-SSTR), Volume 18 of Advances in Social and Behavioral Science. Singapore: Singapore Management and Sports Science Institute, PTE.Ltd; 2017: 3-8

[20] Kostogryzov A, Stepanov P, Nistratov A, Nistratov G, Klimov S, Grigoriev L. The method of rational dispatching a sequence of heterogeneous repair works. Energetica. 2017;**63**(4):154-162

[21] Kostogryzov A, Stepanov P, Grigoriev L, Atakishchev O, Nistratov A, Nistratov G. Improvement of existing risks control concept for complex systems by the automatic combination and generation of probabilistic models and forming the storehouse of risks predictions knowledge. In: Proceedings of the 2nd International Conference on Applied Mathematics, Simulation and Modelling (AMSM). Phuket, Thailand: DEStech Publications; 2017. pp. 279-283

[22] Kostogryzov A, Panov V, Stepanov P, Grigoriev L, Nistratov A, Nistratov G. Optimization of sequence of performing heterogeneous repair work for transport systems by criteria of timeliness. In: Proceedings of the 4th International Conference on Transportation Information and Safety (ICTIS). Canada; 2017. pp. 872-876

[23] Kostogryzov A, Grigoriev L, Golovin S, Nistratov A, Nistratov G, Klimov S. Probabilistic modeling of robotic and automated systems operating in cosmic space. In: Proceedings of the International Conference on Communication, Network and Artificial Intelligence (CNAI). Beijing, China: DEStech Publications; 2018. pp. 298-303

[24] Kostogryzov A, Grigoriev L, Kanygin P, Golovin S, Nistratov A, Nistratov G. The experience of probabilistic modeling and optimization of a centralized heat supply system which is an object for modernization. In: International Conference on Physics, Computing and Probabilistic Modeling (PCMM). Shanghai: DEStech Publications, Inc; 2018. pp. 93-97

[25] Artemyev V, Rudenko J, Nistratov G. Probabilistic methods and technologies of risks prediction and rationale of preventive measures by using "smart systems": applications to coal branch for increasing industrial safety of enterprises. In: Probabilistic Modeling in System Engineering. Chapter 2. IntechOpen; 2018. pp. 23-51

[26] Kershenbaum V, Grigoriev L, Nistratov A. Probabilistic modeling processes for oil and gas systems. In: Probabilistic Modeling in System Engineering. Chapter 3. IntechOpen; 2018. pp. 55-79

[27] Kostogryzov A, Nistratov A, Nistratov G, Atakishchev O, Golovin S, Grigoriev L. The probabilistic analysis of the possibilities to keep "organism integrity" by continuous monitoring. In: Proceedings of the International Conference on Mathematics, Modelling, Simulation and Algorithms (MMSA). Chengdu, China: Atlantis Press; 2018. pp. 432-435

[28] Kostogryzov A, Korolev V. Probabilistic methods for cognitive solving problems of artificial intelligence systems operating in specific conditions of uncertainties. In: Probability, Combinatorics and Control. Chapter 1. IntechOpen; 2020. pp. 3-34

[29] Kostogryzov A, Kanygin P, Nistratov A. Probabilistic comparisons of systems operation quality for uncertainty conditions. Reliability: Theory & Applications (RT&A). 2020;**15**:63-73

[30] Kostogryzov A, Nistratov A, Nistratov G. Analytical risks prediction. Rationale of system preventive measures for solving quality and safety problems. In: Sukhomlin V, Zubareva E, editors. Modern Information Technology and IT Education. SITITO 2018. Communications in Computer and Information Science. Cham: Springer; 2020. pp. 352-364. DOI: 10.1007/978-3-030-46895-8_27

[31] Kostogryzov A, Nistratov A. Probabilistic methods of risk predictions and their pragmatic applications in life cycle of complex systems. In: Safety and Reliability of Systems and Processes. Poland: Gdynia Maritime University; 2020. pp. 153-174

[32] Moskvichev VV, Makhutov NA, Shokin YM, Gadenin MM. Applied Problems of Structural Strength and Mechanics of Destruction of Technical Systems. Novosibirsk: Nauka; 2021

[33] Gneiting T, Balabdaoui F, Raftery AE. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society, Series B (Statistical Methodology). 2007;**69**(2):243-268

[34] Kostogryzov A, Nistratov A, Zubarev I, Stepanov P, Grigoriev L. About accuracy of risks prediction and importance of increasing adequacy of used probabilistic models. Journal of Polish Safety and Reliability Association. Summer Safety and Reliability Seminars. 2015;**6**:71-80

