**Part 2**

**New Systems Engineering Theories**

## **Usage of Process Capability Indices During Development Cycle of Mobile Radio Product**

Marko E. Leinonen, *Nokia Oyj, Finland*

#### **1. Introduction**

Mobile communication devices have become a basic need for people today. Mobile devices are used by everyone, regardless of race, age or nationality. As a result, the total number of mobile communication devices sold worldwide during 2010 was almost 1.6 billion units (Gartner Inc., 2011). Manufacturability and the level of quality of devices need to be taken into account at the early stages of design in order to enable a high volume of production with a high yield.

It is a common and cross-functional task for each area of technology to build the required level of quality into the end product. For effective communication between parties, a common quality language is needed and process capability indices are widely used for this purpose.

The basis for the quality is designed into the device during the system specification phase and it is mainly implemented during the product implementation phase. The quality level is designed in by specifying design parameters, target levels for the parameters and the appropriate specification limits for each parameter. The quality level of the end product during the manufacturing phase needs to be estimated with a limited number of measurement results from prototype devices during the product development phase. Statistical methods are used for this estimation purpose. A prototype production may be considered to be a short-term production compared to the life cycle of the end product, and the long-term process performance is estimated based on the short-term production data. Even though statistical process control (SPC) methods are widely used in high volume production, the production process may vary within the statistical control limits without being out of control, which leads to product-to-product variation in product parameters.

Easy-to-use statistical process models are needed to model long-term process performance during the research and development (R&D) phase of the device. Higher quality levels for the end product may be expected if the long-term variation of the manufacturing process can be taken into account more easily during the specification phase of the product's parameters.

#### **2. Product development process**

An overview of a product development process is shown in Figure 1 (based on Leinonen, 2002). The required characteristics of a device may be defined based on market and competitor analyses. A product definition phase is a cross-functional task where marketing, the quality department and the technology areas together define and specify the main functions and target quality levels for features of the device. A product design phase includes system engineering and the actual product development of the device. The main parameters for each area of technology, as well as the specification limits for them, are defined during the system engineering phase. The specification limits may be 'hard' limits which cannot be changed from design to design, for example governmental rulings (e.g., Federal Communications Commission, FCC) or standardisation requirements (e.g., 3GPP specifications), or 'soft' limits, which may be defined by the system engineering team.

Fig. 1. An overview of a product development process

The main decisions for the quality level of the end product are done during the system engineering and product design phases. Product testing is a supporting function which ensures that the selections and implementations have been done correctly during the implementation phase of the development process. The quality level of the end product needs to be estimated based on the test results prior to the design review phase, where the maturity and the quality of the product is reviewed prior the mass production phase. New design and prototype rounds are needed until the estimated quality level reaches the required level. Statistical measures are tracked and stored during the manufacturing phase of the product and those measures are used as a feedback and as an input for the next product development.

#### **2.1 Process capability indices during the product development**

An origin of process capability indices is in the manufacturing industry, where the performance of manufacturing has been observed with time series plots and statistical process control charts since the 1930s. The control charts are useful for controlling and monitoring production, but for the management level the raw control data is too detailed and thus a simpler metric is needed. Process capability indices were developed for this purpose and the first metric was introduced in the early 1970s. Since then, numerous process capability indices have been presented for univariate (more than twenty) and multivariate (about ten) purposes (Kotz & Johnson, 2002). The most commonly used process capability indices are still *C*p and *C*pk, which are widely used within the automotive, electrical component, telecommunication and mobile device industries. An overview of the use of process capability indices for quality improvement during the manufacturing process (based on Albing, 2008; Breyfogle, 1999) is presented in Figure 2.

Fig. 2. An improvement process for production-related parameters

The usage of process capability indices has been extended from the manufacturing industry to the product development phase, where the improvement of the quality level during product development needs to be monitored, and process capability indices are used for this purpose. The main product development phases where process capability indices are used are shown in Figure 3.

Fig. 3. Product development steps where process capability indices are actively used

An advantage of process capability indices is that they are unitless, which provides the possibility of comparing the quality levels of different technology areas to each other during the development phase of the mobile device; for example, the mechanical properties of the device may be compared to radio performance parameters. Additionally, process capability indices are used as a metric for quality level improvement during the development process of the device. The following are examples of how process capability indices may be used during the product development phase:

- A robustness indicator of design during the R&D phase and product testing
- A decision-making tool of the quality level during design reviews
- A process capability indicator during the mass production phase
- A tool to follow the production quality for quality assurance purposes



#### **2.2 RF system engineering during the product development**

RF (Radio Frequency) engineering develops circuitries which are used for wireless communication purposes. RF system engineering is responsible for selecting the appropriate RF architectures and defining the functional blocks for RF implementations. System engineering is responsible for deriving the block level requirements of each RF block based on specific wireless system requirements, e.g., GSM or WCDMA standards and regulatory requirements such as FCC requirements for unwanted radio frequency transmissions.

RF system level studies include RF performance analyses with typical component values as well as statistical analyses with minimum and maximum values of components. The statistical analyses may be done with statistical software packages or with RF simulators in order to optimise performance and select the optimal typical values of components for a maximal quality level. RF block level analyses with process capability indices are studied in Leinonen (1996) and a design optimisation with process capability contour plots and process capability indices in Wizmuller (1998). Most of the studied RF parameters are one-dimensional parameters which are studied and optimised simultaneously, such as the sensitivity of a receiver, the linearity of a receiver and the noise figure of a receiver.

Some product parameters are multidimensional or cross-functional and need a multidimensional approach. A multiradio operation is an example of a multidimensional radio parameter which requires multidimensional optimisation and cross-technology communication. The requirements for the multiradio operation and interoperability need to be agreed as cross-functional work covering stakeholders from product marketing, system engineering, radio engineering, testing engineering and the quality department. The requirement for multiradio interoperability, from the radio engineering point of view, is the probability that the transmission of the first radio interferes with the reception of a second radio. The probability may be considered as a quality level, which may be communicated with a process capability index value and which may be monitored during the development process of the device. A multiradio interoperability (IOP) may be presented with a two-dimensional figure, which is shown in Figure 4 (based on Leinonen, 2010a). Interference is present if the signal condition is within an IOP problem area. The probability that this situation occurs can be calculated with a two-dimensional integral, which includes the probabilities of the radio signals and a threshold value. The actual threshold value for the transmission signal level is dependent on, for example, the interference generation mechanism, the interference tolerance of the victim radio and the activity periods of the radios.
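As a rough illustration of this kind of interoperability budgeting, the sketch below estimates the IOP problem probability by Monte Carlo sampling rather than by evaluating the two-dimensional integral directly, and converts the result into a capability-style index using the Φ⁻¹ mapping of equation 4 in Chapter 3. All numbers in it (the signal level distributions and the 70 dB protection margin) are invented assumptions for illustration and are not taken from this chapter.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 1_000_000

# Assumed signal statistics (hypothetical): aggressor TX level seen at the
# victim antenna and the victim's wanted RX level, both in dBm.
tx = rng.normal(loc=-30.0, scale=6.0, size=N)
rx = rng.normal(loc=-80.0, scale=8.0, size=N)

# Assumed interference criterion: the victim is desensitised when the
# aggressor exceeds the wanted signal by more than a 70 dB protection margin.
p_iop = np.mean(tx > rx + 70.0)   # Monte Carlo estimate of the 2-D integral

# Map the problem probability to an equivalent capability index, eq. (4).
cpk_equivalent = -norm.ppf(p_iop) / 3.0
print(f"P(interference) = {p_iop:.3e} -> equivalent Cpk = {cpk_equivalent:.2f}")
```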

Fig. 4. Illustration of multiradio interoperability from the RF system engineering point of view

#### **3. Overview of process capability indices**

Process capability indices can be calculated with some statistical properties of data regardless of the shape of a data distribution. The shape of the data needs to be taken into account if the process capability index is mapped to an expected quality level of the end product. Typically, normally distributed data is assumed for simplicity, but in real life applications the normality assumption rarely holds, at least in radio engineering applications. One possibility to overcome the non-normality of the data is to transform the data closer to the normal distribution and to calculate the process capability indices for the normalised data (Breyfogle, 1999); however, this normalisation is not effective for all datasets. An alternative method is to calculate the process capability indices based on the probability outside of the specification limits and to calculate the process capability index backwards.

Process capability indices are widely used across different fields of industry as a metric of the quality level of products (Breyfogle, 1999). In general, process capability indices describe a location of a mean value of a parameter within specification limits. The specification limits can be 'hard' limits, which cannot be changed from product to product, or 'soft' limits, which are defined during the system engineering phase based on the mass production data of previous or available components, or else the limits are defined based on numerical calculations or simulations.

The most commonly used process capability indices within industry are the so-called 'first generation' process capability indices *C*p and *C*pk. The *C*p index is (Kotz & Johnson, 1993)

$$C\_p = \frac{\text{USL} - \text{LSL}}{6\sigma}, \tag{1}$$

where USL is an upper specification limit, LSL is a lower specification limit and σ is the standard deviation of the studied parameter. *C*pk also takes the location of the parameter into account and it is defined as (Kotz & Johnson, 1993)

$$C\_{pk} = \min\left(\frac{\text{USL} - \mu}{3\sigma}, \frac{\mu - \text{LSL}}{3\sigma}\right),\tag{2}$$

where μ is the mean value of the parameter. The process capability index *C*pk value may be converted to an expected yield with a one-sided specification limit (Kotz & Johnson, 1993)

$$\text{Yield} = \Phi\left(3C\_{pk}\right), \tag{3}$$

where Φ is the cumulative distribution function of the standardised normal distribution. The probability outside of the specification limit is one minus the yield, which is considered to be a quality level. A classification of process capability indices and expected quality levels is summarised in Table 1 (Pearn & Chen, 1999; Leinonen, 2002). The target level for *C*pk in high volume production is higher than 1.5, which corresponds to a quality level of 3.4 dpm (defects per million).


| **Acceptable level** | ***C*pk value** | **Low limit** | **High limit** |
|---|---|---|---|
| Poor | 0.00 ≤ *C*pk < 0.50 | 500000 dpm | 66800 dpm |
| Inadequate | 0.50 ≤ *C*pk < 1.00 | 66800 dpm | 1350 dpm |
| Capable | 1.00 ≤ *C*pk < 1.33 | 1350 dpm | 32 dpm |
| Satisfactory | 1.33 ≤ *C*pk < 1.50 | 32 dpm | 3.4 dpm |
| Excellent | 1.50 ≤ *C*pk < 2.00 | 3.4 dpm | 9.9·10⁻⁴ dpm |
| Super | *C*pk ≥ 2.00 | 9.9·10⁻⁴ dpm | |

Table 1. A classification of the process capability index values and expected quality level
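The mapping between *C*pk and the expected quality level in Table 1 follows directly from equation 3 and its inverse, equation 4 below. A minimal sketch in Python, using SciPy's standard normal distribution, that reproduces the class boundaries of the table:

```python
from scipy.stats import norm

def cpk_to_dpm(cpk: float) -> float:
    # Expected one-sided defect rate, eq. (3): (1 - Phi(3*Cpk)), in dpm.
    return (1.0 - norm.cdf(3.0 * cpk)) * 1e6

def dpm_to_cpk(dpm: float) -> float:
    # Inverse mapping, eq. (4): Cpk = -Phi^-1(gamma) / 3.
    return -norm.ppf(dpm / 1e6) / 3.0

# Reproduce the class boundaries of Table 1.
for cpk in (0.0, 0.5, 1.0, 1.33, 1.5, 2.0):
    print(f"Cpk = {cpk:4.2f} -> {cpk_to_dpm(cpk):12.4g} dpm")

print(f"3.4 dpm -> Cpk = {dpm_to_cpk(3.4):.2f}")   # the high-volume target
```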

The *C*pk definition in equation 2 is based on the mean value and the variation of the data, but alternatively *C*pk may be defined through an expected quality level (Kotz & Johnson, 1993)

$$C\_{pk} = -\frac{1}{3}\Phi^{-1}(\gamma), \tag{4}$$

where γ is the expected proportion of non-conforming units.

Data following a normal distribution is rarely available in real life applications. In many cases, the data distribution is skewed due to a physical phenomenon of the analysed parameter. The process capability analysis and the expected quality level will match each other if the shape of the probability density function of the parameter is known and the statistical analysis is done based on that distribution. The process capability index *C*pk has been defined for non-normally distributed data with a percentile approach, which has since been standardised by the ISO (International Organization for Standardization) as its definition of the *C*pk index. The definition of *C*pk with percentiles is (Clements, 1989)

$$C\_{pk} = \min\left(\frac{\text{USL} - M}{U\_p - M}, \frac{M - \text{LSL}}{M - L\_p}\right), \tag{5}$$

where M is the median value, *U*p is the 99.865th percentile and *L*p is the 0.135th percentile.

A decision tree for selecting an approach to the process capability analysis is proposed in Figure 5. The decision tree is based on experience of the application of process capability indices to various real life implementations. The first selection is whether the analysed data is one-dimensional or multidimensional. Most of the studied engineering applications have been one-dimensional, but the data is rarely normally distributed. A transform function, such as a Box-Cox or a Johnson transformation, may be applied to the data to convert the data so as to resemble a normal distribution. If the data is normally distributed, then the results based on equations 2 and 3 will match each other. If the probability density function of the parameter is known, then the process capability analysis should be done with the known distribution. Applications for this approach are discussed in Chapter 4. The process capability analysis based on equation 5 is preferred for most real-life applications.
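A minimal sketch of the percentile approach of equation 5, using empirical percentiles from NumPy; the skewed example data set and its specification limits are assumptions for illustration only:

```python
import numpy as np

def cpk_percentile(data, usl, lsl):
    # Percentile (Clements) definition of Cpk, eq. (5).
    m = np.median(data)
    u_p = np.percentile(data, 99.865)   # upper 3-sigma-equivalent percentile
    l_p = np.percentile(data, 0.135)    # lower 3-sigma-equivalent percentile
    return min((usl - m) / (u_p - m), (m - lsl) / (m - l_p))

# Illustrative skewed (lognormal) data set, not taken from the chapter.
rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=0.25, size=100_000)
print(f"percentile Cpk = {cpk_percentile(data, usl=2.5, lsl=0.3):.2f}")
```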


In general, the analysis of multidimensional data is more difficult than that of one-dimensional data. A correlation of the data will have an effect on the analysis in the multidimensional case. The correlation of the data will change the shape and the direction of the data distribution so that the expected quality level and the calculated process capability index do not match one another. The specification region for multidimensional data is typically a multidimensional cube, but it may alternatively be a multidimensional sphere, which is analysed in Leinonen (2010b). The process capability analysis may be done with analytical calculus or numerical integration of multidimensional data if the multidimensional data is normally distributed (which is rarely the case). Transformation functions are not used for non-normally distributed multidimensional data. A numerical integration approach for process capability analysis may be possible for non-normally distributed multidimensional data but it may be difficult with real life data. A Monte Carlo simulation-based approach has been preferred for non-normally distributed multidimensional data. The process capability analysis has been done based on equation 3, where the simulated probability outside of the specification region is converted to a corresponding *C*pk value. The Monte Carlo simulations are done with computers, either with mathematical or spreadsheet software, based on the properties of the statistical distribution of the data.
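A minimal sketch of the Monte Carlo route, under assumed numbers: correlated bivariate normal observations, a rectangular specification region, and the backwards conversion of the out-of-specification probability through equations 3 and 4 above. The means, sigmas and correlation are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Assumed two-dimensional example: correlated normal data.
mean = [0.0, 0.1]
rho, sx, sy = 0.6, 0.10, 0.12
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
xy = rng.multivariate_normal(mean, cov, size=1_000_000)

# Simulated probability outside a rectangular specification region.
usl, lsl = 0.45, -0.45
gamma = np.mean(np.any((xy > usl) | (xy < lsl), axis=1))

# Convert the out-of-spec probability back to a generalised Cpk, eq. (4).
print(f"p(out) = {gamma:.3e} -> multidimensional Cpk = {-norm.ppf(gamma)/3.0:.2f}")
```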

Process performance indices *P*p and *P*pk are defined in a manner similar to the process capability indices *C*p and *C*pk, but the definition of the variation is different. *P*p and *P*pk are defined with a long-term variation while *C*p and *C*pk are defined with a short-term variation (Harry & Schroeder, 2000). The short-term and the long-term variations can be distinguished from each other by using statistical control charts with a rational sub-grouping of the data in the time domain. The short-term variation is the variation within a sub-group, while the long-term variation sums up the short-term variations of the sub-groups and the variation between sub-group mean values, which may happen over time. Many organisations do not distinguish between *C*pk and *P*pk due to the similar definitions of the indices (Breyfogle, 1999).

Fig. 5. Process capability analysis selection tree

#### **3.1 Statistical process models for manufacturability analysis**

An overview of the usage of process capability analyses during the product development process is shown in Figure 6. Data from a pilot production is analysed in R&D for development purposes. These process capability indices provide information about the maturity level of the design and the potential quality level of the design. The pilot production data may be considered as a short-term variation of the device as compared with a mass production (Uusitalo, 2000). Statistical process models for sub-group changes during a mass production process are needed in order to estimate long-term process performance based on the pilot production data. A basic assumption is that the manufacturing process is under statistical process control, which is mandatory for high volume device production. A mean value and a variation of the parameters are studied during mass production. The mean values of parameters change over time, since even if the process is under statistical process control, statistical process control charts allow the process to fluctuate between the control limits of the charts.

Fig. 6. Long-term process performance estimation during product development

An ideal process is presented in Figure 7, where the mean value and the variation of the process are static without fluctuation over time. There are some fluctuations in real life processes and those are controlled by means of statistical process control. SPC methods are based on a periodic sampling of the process, and the samples are called sub-groups. The frequency of sampling and the number of samples within the sub-group are process-dependent parameters. The size of the sub-group is taken to be five in this study, which has been used in industrial applications and in the Six Sigma process definition. The size of the sub-group defines the control limits for the mean value and the standard deviation of the process. The mean value of sub-groups may change within +/- 1.5 standard deviation units around the target value without the process being out of control with a sub-group size of five. The variation of the process may change up to an upper process control limit (B4), which is 2.089 with a sub-group size of five.

The second process model presented in Figure 8 is called a Six Sigma community process model. If the mean value of the process shifts from a target value, the mean will shift 1.5 standard deviation units towards the closer specification limit and the mean value will stay there. The variation of the process is a constant over time in the Six Sigma process model, but it is varied with a normal and a uniform distribution in Chapter 3.2.

The mean value of the process varies over time within control limits, but the variation is a constant in the third process model presented in Figure 9. The variation of the mean value within the control limits is modelled with a normal and a uniform distribution.

The mean value and the variation of the process are varied in the fourth process model presented in Figure 10. The changes of the mean value and the variation of sub-groups may be modelled with both a normal and a uniform distribution. The normal distribution is the most common distribution for modelling a random variation. For example, tool wear in the mechanical industry produces a uniform mean value shift of the process over time.

A short-term process deviation is calculated from the ranges of the sub-groups and a long-term variation is calculated with a pooled standard deviation method over all sub-groups (Montgomery, 1991). If the number of samples in the sub-group is small (i.e., less than 10), the range method of deviation estimation is preferred due to its robustness against outlier observations (Bissell, 1990). For control chart creation, 20 to 25 sub-groups are recommended (Lu & Rudy, 2002). It is easier and safer to use a pooled standard deviation method for all the data in an R&D environment for the standard deviation estimation to overcome time and order aspects of the data.
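A small sketch of this two-estimate approach on sub-grouped data (sub-group size five): the short-term deviation from the average sub-group range with the standard d2 constant, and the long-term deviation from all observations pooled together. The drifting example data is an assumption for illustration; with a 0.5σ drift of the sub-group means, the long-term estimate approaches the 1.118σ of equation 9 below while the range-based estimate stays near one.

```python
import numpy as np

D2_N5 = 2.326  # E[range]/sigma for sub-group size n = 5 (standard SPC table)

def short_and_long_term_sigma(subgroups):
    # subgroups: 2-D array with one row per sub-group (n = 5 columns here).
    sg = np.asarray(subgroups)
    # Short-term: average sub-group range divided by d2 (range method).
    sigma_st = np.mean(sg.max(axis=1) - sg.min(axis=1)) / D2_N5
    # Long-term: standard deviation over all observations pooled together.
    sigma_lt = sg.std(ddof=1)
    return sigma_st, sigma_lt

# Illustrative data (assumed): sub-group means drift over time.
rng = np.random.default_rng(11)
drift = rng.normal(0.0, 0.5, size=200)
data = rng.normal(0.0, 1.0, size=(200, 5)) + drift[:, None]

sigma_st, sigma_lt = short_and_long_term_sigma(data)
print(f"short-term sigma ~ {sigma_st:.2f}, long-term sigma ~ {sigma_lt:.2f}")
```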

Fig. 7. An ideal process model without mean or deviation changes

Fig. 8. A process model with a constant mean value shift of sub-groups (the Six Sigma process model)

Fig. 9. A process model with a variable mean value and a constant variation of sub-groups

Fig. 10. A process model with a variable mean value shift and variations of sub-groups

#### **3.2 Process model effects on the one-dimensional process performance index**

Long-term process performance may be estimated based on short-term process capability with a statistical process model. The simplest model is the constant shift model, which is presented in Figure 8. The mean value of the sub-groups is shifted by 1.5 standard deviation units with a constant variation. The process performance index is then (Breyfogle, 1999)

$$P\_{pk} = \min\left(\frac{\text{USL} - \mu}{3\sigma} - 0.5, \frac{\mu - \text{LSL}}{3\sigma} - 0.5\right), \tag{6}$$

where σ is a short-term standard deviation unit.

A constant variation within sub-groups with a varying mean value of sub-groups is presented in Figure 9. It is assumed that the variation of the mean value of the sub-groups is a random process. If the variation is modelled with a uniform distribution within the statistical control limits (+/- 1.5 standard deviation units), then the long-term process standard deviation is

$$\sigma\_{\text{long term}} = \sqrt{\sigma^2 + \frac{\left(1.5 - (-1.5)\right)^2}{12}\sigma^2} = \frac{\sqrt{7}}{2}\sigma \approx 1.323\sigma \tag{7}$$

and a corresponding long-term process performance index *P*pk is

$$P\_{pk} = \min\left(\frac{\text{USL} - \mu}{3 \cdot 1.323\sigma}, \frac{\mu - \text{LSL}}{3 \cdot 1.323\sigma}\right) \approx \min\left(\frac{\text{USL} - \mu}{3.97\sigma}, \frac{\mu - \text{LSL}}{3.97\sigma}\right). \tag{8}$$

The second process model for the variation of the mean values of the sub-groups of the process presented in Figure 9 is a normal distribution. The process is modelled so that the process control limits are assumed to be natural process limits, or in other words the process is within the control limits with a 99.73% probability. Thus, the standard deviation of the mean drift is 0.5 standard deviation units and the total long-term deviation with normally distributed sub-group mean variation is

$$\sigma\_{\text{long term}} = \sqrt{\sigma^2 + \left(0.5\sigma\right)^2} = \sigma\sqrt{1.25} \approx 1.118\sigma. \tag{9}$$

A corresponding long-term process performance index *P*pk is


$$P\_{pk} = \min\left(\frac{\text{USL} - \mu}{3 \cdot 1.118\sigma}, \frac{\mu - \text{LSL}}{3 \cdot 1.118\sigma}\right) \approx \min\left(\frac{\text{USL} - \mu}{3.35\sigma}, \frac{\mu - \text{LSL}}{3.35\sigma}\right). \tag{10}$$

The effects of the process models on the process performance indices are summarised in Figure 11. The Six Sigma process is defined such that a process capability of 2.0 corresponds to a process performance of 1.5. The same relationship for a process capability of 2.0 can be seen if the sub-group means are varied with a uniform distribution. If the process capability is less than 2.0, then the process performance index based on the normal distribution model is clearly higher than with the other process models. This may be taken into account when specification limits are defined for the components during the R&D phase. A tolerance reserved for manufacturability may be reduced if, based on previous experience, a normal distribution may be assumed for the process model instead of the uniform distribution or the constant mean shift. The process capability *C*p value 2.0 is mapped to a process performance index *P*pk of about 1.79 with the normal distribution model (equation 10), and only to 1.50 with the constant mean shift and the uniform distribution models. The estimated quality levels for a process with a process capability *C*p value of 2.0 are 3.4 dpm with the constant mean shift, 2.9 dpm with the uniform distribution and 0.048 dpm with the normal distribution.
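The numbers above can be checked directly from equations 6, 8 and 10; a small sketch for a centred process with a short-term *C*p of 2.0 (the 0.048 dpm figure above is read from Figure 11, so the normal-model value reproduces it only to rounding):

```python
from scipy.stats import norm

def ppk_models(cp: float) -> dict:
    # Long-term Ppk for a centred process with short-term capability Cp,
    # under the three sub-group mean models (eqs. 6, 8 and 10).
    return {
        "constant 1.5 sigma shift (eq. 6)": cp - 0.5,
        "uniform mean drift (eq. 8)": cp / 1.323,
        "normal mean drift (eq. 10)": cp / 1.118,
    }

for model, ppk in ppk_models(2.0).items():
    dpm = (1.0 - norm.cdf(3.0 * ppk)) * 1e6  # quality level via eq. (3)
    print(f"{model:34s} Ppk = {ppk:.2f} -> {dpm:.3g} dpm")
```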

Fig. 11. The effect of the statistical model of the sub-group mean value on the process performance index

A more realistic statistical process model is presented in Figure 10, where both the mean value and the variation of the sub-groups vary within the control limits of the control charts for both mean values (Xbar-chart) and variations (s-chart). The effects of the variation within the sub-groups are modelled with both a normal and a uniform distribution. The effect of the variation distribution for the variation within the sub-groups is calculated for a process with a constant mean value, and the combined effect of the variation of the sub-group means and sub-group variations is simulated.

Firstly, the mean value of the process is assumed to be a constant, and a long-term standard deviation is calculated by combining the within-sub-group and between-sub-group variations. The standard deviation within sub-groups is modelled to be one, and the standard deviation between sub-groups is defined so that the probability of exceeding the UCL (Upper Control Limit) of the s-chart is 0.27 per cent, or in other words the UCL is three standard deviation units away from the average value. The UCL (or B4) value for the s-chart is 2.089 when a sub-group size of 5 is used, and the Lower Control Limit (LCL) is zero. The long-term variation can be calculated by

$$\sigma\_{\text{long term}} = \sqrt{\sigma^2 + \left(\frac{2.089 - 1}{3}\right)^2\sigma^2} \approx 1.064\sigma. \tag{11}$$

A corresponding long-term process performance index *P*pk is

$$P\_{pk} = \min\left(\frac{\text{USL} - \mu}{3 \cdot 1.064\sigma}, \frac{\mu - \text{LSL}}{3 \cdot 1.064\sigma}\right) \approx \min\left(\frac{\text{USL} - \mu}{3.19\sigma}, \frac{\mu - \text{LSL}}{3.19\sigma}\right). \tag{12}$$

The second process model is a uniform distribution for the variation between sub-groups. The uniform distribution is defined so that the variation may drift between the control limits of the s-chart, where the UCL is 2.089 and the LCL is zero. The variation within the sub-group is assumed to be normally distributed with a standard deviation of one. The long-term variation is

$$\sigma\_{\text{long term}} = \sqrt{\sigma^2 + \frac{\left(2.089 - 0\right)^2}{12}\sigma^2} \approx 1.168\sigma. \tag{13}$$

A corresponding long-term process performance index *P*pk is

$$P\_{pk} = \min\left(\frac{\text{USL} - \mu}{3 \cdot 1.168\sigma}, \frac{\mu - \text{LSL}}{3 \cdot 1.168\sigma}\right) \approx \min\left(\frac{\text{USL} - \mu}{3.50\sigma}, \frac{\mu - \text{LSL}}{3.50\sigma}\right). \tag{14}$$

The combined effects of the variations of the sub-group mean value and the variation are simulated with Matlab with ten million observations ordered into sub-groups with five observations within each sub-group. The results of the combined effects of variations of the mean and variation of the sub-groups are presented in Figure 12. The results based on a normal distribution process model for the mean value are closest to the perfect process. The results based on a uniform distribution process model for variation give the most pessimistic quality level estimations.
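The combined simulation can be sketched in a few lines; the version below (in Python rather than Matlab) draws a normally distributed mean shift and a uniformly distributed sub-group sigma as one plausible reading of the combined model, so its numbers indicate the trend of Figure 12 rather than reproduce it exactly:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sub, n = 1_000_000, 5   # one million sub-groups of five observations

# Sub-group means drift with the normal model (sigma = 0.5, Chapter 3.2);
# sub-group sigmas drift uniformly up to the s-chart UCL (B4 = 2.089).
mu_shift = rng.normal(0.0, 0.5, size=n_sub)
sigma_sub = rng.uniform(0.0, 2.089, size=n_sub)
data = rng.normal(size=(n_sub, n)) * sigma_sub[:, None] + mu_shift[:, None]

# Long-term Ppk against limits at +/- 6 short-term sigma units (Cp = 2.0).
usl, lsl = 6.0, -6.0
mu, s_lt = data.mean(), data.std(ddof=1)
ppk = min((usl - mu) / (3.0 * s_lt), (mu - lsl) / (3.0 * s_lt))
print(f"long-term sigma = {s_lt:.3f} -> Ppk = {ppk:.2f}")
```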

New equations for process performance indices with various statistical process models are presented in Table 2. It is assumed that the upper specification limit is closer to the mean value in order to simplify the presentation of the equations without losing generality. The top left corner equations are those used in the literature for process performance indices, and the others are based on the results from Figures 11 and 12. These equations model the long-term process performance based on short-term data. They may be used with measured data from the pilot production, or during the system engineering phase when component specifications are determined. The short-term data during the system engineering phase may be generated with Monte Carlo simulations. A system engineer may test the effects of different statistical process models on the specification limit proposals with these simple equations and estimate a quality level.


Table 2. Equations to include statistical process model effects for one-dimensional *P*pk

Fig. 12. The combined effects of statistical process models on the process performance index

#### **3.3 Multidimensional process capability indices**

The research into multivariable process capability indices is limited in comparison with one-dimensional ones due to a lack of consistency regarding the methodology for the evaluation of the process's capability (Wu, 2009). In the multidimensional case, the index gives an indication about the problem, but the root cause of the indicated problem needs to be studied parameter by parameter. In general, multidimensional process indices are analogous to univariate indices when a width of variation is replaced with a volume. A multivariable counterpart of the *C*p index is (Kotz & Johnson, 1993)

$$C\_p = \frac{\text{Volume of specification region}}{\text{Volume of region containing 99.73\% of the values of } \mathbf{X}}, \tag{15}$$

where the volume of the specification region is

$$\prod\_{i=1}^{n}\left(\text{USL}\_i - \text{LSL}\_i\right),$$

where USL*i* and LSL*i* are the upper and lower specification limits for the i-th variable. For the multidimensional *C*pk there is no definition analogous to the one-dimensional *C*pk. For multidimensional cases, a probability outside of the specification can be defined and it can be converted backwards to a corresponding *C*pk value, which can be regarded as a generalisation of *C*pk (Kotz & Johnson, 1993). A definition for a multidimensional *C*pk is

$$C\_{pk} = -\frac{1}{3}\Phi^{-1}\left(\text{expected proportion of non-conforming items}\right). \tag{16}$$

#### **3.4 Process model's effect on two-dimensional process capability indices**

Statistical process models of long-term process variation for the two-dimensional case are similar to those presented in Figures 7 through to 10. An additional step for the two-dimensional process capability analysis is to include the correlation of the two-dimensional data in the analysis. The correlation of the data needs to be taken into account in both the process capability index calculation and the statistical process modelling.

A two-dimensional process capability analysis for a circular tolerance area has been studied in Leinonen (2010b). The circular tolerance area may be analysed as two separate one-dimensional processes or as one two-dimensional process. One-dimensional process indices overestimate the quality level for a circular tolerance since one-dimensional tolerances form a square-type tolerance range. Additionally, the correlation of the data cannot be taken into account in an analysis with two separate one-dimensional process indices.

In order to overcome the problems of one-dimensional process indices with a circular tolerance, a new process capability index has been proposed (Leinonen, 2010b), as shown in Figure 13. The one-dimensional *C*pk process capability indices for the X and Y dimensions are marked with $C\_{pk}^{X}$ and $C\_{pk}^{Y}$, respectively. The one-dimensional specification limits for the X and Y axes are shown in Figure 13 and the circular tolerance area has the same radius as the one-dimensional specifications. A two-dimensional process capability index estimates the process capability based on the probability outside of the circular specification limit. One-dimensional process capability indices overestimate the process capability of the circular tolerance area and they may be regarded as upper bounds for the two-dimensional process capability.

Fig. 13. *C*pk definitions with circular specification limits


The analysed two-dimensional data distribution is a non-central elliptical normal distribution, and the probability inside a circular acceptance limit can be calculated as (Leinonen, 2010b)

$$p\_4 = \int\_{-R}^{R}\int\_{-\sqrt{R^2 - x^2}}^{\sqrt{R^2 - x^2}} \frac{1}{2\pi\sigma\_X\sigma\_Y\sqrt{1 - \rho^2}}\, e^{-\frac{1}{2\left(1 - \rho^2\right)}\left[\left(\frac{x - \mu\_X}{\sigma\_X}\right)^2 - 2\rho\left(\frac{x - \mu\_X}{\sigma\_X}\right)\left(\frac{y - \mu\_Y}{\sigma\_Y}\right) + \left(\frac{y - \mu\_Y}{\sigma\_Y}\right)^2\right]}\, dy\, dx, \tag{17}$$

where R is the radius of the tolerance circle. The process capability index is specified with the probability outside of the circular specification limit and it may be calculated based on equation 16 as

$$C\_{pk}^{R} = -\frac{1}{3}\Phi^{-1}\left(1 - p\_4\right). \tag{18}$$
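Equations 17 and 18 can be evaluated numerically, for example with SciPy's dblquad; the sketch below uses the Case 1 parameters of Table 3 (with the tolerance radius taken as the one-dimensional limit 0.45) and is an illustration rather than a reproduction of the reference results:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def cpk_circular(r, mx, my, sx, sy, rho):
    # Integrate the bivariate normal pdf of eq. (17) over the tolerance
    # circle, then convert the outside probability via eq. (18).
    k = 1.0 / (2.0 * np.pi * sx * sy * np.sqrt(1.0 - rho**2))

    def pdf(y, x):
        zx, zy = (x - mx) / sx, (y - my) / sy
        q = (zx**2 - 2.0 * rho * zx * zy + zy**2) / (2.0 * (1.0 - rho**2))
        return k * np.exp(-q)

    p4, _ = integrate.dblquad(pdf, -r, r,
                              lambda x: -np.sqrt(r**2 - x**2),
                              lambda x: np.sqrt(r**2 - x**2))
    return -norm.ppf(1.0 - p4) / 3.0

# Case 1 of Table 3: mX = 0.225, mY = -0.2, sX = sY = 0.05, rho = 0.
print(f"Cpk^R = {cpk_circular(0.45, 0.225, -0.2, 0.05, 0.05, 0.0):.2f}")
```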

A long-term process performance for a two-dimensional process with a circular tolerance may be modelled with similar statistical models (a normal and a uniform distribution) to those used in Chapter 3.2 for the one-dimensional case. However, the correlation of the mean shift of the sub-groups is added. Two assumptions are analysed: the first is that there is no correlation between the variations of the sub-group mean values, and the second is that the sub-group mean values are correlated in the same way as the individual observations.

The analysed numerical cases are based on Leinonen (2010b) and these are summarised in Table 3. A graphical summary of the numerically analysed two-dimensional Cases 1, 2 and 3 is shown in Figure 14.

The location of the data set is off-centre in both the X and Y directions in Cases 1, 2 and 3, while the location is on the X-axis in Case 4. The variation of the data is the same in both directions in Cases 1 and 4, while the variation is non-symmetrical in Cases 2 and 3. The location of the data set is defined with the mean values mX and mY, and the variation with sX and sY. The one-dimensional process capability indices $C\_{pk}^{X}$ and $C\_{pk}^{Y}$ are calculated for each case and the smaller one is regarded as the one-dimensional *C*pk value.

Fig. 14. A graphical representation of a two-dimensional process capability case-study


| | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|
| USL | 0.45 | 0.45 | 0.45 | 0.45 |
| LSL | -0.45 | -0.45 | -0.45 | -0.45 |
| mX | 0.225 | 0.225 | 0.225 | 0.225 |
| mY | -0.2 | -0.2 | -0.2 | 0.0 |
| sX | 0.05 | 0.025 | 0.05 | 0.05 |
| sY | 0.05 | 0.05 | 0.025 | 0.05 |
| Distribution shape, main direction | Circle | Ellipse, y-axis | Ellipse, x-axis | Circle |
| $C\_{pk}^{X}$ | 1.50 | 3.00 | 1.50 | 1.50 |
| $C\_{pk}^{Y}$ | 1.67 | 1.67 | 3.33 | 3.00 |
| *C*pk = min($C\_{pk}^{X}$, $C\_{pk}^{Y}$) | 1.50 | 1.67 | 1.50 | 1.50 |

Table 3. Input data for a two-dimensional process capability case study

The effects of the statistical process models of the variation of the mean values of sub-groups on the two-dimensional process performance index are simulated with Matlab, with ten million observations ordered into sub-groups with five observations within each sub-group. The same process performance index name is used for both indices, whether based on the short- or the long-term variation.

A significant effect of the data correlation on the process's capability may be seen in Figure 15, which summarises the analysis of the example Case 1. The X-axis is the correlation factor ρ of the data set and the Y-axis is the $C\_{pk}^{R}$ value. The process capability index is calculated with a numerical integration and simulated with a Monte Carlo method without any variation of the sub-groups for reference purposes (Leinonen, 2010b). The one-dimensional *C*pk value is 1.5, and it may be seen that the two-dimensional process performance is maximised, approaching 1.5, when the correlation of the data rotates the orientation of the data set in the same direction as the arc of the tolerance area.

The statistical process models have a noticeable effect on the expected quality level. If the mean values of the sub-groups are varied independently, with a normal distribution in both the X and Y directions, the effect varies between 0.05 and 0.25 index units. If the mean values vary independently with a uniform distribution in both directions, then the process model has a significant effect of up to 0.45 on the *C*pk(ρ) value with a correlation factor value of 0.6. If the maximum differences in the index values are converted into expected quality levels, then the difference ranges from 13 dpm to 2600 dpm. The uniform distribution model suppresses the correlation of the data more than the normal distribution, and for this reason the long-term process performs worse. If the sub-group mean values are varied with a normal distribution and correlated with the same correlation as the observations, then the long-term performance is a shifted version of the original process performance and the effect of the correlated process model is on average 0.1 units.

The results for Case 2 are shown in Figure 16. The variation in the X-axis direction is half of the variation in the Y-axis direction and the one-dimensional *C*pk value is 1.67. The two-dimensional index approaches the one-dimensional value when the correlation of the data increases. If the correlation is zero, then the circular tolerance limits the process performance to 1.2, as compared with the one-dimensional specification at 1.67. If the mean values of the sub-groups are varied independently, either with a normal or a uniform distribution, the process performs better than with the correlated process model. In this case, the correlated mean shift model changes the distribution so that it extends further out of the tolerance area than with the uncorrelated models. It may be noted that when the correlation turns positive, the normal distribution model performs closer to the original process than the uniform distribution model.

Fig. 15. The effect of the variation of the mean value of sub-groups on the two-dimensional *C*pk(ρ), Case 1 (plotted curves: closed-form equation result for the static process; simulated result for the static process; mean value varied with a correlated normal distribution; mean value varied independently for X and Y with a normal distribution; mean value varied independently for X and Y with a uniform distribution; X-axis: ρ)

Fig. 16. The effect of the variation of the mean value of sub-groups on the two-dimensional *C*pk(ρ), Case 2

The example of Case 3 shows half of the variation in the Y-axis direction as compared with the X-axis direction, and the one-dimensional *C*pk value is 1.50. The results for Case 3 are presented in Figure 17. If the sub-group mean values are varied with correlated normal distributions, then the process capability with negative correlations is the best, since the correlated process model maintains the original correlation of the data. The uncorrelated normal distribution keeps the overall data correlation between -0.8 and 0.8, and the uncorrelated uniform distribution between -0.57 and 0.57. The uncorrelated uniform distribution model has an effect of 0.25 up to 0.42 on the *C*pk(ρ) value.

The results for Case 4 are presented in Figure 18. The example provided by Case 4 has a symmetrical variation and the distribution is located along the X-axis. For these reasons, the correlation has a symmetrical effect on the two-dimensional process performance indices. The one-dimensional *C*pk value is 1.50 and the closed-form equation result without correlation has a value of 1.45. Both normally distributed process models have a value of 1.31 with the correlation factor at zero. The correlated process model differs from the uncorrelated one at high correlation factor values. The uniform distribution model clearly has the biggest impact on the estimated quality level, up to 0.3 index units. The process performance indices maintain the order of the quality level estimations over the correlations due to the symmetrical distribution and location.

As a conclusion, based on the results presented here, it is not possible to derive easy-to-use process capability indices that include the effects of the statistical process models for two-dimensional process performance indices, in contrast to the one-dimensional indices presented in Chapter 3.2.

Fig. 17. The effect of the variation of the mean value of sub-groups on the two-dimensional *C*pk(ρ), Case 3

Fig. 18. The effect of the variation of the mean value of sub-groups on the two-dimensional *C*pk(ρ), Case 4

#### **4. Usage of process capability indices in radio engineering**

Most of the parameters which are studied during the RF system design phase do not follow a normal distribution. Monte Carlo simulations have been carried out for the most important RF block level parameters and, based on the simulation results, none of the RF block level parameters follow a normal distribution (Vizmuller, 1998). This is due to the fact that the dynamic range of signal levels in radio engineering is huge and typically a logarithmic scale is used for signal levels. Unfortunately, in most cases the signal levels do not follow a normal distribution on such a scale. In order to perform a process capability analysis properly for radio engineering parameters, the analysis should be done according to the specific distributions, as shown in Figure 5. If a production quality level estimation of an RF parameter is done based on a process capability index with a normal distribution assumption, then the quality level may be significantly under- or overestimated. The problem is that the underlying distributions for all important RF parameters are not available or known and the analyses are based on measured results. The problem with a measurement-based approach is that the properties of the data distributions may change during the development cycle of the device.
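The size of this under- or overestimation is easy to demonstrate. In the sketch below (all distribution parameters and the specification limit are invented for the example), a log-normally distributed parameter is analysed once with its true distribution and once through a one-sided capability index computed under a normality assumption:

```python
# Compare the true defect rate of a log-normal parameter against the rate
# implied by a capability index computed under a normality assumption.
# The shape parameter s, the scale and the limit usl are invented examples.
import numpy as np
from scipy.stats import norm, lognorm

s, scale, usl = 0.4, 1.0, 3.0
true_dpm = lognorm.sf(usl, s, scale=scale) * 1e6        # exact tail probability

x = lognorm.rvs(s, scale=scale, size=1_000_000, random_state=1)
cpu = (usl - x.mean()) / (3 * x.std())                  # index, normal assumption
normal_dpm = norm.sf(3 * cpu) * 1e6                     # defect rate it implies

print(f"true: {true_dpm:.0f} dpm, normality assumption: {normal_dpm:.0f} dpm")
```

With these example values the normality assumption underestimates the defect rate by more than two orders of magnitude, which illustrates the risk described above.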

Another problem with a measurement-based approach to process capability analysis is that the measurement error of an RF parameter may change the properties of the data distribution. The effect of the measurement error of the RF test equipment on the process capability indices has been studied (Moilanen, 1998). Based on the study, the effect of the RF test equipment needs to be calibrated out and the analysis should be done with the actual variation, which is based on product-to-product variation. The actual number of RF measurements cannot be reduced by mathematical modelling, since most RF parameters do not follow the normal distribution and the accuracy of the modelling is not good enough for the purposes of design verification or process capability analysis (Pyylampi, 2003).

Some work has been done in order to find the underlying distribution functions for some critical RF parameters. The statistical properties of the bit error rate have been studied: with a DQPSK modulation, its statistical distribution would follow an extreme value function on a linear scale, or else a log-normal distribution on a logarithmic scale (Leinonen, 2002). In order to validate this result in real life, an infinite number of measurement results and an infinite measurement time would be needed. It has been shown, based on measurement results, that the peak phase error of a GSM transmission modulation would statistically follow a log-normal distribution (Leinonen, 2002). The statistical distribution of the bit error rate of a QPSK modulation has been studied and, with a limited measurement time and limited measurement results, the distribution of the bit error rate is multimodal (Leinonen, 2011). The multimodal distribution has a zero-valued part and a truncated extreme value function distribution part on a linear scale, or else a truncated extreme value function distribution on a logarithmic scale. Based on the previous results, the process capability analysis of the bit error rate based on known statistical distribution functions has been studied (Leinonen, 2003, 2011).

Process capability indices give an indication of the maturity level of the design even though the process capability indices may over- or underestimate the expected quality level. The maturity levels of multiple designs may be compared to each other, if the calculation of the indices has been done in a similar manner.

Process capability indices are used as a communication tool between different parties during the development process of the device. A different notation for the process capability index may be used in order to distinguish between a process capability index based on a normal distribution assumption and those based on a known or non-normal distribution. One proposal is to use the *C*\*pk notation if the process capability index is based on a non-normal distribution (Leinonen, 2003).

Typically, the parameters studied during the RF system engineering and R&D phases are one-dimensional, and multiradio interoperability may be considered to be one of the rare two-dimensional RF design parameters. Multiradio interoperability in this context is defined purely as a radio interference study, as shown in Figure 4. Multiradio interoperability may be monitored and designed in the manner of a process capability index (Leinonen, 2010a). A new capability index notation MR*C*pk has been selected as a multiradio interoperability index, which may be defined in a manner similar to the process capability index in equation (16), at least for communication purposes. In order to make a full multiradio interoperability system analysis, all potential interference mechanisms should be studied. A wide band noise interference mechanism has been studied with an assumption that the noise level is constant over frequency (Leinonen, 2010a). Typically, there is a frequency dependency in the signal level of the interference signals, and new studies including frequency dependencies should be done.

The effects of statistical process models on normally distributed one- and two-dimensional data have been studied in Chapters 3.2 and 3.4. Unfortunately, most RF parameters are, by nature, non-normally distributed and thus the previous results may not apply directly. More studies will be needed in order to understand how simple statistical process models affect non-normally distributed parameters. If the manufacturing process could be taken into account more easily during the system simulation phase, either block level or component level specifications could potentially be relaxed. If the manufacturing process cannot be modelled easily, then the block level and component level specifications should be done in a safe manner, which will lead to the over-specification of the system. If the system or solution is over-specified, the solution is typically more expensive than an optimised solution.

#### **5. Conclusion**

In high volume manufacturing, a high quality level of the product is essential to maximise the output of the production. The quality level of the product needs to be designed in during the product development phase. The design of the quality begins in the system definition phase of product development by agreeing upon the most important parameters to follow during the development phase of the device. Block level and component level specifications are defined during the system engineering phase. The actual specifications and how they are specified are the main contributors towards the quality level of the design.

The maturity and potential quality level of the design may be monitored with process capability indices during the product development phase. The process capability indices were originally developed as quality tools for manufacturing purposes. Multiple parameters may be compared to each other, since process capability indices are dimensionless, which is an advantage when they are used as communication tools between technology and quality teams.

Component specifications may be defined using previous information regarding the expected variation of the parameter, or based on calculations and simulations of the parameters. If the component specifications are defined based on calculations and simulations only, then the variability of the manufacturing of the component and the variability of the device's production need to be taken into account. A manufacturability margin for the parameters needs to be included, and one method for determining the needed margin is to use statistical process variation models. Statistical process control methods are used in high volume production and they allow the actual production process to vary between the control limits of statistical control charts. The control limits of the control charts depend on the number of samples in a control sample group, and the control limits define the allowable process variation during mass production. A constant mean shift process model has been used in the Six Sigma community to model mass production variation. The effects of a constant process shift model and of normal distribution- and uniform distribution-based process models have been compared with each other and with one-dimensional normally distributed data. Based on the simulation results, the constant shift and the uniform distribution models expect a similar quality level at a process capability index value of 2, while at lower process capability levels the constant shift process estimates the lowest quality level. The normal distribution model of the manufacturing process expects a higher quality level than the other process models with a one-dimensional parameter. New equations for one-dimensional process capability indices with statistical process models, based on calculations and simulations, have been presented in Chapter 3.2.
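As a rough one-dimensional illustration of this comparison (the exact models of Chapter 3.2 are not reproduced here; the shift magnitudes below are assumptions, with the sub-group mean standard deviation set to σ/√5 and the uniform band to ±3σ/√5), the expected out-of-specification rate can be computed for the three process models:

```python
# Expected out-of-spec rate (dpm) of a normal parameter under three long-term
# mean-shift models. The shift scales are assumed for illustration only.
import numpy as np
from scipy.stats import norm

sigma, spec = 1.0, 6.0                    # limits at +/- 6 sigma, i.e. Cp = 2
w = sigma / np.sqrt(5)                    # assumed sub-group mean std deviation

def dpm(shift):
    """Out-of-spec rate (dpm) when the process mean sits at `shift`."""
    return (norm.sf(spec, loc=shift, scale=sigma)
            + norm.cdf(-spec, loc=shift, scale=sigma)) * 1e6

# constant 1.5 sigma shift (Six Sigma convention)
print("constant shift:", dpm(1.5 * sigma))

# mean shift ~ N(0, w^2): average dpm over the shift distribution
d = np.linspace(-6 * w, 6 * w, 2001)
p = norm.pdf(d, scale=w)
p /= p.sum()
print("normal shift:  ", np.sum(p * dpm(d)))

# mean shift ~ U(-3w, 3w): average dpm over the band
print("uniform shift: ", np.mean(dpm(np.linspace(-3 * w, 3 * w, 2001))))
```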

Process capability indices have been defined for multidimensional parameters in a manner analogous to the one-dimensional process capability indices. One of the main differences between one- and two-dimensional process capability index analyses is that the correlation of two-dimensional data should be included in the analysis. Another difference is the definition of the specification limit, which may be rectangular or circular, or else a sub-set of those. A rectangular tolerance area may be considered if the two-dimensional data is uncorrelated and the specifications may be considered to be independent of each other. Otherwise, the tolerance area is considered to be circular. The effects of statistical process models on two-dimensional process capability indices with a correlated normal distribution and a circular tolerance area have been studied. Based on the simulation results, the correlation of the data has a significant effect on the expected quality level. The location and the shape of the data distribution have an additional effect when statistical process models are applied to the data. Easy to use equations which take the statistical process models into account with two-dimensional data cannot be derived, due to the multiple dependencies on the location, shape and correlation of the data distribution.

Most radio performance parameters are one-dimensional and they are not normally distributed, and so the process capability analysis should be carried out with known statistical distributions. A process capability analysis based on a normality assumption may significantly under- or overestimate the expected quality level of the production. The statistical distributions of some RF parameters are known, e.g., the bit error rate, but more work will be needed to define the others. Also, multiradio interoperability may be considered to be a two-dimensional parameter which may be analysed with process capability indices.

#### **6. References**

Uusitalo, A. (2000). *The Characterization of a Long Term Process Deviation for RF Parameters in High Volume Mobile Phone Production*. Engineering Thesis, Polytechnic of Oulu, p. 45, in Finnish

Vizmuller, P. (1998). *Design Centering Using Mu-Sigma Graphs and System Simulation*. Norwood, MA: Artech House, ISBN 0-89006-950-6

Wu, C.-H., Pearn, W. L. & Kotz, S. (2009). An overview of theory and practice on process capability indices for quality assurance. *International Journal of Production Economics*, 117, pp. 338-359


## **Augmented Human Engineering: A Theoretical and Experimental Approach to Human Systems Integration**

Didier Fass *ICN Business School and MOSEL – LORIA, Lorraine University, Nancy, France*

#### **1. Introduction**

This chapter focuses on one of the main issues for augmented human engineering: integrating the *biological user's needs* into its methodology for designing human-artefact systems integration requirements and specifications. To take biological, anatomical and physiological requirements into account, we need a validated theoretical framework. We explain how to ground augmented human engineering on Chauvet's mathematical theory of integrative physiology as a fundamental framework for human systems integration and augmented human design. We propose to validate and assess augmented human domain engineering models and prototypes by experimental neurophysiology.

We present a synthesis of our fundamental and applied research on augmented human engineering, human systems integration and human *in-the-loop* system design and engineering for enhancing human performance, especially for technical gestures in safety-critical system operations such as surgery, astronauts' extra-vehicular activities and aeronautics. For fifteen years, our goal was to research and understand the fundamental theoretical and experimental scientific principles grounding human systems integration, and to develop and validate rules and methodologies for augmented human engineering and reliability.

#### **2. Concepts**

#### **2.1 Human being**

A human being, by its biological nature (bearing in mind its socio-cultural dimensions), cannot be reduced to the properties of a mathematical or physical automaton. Thus, connecting up humans and artefacts is not only a question of technical interaction and interface; it is also a question of integration.

#### **2.2 Human systems integration**

As a technical and managerial concept (Haskins 2010), human systems integration (HSI) is an umbrella term for several areas of "human factors" research and systems engineering that include human performance, technology design, and human-interactive systems interaction (Nasa 2011). Defining a system more broadly than hardware and software refers to human-centred design (Ehrhart & Sage 2003). That issue requires thinking of the human as an element of the system and translating this qualitatively throughout the design, development and testing process (Booher, 2003).

These areas are concerned with the integration of human capabilities and performances, from the individual to the social level, into the design of complex human-machine systems supporting safe, efficient operations; there is also the question of reliability.

Human systems integration involves augmented human design with the objectives of increasing human capabilities and improving human performance<sup>1</sup> (Engelbart 1962) using behavioural technologies at the level of the human-machine system and human-machine symbiosis (Licklider 1960). By using wearable interactive systems, made up of virtual reality and augmented reality technologies or wearable robotics, many applications offer technical gesture assistance, e.g. in aeronautics, human space activities or surgery.

#### **2.3 Technical gesture assistance**

Gesture is a highly integrated neurocognitive behaviour, based on the dynamical organization of multiple physiological functions (Kelso, 2008; de Sperati, 1997). Assisting gestures and enhancing human skills and performance requires coupling sensorimotor functions and organs with technical systems through artificially generated multimodal interactions. Thus, augmented human design has to integrate human factors (anatomy, neurophysiology, behaviour) and assistive cognitive and interactive technologies in a safe and coherent way for extending and enhancing the ecological domain of life and behaviour.

The goal of this type of human *in-the-loop* system design is to create entities that can achieve goals and actions (predetermined) beyond natural human behavioural, physical and intellectual abilities and skills – force, perception, action, awareness, decision…

#### **2.4 Integrative design**

Augmenting cognition and sensorimotor loops with automation and interactive artefacts enhances human capabilities and performance. It extends both the anatomy of the body and the physiology of human behaviour. Designing augmented human beings by using virtual environment technologies requires integrating both artificial structural elements and their structural interactions with the anatomy, and artificial multimodal functional interactions with the physiological functions (Fass, 2006). This requires a fitting organizational design (Nissen & Burton 2010).

Therefore, the scientific and pragmatic questions are: how best to couple and integrate, in a coherent way, a biological system with physical and artifactual systems? How to integrate human and interactive artefacts (more or less immersive and invasive) in a behaviourally coherent way by design? How can augmented human engineering anticipate and validate a technical and organizational design and its dynamics? How can the efficiency of such a design be modelled and assessed? How can HSI and augmented human design be grounded on a validated theory? How can both performance and efficiency be assessed and measured experimentally?

 1 Sensorimotor and cognitive

#### **3. Augmented human domain engineering**

Human-artefact systems are a special kind of *system of systems*. They are made up of two main categories of systems which differ in their nature: their fundamental organization, complexity and behaviour. The first category, the traditional one, includes *technical* or *artifactual* systems, which can be engineered. The second category includes *biological* systems, i.e. the human, which cannot be engineered. Thus, integrating humans and complex technical systems in design means coupling and integrating, in a behaviourally coherent way, a biological system (the human) with a technical and artifactual system. Augmented human engineering needs to model the human body and its behaviour in order to test and validate augmented human reliability and human systems integration (HSI).

#### **3.1 Domain engineering**


According to systems engineering, taking into account user needs in the world of activities and tasks, designing system requirements means finding the system design: its three organizational dimensions of requirements (structural, geometrical and dynamical) and its three view plans of system design specifications: structure or architecture; behaviour, i.e. performance and efficiency; and evolution, i.e. adaptation and resilience capability (Fig. 1).

Fig. 1. Our overall system design conceptual framework: system function results from the integrative organization of different structural elements, shapes and dynamics according to their space and time scales and their specific qualitative and quantitative measurement units.

Thus, system engineering requires both expert skills and validated formal modelling methodologies. To some extent, the main difficulty is to build a system model from a collection of informal and sometimes imprecise, redundant and unstructured descriptions of the domain of expertise. A formal model can be relevant for highlighting a hidden structure according to an intended function and its dynamics, or for applying operations or transformations to the system itself.

From domain engineering to requirements, our approach is situated inside Dines Bjørner's framework (Bjørner 2006a, 2006b, 2009), based on the triptych D, S -> R, where D is the domain of the problem and the requirements R are satisfied by the relation ->, which is intended to mean *entailment*; S is a kind of model of our system built or expressed from D. While the triptych is able to express, in a synthetic manner, a situation related to the problem domain, a system model and the requirements, it remains at a global level and can thus be applied in different problem spaces and instances.

The domain provides a way to express properties and facts of the environment of the system under construction. The system model S is intended to summarize the actions and properties of the system, and it is a link between the requirements and the final resulting system. The relation -> is conceptualized as a deduction-based relation which can be defined in a formal logical system and which helps to derive requirements from the domain and the model. This relation is sometimes called entailment and is used to ground the global framework. When one considers an application, one should define the application domain from the analysis, and this may integrate elements of the world. The triptych helps to define a global framework and makes it possible to use tools for assessing the consistency of the relation between D, S and R, since we aim to use proof techniques to ensure the soundness of the relation.
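To make the roles in the triptych concrete, here is a minimal, purely illustrative Python sketch (not part of Bjørner's formal framework; all class and function names are invented, and a boolean check stands in for a genuine proof obligation):

```python
# Illustrative data structures for the D, S -> R triptych: a Domain of facts,
# a SystemModel built from it, and Requirements checked against both.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Domain:
    """D: properties and facts of the environment of the system."""
    facts: List[str] = field(default_factory=list)

@dataclass
class SystemModel:
    """S: actions and properties of the system, built or expressed from D."""
    properties: List[str] = field(default_factory=list)

@dataclass
class Requirement:
    """R: one requirement; the callable stands in for formal entailment."""
    name: str
    satisfied_by: Callable[[Domain, SystemModel], bool]

def entails(d: Domain, s: SystemModel, reqs: List[Requirement]) -> bool:
    """D, S -> R: every requirement must follow from domain and model."""
    return all(r.satisfied_by(d, s) for r in reqs)

# Hypothetical usage in the spirit of this chapter
d = Domain(facts=["operator wears a head-mounted display"])
s = SystemModel(properties=["visual stimuli coherent with vestibular input"])
r = Requirement("coherence",
                lambda d, s: any("coherent" in p for p in s.properties))
print(entails(d, s, [r]))  # True
```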

#### **3.2 Human system integration**

The major benefits of using augmented human modelling in design include reducing the need for physical development; reducing design costs by enabling the design team to prototype and test a design more rapidly; avoiding costly design 'fixes' later in the programme by considering *human factors* requirements early in the design process; and improving customer communications at every step of product development by using compelling models and simulations. Thus, designing an artefact consists of organizing a coherent relation between structural elements and functions in a culture and context of usage. Modelling human beings consists of taking anatomical and physiological elements into account in the same model; that is, designing functions by organizing a hierarchy of structural elements and their functions. Such models should be used to create models of individuals rather than using aggregated summaries of isolated functional or anthropometric variables that are more difficult for designers to use. Therefore, augmented human modelling in design requires an integrative approach according to the three necessities we defined for human systems integration (Fass 2007).

#### **3.3 Human system integration domain**

Since technical systems are mathematically grounded and based on physical principles, human in-the-loop systems (HITLS) need to be considered in mathematical terms. There are several necessities for making HSI and the augmented human reliable (Fass & Lieber 2009).


Consequently, designing augmented human following human system integration is to organize hierarchically and dynamically human and artefact coupling. This requires a new domain engineering approach for requirements and specification based on biological user's needs and functions.

#### **3.4 Augmented human engineering**


Dealing with augmented human engineering means being able to situate and limit its domain in order to specify the whole system (the integrated biological and artifactual system) in accordance with the high-level and global requirements:


#### **4. Augmented human's needs**

Who would even think about separating a living goldfish from its water and its fishbowl?

#### **4.1 Epistemological needs**

Converging technologies for improving human performance (Roco & Bainbridge 2002), the *augmented human*, need a *new epistemological and theoretical* approach to the nature of knowledge and cognition, considered as an integrated biological, anatomical and physiological process based on a hierarchical structural and functional organization (Fass 2007). Current models for human-machine interaction or human-machine integration are based on symbolic or computational cognitive sciences and related disciplines. Even though they use experimental and clinical data, they are still based on logical, linguistic and computational interpretative conceptual frameworks of human nature, where postulates or axiomatics replace predictive theory. This is essential for robust modelling and for the design of future engineering rules for HSI to enhance human capabilities and performance. *Augmented human* design needs an integrative theory that takes into account the specificity of the biological organization of living systems, according to the principles of physics, and a coherent way to organize and integrate structural and functional artificial elements (structural elements and functional interactions). Consequently, virtual environment design for the *augmented human* involves a shift from a metaphorical, scenario-based design, grounded on *metaphysical* models and rules of interaction and cognition, to a predictive science and engineering of interaction and integration. We propose to ground HSI and *augmented human* design on an integrative theory of the human being and its principles.

#### **4.2 Chauvet's mathematical theory of integrative physiology (MTIP) needs**

The mathematical theory of integrative physiology (MTIP), developed by Gilbert Chauvet (Chauvet 1993a; Chauvet 1993b; Chauvet 1993c), examines the hierarchical organization of structures (i.e., anatomy) and functions (i.e., physiology) of a living system as well as its behaviour. MTIP introduces the principles of a functional hierarchy based on structural organization within space limits, functional organization within time limits, and structural units that are the anatomical elements in the physical space. This abstract description of a biological system is represented in Fig. 2. MTIP copes with the problem of structural discontinuity by introducing functional interaction, for physiological function coupling, and structural interaction *Ψ* from a structure-source *s* into a structure-sink *S*, as a coupling between the physiological functions supported by these structures.

Fig. 2. Ω - 3D representation of a biological system based on Chauvet's MTIP.

Chauvet chose a possible representation related to hierarchical structural constraints, which involves specific biological concepts. MTIP consists of a representation (a set of non-local interactions), an organizing principle (the stabilizing auto-association principle, PAAS) and a hypothesis: any biological system may be described as a set of functional interactions that gives rise to two faces of the biological system, the potential of organization (O-FBS) and the dynamics in the structural organization, making an n-level field theory (D-FBS). Both are based on geometrical/topological parameters, and they are coupled via a geometry/topology that may vary with time and space (the state variables of the system) during the development and adult phases. The structures are defined by the space scale *Z*, hence the structural hierarchy; the functions are defined by the time scale *Y*, hence the functional hierarchy.


MTIP shows three relevant concepts for grounding human system integration:


Therefore, augmented human engineering needs to design artificial functional interactions (short artificial sensorimotor functions) which generate a maximum of stability for human-artefact systems in operational conditions. Thereby, MTIP provides an abstract framework for designing human-artefact systems and for designing organizations for dynamic fit (Nissen & Burton 2011). These are the reasons why MTIP is a relevant candidate theory for grounding augmented human design.

### **5. Rationale for a model of** *augmented human*

As claimed by Fass (Fass 2006), since artifactual systems are mathematically founded and based on physical principles, HSI needs to be thought of in mathematical terms. In addition, there are several main requirement categories for making HSI and augmented human design safe and efficient. They address the technology (the virtual environment), sensorimotor integration, and coherence.

#### **Requirement 1: Virtual environment is an artifactual knowledge-based environment**

As an environment which is partially or totally based on computer-generated sensory inputs, a virtual environment is an artificial multimodal knowledge-based environment. Virtual reality and augmented reality, the best-known virtual environment technologies, are obviously the tools for augmented human design and for the development of human in-the-loop systems. Knowledge is gathered from the interactions and dynamics of the individual-environment complex. It is an evolutionary, adaptive and integrative physiological process, fundamentally linked to the physiological functions with respect to emotions, memory, perception and action. Thus, designing an artifactual or virtual environment, a sensorimotor knowledge-based environment, consists of making the biological individual and the artifactual physical system consistent. This requires a neurophysiological approach, both for knowledge modelling and for human in-the-loop design.

#### **Requirement 2: Sensorimotor integration and motor control ground behaviour and skills**

Humans use multimodal sensorimotor stimuli and synergies for interacting with their environment, either natural or artificial (vision, vestibular stimuli, proprioception, hearing, touch, taste…) (Sporns & Edelman 1998). When an individual is in a situation of immersive interaction, wearing a head-mounted display and looking at a three-dimensional computer-generated environment, his or her sensorial system is subjected to an unusual pattern of stimuli. This dynamical pattern may largely influence the balance, the posture control (Malnoy et al. 1998), the spatial cognition and the spatial motor control of the individual. Moreover, the coherence between artificial stimulation and natural perceptual input is essential for the perception of space and for action within it. Only when the artificial interaction affords physiological processes is coherence achieved.

#### **Requirement 3: Coherence and HSI ensure the human-artefact system performance, efficiency and domain of stability**

If this coherence is absent, perceptual and motor disturbances appear, as well as illusions, vection or vagal reflexes. These illusions are solutions built by the brain in response to the inconsistency between outer sensorial stimuli and physiological processes. Therefore, the cognitive and sensorimotor abilities of the person may be disturbed if the design of the artificial environment does not take into account the constraints imposed by human sensory and motor integrative physiology. The complexity of physiological phenomena arises from the fact that, unlike ordinary physical systems, the functioning of a biological system depends on the coordinated action of each of its constitutive elements (Chauvet 2002). This is why the design of an artificial environment as an augmented biotic system calls for an integrative approach.

Integrative design strictly assumes that each function is a part of a continuum of integrated hierarchical levels of structural organization and functional organization as described above within MTIP. Thus, the geometrical organization of the virtual environment structure, the physical structure of interfaces and the generated patterns of artificial stimulations, condition the dynamics of hierarchical and functional integration. Functional interactions, which are products or signals emanating from a structural unit acting at a distance on another structural unit, are the fundamental elements of this dynamic.

As a consequence, the proposed model inside Chauvet's MTIP assumes the existence of functional interactions between the artificial and the physiological sensorimotor systems. This hypothesis has been tested through the experiments described in the following section. The model is formally described within the framework of MTIP in Figure 3, the 3D representation of the integrated augmented human design. The human (Ω) (Fig. 2) is represented as the combination of the hierarchical structural (*z*) and functional (*Y*) organizations. The X-axis corresponds to the ordinary physical or Cartesian space. Each physiological function ψ is represented in the *xψy* plane by a set of structural units hierarchically organized according to space scales. Two organizational levels are shown: *ψ*<sup>1</sup> and *ψ*<sup>2</sup>. The different time scales are on the *y*-axis, while the space scales, which characterize the structure of the system, are on the *z*-axis. The role of space and time clearly appears. *Ψ*1*ij* is the non-local and non-symmetric functional interaction.


Units at the upper levels of the physiological system represent the whole or a part of the sensorial and motor organs. Augmented human (Ω') design (Fig. 3) consists of creating an artificially extended sensorimotor loop by coupling two artifactual structural units, *I*' and *J*'. Their integration into the physiological system is achieved by the functional interactions (i.e. sensorimotor) they generate. From sensor outputs to effector inputs, the synchronized artificial system or process *S*' controls and adapts the integration of the artificially created functional interactions into the dynamics of the global and coherent system.

Fig. 3. Ω' – a representation of the *augmented human*: an artifactual loop coupling the biological system with an artifactual system into an artificial sensorimotor loop (Fass 2007).

This is our theoretical paradigm for augmented human modelling.

According to MTIP, we highlight three grounding principles for augmented human engineering and human-artefact system design<sup>2</sup>:


<sup>2</sup> These theoretical principles of human system integration are consistent with the ten organizational HSI principles defined by Harold Booher (Booher 2003) and with the three HSI design principles defined by Hobbs et al. (Hobbs et al. 2008).

### **6. Experiments**

The goals of this research are to search for the technical and sensorimotor primitives of augmented human design for gesture assistance by a wearable virtual environment, using virtual reality and augmented reality technologies, for human space activities, aeronautical maintenance and surgery. As a behavioural assessment adapted to a virtual environment, we have chosen a neurophysiological method used in motor control research to study the role of the body in human spatial orientation (Gurfinkel et al. 1993) and the representation of the peri-personal space in humans (Ghafouri & Lestienne 2006).

Fig. 4. Examples of different structural and functional primitives for virtual environment design.

#### **6.1 Paradigm**

The following method was developed for expert system engineering (knowledge-based systems) and to explore the nature of knowledge as a behavioural property of the coupling generated in the dynamics of the individual-environment interaction, whether natural or artificial. We use gestures as a sensorimotor maieutic.

The gesture-based method for virtual environment design and human system integration assessment is a behavioural tool inspired by Chauvet's theoretical framework, i.e.:

i. an integrated marker for the dynamical approach of augmented human design, and for the search for interaction primitives and the validation of organization principles; and
ii. an integrated marker for a dynamical organization of virtual environment integrative design.


In designing an artificial environment, a human *in-the-loop* system consists of organizing the linkage of multimodal biological structures, sensorimotor elements at the hierarchical levels of the living body, with the artificial interactive elements of the system, devices and patterns of stimulation. There exists a "transport" of functional interaction in the augmented space of both physiological and artifactual units, and thus a *function* may be viewed as the final result of a set of functional interactions that are hierarchically and functionally organized between the artificial and biological systems.

#### **6.2 Material and method**


To find the main classes of virtual environments and to highlight the dynamical principles of hierarchical organization of human systems integration and virtual environment design for assisting gesture, we set up a protocol following a complex and incremental design (Fig. 4). The experiments were performed in the laboratory, and a prototype was tested during a French National Space Centre (CNES) parabolic flight campaign.

*Devices*: I-Glasses® head-mounted display (immersive or see-through), Polhemus Fastrak® electromagnetic motion tracking system, and a workstation with specific software designed for managing and generating the visual virtual environment in real time.

*Protocol*: Our protocol is based on graphical gesture analysis, more specifically on the drawing of ellipses within 3D space. It is inspired by the neurophysiology of movement [20]. By selecting this experimental paradigm, the movement is considered as the expression of a cognitive process *per se*: the integrated expression of the sensorimotor three-dimensional space.

*In the laboratory*, ellipses drawn without a virtual environment form the control experiment. It consists of two main situations, eyes open and eyes closed, with the ellipse touched or guided by a real wooden model, or memorized and drawn without a model. To highlight the dynamical principles of organization for assisting gestures, we set up a protocol following a complex and incremental VE design, allowing intuitive learning of both the task and the use of the virtual environment. Ten volunteers (7 men and 3 women, 25 to 35 years old) were asked to perform graphical gestures (drawing ellipses of eccentricity 0.87 – major axis 40 cm and minor axis 20 cm) in the three anatomical planes of reference for each step of the incremental design (Fig. 5).
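As a quick consistency check on these task parameters (a worked example added here, not part of the original protocol text): with a major axis of 40 cm (semi-major axis a = 20 cm) and a minor axis of 20 cm (semi-minor axis b = 10 cm), the standard formula for the eccentricity of an ellipse gives

$$e = \sqrt{1 - \frac{b^2}{a^2}} = \sqrt{1 - \frac{10^2}{20^2}} = \sqrt{0.75} \approx 0.87,$$

which matches the stated value. The parabolic-flight ellipses below (30 cm by 15 cm) have the same axis ratio and therefore the same eccentricity.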

The first step of the protocol consisted of drawing ellipses while wearing a turned-off HMD, to study the influence of HMD design and intrusiveness on sensorimotor integration and motor control. The last step of the virtual reality artefact combined allocentric and egocentric prototypic structural elements of the artificial visual space, models of the ellipses and of their planes of movement, and visual feedback of the movement.

*Parabolic flights – hypergravity and weightlessness*: to test our prototype (Figs. 6, 7 and 8), three right-handed trained volunteers were asked to draw ellipses (major axis 30 cm and minor axis 15 cm) in two orientations of the three anatomical reference planes: vertical sagittal (VS) and transversal horizontal (TH). These drawings were performed continuously and recorded during both the 1.8g ascents and the 0g parabola itself, with feet in a foot-strap (F) or in free floating (FF), in two main situations: free gesture, and assisted gesture wearing a visual virtual environment. The visual virtual environment was generated in immersion (RV) or in augmented reality (RA).

*Data analysis*: sixteen gesture-related variables are calculated from the data produced during the parabolas and recorded from the sensor worn on the tip of the index finger of the working hand: kinematics (number of ellipses, average velocity, covariation Vt/Rt, amplitude), position (global position, position / x axis, position / y axis, position / z axis), orientation (global orientation, orientation / sagittal plane, orientation / frontal plane, orientation / horizontal plane) and shape (mean area, eccentricity, major axis variation, minor axis variation) – indexes in Annex 1.
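To make the derivation of these variables concrete, the following is a minimal sketch (in Python, with hypothetical names; it is not the authors' actual processing chain) of how a few of the sixteen variables could be computed from the timestamped fingertip positions returned by the motion tracker:

```python
import numpy as np

def gesture_variables(t, xyz):
    """Sketch: derive a few of the sixteen gesture variables (Annex 1)
    from raw tracking samples.

    t   : (n,) timestamps in seconds
    xyz : (n, 3) fingertip positions in cm (motion tracker output)
    """
    step = np.diff(xyz, axis=0)                # displacement between samples
    path = np.linalg.norm(step, axis=1).sum()  # total path length (cm)
    avg_velocity = path / (t[-1] - t[0])       # B: average velocity (cm/s)
    amplitude = np.ptp(xyz, axis=0).max()      # D: amplitude (cm)
    global_position = xyz.mean(axis=0)         # F-I: mean position per axis
    return {"avg_velocity": avg_velocity,
            "amplitude": amplitude,
            "global_position": global_position}
```

Shape variables such as eccentricity (N) would additionally require fitting an ellipse to each recorded cycle, and the orientation variables (J-M) the normal of the fitted plane of movement.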

Fig. 5. Graphical gestures of ellipse drawing in 3D space are performed and analysed in different, more or less complex, configurations of immersive virtual-environment-assisted ellipse drawing: A – VS ellipses with a neutral, coloured background; B – VS ellipses with anthropomorphic visual feedback of movement (artificial hand); C – TF ellipses with a model of the ellipse inserted in its plane of movement, without visual feedback of movement; D – TH ellipses with abstract visual feedback of movement (ball).

Fig. 6. Drawing of VS (A, B) and TH (C, D) ellipses with gesture assistance in hypergravity (1.8g – A, C) and microgravity (0g – B, D).


Fig. 7. Weightlessness (0g): example of ellipse drawing in vertical sagittal orientation without assistance. We observe a total loss of shape and orientation accuracy.

Fig. 8. Weightlessness (0g): example of ellipse drawing in vertical sagittal orientation with assistance in the same orientation. Even if the shape is not precise, the orientation of the movement is very accurate and stable (taking into account the magnetic field distortion), despite the loss of the gravitational referential and the vestibular perturbations. Artificial visuomotor functional interaction coupling by the virtual environment enhances stability, in accordance with Chauvet's MTIP theory and its principle of auto-associative stabilization.

*Statistical analysis*: We use a method of multidimensional statistical analysis. Principal component analysis and hierarchical classification are computed with SPAD 4.0® to show the differential effects of hypergravity and microgravity on the graphical gestures of each subject, with and without the system. A second goal of this exploratory statistics is to assess the design of our prototype and the dynamics of human virtual environment integration in weightlessness and on Earth.
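SPAD is a proprietary statistics package; as a rough open-source equivalent of the analysis pipeline just described (a sketch only, assuming a statuses-by-variables matrix holding the sixteen gesture variables of Annex 1; not the authors' actual computation):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# X: one row per experimental status, one column per gesture variable (A-P).
rng = np.random.default_rng(0)
X = rng.random((40, 16))                 # placeholder data for the sketch

Z = StandardScaler().fit_transform(X)    # centre and scale each variable
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)                # F1-F2 coordinates of each status
print(pca.explained_variance_ratio_)     # share of inertia carried by F1, F2

# Correlations between the original variables and the components: the
# coordinates of the variables on the correlation circle (cf. Fig. 9).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# Hierarchical classification of the statuses in the factorial plane (cf. Fig. 10).
clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
```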

*Results*: The variable correlation circle (Fig. 9) shows that the first principal component (F1) is negatively correlated with the position, kinematics and shape variables, especially with the global position (F), the average velocity (B) and the mean area (E). The second principal component (F2) is negatively correlated with the orientation variables M, J and K, whereas the variation of orientation in relation to the sagittal plane (K) is fairly correlated with F1. Therefore, the further down and to the left an average individual is placed on the F1-F2 plane, the larger their global orientation and their orientation in relation to both the frontal and horizontal planes (Annex 1).

Principal component analysis, F1-F2 factorial planes (Fig. 10): Axis 1 (42.70%) separates two sets of experimental statuses. The first set contains the control statuses (head free, touched ellipse, eyes open or closed, visual guidance) and the virtual-reality-assisted gestures with visual feedback (ball or hand) and referential frames of action (plane of movement or ellipse model). The second set contains the individuals without an ellipse model: head free, eyes open or closed and memorized, HMD off, no gesture feedback and no allocentric or egocentric referential frames. These positions of the individuals on axis 1 reveal the importance of visuo-haptic interactions for gesture in real or virtual environments. Inside the first set, there are differences between really touched and "virtually touched" ellipse situations. The visuo-haptic class contains two sub-classes (visuo-tactile and visuo-proprioceptive).

Fig. 9. Variables correlation circle.

Axis 2 (19.01%) shows differences as a function of the orientation of the plane of movement. The distortion of the gesture's spatial orientation is greater without visuo-haptic inputs, even with spatial frames of reference and models of action (ellipse model and plane of movement). These positions of the individuals on axis 2 reveal the importance of the gesture's spatial orientation. Without visuo-haptic elements, the sagittal-plane ellipse drawing situations are nearest to the centre of gravity of the factorial plane. Frontal and horizontal orientations influence motor behavior with contrary effects, the gesture distortion being greater in the horizontal plane. Axis 2 also shows a significant influence of HMD configurations and of the representation of gesture feedback. There are functional semiotic differences between the ball and the virtual hand, with enhanced functional differences in the absence of visuo-haptic elements. Four statuses are noticeable: 88A, 172a and 175a, without gesture feedback, induce behavior similar to situations with visuo-haptic interactions; 39f, drawing ellipses in the horizontal plane while wearing the turned-off immersive I-Glasses HMD, induces the greatest distortion in motor control.

The multidimensional statistical analysis (Figs. 9 and 10) confirms the existence of structural and dynamical primitives of human system integration and virtual environment design for assisting gestures: the *a priori* main classes of virtual environment organizational elements. Their organizational and functional properties - the way real and artificial sensorimotor functions are coupled - have a significant influence on the behavior of the human *in-the-loop* system. By enhancing and interacting with the sensorimotor loops, they are able to modify (disturbing or improving) motor control, the gesture and, as a consequence, the global quality of human behavior. According to these experimental results, the interactions generated by the artefacts may be identified as functional interactions.

Thus we are able to show the differential effects of each element of the incremental VE design, and to assess the global design and dynamics of the human system integration. These experimental results will ground VE design modelling according to the hierarchical organization of theoretical integrative physiology.


Fig. 10. Principal component analysis, F1-F2 factorial planes: the outcome analysis of the organization of virtual environment elements is done by observing the position of the statistical individuals (indexes in Annexes 2 and 3) on the F1-F2 plane (representing 61.71% of the total inertia).

#### **7. Conclusion and perspective**

Designing a human-artefact system consists of organizing the linkage of multimodal biological structures, sensorimotor elements at the hierarchical level of the living body, with the artificial interactive elements of the system, devices and patterns of stimulation. There exists a "transport" of functional interaction in the augmented space of both physiological and artifactual units, and thus a *function* may be viewed as the final result of a set of functional interactions that are hierarchically and functionally organized between the artificial and biological system elements.

*Structures or Architecture*: the spatial organization of the structural elements, natural and artificial, coupled by non-local and non-symmetric functional interactions according to PAAS. It specifies the function(s) of the integrated system; different organizations specify different architectures and their specific functions.

*Behaviour*: the temporal organisation of the patterns of artificial functional interactions conditions and specifies the dynamic fit of the augmented sensorimotor loops. It determines augmented human behaviour.

*Evolution*: the spatiotemporal organization of the structural elements, and of the functional interactions they produce and process, specifies the functional stability of the human-artefact system according to the *potential of functional organization* principle during the *life of the augmented human*.

Contingent on ecology and economy, architecture, behaviour and evolution, as specified, define and limit the *life domain of the augmented human*.

MTIP is thus applicable to different space and time levels of integration in the physical space of the body and in the natural or artificial behavioural environment; from the molecular to the socio-technical level; from drug design to wearable robotics, and to life- and safety-critical systems design.

Future work should address questions related to the development of formal models (Cansell & Méry 2008; Méry & Singh 2010) for augmented human engineering. New questions arise when dealing with deontic or ethical issues that might be handled by an augmented human, together with classical formal modelling languages based on deontic or modal logics.

Industrial, scientific and pragmatic challenges rely on designing intelligent and interactive artifactual systems relating machines and human beings. This relationship must be aware of its human nature and its body: its anatomy and physiology. The man-machine interface becomes an integrated continuation of the body between perception-action and the sensory and motor organs. By integrating the human body and behaviours, the automaton is embodied, but this embodiment is grounded on the user's body; it enhances capabilities and performances. Efficiency and reliability depend on respecting these fundamental necessities.

#### **8. Acknowledgment**

Our special thanks to Professor Dominique MÉRY, head of MOSEL LORIA, University of Lorraine.

#### **9. Annexes**


#### **9.1 Annex 1: Calculated variables**

| Index | Variable |
|-------|----------|
| A | Number of ellipses |
| B | Average velocity (cm/s) |
| C | Covariation Vt/Rt |
| D | Amplitude (cm) |
| E | Mean area (cm²) |
| F | Global position |
| G | Position / x axis (cm) |
| H | Position / y axis (cm) |
| I | Position / z axis (cm) |
| J | Global orientation |
| K | Orientation / sagittal plane (°) |
| L | Orientation / frontal plane (°) |
| M | Orientation / horizontal plane (°) |
| N | Eccentricity |
| O | Major axis variation |
| P | Minor axis variation |

Table 1.


#### **9.2 Annex 2: Training and control experimental status indexation**

Table 2.


#### **9.3 Annex 3: Assisted graphical gesture experimental status**



| Gesture feedback | Experimental statuses | Plane |
|------------------|-----------------------|-------|
| Ellipse + Allo + Ego | 94a, 175a, 211a | VS |
| " | 95d, 176d, 212d | TF |
| " | 96f, 177f, 213f | TH |
| Ball simple | 97a, 178a, 214a | VS |
| " | 98d, 179d, 215d | TF |
| " | 99f, 180f, 216f | TH |
| Ellipse + all references | 148a, 181a, 217a | VS |
| " | 149d, 182d, 218d | TF |
| " | 150f, 183f, 219f | TH |
| Hand simple | 103a, 184a, 220a | VS |
| " | 104d, 185d, 221d | TF |
| " | 105f, 186f, 222f | TH |
| Ellipse + all references | 151a, 187a, 223a | VS |
| " | 152d, 188d, 224d | TF |
| " | 153f, 189f, 225f | TH |
| Ellipse and hand (vision and touch) | 154a, 190a, 226a | VS |
| " | 155d, 191d, 227d | TF |
| " | 156f, 192f, 228f | TH |

Table 3.

#### **10. References**

Bjorner, D. (2006a). *Software Engineering 1: Abstraction and Modelling*. Theoretical Computer Science, An EATCS Series, Springer. ISBN: 978-3-540-21149-5.

Bjorner, D. (2006b). *Software Engineering 2: Specification of Systems and Languages*. Theoretical Computer Science, An EATCS Series, Springer. ISBN: 978-3-540-21150-1.

Bjorner, D. (2009). *Domain Engineering: Technology Management, Research and Engineering*. COE Research Monograph Series, Vol. 4, JAIST.

Booher, H.R. (2003). Introduction: Human Systems Integration, In: *Handbook of Human Systems Integration*, Booher, H.R. (Ed.), pp. 1-30, Wiley, ISBN: 0-471-02053-2.

Cansell, D. & Méry, D. (2008). The Event-B Modelling Method: Concepts and Case Studies, In: *Logics of Specification Languages*, Bjørner, D. & Henson, M.C. (Eds.), Monographs in Theoretical Computer Science, Springer, pp. 47-152.

Chauvet, G.A. (1993). Hierarchical functional organization of formal biological systems: a dynamical approach. I. An increase of complexity by self-association increases the domain of stability of a biological system. *Phil Trans Roy Soc London B*, Vol. 339 (March 1993), pp. 425-444, ISSN: 1471-2970.

Hobbs, A.N., Adelstein, B.D., O'Hara, J. & Null, C.H. (2008). Three principles of human-system integration, *Proceedings of the 8th Australian Aviation Psychology Symposium*, Sydney, Australia, April 8-11, 2008.

Kelso, J.A. (2008). An Essay on Understanding the Mind: The A.S. Iberall Lecture, *Ecological Psychology*, Vol. 20, No. 2 (April 2008), pp. 180-208, ISSN: 1040-7413.

Licklider, J.C.R. (1960). Man-Computer Symbiosis, *IRE Transactions on Human Factors in Electronics*, Vol. HFE-1 (March 1960), pp. 4-11, ISSN: 0096-249X.

Malnoy, F., Fass, D. & Lestienne, F. (1998). Vection and postural activity: application to virtual environment design, *La lettre de l'IA*, Vol. 134-136, pp. 121-124.

Méry, D. & Singh, N.K. (2010). Trustable Formal Specification for Software Certification, In: Margaria, T. & Steffen, B. (Eds.), ISoLA 2010, Part II, LNCS 6416, pp. 312-326, ISSN: 0302-9743.

NASA - Human System Integration Division. What is human system integration?, September 2011, http://humansystems.arc.nasa.gov

Nissen, M.E. & Burton, R.M. (2011). Designing organizations for dynamic fit: system stability, manoeuvrability, and opportunity loss, *IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans*, Vol. 41, No. 3, pp. 418-433, ISSN: 1083-4427.

Roco, M.C. & Bainbridge, W.S. (2002). Converging technologies for improving human performance. National Science Foundation, June 2002, www.wtec.org/ConvergingTechnologies/Report/NBIC_report.pdf

Sporns, O. & Edelman, G. (1998). Bernstein's dynamic view of the brain: the current problems of modern neurophysiology (1945), *Motor Control*, Vol. 2 (October 1998), pp. 283-305, ISSN: 1087-1640.

Viviani, P., Burkhard, P.R., Chiuvé, S.C., Corradi-Dell'Acqua, C. & Vindras, P. (2009). Velocity control in Parkinson's disease: a quantitative analysis of isochrony in scribbling movements, *Exp Brain Res*, Vol. 194, No. 2 (April 2009), pp. 259-283. Epub 2009 Jan 20. Erratum in: *Exp Brain Res*, 2009 Apr; 194(2):285.


## **A System Engineering Approach to e-Infrastructure**

Marcel J. Simonette and Edison Spina *University of São Paulo, Escola Politécnica, Computer and Digital Systems Department, Brazil* 

#### **1. Introduction**


Electronic infrastructures (e-Infrastructures) are the basic resources used by Information and Communication Technologies. These resources are heterogeneous networks which together constitute a large computing and storage power, allowing resources, facilities and services to be provided for the creation of systems in which communication and business operations are almost immediate, with implications for business organization, task management and human relations, forming a kind of patchwork of technologies, people and social institutions.

e-Infrastructures are present in several areas of knowledge, and they are helping the competitiveness of economies and societies. However, in order to continue with this paradigm, e-Infrastructures must be used in a sustainable and continuous way, respecting the humans and the social institutions that ultimately use them, demand their development and fund them.

This work presents an approach to deal with the interactions between e-Infrastructure technologies, humans and social institutions, ensuring that the emergent properties of this system may be synthesized, engaging the right system parts in the right way to create a unified whole, greater than the sum of its parts. The social components of this system have needs. The answers to these needs must not be associated with the old engineering philosophy of "giving the customers what they want", as technology alone does not have a purpose; it is only a technological artifact. Technology has a purpose only when one or more humans use it to perform a task. This human presence in an e-Infrastructure System makes it a complex system, because humans are diverse - multicultural, multigenerational and multiskilled. This diversity can lead to differences between the expected (planned) and the actual System behavior, and this variation is called complexity in this study.

Soft System Methods emerged as a way of addressing complex and fuzzy problems, the objectives of which may be uncertain. Soft methods are aimed at systems in which humans and social institutions are present. These methods have an underlying concept and theory of systems, with which the Systems Engineering approach can focus on solving the customer's problem and provide everything the customer needs, not only what has been required (Hitchins, 2007).

e-Infrastructure design should take a holistic approach, seeking steps that ensure functional and failsafe systems, respecting the human and social institution dimensions. This chapter is about Systems Engineering in the design of e-Infrastructure Systems, using Soft System Methods to develop a Systemic Socio-technical approach, crucial in order to identify the correct quality factors and expectations of the social infrastructure in an e-Infrastructure. The following sections, Dealing with Complexity and e-Infrastructure as a Socio-technical System, introduce background information related to Systems Engineering and Socio-technical Systems. Next, the Soft System Method Approach section is about the design process of systems in which humans and social institutions are present; in this section, the Consensual Methods are highlighted, and a perspective on method selection is presented. Next, this chapter presents a Case Study, the design of an e-Infrastructure to be used by ALCUE Units of the Vertebralcue Project, from the ALFA III Program of the European Commission. A Conclusion section is followed by Acknowledgment and References.

#### **2. Dealing with complexity**

Problems arise in many ways. Several problems are complex, difficult to understand and analyze; problems the solution of which is often only a "good enough" response, based on previous experience, common sense and subjective judgment. Sometimes, the response to this kind of problem is just a change in the problem domain, so that the problem disappears.

Addressing problems is part of human nature. Humans have already faced numerous problems in history and, especially after the Scientific Revolution, the approach adopted to deal with problems has been to divide them into smaller parts, prioritizing and addressing the parts thought to be the most important first. Unfortunately, sometimes this approach fails, especially when it is necessary to deal with multiple aspects of a problem at the same time. When one aspect is prioritized, either it is not possible to have an understanding of the emergent properties that may exist, or the problem can change in nature, emerging with another format. Neither scenario allows the identification of the complexity existing in the original problematic situation. Systems Engineers need to deal with complexity, identifying the interrelationships that exist in problematic situations, especially those related to human demands.

#### **3. e-infrastructure as socio-technical system**

The operation of e-Infrastructures depends both on the technology involved (developed by several engineering disciplines) and on the interfaces with humans and social institutions (social interfaces); i.e., the operation depends on technological and social infrastructures. People, social institutions and technology result in a Socio-technical System, which has a social infrastructure and a technological infrastructure (Hitchins, 2007; Sommerville, 2007).

Although Traditional Engineering methods, with their reductionist approach, successfully address technological components and Human Factors (Chapanis, 1996; Nemeth, 2004; Sadom, 2004), these methods have difficulties in the treatment of the social infrastructure of e-Infrastructure Systems: they address people and social institutions as if they were only part of a context, without directly belonging to the System, treating the human and social dimensions as constants or, sometimes, ignoring them (Bryl et al., 2009; Fiadeiro, 2008; Hollnagel & Woods, 2005; Nissenbaum, 2001; Ottens et al., 2006).


The social infrastructure actors of an e-Infrastructure are more than system components or a part of the context; they want to optimize their decisions, considering their own subsystems, purposes and interests (Houwing et al., 2006).

#### **4. Soft system method approach**

There are several Systems Engineering approaches to address a solution to a problem. Nevertheless, Hitchins (2007) argues that the approach that makes use of Soft System Methods is the one that investigates the problem to be treated, looking for practical experiences and interactions with the problematic situation, and trying to develop an understanding of the nature of the problem symptoms and to propose solutions.

The use of Soft System Methods - a Soft Systems Approach - both allows the System Engineer to understand the problem domain and helps him with the identification of the social and human dimensions present in it. The former is because the activity of understanding the problem domain is essentially an activity whose components are human activities, and the latter because there is an intrinsic complexity in accurately identifying the human and social dimensions throughout the System's life.

The approach used to go beyond Human Factors and deal with the human dimensions is the Soft System Approach with an evolutionary strategy. This approach deals with the interaction between Reality and Thought and the interaction between Problem and Solution; it is represented in Figure 1 and was proposed by Soares (1986) as a way to understand, design and implement solutions to a problematic situation.

Fig. 1. Representation of the Evolutionary Spiral Approach.

From the two interactions - Reality x Thought and Problem x Solution - there are four actions that generate a cycle to treat a problem. These actions are: (i) *Understanding*: the System Engineer develops an understanding, an abstract representation of the real problem; (ii) *Design*: the System Engineer creates a response that satisfies the Problem in the Thought dimension; (iii) *Implementation*: the construction of the response to the problem in terms of Reality; (iv) *Use*: the set-up of the response to the Problem in the environment of the Problem.

The set-up of a response to a Problem may cause changes in Reality, with scenarios emerging that were not previously determined, giving rise to new demands and a redefinition of the Problem. This sequence of problem treatments leads to an Evolutionary Spiral, as in Figure 1.

However, differently from Soares, the authors of this chapter consider a Solution not only as a response to a problem, but also as an overcoming of restrictions, an improvement of an existing Reality through actions to treat the problematic situation. A Solution is an indication of an improvement, a response that satisfies, but does not always solve, the problem, i.e., the response to the problem that is the best at that moment.
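Read as pseudocode, the cycle can be summarized as follows (a minimal illustrative sketch with hypothetical names, not a formalization from the original text):

```python
from typing import Optional

def understand(problem: str) -> str:
    return f"abstract model of: {problem}"       # Reality -> Thought

def design(model: str) -> str:
    return f"satisfying response to: {model}"    # Problem -> Solution (in Thought)

def implement(solution: str) -> str:
    return f"built: {solution}"                  # Thought -> Reality

def use(artifact: str, turns_left: int) -> Optional[str]:
    # Deploying the response changes Reality; new demands may redefine the Problem.
    return f"new demand after {artifact}" if turns_left > 0 else None

def evolutionary_spiral(problem: str, turns: int = 3) -> None:
    while problem is not None:                   # each pass is one turn of the spiral
        solution = design(understand(problem))
        artifact = implement(solution)
        turns -= 1
        problem = use(artifact, turns)

evolutionary_spiral("features needed for an ALCUE Unit are unclear")
```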

Although the identification of the human and social dimensions throughout the System's life is important to System success, the first action of the process - *Understanding* - is crucial.

#### **4.1 Consensual methods**

Understanding the Problem in the Reality dimension (Fig. 1) is the first step to determine the System construction possibilities. A proposal to develop this understanding and reduce users' dissatisfaction - respecting the human and social dimensions - is the use of Consensual Methods.

Consensual Methods are not only about reaching a consensus on the problem to be treated; they are also about eliciting the System Requirements from the people that have interests in the System. The consensual processes deal with the human activities involved in identifying the requirements and the human and social dimensions, reducing the discrepancy between the expected System features and the ones that will be perceived by the users.

Next, the Consensual Methods used by the authors in their work are listed. Hitchins (2007) states that these methods are specifically meant for the front end of the Systems methodology. They are: Brainstorming, Nominal Group Technique, Idea Writing, Warfield's Interpretive Structural Modelling, Checkland's Soft Systems Methodology and Hitchins' Rigorous Soft Method.

#### **4.1.1 Brainstorming**

This method is an approach in which a selected group of people is encouraged by a moderator to come up with ideas in response to a topic or a triggering question.

#### **4.1.2 Nominal Group Technique (NGT)**

This method is similar to Brainstorming. A moderator introduces a problematic situation to a group of people and asks the participants to write down their ideas about the problem on a sheet of paper. After a suitable time for people to generate their ideas, all participants read their ideas out and the moderator, or an assistant, writes them on a flip chart. With all the ideas written down, the moderator conducts a discussion about them, and then the participants are invited to rank all the ideas. An ordered list is generated, and this constitutes the ideas produced by the group as a whole.

#### **4.1.3 Idea writing**


This method takes NGT a little farther. The moderator introduces the theme, and the participants are asked to write their ideas, suggestions, etc. on a piece of paper. After two or three minutes, the moderator asks each participant to pass his or her sheet on to another person - to the second person on the left, for example. The one who receives the sheet can see the ideas already written, which may lead him or her to a new set of ideas. After a short time, the moderator asks for the sheets to be recirculated, this time over a different number of people. The process is repeated for about 30 minutes, or until the moderator notes that most people do not have any more ideas. There are two purposes in this strategy: encouraging the emergence of ideas within the working group, and hiding the origin of any particular idea. The lists of ideas are later worked through Brainstorming or NGT to generate an action plan.

#### **4.1.4 Interpretive Structural Modelling (ISM)**

This method is similar to a computer-assisted learning process that enables individuals or groups to map complex relationships between many elements, providing a fundamental understanding and the development of courses of action to treat a problem. An ISM session starts with a set of elements (entities) between which a relationship must be established; these entities are identified using any other method. The result of ISM is a kind of graph, where the entities are nodes and the relations are edges. The whole process can be time-consuming, especially when there are many divergences among the group members. Nevertheless, this time is important: it is essential for participants to understand and recognize each other's arguments, reaching a consensus.
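As an illustration of the structure an ISM session produces (a minimal sketch assuming a binary "influences" relation elicited from the group; the entity names are hypothetical, and real ISM tools perform the same two steps: transitive closure of the relation, then partition of the entities into hierarchy levels):

```python
import numpy as np

# Entities and the elicited relation R[i, j] = "entity i influences entity j".
entities = ["funding", "partner network", "information centre", "visibility"]
R = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

# Transitive closure (Warshall): if i reaches k and k reaches j, then i reaches j.
reach = R | np.eye(len(entities), dtype=bool)
for k in range(len(entities)):
    reach |= np.outer(reach[:, k], reach[k, :])

# Level partition: an entity sits on the current level when every entity it
# still reaches is also one of its antecedents (the classic ISM criterion).
remaining = set(range(len(entities)))
level = 1
while remaining:
    current = {i for i in remaining
               if {j for j in remaining if reach[i, j]}
                  <= {j for j in remaining if reach[j, i]}}
    print(f"level {level}:", [entities[i] for i in sorted(current)])
    remaining -= current
    level += 1
```

Drawing the closed relation as a graph, with entities as nodes placed on their computed levels and the non-redundant relations as edges, yields the kind of digraph described above.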

#### **4.1.5 Checkland's Soft Systems Methodology (SSM)**

This method promotes the understanding of a problematic situation through the interaction between the people involved in it. It promotes the agreement of the multiple problem views and multiple interests, and may be represented by a seven-stage model. Stages one and two explore the problematic situation (unstructured) and express it in a rich picture. Stage three is the root definition of the relevant systems, describing six aspects of the problem, which are called CATWOE: Customers, Actors, Transformation process, World view, Owner and Environmental constraints. In stage four, the conceptual models of the relevant systems are developed and, in stage five, the conceptual model is compared with the perceptions of the real situation. In stage six, an action plan is developed for the changes which are feasible and desirable; and in stage seven, the action plan is implemented. As a method developed from Soft Systems Thinking, SSM does not produce a final answer to the problematic situation; it seeks to understand the problem situation and find the best possible response (Checkland, 2000).

#### **4.1.6 Hitchins' Rigorous Soft Method (RSM)**

Like SSM, this method is based on the General-Purpose Problem-Solving Paradigm and is context free. The people who are experiencing a problem, and have knowledge about it, provide information about it in meetings with a coordinator. This investigation, which searches for dysfunction sources related to the problem, can create a lot of information and data. Differently from SSM, RSM employs tools and methods for treating, organizing and processing information; the action of "processing" implies a gradual reduction of the problematic situation by ordering the data, transforming them into information for the treatment of the problem. RSM has seven steps: (1) *Nominate Issue & Issue Domain*, in which the problem issues are identified and a description of the situation is made; (2) *Identify Issue Symptoms & Factors*, which identifies the symptoms of the problem and the factors that make them significant to be explored; (3) *Generate Implicit Systems*: each symptom implies the existence of at least one implicit system in the problem situation; (4) *Group into Containing Systems*: at this step, the implicit systems are aggregated to form clusters, one cluster for each symptom, named containing systems, which can generate a hierarchy of systems, highlighting issues related to the problem; (5) *Understand Containing Systems, Interactions, Imbalances*: at this step, the interactions between the containing systems are evaluated; (6) *Propose Containing Systems Imbalance Resolution*: this step uses the differences between an ideal world, where the symptoms do not exist, and the real world to propose Socio-technical solutions to the imbalances identified in the previous step; (7) *Verify Proposal against Original Symptoms*: at this step, the system models are tested to see if they would, if implemented, eliminate the symptoms identified at step two and the imbalances found at step six. The models could also be tested for cultural acceptability by the people that are experiencing the problem (Hitchins, 2007).

#### **4.2 Perspectives of consensual method selection**

The diversity of the people involved in the development of an e-Infrastructure System is a reality that Engineering must deal with. Zhang (2007) states that it is impractical to limit the diversity of the people involved in a process of reaching a consensus about a problem to be treated. However, the methods used to develop System requirements are under the Engineer's control.

Kossiakoff & Sweet (2003) state that the function of Systems Engineering is to guide the Engineering of complex Systems, and that Systems Engineering is an inherent part of Project Management - the part that is concerned with guiding the Engineering effort itself. Kossiakoff and Sweet also propose a Systems Engineering life cycle model that corresponds to significant transitions in Systems Engineering activities, and it is the model adopted as the life cycle framework for this work. It has three broad stages: (i) *Concept Development Stage*, with the Needs Analysis, Concept Exploration and Concept Definition phases; (ii) *Engineering Development Stage*, with the Advanced Development, Engineering Design and Integration & Evaluation phases; (iii) *Post Development Stage*, with the Production and Operation & Support phases.

The use of Consensual Methods to reach a consensus about the problematic situation is a System requirements elicitation process. Consequently, a Consensual Method is a technique to implement the *Concept Development Stage*; thus, to be adherent to the System life cycle, the Consensual Methods must also provide information to the other phases that depend on the requirement definition process. The information demanded by the following phases, and its purpose, is presented in Table 1.

As SSM, this method is based on the General-Purpose Problem-Solving Paradigm and is context free. The people who are experiencing a problem, and have knowledge about it, provide information about it in meetings with a coordinator. This investigation, which searches for dysfunction sources related to the problem, can create a lot of information and data. Differently from SSM, RSM employs tools and methods for treating, organizing and processing information; the action of "process" implies a gradual reduction of the problematic situation by ordering the data, transforming them into information for the treatment of the problem. RSM has seven steps: (1) *Nominate Issue & Issue domain*, in which the problem issues are indentified and a description of the situation is made; (2) *Identify Issue Symptoms & Factors*, that identifies the symptoms of the problem, and the factors that make them significant to be explored; (3) *Generate implicit systems,* each symptom implies the existence of at least one implicit system in the problem situation; (4) *Group into Containing System:* at this step, the implicit systems are aggregated to form clusters, one cluster for each symptom, named containing system, which can generate a hierarchy of systems, highlighting issues related to the problem; (5) *Understanding Containing Systems, interactions, imbalances*: at this step, the interactions between the containing systems are evaluated; (6) *Propose Containing Systems Imbalance resolution:* this step uses the differences between an ideal world, where the symptoms do not exist, and the real world, to propose Sociotechnical solutions to the imbalances identified in the previous step; (7) *Verify proposal against original symptoms*: at this step, the system model are tested to see if they would, if implemented, eliminate the symptoms identified at step two and the imbalance found at step six. This model could also be tested for cultural acceptability by the people that are

The diversity of people involved in an e-Infrastructure System development is a reality that Engineering must deal with. Zhang (2007) states that it is impractical to limit the diversity of people involved in a process to get a consensus about a problem to be treated. However, the

Kossiakoff & Sweet (2003) stated that the function of System Engineering is to guide the Engineering of complex Systems, and that System Engineering is an inherent part of Project Management - the part that is concerned with guiding the Engineering effort itself. Kossiakoff and Sweet also propose a System Engineering life cycle model that corresponds to significant transitions in Systems Engineering activities, and it is the model adopted as the life cycle framework to this work. It has three broad stages: (i) *Concept Development Stage*: with the Needs Analysis, Concept Exploration and Concept Definition phases; (ii) *Engineering Development Stage*: with: Advanced Development, Engineering Design and Integration & Evaluation phases; (iii) *Post development* with the Production and Operation & Support phase. The use of Consensual Methods to get a consensus about the problematic situation is a System requirements elicitation process. Consequently, a Consensual Method is a technique to implement the *Concept Development Stage;* thus, to be adherent to the System life cycle, the Consensual Methods must also provide information to other phases that are dependent on the requirement definition process. The information that is demanded by the following

methods to develop Systems requirements are under the Engineer's control.

**4.1.6 Hitchins' Rigorous Soft Method (RSM)** 

experiencing the problem (Hitchins, 2007).
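To make the data flow between these steps concrete, the following minimal sketch models steps 2, 3, 4 and 7 as a small pipeline. It is an illustrative reading only: the class and function names (`Symptom`, `ImplicitSystem`, `group_into_containing_systems`, `verify_proposal`) are invented here and are not part of Hitchins' formulation.

```python
# A minimal sketch of the RSM data flow (steps 2-4 and 7); the names are
# invented for illustration and are not part of Hitchins' (2007) method.
from dataclasses import dataclass, field

@dataclass
class Symptom:
    description: str
    factors: list = field(default_factory=list)  # step 2: significant factors

@dataclass
class ImplicitSystem:
    purpose: str
    symptom: Symptom  # step 3: each symptom implies at least one implicit system

@dataclass
class ContainingSystem:
    name: str
    members: list  # step 4: one cluster of implicit systems per symptom

def group_into_containing_systems(systems):
    """Step 4: aggregate implicit systems by the symptom they answer."""
    clusters = {}
    for s in systems:
        clusters.setdefault(s.symptom.description, []).append(s)
    return [ContainingSystem(name, members) for name, members in clusters.items()]

def verify_proposal(proposal, symptoms):
    """Step 7: every original symptom must be covered by some containing system."""
    covered = {m.symptom.description for c in proposal for m in c.members}
    return all(s.description in covered for s in symptoms)

# One symptom taken end to end through the pipeline.
vague = Symptom("demand to act as an Information Center is vague", ["diverse interests"])
storage = ImplicitSystem("store information for later access", vague)
assert verify_proposal(group_into_containing_systems([storage]), [vague])
```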

#### **4.2 Perspectives of consensual method selection**

Kossiakoff & Sweet (2003) state that the function of Systems Engineering is to guide the engineering of complex Systems, and that Systems Engineering is an inherent part of Project Management - the part concerned with guiding the engineering effort itself. Kossiakoff and Sweet also propose a Systems Engineering life cycle model whose stages correspond to significant transitions in Systems Engineering activities; this is the model adopted as the life cycle framework for this work. It has three broad stages: (i) the *Concept Development Stage*, with the Needs Analysis, Concept Exploration and Concept Definition phases; (ii) the *Engineering Development Stage*, with the Advanced Development, Engineering Design and Integration & Evaluation phases; and (iii) the *Post-Development Stage*, with the Production and Operation & Support phases.

The use of Consensual Methods to reach a consensus about the problematic situation is a System requirements elicitation process. Consequently, a Consensual Method is a technique for implementing the *Concept Development Stage*; thus, to be adherent to the System life cycle, a Consensual Method must also provide information to the later phases that depend on the requirement definition process. The information demanded by those phases, and its purpose, is presented in Table 1.

The authors' experience in dealing with Consensual Methods has allowed the development of a comparison context, which considers whether a Method complies with the demands of the Primary Purpose and the Inputs of each phase listed in Table 1.


Table 1. List of System Engineering life cycle phases after the Concept Development stage.

In Table 2, the adherence of each Consensual Method to the phases of the Systems Engineering life cycle model is summarized. The first cell of the left column is a label that indicates the level of adherence.


Table 2. Table of Method Selection.

Table 2 is illustrative, rather than comprehensive. It is based on empirical findings from the authors' experience. It provides a practical starting point for organizing an approach to identify the Consensual Method that complies with the demands of the System life cycle.

#### **5. Case study: e-Infrastructure for an ALCUE unit**

From the Perspective of Method Selection, RSM is the Consensual Method that provides the most information for the phases of the System life cycle. As a Consensual Method, it promotes consensus among people about the problem issues, so that people feel welcomed by the process. Of course, as Hitchins (2007) argues, the people who feel dissatisfied with this approach are those who have no interest in consensus and who want to impose their worldview.

As a Case Study, RSM is used to understand the problem of developing an e-Infrastructure for an ALCUE Unit, a kernel concept of the Vertebralcue Project from the ALFA III Program of the European Commission. This Case Study also assessed whether the information obtained by RSM actually contributes to the other System life cycle stages, according to the Perspective of Comparison of Consensual Methods.

#### **5.1 The issue and its domain**

KNOMA is designing an ALCUE Unit, and desires to develop and maintain an e-Infrastructure to support it.

As usually occurs in Engineering practice, the demand comes to the Engineer in words that are familiar to the people involved with the problematic situation - a situation of which the Engineer is still unaware.

#### **5.1.1 Issue**

The concern about the e-Infrastructure to be developed and maintained is about what needs to be done. However, this depends on the features needed for an ALCUE Unit, which are not clear.

#### **5.1.2 Domain**

The Knowledge Engineering Laboratory (KNOMA) is a research laboratory of the Department of Computer Engineering and Digital Systems (PCS) of the School of Engineering (EPUSP) of the University of São Paulo (USP), and acts as a partner in projects sponsored by the European Commission (EC), including Vertebralcue from the ALFA III Program of the EC.

Each project partner should develop and implement an ALCUE Unit (VERTEBRALCUE, 2011). These Units must operate independently from each other; however, they must be linked as "vertebras" of the framework, strengthening the academic cooperation networks that already exist between the project partner institutions and providing structural support for new partnerships and cooperation networks. The Vertebralcue Project board stated that each ALCUE Unit operates as an Information Center, broadcasting information about both the institution and the region it belongs to. Likewise, the Unit must receive information from partner institutions for internal disclosure.

The ALCUE Unit operations deal with information and policy, since an academic collaborative process consists of multiple academic partners working together for information exchange and the development of policy cooperation. In this operation process there are the interests of multiple actors: students, professors, researchers, and academic and social institutions. In the scenario of the ALCUE Unit as an information center, there may be a distortion of information due to political interests, which can occur through pressures related to the disclosure, or not, of information. Uncertainty, diversity, quality and quantity of information are factors that can lead to a variation between what is expected (planned) for an ALCUE Unit and the actual situation perceived by the people who interact with the Unit; in this study, this variation is called complexity.

#### **5.2 Symptoms and Issue factors**

The e-Infrastructure required for an ALCUE Unit depends on the purposes of the people who interact with the Unit. In order to identify these purposes, meetings were held with diverse groups of people who had an interest in an ALCUE Unit. Furthermore, the Vertebralcue Project documentation and documents about EPUSP academic cooperation were studied.

#### **5.2.1 A Socio-technical System**

e-Infrastructures are Socio-technical Systems. The technology in these Systems does not have a purpose by itself; it must meet the purposes of the people and institutions that interact with it. The difficulty in identifying the purpose of an ALCUE Unit can be seen in the description of the domain of the problematic situation.

The existence of a relationship between ALCUE Units and academic cooperation networks is evidence that different people and institutions have interests in the System. This diversity of institutions and people, possibly with different cultures, makes it difficult to identify the specific System goals. Consequently, the identification of the e-Infrastructure's technological requirements is also made difficult.

#### **5.2.2 Information center**

The demand for an ALCUE Unit to be an Information Center is vague. As an Information Center, the Unit must both generate and disclose information, and receive information and publish it. Nevertheless, before defining how the information will be received or generated, and how access to it will be provided, it is necessary to identify what information is of interest to the people involved with the ALCUE Unit and what information is of interest to the academic cooperation networks. This information was identified in a Brainstorming session on the topic: "What subjects related to academic cooperation would you like to know?"

The Brainstorming session identified the following subjects: (i) Equivalence of titles between higher education institutions; (ii) Graduate and Undergraduate courses offered by institutions, including information about the disciplines and curriculum; (iii) Training programs and continuous education programs offered by institutions; (iv) Distance Learning; (v) Scholarships and funding of studies and research in institutions; (vi) Qualifications of faculty and researchers; and (vii) Mobility and exchange between institutions for faculty, students and researchers.

This list was not definitive; it was a first sample of what a group of people with interest in an ALCUE Unit had thought to be relevant at that stage of the problem treatment. Figure 2 presents the Brainstorming diagram that was created during the session. Diagrams were used in the Brainstorming session to improve communication and association of ideas.

Fig. 2. Brainstorming diagram.

#### **5.2.3 The relationships**

The information generated or received by the ALCUE Unit occurs within a context with several institutions that have interests in academic cooperation. In order to identify some of these institutions, the Nominal Group Technique was used, with the subjects identified in the Brainstorming session as a starting point. The Nominal Group session resulted in Table 3, in which the first column shows the identified institutions and the second column indicates whether the institution is a funding institution, a support foundation, an academic institution, or an international cooperation institution. The third column was not produced in that session; it was produced only in the workshop that followed, and presents the characteristics of each type of institution.

The list of the institutions identified in the Nominal Group session was used in a workshop, which aimed to build an institution chart and to identify the relationships and information flows between them. In that workshop, Interpretative Structural Modeling was used, and the work group decided to group the institutions according to their characteristics; the results are presented in the third column of Table 3. Figure 3 presents the institution relationships and the information flows that were identified in the workshop.

Fig. 3. Relationship between institutions.


#### **5.2.4 Threats, opportunities, weaknesses and strengths**

When the System Engineer deals with a problem such as the design of e-Infrastructure Systems to support the ALCUE Unit, he must not be concerned only with having the System operate according to the demands that exist at the moment the problem domain is understood. If the Engineer considers only these needs, the product of the design may be a System in which the changes and evolutions required to meet new demands are impossible. Therefore, to identify future scenarios for the ALCUE Unit, a situational analysis tool was used: the TOWS Matrix. This Matrix allows the formulation of a strategy for the future by examining the present.

In a single workshop, the ALCUE Unit internal factors - Strengths and Weaknesses - and external factors - Threats and Opportunities - were identified and the relationships between them were established. Table 4 presents the result of this workshop: the TOWS Matrix.
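Viewed as a data structure, a TOWS Matrix simply crosses the internal factors with the external ones to produce four strategy quadrants (SO, ST, WO, WT). The sketch below illustrates that structure; the example entries are placeholders invented here, not the actual factors recorded in Table 4.

```python
# Sketch of a TOWS Matrix as a data structure: internal factors are crossed
# with external factors to yield four strategy quadrants. The entries are
# invented placeholders, not the workshop's actual findings.
tows = {
    "strengths":     ["existing academic cooperation networks"],
    "weaknesses":    ["unclear Unit purpose"],
    "opportunities": ["new partner institutions"],
    "threats":       ["distortion of information by political interests"],
}

def strategies(internal, external):
    """Pair every internal factor with every external factor (one quadrant)."""
    return [(i, e) for i in tows[internal] for e in tows[external]]

for quadrant in (("strengths", "opportunities"), ("strengths", "threats"),
                 ("weaknesses", "opportunities"), ("weaknesses", "threats")):
    print(quadrant, "->", strategies(*quadrant))
```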

#### **5.3 Implicit systems**

The Symptoms and Issue Factors imply the existence of Implicit Systems<sup>1</sup> in problematic situations. At this point in the RSM process, the needs of the ALCUE Unit that indicate the existence of Implicit Systems in the e-Infrastructure System are identified.

Usually, a skilled Systems Engineer can identify Implicit Systems through analysis and synthesis of the content of Figure 3, a rich picture - as in SSM - and of Table 4, the TOWS Matrix. The Implicit Systems identified by the authors are:



- System to store information: all the information obtained or generated should be stored for later access;
- System to support static disclosure: a system that allows access to information when people want it;
- System to support dynamic disclosure: a system that sends information to the people who are interested in receiving it;
- System to support relationship networks: a system that allows the construction and operation of social and thematic networks;
- System for obtaining<sup>2</sup> information from FUSP: a system that accesses an interface at FUSP to retrieve information;
- System for obtaining information from FAPESP: a system that accesses an interface at FAPESP to retrieve information;
- System for obtaining information from Private Companies: a system that accesses an interface at a Private Company to retrieve information. There may be a different system for each Company that wishes to disclose information;
- System for obtaining information from FDTE: a system that accesses an interface at FDTE to retrieve information;
- System for obtaining and sending information to CRInt-POLI: a system that accesses an interface at CRInt-POLI to send and retrieve information;
- System for obtaining and sending information to CCInt: a system that accesses an interface at CCInt to send and retrieve information;
- System for obtaining and sending information to other ALCUE Units: a system that accesses an interface at another ALCUE Unit to send and retrieve information. There may be a different system for each ALCUE Unit.

<sup>1</sup> The authors consider that Implicit Systems are sub-systems of the e-Infrastructure System, but the term Implicit Systems is used to follow the RSM pattern.

<sup>2</sup> Another possibility would be to have Implicit Systems that receive information from these sources; this was discarded by the authors because it would demand work from the partner institutions.
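One possible reading of the "obtaining" systems above is that they share a single contract, with one adapter per institution, since only the institutional interface changes from one to the next. The sketch below illustrates this reading; the class names and returned items are hypothetical and do not describe any real institutional interface.

```python
# Hypothetical sketch: each "system for obtaining information" can be read as
# an adapter behind one common interface. Endpoints and items are invented.
from abc import ABC, abstractmethod

class InformationSource(ABC):
    """Common contract for the per-institution 'obtaining' systems."""
    @abstractmethod
    def retrieve(self):
        ...

class FUSPSource(InformationSource):
    def retrieve(self):
        # Placeholder: a real system would call an interface exposed by FUSP.
        return ["FUSP: funding call"]

class FAPESPSource(InformationSource):
    def retrieve(self):
        # Placeholder: a real system would call an interface exposed by FAPESP.
        return ["FAPESP: scholarship notice"]

def gather(sources):
    """An Information Gathering System would simply aggregate all adapters."""
    return [item for src in sources for item in src.retrieve()]

print(gather([FUSPSource(), FAPESPSource()]))
```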


Table 4. TOWS Matrix for ALCUE Unit

#### **5.4 Containing systems**


The authors decided not to use any special clustering technique to group the Implicit Systems into containing sets. Instead, the Implicit Systems were grouped according to patterns identified in their own characteristics, in order to obtain sets of systems grouped by the symptoms of the ALCUE Unit e-Infrastructure. The resulting Containing Systems are:

	- System to support static disclosure;
	- System to support dynamic disclosure;
	- System to support relationship networks.
	- System for obtaining information from FUSP;
	- System for obtaining information from FAPESP;
	- System for obtaining information from Private Companies;
	- System for obtaining information from FDTE.
	- System for obtaining and sending information to CRInt-POLI;
	- System for obtaining and sending information to CCInt;
	- System for obtaining and sending information to other ALCUE Units.

The systems identified represent a perspective about the problematic situation in an ideal world. This means that they do not necessarily have to be designed and implemented in the real world. Furthermore, it does not mean that they are the only systems in the problematic situation. During the following phases of the System life cycle, new symptoms may appear that were not determined in this phase of the method execution, which can lead to a redefinition of the issue or the emergence of new issues. The sequence of treatments for these symptoms follows the concept of the previously mentioned Evolutionary Spiral.

#### **5.5 Interactions and imbalances of containing systems**

The interactions between Containing Systems always occur when there is an information-related demand. These interactions are represented in Figure 4, in which the arrows indicate the direction in which information is sent.

Following the concept of the Evolutionary Spiral (Fig. 1), a new workshop was held with the aim of assessing, in the reality dimension, the interactions that had been identified. At that meeting, the following was identified:

 The **Disclosure Support System** contains the Implicit System that supports relationship networks, and this Implicit System also generates information to be stored.

 Two distinct Containing Systems - **Information Gathering System** and **Information Gathering/Dispatch System** - have Implicit Systems with the same characteristic: obtaining information from an institution. This scenario indicates a duplication of systems, even if the institutions are of different types, as identified in Table 3.

Fig. 4. Containing Systems Interaction.

#### **5.6 Treatment for Imbalance and impact of the proposal**

The new symptoms, identified in the workshop described above, were considered in a new proposal for the Containing Systems, in which the **Information Gathering System** was merged with the **Information Gathering/Dispatch System**. The proposal also considered the symptom that the **Disclosure Support System** demands interactions with the **Storage System**, generating information that should also be accessible later through the system. This new scenario is depicted in Figure 5, where the arrows indicate the direction in which information is sent.

Fig. 5. Containing Systems Interaction, after the treatment of symptoms.
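The treatment can be pictured as an operation on a small directed graph of Containing Systems. The sketch below is only illustrative: the edge list paraphrases Figures 4 and 5, and the `merge` function is a rendering of the treatment invented here, not a tool used by the authors.

```python
# Sketch of the Containing Systems interactions as a directed graph, and the
# merge applied as treatment. The edges paraphrase Figures 4 and 5 and are
# illustrative, not an exact transcription.
interactions = {
    ("Information Gathering System", "Storage System"),
    ("Information Gathering/Dispatch System", "Storage System"),
    ("Storage System", "Disclosure Support System"),
    ("Disclosure Support System", "Storage System"),  # networks generate stored info
}

def merge(edges, a, b, merged):
    """Replace systems a and b by a single merged system, rewiring the edges."""
    def ren(n):
        return merged if n in (a, b) else n
    # Drop self-loops created by the merge; the set removes duplicate edges.
    return {(ren(src), ren(dst)) for src, dst in edges if ren(src) != ren(dst)}

after = merge(interactions, "Information Gathering System",
              "Information Gathering/Dispatch System", "Gathering/Dispatch System")
print(sorted(after))
```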


Table 5. RSM Consensual Method outputs and contribution to System life cycle


#### **5.6.1 Proposal impact**

Storing and making available the information generated by the social networks organized by the ALCUE Unit does not affect the **Storage Containing System**: storing information was already its original function.

The merging of the Containing Systems may cause internal imbalances in the resulting system, because the different institutions to which the Implicit Systems connect may demand different connection properties. However, at this phase of the System life cycle, it is too early to determine this dependence scenario clearly, or "how" these connections with the different institutions will be implemented.

The duplication of purpose between the distinct systems was resolved.

#### **5.7 Potential solution**

The e-Infrastructure that KNOMA wishes to develop and maintain to support the activities of the ALCUE Unit is composed of three Containing Systems, which interact with each other whenever information is demanded or disclosed. The interactions between these systems are shown in Figure 5, in which the arrows indicate the direction in which information is sent.

#### **5.8 Contribution to next phases of project life cycle**

The RSM process identified the symptoms of, and treatments for, the issue of developing and maintaining an e-Infrastructure for an ALCUE Unit. RSM was chosen because, according to the perspective presented earlier, it is the Consensual Method that provides the most information for the phases that follow the requirement elicitation phase. Table 5 presents the contributions that the application of RSM brings to the phases of the Systems Engineering life cycle model proposed by Kossiakoff and Sweet (2003).

#### **6. Conclusion**

This chapter addressed the use of Consensual Methods to assist the authors in the process of understanding a problematic situation: the design of an e-Infrastructure to be used by the KNOMA ALCUE Unit of the VertebrALCUE Project, from the ALFA III Program. According to the perspective adopted, RSM provides information to all the phases of the Project life cycle, and it was therefore adopted. The meetings organized by the authors enabled the engagement of people with an interest in the development of the ALCUE Unit, reduced dissatisfaction with the requirement elicitation process, and respected the human and social dimensions. This scenario allows the development of an e-Infrastructure that minimizes the difference between what is expected and what will be verified in reality. The authors' decision to develop a TOWS Matrix was supported by the VertebrALCUE Project board which, after evaluating the results obtained, required all ALCUE Units to develop a TOWS Matrix.

#### **7. Acknowledgments**

The research and scholarships are partially funded by the Vertebralcue Project (http://www.vertebralcue.org), an ALFA III Program project that aims to contribute to the process of regional integration among Latin American Higher Education Systems (HESs), and to the implementation of the Common Area of Higher Education between Latin America, the Caribbean and the European Union (ALCUE, in Spanish), by exploring and strengthening different levels of articulation of Latin America-Latin America and EU-Latin America academic cooperation through the design and implementation of a cooperation infrastructure at the institutional, national and regional levels.

#### **8. References**


VERTEBRALCUE (September 2011), Project web site, presents its goals and activities. Available from http://www.vertebralcue.org

## **Systems Engineering and Subcontract Management Issues**

Alper Pahsa *Havelsan Inc. Turkey* 

#### **1. Introduction**


One of the major problems that Systems Engineering processes come across is how to deal with subcontractors. Ensuring that the activities of subcontractors comply with standard systems engineering procedures and criteria is important for systems engineering program management. One of the most challenging jobs for a systems engineering team is to understand the needs and requirements of the customer, the constraints and variables that are established, and the limits of business conduct that are acceptable for the particular job under contract. This understanding must be passed on directly to the people who work under the customer's contract. All of the requests, criteria and generic standards of customer needs that are associated with the subcontractor are likewise written directly into the subcontract's statement of work or tasking contract.

Dealing with subcontractors is the responsibility of the systems integrator, who must ensure that the whole systems engineering process is followed. The systems integrator is responsible for helping the subcontractor take a functional point of view of the organization and of the entire procurement process, and for aiding the subcontractor in erecting a parallel technical auditing process.

So what does all of this mean for the project management team responsible for issuing the contracts and subcontracts that enable systems engineering solutions to meet the requirements of the systems integration project? The instructions issued to subcontractors as part of the subcontracts must be clear, with metrics spelled out by which both technical and on-time performance can be gauged.

Systems engineering teams incorporate these metrics into the terms and conditions of the subcontracts. They must be careful to avoid the pass-through of non-deterministic risk factors, since the systems engineering team loses control of these once they are in the hands of others. Pass-through of known risk elements is natural, and a review activity must be in place to keep track of progress in resolving the items that carry the risk.
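As a minimal illustration of such a review activity, the sketch below keeps a register of passed-through known risks and lists the unresolved items at each review. The field names and status values are assumptions made for the example, not a prescribed format.

```python
# Minimal sketch of the review activity described above: a register that
# tracks passed-through known risks until resolution. Fields are assumed.
from dataclasses import dataclass

@dataclass
class RiskItem:
    description: str
    owner: str            # e.g. the subcontractor holding the risk
    status: str = "open"  # assumed progression: open -> mitigating -> resolved

def open_items(register):
    """A periodic review keeps attention on unresolved passed-through risks."""
    return [r for r in register if r.status != "resolved"]

register = [RiskItem("interface spec instability", "Subcontractor A"),
            RiskItem("long-lead part delivery", "Subcontractor B", "resolved")]
for item in open_items(register):
    print(f"REVIEW: {item.description} ({item.owner}) is {item.status}")
```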

Systems engineering teams must also decide how to implement and maintain an audit trail throughout the systems integration process, and how to perform and record the details of the quality assurance process. Each of these activities has a special importance for how the systems integration approach is implemented with any subcontractor engaged for assistance with the project or for procurement of hardware and software.

Just as the customer provides a set of requirements that it believes to be representative of the actual needs of the user, the corporation must prepare a detailed set of valid requirements for its subcontractors. The absence of a strategic plan on the part of a subcontractor should result in the imposition of the systems integration organization's strategic plan, especially those parts related to audit trail maintenance; risk identification, formulation and resolution; and such management processes and procedures as are essential for satisfactory performance of the contract or subcontract. In the following sections, the Systems Engineering process is explained first. Secondly, the Program Management process is explained; then the process of subcontract management and the activities related to issues with contractor and subcontractor management are given, from the perspectives of the systems engineering and program management teams. Then the concerns related to the subcontractor management process are given and, finally, conclusions are drawn for subcontract management in the sense of the systems engineering process.

#### **2. Systems engineering**

Systems engineering is defined as "An interdisciplinary approach to evolve and verify an integrated and life cycle balanced set of systems product and process solutions that satisfy customer needs. Systems engineering: (a) encompasses the scientific and engineering efforts related to the development, manufacturing, verification, deployment, operations, support, and disposal of systems products and processes; (b) develops needed user training equipments, procedures, and data; (c) establishes and maintains configuration management of the systems; (d) develops work breakdown structures and statements of work; and (e) provides information for management decision making." Figure 1 displays the Systems Engineering process outline (David E. S. et al, 2006).

The basic Systems Engineering process underlies successful products and/or processes. It is largely an iterative process that provides overarching technical management of systems, from the stated need or capability to effective and useful fielded systems. During the process, design solutions are balanced against the stated needs within the constraints imposed by technology, budgets, and schedules (INCOSE, 2011).

Systems engineering should support acquisition program management in defining what must be done and in gathering the information, personnel, and analysis tools needed to define the mission or program objectives. This includes gathering customer inputs on "needs" and "wants", systems constraints (costs, technology limitations, and applicable specifications/legal requirements), and systems "drivers" (such as capabilities of the competition, military threats, and critical environments). The set of recommended activities that follows is written for a complex project that meets a stated mission or goal, but the word "product" can be substituted to apply these steps to commercial products, for example (Associate CIO of Architecture, 2002).

Based on the acquisition strategy, the technical team needs to plan acquisitions and document the plan in a Systems Engineering Management Plan (SEMP). The SEMP covers the technical team before contract award, during contract performance, and upon contract completion. Included in acquisition planning are solicitation preparation, source selection activities, contract phase-in, monitoring of contractor performance, acceptance of deliverables, completion of the contract, and the transition beyond the contract. The SEMP focuses on interface activities with the contractor, including technical team involvement with, and monitoring of, contracted work. Often overlooked in project staffing estimates is the amount of time that technical team members spend on contracting-related activities. Depending on the type of procurement, a technical team member involved in source selection could be consumed nearly full time for 6 to 12 months. After contract award, technical monitoring consumes 30 to 50 percent of a team member's time, peaking at full time when critical milestones or key deliverables arrive (Shamieh, 2011).
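As a back-of-the-envelope illustration of these staffing figures, the sketch below converts them into person-months; the specific month counts chosen are assumptions within the ranges quoted above.

```python
# Worked example of the staffing figures quoted above: near full time on
# source selection, then 30-50% of a member's time on technical monitoring.
# The month counts are assumptions for illustration only.
FTE = 1.0
source_selection = 9 * FTE    # 6-12 months near full time; assume 9 months
monitoring = 24 * 0.4 * FTE   # assume a 24-month contract monitored at 40%
print(f"Source selection: {source_selection:.1f} person-months")
print(f"Technical monitoring: {monitoring:.1f} person-months")
print(f"Total contracting-related effort: {source_selection + monitoring:.1f} person-months")
```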

Fig. 1. Systems Engineering Process

The technical team is intimately involved in developing technical documentation for the acquisition package. The acquisition package consists of the solicitation (e.g., a Request for Proposals (RFP)) and supporting documents. The solicitation contains all the documentation that is advertised to prospective contractors (or offerors). The key technical sections of the solicitation are the SOW (or performance work statement), the technical specifications, and the contract data requirements list. Other sections of the solicitation include proposal instructions and evaluation criteria. Documents that support the solicitation include a procurement schedule, a source evaluation plan, a Government cost estimate, and a purchase request. Input from the technical team will be needed for some of the supporting documents. It is the responsibility of the contract specialist, with input from the technical team, to ensure that the appropriate clauses are included in the solicitation. All of the features related to the solicitation are important for a subcontractor to fully understand the content of the work to be realized. Figure 2 shows the contract requirements development process (NASA, 2007).

Fig. 2. Contract Requirements Development Process
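The composition of the acquisition package lends itself to a simple checklist structure. The sketch below records the sections named above; the dictionary layout and the `missing` helper are illustrative inventions, not a format defined by NASA.

```python
# Sketch of the acquisition package as a checklist, following the sections
# named above; the dict layout itself is an assumption for illustration.
acquisition_package = {
    "solicitation": [
        "statement of work (or performance work statement)",
        "technical specifications",
        "contract data requirements list (CDRL)",
        "proposal instructions",
        "evaluation criteria",
    ],
    "supporting documents": [
        "procurement schedule",
        "source evaluation plan",
        "Government cost estimate",
        "purchase request",
    ],
}

def missing(package, prepared):
    """Report which required items the technical team has not yet produced."""
    return [item for items in package.values() for item in items
            if item not in prepared]

print(missing(acquisition_package, {"technical specifications"}))
```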

#### **3. Program management**

Program management has been defined as "the management of a series of related projects designed to accomplish broad goals, to which the individual projects contribute, and typically executed over an extended period of time". Program management is very different from corporate administrative management that involves an ongoing oversight role. Program management usually has the more specific task of completing a project or set of projects for which there is a common goal and a finite termination point. The program manager has the responsibility of planning the project, controlling the project's activities, organizing the resources, and leading the work within the constraints of the available time and resources (Associate CIO of Architecture, 2002).

Project planning involves mapping the project's initial course and then updating the plan to meet needs and constraints as they change throughout the program. In the planning process, an overall plan, called an "acquisition strategy," is formulated by analyzing the requirements; investigating material solutions (designs); and making technical, cost, and performance trade-offs to arrive at the best solution. A formal acquisition plan details the specific technical, schedule, and financial aspects of a specific contract or group of contracts within a specific phase of a program. Functional plans detail how the acquisition strategy will be carried out with respect to the various functions within the program (i.e., systems engineering, test and evaluation, logistics, software development). Schedules that are continually updated are used to ensure that various milestones along a timeline are being met. Budgeting, another aspect of project planning, involves developing an initial cost estimate for the work to be performed, presenting and defending the estimate to parties responsible for budget approvals, and expending the funding.


Control of the project's activities is primarily concerned with monitoring and assessing actual activities and making sure they align with program goals. Monitoring involves conducting program reviews, comparing actual costs with planned costs, and testing incremental aspects of the program. It also includes managing the internal aspects of a program (e.g., the current contract) and monitoring external organizations (Government, etc.) that may have a stake in the program's outcome. From time to time, a program assessment is needed to determine whether the overall requirement is still being addressed, adequate funds are available, the risks are being managed, and the initial acquisition strategy is sound. Leading the work, given time and resource constraints, involves not only the previously mentioned tasks, but also directing that tasks be carried out and maintaining consensus within and outside the program. The program manager must give direction to his or her organization and take direction from organizations outside of his or her direct control. Maintaining consensus requires making sure that the competing goals of internal and external organizations remain in balance and are working toward the desired goal (David E. S. et al, 2006).

There is an agreement between the systems engineer and the contract management team: the systems engineer supports the development and maintenance of the agreement between the project office and the contractor that will perform or manage the detailed work to achieve the program objectives. This agreement has to satisfy several stakeholders and requires coordination between the responsible technical, managerial, financial, contractual, and legal personnel. It requires a document that conforms to the acquisition regulations, the program product breakdown structure documentation and the systems architecture. The figure below shows the contractual process (David E. S. et al, 2006):

Fig. 3. Contractual Process

The role of technical managers or systems engineers is crucial to satisfying these diverse concerns. Their primary responsibilities include:

- Prepares task statements,
- Prepares the Contract Data Requirements List (CDRL),
- Supports negotiation and participates in source selection evaluations,
- Forms Integrated Teams and coordinates the government side of combined government and industry integrated teams,
- Monitors the contractor's progress, and
- Coordinates government action in support of the contracting officer (Global Intergy Corporation, 2002).
#### **4. Subcontracting in System Engineering program**

When a Systems Engineering program/project includes contracting for a service/product, a challenge arises in the minds of the Systems Engineering people: "Do it in our company or purchase it?" As "everything is program/project…", it always seems to the Systems Engineering team that the option of producing the service/product in the company would be more manageable and cause less trouble. In fact, this option is an illusion, reinforced by earlier project/program contracting experiences - experiences that were large, were not successful, and over which the systems engineering process had little control.

However, it is already known that for both small and large projects/programs there are many benefits in subcontracting parts of the program/project. These benefits come from purchasing the service/product where the product is already available. Given a lack of information about, or interest in, a certain technology, carrying out a program/project without subcontracting external services/products would frequently create problems for the program activities. Moreover, it must be well understood that training for hiring outsourced resources is important.

Failures in subcontracting activities start with the lack of a well-defined process to guide the systems engineering team. Purchasing services/products, despite being a rather routine task in a program/project manager's life, is a high-risk endeavour and, usually, an empirical activity. Nonetheless, many items are bought during the program life cycle (De Mello Filho, 2005). When the program management acquisition team purchases hardware or some material, it is performing a search for certain characteristics that will be evaluated during the acquisition. This procedure, or acquisition activity, is defined in classical engineering terms as the procurement process.

#### **5. System Engineering integration roles for subcontract management**

When a project being managed by the primary contractor requires a wide range of skills and experience, it may require subcontracting with other companies. It is the prime contractor project manager's responsibility to ensure that the teaming partners and subcontractors are held to the same quality standards as the prime contractor, as specified in the Project Plan. Statements of work for the subcontractors must clearly reflect the project requirements and state what activities and reviews are expected in their performance. Primary activities that the prime contractor will address with subcontractors include:

- Subcontractor Project Plan, Quality Plan, Quality Assurance Plan.
- Quality assessments of subcontractor performance.
- Acceptance criteria.
- Subcontractor assessments, audits, preventive and corrective action plans (Associate CIO of Architecture, 2002).



The systems integrator's company teams, such as systems engineering and contract management, have the responsibility of helping the subcontractor take a functional point of view of the organization and of all procurement efforts.

It is the responsibility of the systems integrator to aid the subcontractor in growing a parallel technical auditing process. In addition to the interaction matters already discussed, there are several other points to be made (U.S. DoT, 2009). These include the following items:

*No favoured treatment for specific vendors*. It is only human, perhaps, for clients and systems integrators to have favourite vendors. These are companies, or individuals within certain companies, that have provided excellent service in the past. Perhaps a previous acquisition more than met specifications or was unusually trouble-free. Sometimes a particular marketing organization has gone the extra mile to be of assistance in an emergency. It is only natural under such circumstances that this favourable past impression might bias the client or systems integrator. Indeed, the concept of the favoured vendor is a common one in the private sector. But this attitude is illegal and improper in government procurements. We want to emphasize here that we are not talking about collusion or conspiracy to defraud the government. It is entirely possible that, on occasion, biased behaviour could benefit the government. That is of no matter. It is illegal and not to be condoned.

*Timely, Accurate Client Reports.* Technical personnel - engineers, computer scientists and the like - tend not to support active, timely reporting of progress to clients. They follow the mushroom grower's approach to client interactions: "keep them in the dark and cover them with manure." That approach may work when things are moving well, but it runs the risk of forfeiting client confidence in troubled times. It is better to report progress accurately and in a timely fashion, so that if slippages occur they are minor when first mentioned. Naturally, the systems integrator should make every effort to stay on schedule; if the schedule slips or a problem surfaces, the systems integrator should present the recommended solution at the same time the problem is first mentioned.

*Prudential Judgement.* Suppose the systems integrator has reason to believe that the client is unable or unwilling to handle setbacks in an objective manner. The parable of the king who "killed messengers who brought him bad news" would not remain current in our folklore if it did not have a basis in reality. Thus, reports of delays and difficulties should be brought to the attention of top management rather than directly to the client. This is the sort of prudential judgement call that should be handled by the top management within your organization rather than someone at the operating level. It is suggested that the matter be brought to the attention of top management within the organization as soon as possible and in a calm, factual manner.

Management of subcontractors is of special importance for systems integration involving large, complex engineered systems. It is highly likely that multiple subcontractors will be employed by the prime contractor. Prudent management of these subcontracts is critical to the success of the systems integration program (Grady 1994, 2010).

There are a number of key activities that must be completed by the systems integrator to assure integration of the products provided by the subcontractors prior to test and delivery of the final configuration. Some of the more important activities that must be accomplished include the following (Grady 1994, 2010):

- Organize overall team support for the subsystems integration and test activity, including personnel from various subcontractors.
- Prepare the various subsystems for test and evaluation prior to integration, to assure that performance meets the stated specifications.
- Integrate hardware/software (HW/SW) subsystems from subcontractors with systems developed by the corporation and with legacy systems.
- Monitor test activity and assure that all tests conform to the systems testing regimens agreed to by the client.
- Provide for failure recovery and error correction in the event subcontractors are unable to meet design specifications.
- Validate incremental deliveries as these are made by subcontractors.
- Provide for both Alpha and Beta site tests.
- Conduct formal reviews and review documentation.
- Conduct necessary post-test activities to review outcomes with all concerned parties.
The corporation must be able to demonstrate that it has gone about its business in a legal, objective, unbiased fashion. In large procurements it is often the case that outside contracts will be let for validation and verification, and to develop and administer an audit trail relative to the prime contractor. The necessity for an external enterprise to create and follow a technical audit trail arises not so much from the need to respond to potential procurement difficulties as from the need to be able to demonstrate that an objective and unbiased procurement process was used. The figure below shows a generic systems integration acquisition strategy (INCOSE, 2004):


Fig. 4. Generic Technical Acquisition Strategy for a Systems Integration Viewpoint

The Validation Test Document will contain a conceptual discussion of items such as the following (NASA, 2007):

Traceability

304 Systems Engineering – Practice and Theory

Organize overall team support for the subsystems integration and test activity,

Prepare the various subsystems for the test and evaluation prior to integration to assure

Integrate hardware/software (HW/SW) subsystems from subcontractors with systems

Monitor test activity and assure that all tests conform to the systems testing regimens

Provide for failure recovery and error correction in the event subcontractors are unable

The corporation must be able to demonstrate that it has gone about its business in a legal, objective, unbiased fashion. In large procurements it is often the case that outside contractors will be let for validation and verification and to develop and administer an audit trail relative to the prime contractor. The necessity for an external enterprise to create and follow a technical audit trail arises not so much from the need to respond to potential procurement difficulties as it does from a need to be able to demonstrate that an objective and unbiased procurement process was utilised. In the figure given below systems

Fig. 4. Generic Technical Acquisition Strategy for a Systems Integration Viewpoint

Conduct necessary post-test activities to review outcomes with all concerned parties.

including personnel from various subcontractors.

developed by the corporation and legacy systems.

Conduct formal reviews and review documentation.

integration acquisition strategy is given (INCOSE, 2004):

performance meets the stated specifications.

Provide for both Alpha and Beta site tests.

agreed to by the client.

to meet design specifications.

Validate incremental deliveries as these are made by subcontractor.


*(a) Potential Ambiguities in Evaluation Procedures.* In effect, a conflict is an error of omission. It is almost impossible to write a set of specifications for complex systems that is totally without conflict and ambiguity. Be that as it may, it is the job of systems integrators to produce a set of specifications that reduces ambiguity to a minimum while remaining within the bounds of reasonableness as far as complexity goes.

*(b) Testability.* Testability is an absolutely necessary attribute of a specification. If a specification is not testable, it is not really realistic. It is the job of the installation team, or the validation component of the systems integrator effort, to require a feasible test scheme for each proposed specification. Some specifications can be validated or tested by simple observation: one can count the entry ports or disk drives or what have you. But other specifications are intrinsically impossible to test completely until after final installation and break-in of the systems. The second level of the audit component is the Validation and Audit Plan. At this level the generic Validation Test Document produced in the first phase is refined and sharpened. For each configuration category, name and describe the relevant characteristics that delimit the requirement.

Then, in the third audit component, Validation and Test Audit Implementation, set down explicit functional and quantitative tests for each configuration component. At the fourth and final audit level, within the contract request for proposal, establish the operational requirements for validation and audit.

*(c) Audit Reports and Sign-off.* We have seen how the auditing process proceeds. The procedures just discussed establish the requirements for a complete audit trail, but only if the requirements are actually followed. Often, in practice, reality is far from the theoretical ideal. For example, program evaluation and review technique (PERT) and critical path method (CPM) charts are merely useless impedimenta if not maintained on a timely basis. We also know that documentation sometimes lags production by several cycles. Similarly, audit reports and sign-off will not be kept up to date and functional unless management insists. This is especially so in dealing with subcontractors, and one can see why: a subcontractor is paid to produce one or more deliverables, and paper records of any kind seem to some subcontractors to be nonfunctional and unnecessary.

For each of the activities, components "at risk" are identified, the risk aspects are analysed, steps are taken to avoid the risk and its ensuing consequences, management of the risk is initiated, and internal processes and procedures are developed to address components at risk. In addition, the risk detection and identification plan is modified to incorporate similar occurrences of such risk, if these are not already included in the plan. The risk management plan, as part of the overall strategic plan for the systems integration program, begins with an analysis of the requirements at the onset of the program to ascertain whether there are requirements statements that could jeopardize successful completion of the program (NASA, 2007).

The risk management plan continues with risk assessment for each of the phases of the systems integration life cycle. One of the most vexing problems in risk management is the early identification of potential causes of risk. This is especially true in the development of large, complex life-support systems and for large systems integration programs that are heavily dependent on the integration of legacy systems and newly developed requirements. What has made this problem particularly difficult is the necessity of using qualitative processes in an attempt to identify risk areas and risk situations. Risk detection and identification should commence with the issuance of requirements and the development of specifications, yet the risk assessment process is often delayed until the development of systems designs or even until the procurement of major subsystems. This is fundamentally an untenable situation: by this point in a program, investments of resources and personnel have been made, designs have been developed, and it is much too late to achieve an economical and efficient recovery without significant rework. The impact and ripple effect of program elements at risk becomes known only after discovery of the nature and character of the risk, thus jeopardizing the entire development program (Grady, 1994, 2010).


Consider the instance of systems, hardware and software requirements that may be at risk. If these requirements are found to be ambiguous, in conflict, incomplete, or changing too much (requirements volatility), they may be considered a cause of risk to successful completion of the program. Any one of these sources may, in and of itself, be sufficient to jeopardize the entire program if not resolved.

#### **6. Issues related to subcontractor arrangements**

In the ideal world, a systems integrator with systems engineering management and program management groups manages its subcontractors, each subcontract contains all the right requirements, and resources are adequate. In the real world, the technical team deals with contractors and subcontractors that are motivated by profit, subcontracts with missing or faulty requirements, and resources that are consumed more quickly than expected (Grady 1994, 2010). These and other factors cause or influence two key issues in subcontracting:


These issues are exacerbated when they apply to second- (or lower-) tier subcontractors. Scenarios other than those above are possible. Resolutions might include reducing contract scope or deliverables in lieu of cost increases, or sharing information technology in order to obtain data. Even with adequate flow-down of requirements in (sub)contracts, legal wrangling may be necessary to entice contractors to satisfy the conditions of their (sub)contracts. Activities during contract performance will generate an updated surveillance plan, minutes documenting meetings, change requests, and contract change orders. Processes will be assessed, deliverables and work products evaluated, and results reviewed (De Mello Filho, 2005).

Systems engineering companies that use an internal pool of technical resources to develop the entire program/project within their organization need independent control and audit of their process. System owners who choose to use the internal resources and capabilities of their organization to perform the development process should follow the systems engineering management process defined in a systems engineering process guidebook such as the INCOSE Systems Engineering Handbook. Internal agreements in the organization should be written and signed between the customer and the systems development team as though the work were procured from the outside. Moreover, there should be independent review (by another division such as a quality control and assurance team, an agency, or an independent consultant) of products and activities. Even when the development is done internally, an independent review team is recommended to provide a sanity check on the development process. This will create a healthy and clear perspective on the project and help to identify and manage project risks (De Mello Filho, 2005).

If the company uses an independent subcontractor in its development program/project, then control of the subcontracted service is performed by the contracting systems integrator. Independent control of the systems integrator carries the same responsibility as the independent control/audit by consultants or agencies used when internal system development resources are selected for the program/project development. However, subcontracting a service or product brings its own problems. Distributing the systems engineering process of the systems integrator to the subcontractor, and sharing the program schedule and the program risks related to the subcontracted activity with the subcontractor, are the important headlines of the subcontracting activity.

#### **7. Conclusion**

One of the major problems that systems engineering processes come across is how to deal with subcontractors in order to assure that the activities of subcontractors are consistent and compliant with the systems engineering standard procedures and criteria. One of the most challenging jobs for a systems engineering team is to understand the needs and requirements of the customer, the constraints and variables that are established, and the limits of business conduct that are acceptable for the particular job under contract. This understanding should be routed directly to the people who work under the subject contract of the customer. All of the requests, criteria and generic standards of customer needs associated with the subcontractor should be written directly into the subcontract's statement of work or tasking contract as well.

We have discussed how systems engineering teams implement and maintain an audit trail throughout the systems integration process and how they perform and record the details of the quality assurance process. Each of these activities carries special importance in how it is implemented in a systems integration approach with subcontractors engaged for assistance with the project or for procurement of hardware and software. Just as the customer provides the facilities with a set of requirements that it believes to be representative of the actual needs of the user, the corporation must prepare a detailed set of valid requirements for subcontractors. The absence of a strategic plan on the part of a subcontractor should result in the imposition of the systems integration organization's strategic plan, especially those parts that relate to audit trail maintenance; risk identification, formulation, and resolution; and such management processes and procedures as are essential for satisfactory performance of the contract or subcontract.

#### **8. References**


Associate CIO of Architecture (2002), "Departmental Information Systems Engineering (DISE) Volume 1 Information Systems Engineering Lifecycle", DISE-V1-F1-013102, 2002

David E. S., Michael B., Obaid Y. (2006), "Systems Engineering and Program Management - Trends and Costs for Aircraft and Guided Weapons Programs", United States Air Force, ISBN 0-8330-3872-9

De Mello Filho M. C. (2005), "A Whitepaper from IBM's Rational Software Division - Managing Subcontractors with Rational Unified Process", Rev 2.0, IBM Global, 2005

Grady O. J. (1994), "System Integration", CRC Press, 1994

Grady O. J. (2010), "System Synthesis: Product and Process Design", Taylor & Francis, 2010

INCOSE (2004), "Systems Engineering Handbook - A "What to" Guide for All SE Practitioners", INCOSE-TP-2003-016-02, Version 2a, 2004

Intergy Corporation (2002), "Phase 1 Systems Engineering Management Plan, Volume No 1 Appendix A - A Process Review of the Accepted Standards and Best Practices for Developing Systems Engineering Process Models"

NASA (2007), "NASA Systems Engineering Handbook", NASA/SP-2007-6105 Rev 1, 2007

Shamieh C., IBM (2011), "Systems Engineering for Dummies, IBM Limited Edition", Wiley Publishing, Inc., ISBN 978-1-118-10001-1, 2011

U.S. Department of Transportation (DoT) (2009), "Systems Engineering Guidebook for Intelligent Transportation Systems", Version 3, November 2009

### **System Engineering Approach in Tactical Wireless RF Network Analysis**

Philip Chan2, Hong Man1, David Nowicki1 and Mo Mansouri1 *1Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ, 2University of Maryland University College (UMUC), MD, USA* 

#### **1. Introduction**

Engineers dealing with interconnected engineering systems of different scales, such as tactical wireless RF communication systems, have a growing need to analyze complex adaptive systems. We propose a systemic engineering methodology based on the systematic resolution of complex issues in engineering design. Issues arise which affect the success of each process. There are a number of potential solutions for these issues, which are subject to discussion based on results assembled from a variety of sources with a range of measures. These results need to be assembled and balanced into a success measure showing how well each solution meets the system's objectives. The uncertain arguments used by the participants and other test results are combined using mathematical theory for analysis. This process-based construction helps not only in capturing the way of thinking behind design decisions, but also enables the decision-makers to assess the support for each solution. The complexity in this situation arises from the many interacting and conflicting requirements of an increasing range of possible parameters. There may not be a single 'right' solution, only a satisfactory set of resolutions, which this system helps to facilitate.

Applying systems engineering approaches will help in measuring and analyzing tactical RF wireless networks, and smart, innovative performance metrics may also be developed and enhanced through tactical modeling and simulation scenarios. Systematic use of systems engineering approaches with RF electronic warfare modeling and simulation scenarios can support future research in the vulnerability analysis of RF communication networks. RF electronic tactical models are used to provide a practical yet simple process for assessing and investigating the vulnerability of RF systems. The focus is also on tactical wireless networks within a system of systems (SoS) context, and on providing a comprehensive network assessment methodology. Researchers have proposed a variety of methods to build network trees with chains of exploits and then perform normal post-graph vulnerability analysis. This chapter presents an approach that uses a mathematical Bayesian network to model, calculate and analyze all potential vulnerability paths in wireless RF networks.

#### **2. Main methodology**

Tactical wireless network vulnerabilities are continually reported and critically studied by many U.S. government organizations. The need for a comprehensive framework for network vulnerability assessment using a systems engineering approach [24] [26] [27] [28] has been an increasing challenge for many research analysts. Researchers have proposed a more systematic way to manage wireless network nodes and trees with possible chains of events, and then perform normal post-graph vulnerability assessments with a system of systems methodology. The most recent systems engineering approaches build attack trees by trying to enumerate all potential attack paths, with vulnerability identification, node probability calculations, inference analysis, and weight assignments by system experts. These are expert-driven vulnerability analyses. Assessment and identification are among the main key issues in ensuring the proper security of a given deployed tactical RF communication network. The vulnerability assessment process involves many uncertain factors that reside within both the networks and the network nodes. Threat assessment, or injecting threats, is one of the major factors in evaluating a situation for its suitability to support decision-making and in indicating the security of a given tactical RF communication network system. One approach uses a database built by experienced decision-makers. This type of expert-driven database records most of their decisions on vulnerability identification. The decision-makers use past experience for their decisions, and a decision will be based upon previously good solutions that have worked in similar real-life scenarios. The approach is to extract the most significant characteristics from the lay-down situation. Any similar situations and actions that have worked well in past cases will be considered in the assessment due to the presence or absence of certain essential characteristics. The aim of assessment and identification is to create relevant relations between objects in the tactical RF network environment. Tactical RF wireless communication networks are best illustrated by David L. Adamy [11] in his book.

A Bayesian network (BN) and the related methods [17] are an effective tool for modeling uncertain situations and knowledge. This chapter discusses Bayes' theorem [17], Bayesian networks and their ability to function in a given tactical RF communication network [11] for vulnerability analysis and identification. It presents an approach that uses a Bayesian network to model all potential vulnerabilities or attack paths in a tactical RF wireless network; we call such a graph a "Bayesian network vulnerabilities graph" for a given tactical RF wireless network. It provides a more compact representation of attack paths than conventional methods, and Bayesian inference methods can be used for probabilistic analysis. It is necessary to use algorithms for updating and computing optimal subsets of attack paths relative to current knowledge about attackers. Tactical RF wireless models were tested on a small example JCSS [12] network. Simulated test results demonstrate the effectiveness of the approach.
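
To make the graph structure concrete, the following Python sketch builds a toy vulnerability graph and enumerates its attack paths. It is an illustration only: the node names, topology and exploit probabilities are hypothetical assumptions, and treating exploits as independent is a simplification of the Bayesian-network treatment developed in Section 5.

```python
# Toy "vulnerability graph": nodes are security/vulnerability points, directed
# edges are possible exploits with assumed probabilities, and a path is a
# chain of exploits (a potential attack path). All names/values hypothetical.
graph = {
    "entry_node":   {"relay_node": 0.35, "gateway": 0.20},
    "relay_node":   {"command_node": 0.15},
    "gateway":      {"command_node": 0.45},
    "command_node": {},
}

def attack_paths(node, target, prob=1.0, path=None):
    """Depth-first enumeration of all attack paths from node to target with
    their joint exploit probability (exploits assumed independent here)."""
    path = (path or []) + [node]
    if node == target:
        yield path, prob
        return
    for nxt, p in graph[node].items():
        yield from attack_paths(nxt, target, prob * p, path)

for path, p in attack_paths("entry_node", "command_node"):
    print(" -> ".join(path), f"(joint exploit probability = {p:.4f})")
```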

#### **2.1 Why systems engineering is used here**

Systems engineering [7] [8] [27] [28] is applied here to assist the rapid design and development of complex systems such as tactical wireless communication systems. Systems engineering [29] combines engineering science techniques with operations research, which also addresses the design of complex systems. Our goal is to utilize concurrent engineering principles in a systems engineering analysis that covers our design goals and testing requirements in developing the RF communication system. The systems approach to solving complex problems is critical, since integrating complex analysis and building RF communication models requires the synthesis of different methods. The systems approach is widely used and successful in fields of engineering, for example systems engineering. It is most effective in treating complex phenomena in tactical wireless RF communication networks.

All this requires the use of modular views that clearly illustrate the component features of the whole system. The views may be divided into different parts with proper interfaces, and extended knowledge may be gained about the parts in order to further understand the whole nature of a given tactical RF communication system. The system and its details may then be decomposed, across many levels, into several subsystems and sub-subsystems, and so on, down to the last details. At the same time, we can change focus to view different levels so that users are not overwhelmed by complexity. From time to time, abstract-level information may be hidden to maintain focus on a certain task for detailed analysis. We may simplify the system by treating some of its parts as black boxes, exposing only their interfaces. Hiding information for a certain RF tactical analysis is not discarding it: the same black box can be opened at a later time for other uses. Systems engineering can make a complex system more tractable, and some of the parts can be studied or designed with minimal interference from other parts. All these protective measures can control defective designs and improve system-level performance.

The systems approach is effective not only for understanding or designing tactical RF wireless communication systems but also for abstract construction in mathematics and theories. Instead of an actual physical RF communication module, an RF wireless network "subsystem" can be a concept within a conceptual scheme, and its "interfaces" can be relations to other concepts in the scheme. Analyses and concepts sometimes need to be approximate at the beginning; we can then refine the approximations step by step towards a better answer with our method of analysis. The systems approach is not merely a system-level approach but rather delves into lower-level subsystems. The system-level view is powerful and appropriate in some cases, but it misses most of the structure and dynamics of the system, so it is not employed on its own in our systems approach and modularity study here. The systems approach is an integral part of systems engineering. Our analysis here may also be called reduction, a "lessening" to ever finer information, which reflects the importance of detailed analysis.

#### **3. System of systems in tactical wireless network**

In general, a system of systems [9] [10] is a compilation of task-oriented or dedicated systems that bundle their resources and capabilities together to obtain a newer, more complex system offering more functionality and performance than simply the summation of the basic systems. Currently, system of systems is a critical research discipline that supplements engineering processes, quantitative analysis, tools, and design methods. The methodology for defining, abstracting, modeling, and analyzing system of systems problems is typically referred to as system of systems engineering. We are going to define features of a system of systems that are unique to our study of tactical wireless communication systems. The goal is that linking systems into a joint system of systems allows for the interoperability and integration of Command, Control, Computers, Communications, and Information (C4I) and Intelligence, Surveillance and Reconnaissance (ISR) systems, as described in the field of information management and control in modern armed forces. System of systems integration is a method to pursue better development, integration, interoperability, and optimization of systems to enhance performance in future combat-zone scenarios related to information-intensive integration.

As one might predict, the modern systems that make up system of systems problems are not merely massive; rather, they share some common characteristics: operational independence of the individual systems and managerial independence of the systems. System of systems problems are collections of multiple-domain networks of heterogeneous systems that are likely to exhibit operational and managerial independence, geographical distribution, and emergent and evolutionary behaviors that would not be apparent if the systems and their interactions were modeled separately. Taken together, these background requirements suggest that a complete system of systems engineering framework is necessary to improve decision support for system of systems problems. In our case, an effective system of systems engineering framework for tactical RF communication network models is desired to help decision-makers determine whether related infrastructure, policy, and/or technology considerations are good, efficient, or deficient over time. The urgent need to solve system of systems problems is critical not only because of the growing complexity of today's technology challenges, but also because such problems require large resource commitments and investments with multi-year costs. The bird's-eye view of the system-of-systems approach allows the individual systems constituting a system of systems to be different and to operate independently. Their interactions expose certain important emergent properties, and these emergent patterns have an evolving nature that the RF communication system stakeholders must recognize, analyze, and understand. The system of systems way of thinking promotes a new approach to solving grand challenges where the interactions of current technology, organization policy, and resources are the primary drivers. System of systems study also integrates the study of design, complexity and systems engineering with the additional challenge of design. Systems of systems typically exhibit the behaviors of complex systems, but not all complex problems fall into the area of systems of systems.

Systems of systems are, by nature, combinations of several qualities, not all of which are exhibited in the operation of heterogeneous networks of systems. Current research into effective approaches to system of systems problems includes the proper frame of reference and design architecture. Our study of RF communication network modeling, simulation, and analysis techniques includes network theory, agent-based modeling, probabilistic (Bayesian) robust design (including uncertainty modeling/management), and software simulation and programming with multi-objective optimization. We have also studied and developed various numerical and visual tools for capturing the interaction of RF communication system requirements, concepts, and technologies. Systems of systems are still employed predominantly in the defense sector and space exploration. System of systems engineering methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to many non-defense problems such as commercial PDA data networks, global communication networks, space exploration and many other system of systems application domains. System-of-systems engineering and systems engineering are related but slightly different fields of study. Systems engineering addresses the development and operations of one particular product, such as an RF communication network, whereas system-of-systems engineering addresses the development and operations of evolving programs: traditional systems engineering seeks to optimize an individual system (i.e., the target product), while system-of-systems engineering seeks to optimize a network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. It enables decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time, and the effectiveness of the methodology, and it may prepare decision-makers to design informed architectural solutions for system-of-systems context problems. The objective of our research is to focus on tactical wireless networks within a system of systems (SoS) context. The ultimate goal is to provide a comprehensive network assessment methodology and a possible framework with a systems engineering approach.

#### **4. Approach with system engineering**


Systems engineering [7] [8] [9] is employed here to look into wireless network vulnerabilities with simulation and modeling work-processes. Sets of useful tools are developed to handle the vulnerability analysis part of the RF wireless network. In the research, we have summarized a variety of methods to build network trees with chains of possible exploits, and then perform normal post-graph vulnerability assessment and analysis. Recent approaches suggest building more advanced attack trees by trying to enumerate all potential attack paths, with vulnerability identification, node probability calculations, inference analysis and weight assignments by system experts. Vulnerability analysis, assessment and identification are among the key issues in ensuring the security of a given tactical RF communication network. The vulnerability assessment process involves many uncertain factors. Threat assessment is one of the major factors in evaluating a situation for its suitability to support decision-making and in indicating the security of a given tactical RF communication network system. Systems engineering methodology plays a critical role in the research, helping to develop a distinctive set of concepts and methods for the vulnerability assessment of tactical RF communication networks. Systems engineering approaches have been developed to meet the challenges of engineering functional physical systems, such as tactical RF communication networks, with complexity. The systems engineering process employed here follows a holistic concept of systems engineering processes. With this holistic view in mind, the systems engineering focus is on analyzing and understanding the potential U.S. government customer needs. Reusable RF connectivity models with requirements and functionality are implemented early in the development cycle of these RF communication network models. We then proceed with design synthesis and system validation while considering the complete problem, the system lifecycle.

Based upon the concept by Oliver et al. [23], a systems engineering technical process was adopted during the course of the research. Within Oliver's model [23], the technical process includes assessing available information, defining effectiveness measures, creating a behavior (Bayesian vulnerabilities) model, creating a structure model, performing trade-off analysis, and creating a sequential build-and-test plan. At the same time, an RF communication system can become more complex due to an increase in network size as well as an increase in the amount of vulnerability data, engineering variables, or the number of fields involved in the analysis. The development of smarter matrices with better algorithms is a primary goal of the research. Disciplined systems engineering enables the use of tools and methods to better comprehend and manage complexity in wireless RF network systems for in-depth analysis. These tools are developed using modeling and simulation methodologies, optimization calculations and vulnerability analysis. Taking an interdisciplinary engineering systems approach to perform vulnerability analysis using a Bayesian graph with weight calculations is inherently complex. The behavior of, and interaction among, RF wireless network system components can be well defined in some cases. Defining and characterizing such RF communication systems and subsystems, and the interactions among them, in a way that supports vulnerability analysis is one of the goals of the research.

#### **5. Insights behind research**

A decision matrix is used for vulnerability analysis in the research. A decision matrix is an arrangement of related qualitative or quantitative values in rows and columns. It allows our research to graphically identify, analyze, and rate the strength of relationships between sets of vulnerability information. Elements of a decision matrix represent decisions, based upon calculations and the Bayesian network (BN), on certain vulnerability decision criteria. The matrix development is especially useful and critical for looking at large numbers of decision factors and assessing each factor's relative importance. The decision matrix employed in the research describes a multi-criteria decision analysis (MCDA) for the tactical RF wireless network. An MCDA problem with M alternative options, each to be assessed on N criteria, can be described by a decision matrix with M rows and N columns, or M × N elements. Each element, such as Xij, is either a single numerical value or a single grade, representing the performance of alternative i on criterion j. For example, if alternative i is "Wireless Node i", criterion j is "Background Noise" assessed by five grades {Excellent, Good, Average, Below Average, Poor}, and "Wireless Node i" is assessed to be "Good" on "Background Noise", then Xij = "Good". Table 1 is shown below:


Table 1. Decision matrix: M alternatives (rows) assessed on N criteria (columns); each element Xij is a single grade.
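
The layout of Table 1 is straightforward to represent in software. The short Python sketch below builds an M × N decision matrix of single grades; the node names, criteria and assessments are illustrative assumptions, not data from the study.

```python
# Minimal sketch of an M x N decision matrix: M alternatives (rows) assessed
# on N criteria (columns); each element X[i][j] is a single grade.
GRADES = ["Excellent", "Good", "Average", "Below Average", "Poor"]

alternatives = ["Wireless Node 1", "Wireless Node 2", "Wireless Node 3"]  # M = 3
criteria = ["Background Noise", "Message Completion Rate"]                # N = 2

# Hypothetical assessments: X[i][j] is the grade of alternative i on criterion j.
X = [
    ["Good",    "Excellent"],
    ["Average", "Good"],
    ["Poor",    "Below Average"],
]

# Example lookup, mirroring the text: Xij = "Good" for "Wireless Node 1"
# on "Background Noise".
i = alternatives.index("Wireless Node 1")
j = criteria.index("Background Noise")
print(X[i][j])  # -> Good
```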

#### **5.1 Multiple criteria decision**

Using a modified belief decision matrix, the research is now more refined and the matrix can describe a multiple-criteria decision analysis (MCDA) problem in the evidential reasoning approach. In decision theory, the evidential reasoning approach is a generic evidence-based multi-criteria decision analysis (MCDA) approach for dealing with problems having both quantitative and qualitative criteria under various uncertainties. This matrix may be used to support various decision analysis, assessment and evaluation activities, such as wireless RF network environmental impact assessment and wireless RF network internal node (transceiver) assessment, based on a range of quality models that are developed. For a given MCDA problem with M alternative options, each to be assessed on N criteria, the belief decision matrix has M rows and N columns, or M × N elements. Instead of being a single numerical value or a single grade as in a decision matrix, each element in a belief decision matrix is a belief structure. For example, suppose alternative i is "Wireless Node i" and criterion j is assessed by the five grades {Excellent, Good, Average, Below Average, Poor}. Suppose "Wireless Node i" is assessed to be "Excellent" on "Message Completion Rate" with a high degree of belief (e.g., 0.6) due to its low transmission delay, low propagation delay, good signal-to-noise ratio and low bit error rate, while at the same time it is assessed to be only "Good" with a lower degree of belief (e.g., 0.4 or less) because its fidelity and Message Completion Rate (MCR) can still be improved. In that case we have Xij = {(Excellent, 0.6), (Good, 0.4)}, or Xij = {(Excellent, 0.6), (Good, 0.4), (Average, 0), (Below Average, 0), (Poor, 0)}. A conventional decision matrix is the special case of a belief decision matrix in which only one belief degree in a belief structure is 1 and the others are 0. The modified matrix, Table 2, is shown below:


Table 2. Belief decision matrix: each element Xij is a belief structure over the assessment grades.
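
A belief decision matrix differs only in its element type, so a small sketch suffices to show the change. The helper below (a hypothetical function of our own, not part of any evidential reasoning library) completes a partial assessment into a full belief structure over the five grades:

```python
# Sketch of a belief decision matrix element: a belief structure, i.e. a
# distribution of belief degrees over the assessment grades.
GRADES = ["Excellent", "Good", "Average", "Below Average", "Poor"]

def belief_structure(assessed):
    """Complete a partial assessment; unmentioned grades carry degree 0.
    Belief degrees must sum to at most 1 (the remainder is unassigned)."""
    assert sum(assessed.values()) <= 1.0 + 1e-9
    return {g: assessed.get(g, 0.0) for g in GRADES}

# The text's example: Xij = {(Excellent, 0.6), (Good, 0.4)}.
X_ij = belief_structure({"Excellent": 0.6, "Good": 0.4})
print(X_ij)

# A conventional decision-matrix element (Xij = "Good") is the special case
# with a single belief degree of 1 and all others 0:
X_conventional = belief_structure({"Good": 1.0})
```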


#### **5.2 Probability distributions**

The research may help to develop a more systematic and automated approach for building a "Bayesian network vulnerabilities graph" with weight assignments for vulnerability studies in tactical wireless RF networks [11]. The Bayesian network [17] is designed as a vulnerabilities graph and models all potential attack steps in a given network. As described by T. Leonard and J. Hsu [17], Bayes' rule can be stated for continuous prior and posterior probability distributions and discrete probability distributions of data, but in its simplest setting, involving only discrete distributions, the theorem relates the conditional and marginal probabilities of events A and B, where B has a certain (non-zero) probability, as in (1):

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}\tag{1}$$

Each term in the theorem has a conventional name: P(A) is the prior or marginal probability of A; it is "prior" in the sense that it does not take into account any information about B. P(A|B) is the conditional probability of A given B, also called the posterior probability because it is derived from, or depends upon, the specified value of B. P(B|A) is the conditional probability of B given A. P(B) is the prior or marginal probability of B and acts as a normalizing constant. In this form the theorem expresses mathematically how the conditional probability of event A given event B is related to the converse conditional probability of event B given event A.

In our research, each wireless network node represents a single security and vulnerability point and contains a property violation mode; each link edge corresponds to an exploitation of one or more possible vulnerabilities; and each network path represents a series of exploits that can signify a potential avenue of attack within the RF wireless network. The communication model takes on the characteristics of a tactical wireless RF network, and we consider an integrated posterior probability of Bayesian networks (BN) [17] with a well-defined security metric to represent a more comprehensive quantitative vulnerability assessment of a given tactical RF network containing different communication stages. Posterior probability is a revised probability that takes new available information into account. For example, let there be two stages within a given wireless transceiver: stage A has vulnerability (accuracy) 0.35 due to the noise factor and 0.85 due to the jamming factor, while stage B has vulnerability 0.75 due to the noise factor and 0.45 due to the jamming factor. If a wireless stage is selected at random, the probability that stage A is chosen is 0.5 (a 50% chance, one out of two stages). This is the a priori probability for the vulnerability of a wireless communication stage. If we are given the additional information that a stage chosen at random from the wireless network exhibited the noise factor, what is the probability that the chosen stage is A? The posterior probability takes this additional information into account and, by Bayes' theorem, revises the probability downward from 0.5 to (0.35 \* 0.5)/(0.35 \* 0.5 + 0.75 \* 0.5) ≈ 0.32, since the noise factor is more probable from stage B (0.75) than from stage A (0.35). When the factor is jamming instead, the probability that the chosen stage is A is revised upward from 0.5 to 0.85/(0.85 + 0.45) ≈ 0.65, since the jamming factor is less probable from stage B (0.45) than from stage A (0.85).

The conditional independence relationship encoded in a Bayesian network (BN) can be stated as follows: a wireless node is independent of its ancestors given its parents, where the ancestor/parent relationship is with respect to some fixed topological ordering of the wireless nodes. Using Figure 1 below to demonstrate the outcome, by the chain rule of probability with stages C, S, R and W, the joint probability of all the nodes in the vulnerabilities graph is P(C, S, R, W) = P(C) \* P(S|C) \* P(R|C,S) \* P(W|C,S,R). By using the conditional independence relationships, we can rewrite this as P(C, S, R, W) = P(C) \* P(S|C) \* P(R|C) \* P(W|S,R), where we may simplify the third term because R is independent of S given its parent C, and the last term because W is independent of C given its parents S and R. The conditional independence relationships thus allow us to represent the joint distribution more compactly. Here the savings are minimal, but in general, with n binary nodes, the full joint would require $O(2^n)$ space to represent, whereas the factored form requires only $O(n \cdot 2^k)$ space, where k is the maximum fan-in of a node, giving far fewer overall parameters.
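
A minimal numerical sketch of the two computations above: the two-stage posterior from equation (1) with the uniform 0.5 prior, and the factored joint P(C, S, R, W). The likelihoods are the ones quoted in the text; the conditional probability values for C, S, R and W are hypothetical placeholders added only so the sketch runs.

```python
# Bayes' theorem (eq. 1) for the two-stage transceiver example, followed by
# the factored joint of the four-node Bayesian network of Figure 1.

def posterior_a(p_e_given_a, p_e_given_b, prior_a=0.5):
    """P(A | e) = P(e|A)P(A) / (P(e|A)P(A) + P(e|B)P(B)) for two hypotheses."""
    num = p_e_given_a * prior_a
    return num / (num + p_e_given_b * (1.0 - prior_a))

# Likelihoods from the text: noise (A: 0.35, B: 0.75), jamming (A: 0.85, B: 0.45).
print(posterior_a(0.35, 0.75))  # noise observed   -> ~0.318, revised down from 0.5
print(posterior_a(0.85, 0.45))  # jamming observed -> ~0.654, revised up from 0.5

# Factored joint P(C,S,R,W) = P(C) * P(S|C) * P(R|C) * P(W|S,R), exploiting
# R independent of S given C, and W independent of C given S and R.
# The four probability values below are hypothetical placeholders.
p_c, p_s_given_c, p_r_given_c, p_w_given_sr = 0.5, 0.1, 0.8, 0.99
print(p_c * p_s_given_c * p_r_given_c * p_w_given_sr)  # joint probability
```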

#### **5.3 Wireless communication models**

In this model, we are concerned with the vulnerability of the wireless network caused by the failure of various communication stages in the wireless RF communication network. Figure 2 presents the logical communication block diagram of our RF model. Each stage in an RF network is profiled with network and system configurations and their exhibited vulnerabilities, identified by breaking a given transceiver down into a transmitter and a receiver with different stages. Our modeling and simulation goal is to make use of the DISA JCSS Transceiver Pipeline stages [12]. All vulnerability data may be collected, along with the following run-time information: (1) the effect of the transmission on nodes in the vicinity; (2) the set of nodes that will attempt to receive the packet; (3) whether a node attempting to receive a packet succeeds; and (4) the time it takes for a packet to be transferred to the receiver. To start with the transmitter, we break the transceiver down into different radio pipeline stages.


Fig. 1. Vulnerabilities graph (simple stage within a wireless node)

Fig. 2. JCSS pipeline stages are defined for a wireless communication model

"Group Receiver" start with the index "Group 0". The transmitter executed once at the start of simulation for each pair of transmitter and receiver channels or dynamically by OPNET JCSS's [12] Kernel Procedure (KP) calls. Inside the radio pipeline stages of the receiver side, for every receiver channel which "passed" the transmission checks, the simulated RF packet will "flow" through the pipe. Using JCSS [12] and OPNET Modeler, it is very critical to make sure the JCSS Radio Pipeline Model [12] attributes are being configured correctly. This is particular important for military RF radios like EPLRS [12] during a lay-down of network nodes in different scenarios. In all cases, configuration should be retained and saved in the node model. In summary, for Radio Transmitter, there are six (6) different stages (stage 0 to stage 5) associated with each Radio Transmitter. The following are six of the stages for a give Radio Transmitter (RT): Receiver Group, Transmission Delay, Link Closure, Channel Match, Transmitter (Tx) Antenna Gain and Propagation Delay. As for the Radio Receiver, there are altogether eight (8) stages (stage 6 to stage 13) that associated with a Radio Receiver (RR): Rx Antenna Gain, Received Power, Interference Noise, Background Noise, Signal-to-Noise Ratio, Bit Error Rate, Error Allocation and Error Correction. In JCSS [12] and OPNET Modeler, there are altogether 14 Pipeline Stages (PS) that have implemented vulnerabilities graph for Bayesian networks (BN) [17] analysis. These are customized collections sequence of 'C' or 'C++' procedures (code & routines) with external Java subroutines and portable applications written for research purposes. In figure 2, each 14 different stages that comprised in a transceiver network perform a different calculation. For example in (1) Line-of-sight, (2) Signal strength & (3) Bit errors rates, Pipeline Stages (PS) code & routines are written in C, C++ and with external subroutine interfaces written in Java. Each procedure has a defined interface (prototype) with arguments typically a packet. Unlike most available vulnerability bulletins on public domains, we classify tactical wireless networks with vulnerabilities inside the 14 different stages of a given tactical wireless RF communication transceiver. So the vulnerabilities graph for a given tactical transceiver may be classified as vulnerabilities in Radio Transmitter are: (Vt1) Receiver Group, (Vt2) Transmission Delay, (Vt3) Link Closure, (Vt4) Channel Match, (Vt5) Transmitter Antenna Gain and (Vt6) Propagation Delay. On the hand the vulnerabilities for the Radio Receiver are: (Vr1) Rx Antenna Gain, (Vr2) Received Power, (Vr3) Interference Noise, (Vr4) Background Noise, (Vr5) Signal-to-Noise Ratio, (Vr6) Bit Error Rate, (Vr7) Error Allocation and (Vr8) Error Correction.

Fig. 3. An example of vulnerabilities template for JCSS (transmitter / receiver pair) and related simulations.

Using the existing JCSS tactical RF host configuration and profile editors with wireless networking analysis tools [13] [14], we can construct generic vulnerabilities graphs and templates to describe possible exploitation conditions for certain vulnerabilities in a given transceiver and then, on a larger scale, a given tactical communication network's overall situation. Each template contains some pre-conditions and post-conditions of an atomic event related to the communication stage, along with some security metric information. A successful JCSS simulation will lead to a better understanding of a more secure tactical RF communication model. Since we build vulnerability graphs using Bayesian networks (BN), we also assign the probability of success after a failure in a pipeline stage as the link-edge weight.
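A minimal sketch of such a template, reusing the hypothetical PipelineStage enum from the previous sketch and with all class and field names of our own choosing (this is not the JCSS API), could look as follows:

```java
import java.util.List;
import java.util.Map;

// Hypothetical shape of a vulnerability template as described above:
// pre-conditions and post-conditions for one atomic event in a
// communication stage, plus a probability used as the BN edge weight.
public class VulnerabilityTemplate {
    public final PipelineStage stage;                // which transceiver stage
    public final List<String> preConditions;         // e.g. "link closure passed"
    public final Map<String, String> postConditions; // attribute updates
    public final double successProbability;          // BN link-edge weight [0,1]

    public VulnerabilityTemplate(PipelineStage stage,
                                 List<String> preConditions,
                                 Map<String, String> postConditions,
                                 double successProbability) {
        this.stage = stage;
        this.preConditions = preConditions;
        this.postConditions = postConditions;
        this.successProbability = successProbability;
    }

    // A template fires only when every pre-condition holds in the current
    // network profile; the graph builder then adds a weighted edge.
    public boolean matches(List<String> networkProfile) {
        return networkProfile.containsAll(preConditions);
    }
}
```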

#### **5.4 Algorithm within vulnerabilities graph**


Specifying valid communication probabilities for the different stages requires domain expert knowledge. Most existing vulnerability scanning tools report vulnerabilities with a standard set of categorical security measurements, such as severity level and vulnerability consequences. Therefore, considering the nature of a wireless network, one can define a multi-dimensional security or vulnerability matrix using this categorical information and quantify the levels of each category into numerical values as a basis for computation and comparison. Our approach is to relate each matrix entry value to a stage in a given transceiver. The result can then be computed by a mathematical function that receives contributions from the various dimensions, such as a normal linear additive function *f*(*x* + *y*) = *f*(*x*) + *f*(*y*) or a multiplicative function *f*(*ab*) = *f*(*a*) *f*(*b*). It can then be converted to a value within the range [0,1] by applying a scalar function, i.e. a function of one or more variables whose range is one-dimensional, to the matrix. Such a value may represent the probability of a given vulnerability with respect to the transceiver. For example, one can define a two-dimensional m × n security matrix W = (wij), with one dimension wi denoting severity levels and the other dimension wj denoting ranges of exploits. A 3-scale severity level may be specified as {high = 0.95, medium = 0.65, low = 0.35}, and 2-scale exploit ranges may be specified as {remote = 0.55, local = 0.95}. If a multiplicative function is applied to the matrix, then each entry value is given by wij = wi × wj; a high-severity, remotely exploitable vulnerability, for instance, receives w = 0.95 × 0.55 ≈ 0.52.

Our research constructs Bayesian vulnerabilities graphs with our graph generation and mapping routine by matching a list of stages in a given transceiver on a wireless network, together with profile information, against a library of computed vulnerability node characteristic templates. For any vulnerability, if all pre-conditions are met, the values of the post-condition attributes are updated and an edge with an assigned weight is added to the vulnerability graph.

The most common task we wish to solve using Bayesian networks (BN) is probabilistic inference. For example, consider the network G of figure 1 and suppose we observe that G is in vulnerability status W (W = 1). There are two possible causes for this: factor S or factor R. Which is more likely? We can use Bayes' rule to compute the posterior probability of each explanation (where 0 = false and 1 = true):

$$\Pr(S=1 \mid W=1) = \frac{\Pr(S=1, W=1)}{\Pr(W=1)} = \frac{\sum\_{c,r} \Pr(C=c, S=1, R=r, W=1)}{\Pr(W=1)} = \frac{0.2781}{0.6471} = 0.4298$$

$$\Pr(R=1 \mid W=1) = \frac{\Pr(R=1, W=1)}{\Pr(W=1)} = \frac{\sum\_{c,s} \Pr(C=c, S=s, R=1, W=1)}{\Pr(W=1)} = \frac{0.4581}{0.6471} = 0.7079$$

where

$$\Pr(W=1) = \sum\_{c,s,r} \Pr(C=c, S=s, R=r, W=1) = 0.6471$$

Pr(W = 1) is a normalizing constant, equal to the probability (likelihood) of the data. We see that it is more likely that the network G has status W because of factor R rather than factor S: the likelihood ratio is 0.7079/0.4298 = 1.647. With the variable elimination techniques illustrated below, using the vulnerabilities graph in figure 4, we use Bayesian networks (BN) with a Bucket Elimination algorithm implementation in the models, with belief updating in our scenarios, to find the most probable explanation. We need to provide vulnerability values for each communication stage within each transceiver, plus the network scores for the entire tactical network. Finding a maximum probability assignment to the remaining variables is a challenge. We may need to maximize the a posteriori hypothesis given the evidence values, finding an assignment to a subset of hypothesis variables that maximizes their probability. Alternatively, we may need to maximize the expected utility of the problem given the evidence and a utility function, finding a subset of decision variables that maximizes the expected utility. A further consideration is the Bucket Elimination algorithm itself, which may be used as a framework for various probabilistic inferences on Bayesian networks (BN) in the experiment. Finally, an RF Vulnerability Scoring System (RF-VSS) analysis is in development. It is based upon the Common Vulnerability Scoring System [22] combined with additional features of Bayesian networks [17] (also known as belief networks), which in turn yields a more refined belief decision matrix; the matrix can then describe a multiple criteria decision analysis (MCDA) with an evidential reasoning approach for the vulnerability analysis of a given tactical wireless RF network.
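The following Java sketch makes the computation above concrete by brute-force enumeration of the factored joint. The conditional probability tables are our assumption (the standard textbook values for this four-node example), but they reproduce the quoted figures 0.6471, 0.4298 and 0.7079 exactly:

```java
// Enumeration-based inference over the four-node graph of figure 1
// (C, S, R, W), using the factored joint P(C)P(S|C)P(R|C)P(W|S,R).
// The CPT values below are assumed (standard textbook values), but
// they reproduce the numbers quoted in the text.
public class GraphInference {

    static double pC(int c) { return 0.5; }
    static double pS(int s, int c) { double p = (c == 1) ? 0.1 : 0.5; return (s == 1) ? p : 1 - p; }
    static double pR(int r, int c) { double p = (c == 1) ? 0.8 : 0.2; return (r == 1) ? p : 1 - p; }
    static double pW(int w, int s, int r) {
        double p = (s == 1 && r == 1) ? 0.99 : (s == 1 || r == 1) ? 0.9 : 0.0;
        return (w == 1) ? p : 1 - p;
    }

    // Joint in factored form: P(C) P(S|C) P(R|C) P(W|S,R).
    static double joint(int c, int s, int r, int w) {
        return pC(c) * pS(s, c) * pR(r, c) * pW(w, s, r);
    }

    public static void main(String[] args) {
        double pW1 = 0, pS1W1 = 0, pR1W1 = 0;
        for (int c = 0; c <= 1; c++)
            for (int s = 0; s <= 1; s++)
                for (int r = 0; r <= 1; r++) {
                    double p = joint(c, s, r, 1);
                    pW1 += p;               // marginal P(W=1)
                    if (s == 1) pS1W1 += p; // P(S=1, W=1)
                    if (r == 1) pR1W1 += p; // P(R=1, W=1)
                }
        System.out.printf("P(W=1)     = %.4f%n", pW1);         // 0.6471
        System.out.printf("P(S=1|W=1) = %.4f%n", pS1W1 / pW1); // 0.4298
        System.out.printf("P(R=1|W=1) = %.4f%n", pR1W1 / pW1); // 0.7079
    }
}
```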

Fig. 4. Use of Bucket Elimination Algorithm within vulnerabilities graph

#### **6. Results generated from sample experiments**


For simplicity of the network radio analysis, we present here a rather simple two-node wireless RF network scenario in which the nodes communicate with each other via the UDP protocol; a more complex one is illustrated in figure 5b. Using some of the available wireless networking analysis toolkits [13] [14], as in figure 5a, we created a set of JCSS EPLRS scenarios with a link being jammed. Packets were captured and exported into a Microsoft Excel spreadsheet. Jamming occurs between two wireless links in this network: EPLRS\_6004 and EPLRS\_6013. The EPLRS\_6013 transceiver model was changed to a special EPLRS EW network vulnerability model, as in figure 5c. The receiver link was intentionally jammed (by increasing the noise level to an extremely high value, i.e. the vulnerabilities within one of the wireless stages are increased many-fold) so that no simulated packet will be "successful" in getting through from EPLRS\_6004 to EPLRS\_6013; the results are listed and illustrated in figure 5d with some sample data.
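A back-of-the-envelope sketch of the jamming mechanism (our own illustration with made-up power levels, not JCSS code): raising the interference noise term collapses the output of the Signal-to-Noise Ratio pipeline stage, so the downstream Bit Error Rate and Error Allocation stages fail every packet:

```java
// Illustrative sketch of why raising the interference noise on one
// receiver stage kills the EPLRS_6004 -> EPLRS_6013 link: the SNR
// pipeline stage output collapses and no packet survives.
public class JammingEffect {

    // SNR in dB for a given received power and total noise (both in watts).
    static double snrDb(double rxPowerW, double noiseW) {
        return 10.0 * Math.log10(rxPowerW / noiseW);
    }

    public static void main(String[] args) {
        double rxPower = 1e-9;     // 1 nW received signal (illustrative)
        double background = 1e-12; // background noise floor (illustrative)

        System.out.printf("Nominal SNR: %.1f dB%n",
                snrDb(rxPower, background));           // ~30 dB: packets pass

        double jamming = 1e-6;     // jammer raises the interference noise
        System.out.printf("Jammed  SNR: %.1f dB%n",
                snrDb(rxPower, background + jamming)); // ~-30 dB: packets fail
    }
}
```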

Fig. 5a. Before and after scenarios using wireless networking analysis toolkits in Java

Fig. 5b. Wireless RF Networks

Fig. 5c. Two-node wireless network

Fig. 5d. Sample results generated by JCSS scenarios

#### **7. Future possibilities**

Bayesian analysis [17]: Bayes' theorem treats probability as a measure of a state of knowledge, whereas traditional probability theory looks at the frequency of an event happening. In other words, Bayesian probability looks at past events and prior knowledge and tests the likelihood that an observed outcome came from a specific probability distribution. With some sample field data, Bayes' theorem can be applied to wireless RF communications and computer networking science in tactical military applications. The research presented here builds a set of Bayesian network vulnerabilities graphs for the vulnerability study of tactical wireless RF networks. The Bayesian network is designed as a vulnerabilities graph and models all potential attack steps in a given network. Each wireless network node represents a single security property violation mode; each link edge corresponds to an exploitation of one or more possible vulnerabilities; and each network path represents a series of exploits that can signify a potential vulnerability for attack within a tactical RF wireless communication network. Inference plays a major part in our vulnerability calculations. Future research work will involve looking into different kinds of Bayesian networks (BN) with advanced topological arrangements, as in figure 6 below, with multiple experts and multiple factors analysis for our more advanced JCSS wireless RF vulnerabilities analysis.


Fig. 6. Multiple experts and multiple factors analysis

#### **7.1 Adaptive Bayesian network and scoring system**

Finally, we may consider an adapted Bayesian network (BN) for wireless tactical network analysis with an RF Vulnerability Scoring System (RF-VSS) that can generate weighted scores. The Common Vulnerability Scoring System (CVSS) is a NIAC research project of the U.S. Department of Homeland Security. This rating system is designed to provide open and universally standard severity ratings of vulnerabilities in specific systems, creating a global framework for disclosing information about security vulnerabilities. The CVSS is recognized and generally accepted by the public, with the support, international coordination and communication needed to ensure successful implementation, education and on-going development of the scoring system. It serves a critical need in helping organizations appropriately prioritize security vulnerabilities across different domains, and a common scoring system has the advantage of solving similar problems with better coordination. Building upon the Common Vulnerability Scoring System developed by Peter Mell et al. [22], we consider this a very valuable and useful scoring system for quickly assessing wireless RF security and vulnerabilities.

RF-VSS scores are derived from three components: a "base network" score, an "adversaries impact" score, and an "environmental impact" score. These can be described as a "fixed" score, an "external variable" score, and a "wireless RF network experts" assigned score. The base network score is fixed at the time the vulnerability is found, and its properties do not change. The base score includes numerous scoring metrics; each metric is chosen from a pre-determined list of options, each option has a value, and the values are fed into a formula to produce the base network score.

Next comes the temporal or adversaries impact score, which revises the base network score up or down and can change over time (it is "time sensitive"). For example, one of the component metrics of the adversaries impact score is the System Remediation Level (SRL): there may exist a common defense fix, perhaps from a contractor or vendor, or an emergency research workaround. If, when the detected vulnerability is first encountered, no fix is possible, the temporal or adversaries impact score will be much higher; once a solution or fix becomes possible, the score goes down dramatically. Again, it is a temporary and changing factor. There are three vulnerability metrics that make up the temporal or adversaries impact score; this score is then multiplied by the base network score to produce a new score, computed for the current operating wireless RF network scenarios set up via background expert diagnostics.

The final part is the environmental impact score, which captures how the vulnerability will affect the wireless RF network. The researchers determine how the combined vulnerabilities might affect the overall wireless RF network in field deployment. If the vulnerability carries very little risk with respect to all the listed factors, this computed score will be very low (close to zero). Five metrics affect the environmental impact score. This portion is combined with the base network and temporal adversaries impact scores to produce a final score on a scale of 1-10.
If it is a low 2, there is little need to worry; however, a higher score, like 6 or above, might indicate major security issues. We will provide a vulnerabilities smart index by constructing a novel calculator with a set of RF Vulnerability Scoring System (RF-VSS) metrics for the final system vulnerability analysis. For example: for a given wireless RF radio network, according to expert-released analysis and advisories, a set of "RF wireless network vulnerabilities" is assigned. The example metrics for the given wireless RF network scenarios with vulnerabilities are: (1) base network impact, (2) temporal or adversaries impact and (3) environmental impact.
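Since the exact RF-VSS formulas are still in development, the Java sketch below only illustrates one plausible combination scheme consistent with the description above (a fixed base score, a multiplicative adversaries factor and a multiplicative environmental factor, clamped to [0, 10]); all names and constants are assumptions, chosen so the output matches the 8.8 / 7.9 / 6.5 walk-through in the example that follows:

```java
// Hypothetical RF-VSS score combination, mirroring the CVSS-style
// structure described above. The arithmetic is an assumption made for
// illustration only; the real RF-VSS formulas are not published here.
public class RfVssCalculator {

    static double clamp(double score) {
        return Math.max(0.0, Math.min(10.0, score));
    }

    // Base network score: fixed once the vulnerability is found;
    // each metric value comes from a pre-determined list of options.
    static double baseScore(double... metricValues) {
        double s = 0;
        for (double v : metricValues) s += v;
        return clamp(s);
    }

    public static void main(String[] args) {
        double base = baseScore(3.0, 2.9, 2.9); // e.g. 8.8: "very bad"

        // Adversaries (temporal) multiplier in [0,1]; e.g. a partial
        // System Remediation Level (SRL) fix mitigates the base score.
        double adversaries = 0.9;
        double temporal = clamp(base * adversaries); // 8.8 -> 7.9

        // Environmental factor, assigned by wireless RF network experts
        // for the concrete field deployment; near 0 means negligible risk.
        double environmental = 0.82;
        double finalScore = clamp(temporal * environmental); // -> ~6.5

        System.out.printf("base=%.1f temporal=%.1f final=%.1f%n",
                base, temporal, finalScore);
    }
}
```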

Fig. 7. Transposing the vulnerabilities graph into a matrix for analysis

So, overall, a base RF wireless network vulnerability score of 8.8 (very bad) is slightly mitigated to 7.9 by the temporal or adversaries metrics. Still, 7.9 is not a great score and carries a considerable amount of risk. This is where the final environmental impact score comes in to alter the landscape. The negative impact may be bad for the overall wireless RF network when we look at the environmental impact metrics calculated for certain wireless network scenarios as illustrated above. We gather all those factors into the RF Vulnerability Scoring System (RF-VSS) calculator and it produces an environmental score of 6.5, which translates into high vulnerability. This is a relatively good approach for determining the overall risk of a given wireless RF network; the RF Vulnerability Scoring System (RF-VSS) analysis is based upon the Common Vulnerability Scoring System developed by Peter Mell [22], combined with additional features of Bayesian networks [17] (also known as belief networks). Using an adjacency matrix as a starting point, a more quantitative wireless RF network vulnerability assessment may be achieved. An adjacent edge counts as one unit in the matrix for an undirected graph, as illustrated in figure 7. (For example, nodes at given X, Y coordinates, numbered from #1 to #6, may be transposed into a 6x6 matrix.)
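As a small illustration of this transposition (the edge list below is hypothetical; the real one comes from the generated vulnerabilities graph), the following Java sketch builds the 6x6 matrix for an undirected six-node graph:

```java
// Sketch of the adjacency-matrix transposition of figure 7: six nodes,
// undirected edges, each adjacent edge counted as one unit.
public class AdjacencyMatrix {

    public static void main(String[] args) {
        int n = 6;
        // Hypothetical edge list between nodes #1..#6.
        int[][] edges = { {1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6} };

        int[][] a = new int[n][n];
        for (int[] e : edges) {
            int i = e[0] - 1, j = e[1] - 1; // nodes #1..#6 -> indices 0..5
            a[i][j] = 1;                    // undirected: symmetric entries
            a[j][i] = 1;
        }

        // Print the 6x6 matrix; row/column sums give each node's degree,
        // a first quantitative handle on how exposed a node is.
        for (int[] row : a) {
            StringBuilder sb = new StringBuilder();
            for (int v : row) sb.append(v).append(' ');
            System.out.println(sb.toString().trim());
        }
    }
}
```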

#### **8. Conclusion**


A possible framework based on a systems engineering approach [7] [8] is utilized. The ultimate goal is now partially achieved by providing a comprehensive network assessment methodology. Our study illustrates that, using systems engineering thinking, Bayesian networks [17] can be applied during the analysis as a powerful tool for calculating security metrics for information system networks. The use of our modified Bayesian network model with the mechanisms from the CVSS is, in our opinion, an effective and sound methodology, contributing to the development of security metrics by constructing a novel calculator with a set of RF Vulnerability Scoring System (RF-VSS) metrics for the final system vulnerability analysis. We will continue to refine our approach using more dynamic Bayesian networks to encompass the temporal domain measurements established in the CVSS.

This short paper demonstrated an approach to modeling all potential vulnerabilities in a given tactical RF network with a Bayesian graphical model. In addition, using a modified belief decision matrix, the research can describe a multiple criteria decision analysis (MCDA) using the Evidential Reasoning Approach [3] [4] [5] [6], which has been used to support various decision analysis, assessment and evaluation activities, such as impact and self assessments [1] [2], based on a range of quality models. In decision theory, the evidential reasoning (ER) approach is a generic evidence-based multi-criteria decision analysis (MCDA) method for dealing with problems having both quantitative and qualitative criteria under various uncertainties, including ignorance and randomness. The belief decision matrix may be used to support decision analysis, assessment and evaluation activities such as wireless RF network environmental impact assessment and wireless RF network internal node (transceiver) assessment, based on a range of quality models that are developed.

Bayesian vulnerabilities graphs provide comprehensive graphical representations with conventional spanning tree structures. The Bayesian vulnerabilities graph model is implemented in Java and is deployed along with the JCSS software. JCSS is the Joint Net-Centric Modeling & Simulation Tool used to assess end-to-end communication network capabilities and performance; it is the Joint Chiefs of Staff standard for modeling military communications systems, a desktop software application that provides modeling and simulation capabilities for measuring and assessing the information flow through strategic, operational and tactical military communications networks. Our new tool can generate and implement a vulnerabilities network graph with link edges and weights. All of these may be transposed into an adjacency matrix, as illustrated before, for a more quantitative wireless RF network vulnerability assessment. The convention followed here is that an adjacent edge counts as one in the matrix for an undirected graph, as illustrated in figure 7; for instance, nodes at given X, Y coordinates can be numbered from one to six and transposed into a 6x6 matrix.
The vulnerability analysis of a wireless RF network, with the help of a systems engineering approach [25] [26] [29], is then achieved by assigning corresponding measurement metrics with the posterior conditional probabilities of the Bayesian network [17]. The Bucket Elimination algorithm is adapted and modified for probabilistic inference in our approach. The most common approximate inference algorithms are stochastic MCMC simulation, the bucket elimination algorithm with its related elimination steps, which generalizes loopy and aggregated belief propagation, and variational methods. A better approximate inference mechanism may be deployed in the near future for more complex vulnerability graphs. Our method is readily applicable to tactical wireless RF networks by picking and implementing each model's communication stages and states. The results, when used with OPNET JCSS [12] simulation and modeling, will provide a graphical, quantitative and realistic assessment of RF network vulnerabilities for a network topology state and during actual deployment.

#### **9. Acknowledgment**

The authors thank Dr. John V. Farr, Dr. Ali Mostashari and Dr. Jose E. Ramirez-Marquez of the Department of Systems Engineering, Stevens Institute of Technology, for many fruitful discussions in the early stages and for providing outstanding intellectual environments and significant guidance, and the referees for helpful comments. We appreciate the assistance of Dr. Jessie J. Zhao, who proofread some of the technical materials. Finally, we also thank Dr. Mung Chiang, Professor of Electrical Engineering at Princeton University, for the countless effort and insight he contributed to this research and for his excellent technical skills, strategies and valuable advice in applying conventional and distributed convex optimization in the area of wireless communication networks.

#### **10. References**



[15] Chan, P., U.S. Army, ARL patent (pending), ARL Docket No. ARL 10-09, "Wireless RF Network Security and Vulnerability Modeling & Simulation Toolkit - Electronic Warfare Simulation & Modeling of RF Link Analysis with Modified Dijkstra Algorithm".

[16] Swiler, Phillips, Ellis & Chakerian, "Computer-attack graph generation tool," in Proceedings of the DARPA Information Survivability Conference & Exposition II (DISCEX'01), vol. 2.

[17] Liu, Yu & Man, Hong, "Network vulnerability assessment using Bayesian networks," in Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security 2005, Proceedings of the SPIE, Vol. 5812, pp. 61-71 (2005).

[18] Leonard, T. & Hsu, J., "Bayesian Methods: An Analysis for Statisticians and Interdisciplinary Researchers," Cambridge University Press, ISBN 0-521-00414-4, 1999.

[19] Sheyner, Lippmann & Wing, J., "Automated generation and analysis of attack graphs," in Proceedings of the 2002 IEEE Symposium on Security and Privacy (Oakland 2002), pp. 254-265, May 2002.

[20] Dijkstra, E., Dijkstra's algorithm. http://en.wikipedia.org/wiki/Dijkstra's\_algorithm

[21] Phillips & Swiler, "A graph-based system for network-vulnerability analysis," in Proceedings of the 1998 Workshop on New Security Paradigms, pp. 71-79, 1998.

[22] Ammann, Wijesekera & Kaushik, S., "Scalable, graph-based network vulnerability analysis," in Proceedings of the 9th ACM Conference on Computer and Communications Security, pp. 217-224, November 2002.

[23] Mell & Scarfone, "A Complete Guide to the Common Vulnerability Scoring System Version 2.0", National Institute of Standards and Technology. http://www.first.org/cvss/cvss-guide.html#n3

[24] Systems Engineering Fundamentals, Defense Acquisition University Press, 2001.

[25] Chan, P., Mansouri, M. & Hong, M., "Applying Systems Engineering in Tactical Wireless Network Analysis with Bayesian Networks", in Computational Intelligence, Communication Systems and Networks (CICSyN), 2010 Second International Conference, pp. 208-215, 2010.

[26] Defense Acquisition Guidebook (2004). Chapter 4: Systems Engineering.

[27] Bahill, T. & Briggs, C. (2001). "The Systems Engineering Started in the Middle Process: A Consensus of Systems Engineers and Project Managers". In: Systems Engineering, Vol. 4, No. 2 (2001).

[28] Bahill, T. & Dean, F. (2005). What Is Systems Engineering?

[29] Boehm, B. (2005). "Some Future Trends and Implications for Systems and Software Engineering Processes". In: Systems Engineering, Vol. 9, No. 1 (2006).

[30] Vasquez, J. (2003). Guide to the Systems Engineering Body of Knowledge - G2SEBoK, International Council on Systems Engineering.

[31] Lima, P.; Bonarini, A. & Mataric, M. (2004). *Application of Machine Learning*, InTech, ISBN 978-953-7619-34-3, Vienna, Austria.

### **Creating Synergies for Systems Engineering: Bridging Cross-Disciplinary Standards**

Oroitz Elgezabal and Holger Schumann *Institute of Flight Systems, German Aerospace Center (DLR) Germany*

#### **1. Introduction**

The increasing complexity of technical systems can only be managed by a multi-disciplinary and holistic approach. Besides technical disciplines like aerodynamics, kinematics, etc., cross-disciplines like safety and project management play an inherent role in the Systems Engineering approach. In this chapter, standards from different cross-disciplines are discussed and merged together to elaborate synergies which enable a more holistic Systems Engineering view.

After this introductory section, definitions of the terms *system* and *complexity* are given and the problems associated with the development of complex systems are introduced. The third section presents existing development philosophies and procedures; additionally, the mentioned cross-disciplines are introduced together with international standards widely established in the respective fields. Because the selected standards are not only complementary but also overlapping, the fourth section describes the harmonization approach carried out, together with the resulting holistic view. This combination of the standards enhances the benefits of the "traditional" Systems Engineering approach and solves many of the mentioned problems associated with the development of complex systems by also taking project management and safety aspects into deeper and, therefore, more holistic account.

#### **2. Background**

The concept *system* has been defined in multiple ways since Nicolas Carnot introduced it into the modern sciences during the first quarter of the 19th century. Most of the definitions assigned to it are based on the Greek concept of *"σύστημα systēma"*, which means: *a whole compounded of several parts or members, literally "composition"*. An example of the lasting influence of the original *system* concept on the modern one is the definition provided by Gibson et al. (Gibson et al., 2007), which defines a system as *a set of elements so interconnected as to aid driving toward a defined goal*.

As an extension of the concept *system*, the term *complex system* is interpreted very broadly and includes both physical (mostly hardware and software) groupings of equipment that serve a purpose, and sets of procedures that are carried out by people and/or machines (Eisner, 2005). In complex systems, characteristics and aspects belonging to different fields of expertise interact with each other. The factors which make a system complex are the interactions and interdependencies between the different components of the system. Those dependencies are not always obvious, intuitive or identifiable in a straightforward way. In particular, keeping a perspective of the whole system, together with all its implications, is complicated, if not nearly impossible, in big projects. Even if size is not a determinant factor for complexity, complex systems tend to be relatively large, with many internal and external interfaces. Additionally, in complex systems, considerations other than the purely technical frequently come into play, like political interests, international regulations, social demands, etc.

#### **2.1 Problems associated with complex systems development**

The development of complex systems implies other kinds of problems apart from those directly related to the different technical fields involved in it. Eisner summarizes in (Eisner, 2005) some of the problems associated with the design and development of complex systems and further classifies them into four categories: Systems-, Human-, Software- and Management-related problems. Fig. 1 lists the mentioned problem categories together with their respective problems associated with the development of complex systems.

Fig. 1. Problems associated with the development of complex systems

As a consequence of all those problems, efficiency during the system development process decreases, which can lead to a loss of money or to project cancellation, both due to lower productivity. Besides, this efficiency decrease can result in higher project risks, e.g. missed deadlines or project failure during the system verification phase due to poor system quality.

Another critical point associated with problems in the previous classification, like *Erratic communication, People not sharing information, Requirements creeping and not validated, No processes, all ad hoc* and *Poor software architecting process*, is the fact that they make the achievement and maintenance of traceability very difficult. Traceability is a key source of know-how in every company, since it condenses the rationale behind every decision made during the system design process. Traceability is also vital for locating design and production failures in case they are detected internally or complaints from customers take place. Finally, in the case of safety-related systems, it is a mandatory requirement for system certification as well as for failure and accident investigation.

All these problems result in a poor execution of system development processes which, in case they become established in the every-day working methodology of a company, could even threaten the profitability and continuity of the company itself.

#### **3. Existing development philosophies and procedures**

#### **3.1 System development philosophies**


The development of systems in general, and of technical systems in particular, has been carried out since the foundation of the engineering sciences, or even earlier. During that time, many different terms, like systems analysis and systems integration, have been used to refer to the concept represented by the modern Systems Engineering approach. Currently, two philosophies with different focuses are applied in the development of technical systems: the analytic and the holistic approach (Jackson, 2010).

On the one hand, the traditional approach taken for the development of systems is the *analytic approach*, which concentrates on developing each of the system's elements independently, paying attention neither to the system as a whole nor to the interactions among the different elements composing the system once they are assembled together. This design process is carried out according to the problem-solving methodology stated by Descartes, which consists of dividing complex problems into smaller and simpler problems. Once the top problem has been decomposed into a collection of atomic entities, the problems are solved hierarchically in an ascending way until a solution for the complex problem at the top is achieved. This kind of methodology, applied in conventional engineering design, is suitable and valuable for the design of systems where the technological environment is subject to minor changes, the system's goals are clear, and the amount of uncertainty is low.

On the other hand, the *holistic approach* is based on the *Systems thinking* philosophy, which considers a system as a whole rather than simply the sum of its parts, and tries to understand how the different parts of a system influence each other inside the whole. This approach also takes into consideration the boundaries and environment of the system-of-interest, by determining which entities are inside the system and which are not, as well as by analysing the influence of the operating environment on the system to be developed. The *holistic approach* has also been considered a problem-solving method in which the different aspects of a problem can most effectively be understood if they are considered in the context of interactions among them and with other systems, rather than in isolation. This problem-solving nature has also been stated by Sage and Armstrong in (Sage & Armstrong, 2000). According to them, the *holistic approach* stresses that *there is not a single correct answer or solution to a large-scale problem or design issue. Instead, there are many different alternatives that can be developed and implemented depending on the objectives the system is to serve and the values of the people and organizations with a stake in the solution.*

The principles of *Systems thinking* state that events can act as catalysts which can heavily influence complex systems. Thereby, the events as well as the systems can be completely different: the events can have a technical, natural or temporal source, amongst others, while the systems can be of a technical, political, social or any other kind. In fact, identifying the so-called *emergent* properties of a system that cannot be predicted by examining its individual parts is an exclusive feature of the *holistic approach* not provided by the *analytic approach*. This kind of methodology is suitable and valuable for the design of systems where the technological environment is subject to significant changes, the system's goals are not clear, and the amount of uncertainty is high.

According to the provided definition of *complex system* and the description of its characteristics, it can be stated that the features of the *holistic approach* make it best suited to the process of developing this kind of system. Table 1 maps the specific challenges associated with the development of complex systems to the characteristics and features provided by the holistic system design approach. It shows how the *holistic approach* provides measures to manage all the concerns present in a typical development process of complex systems.

The argument that the *holistic approach* is more suitable for developing complex systems is supported by the statement made by Gibson et al. in (Gibson et al., 2007), in which *system team members are supposed to be able to work across disciplinary boundaries toward a common goal when their disciplinary methodologies are different not only in detail but in kind*. A design process based on the *analytic approach* cannot fulfill this requirement, since the system team members work exclusively in their own disciplines and have access neither to a perspective view of the whole system nor to the context information related to the other elements in the system. The former is necessary for identifying the interacting elements, while the latter is necessary for assessing the way the different elements interact with each other.

The characteristics of the *holistic approach* described above may invite the assumption that this approach remains rather superficial and does not get very detailed or specific. This assumption is incorrect in the sense that, inside the *holistic approach*, much effort is devoted to in-scoping, high-fidelity modeling, and the specification of system requirements and architecture (Sage & Armstrong, 2000).


**Complex systems:**
- Difficulty to maintain the whole system under perspective
- Big amount of internal and external interfaces
- Implication of different technical fields
- Broad and heterogeneous stakeholders

**Holistic approach:**
- Systems considered as a whole, not as a sum of parts
- Focus on understanding how the different parts of a system influence each other inside the whole
- System aspects considered in the context of interactions among components and with other systems rather than in isolation
- Identification of *emergent* properties that cannot be predicted by examining individual parts of a system
- Analysis of unexpected interactions and cause-effect events
- Consideration of system boundaries and operating environment

Table 1. Mapping of characteristics of complex systems and holistic approach

#### **3.2 Standardized procedures as a means for managing complexity**


As in any other field of life, the experience and knowledge acquired over time play a vital role in the design of complex systems. Past experience provides the system engineer with a set of rules of thumb, intuition, and a sense of proportion and magnitude which, combined, constitute a very valuable toolbox for proposing solutions, supporting judgements and making decisions during the development of complex systems. Those design principles, guidelines, or rules that have been learned from experience, especially with respect to the definition of the architecture of a system, have been considered by Jackson to constitute what is called heuristics (Jackson, 2010).

It is common for companies to rely on heuristics-dominant system teams for the development of systems in areas considered sensitive. However, this is a very individual-centred approach, in which the system's or even the company's know-how is concentrated in specific people and is thus dependent on them. This kind of know-how is critical: if a key person leaves the team or the company, the know-how he or she possesses leaves as well, creating a loss of knowledge with two different consequences. On one side, the company loses the existing information, causing a regression of the company's know-how in the field. On the other side, it takes a long time to determine exactly which specific know-how has been lost and to assess which part of it still remains in the company.

Another aspect of heuristics to be considered is that human beings unconsciously make use of the knowledge they possess in a specific situation in order to interpret the reality they confront. In other words, heuristics provide background information and help to put facts and figures in context and to interpret them. This means that two members of the same system team might interpret the same information differently and derive different conclusions from it, simply because they possess different background knowledge.

A standardized know-how management system can make a company's dependency on individuals' heuristics unnecessary or, at least, less critical. The generation of standard documentation with a predefined structure and contents allows the most important information about projects to be condensed and transmitted. A key piece of information that must be included in the standard documentation is the rationale behind the different decisions made in the project, in order to provide traceability. Standardized documentation means that anyone working in a company knows exactly which documents are available inside a project and which information they contain. This makes it possible to minimize the consequences of a key person leaving the team, since his or her successor would ideally be able to reach the same knowledge status about the project in a fast and efficient way, thanks to the traceability of the decisions made. For the same reasons, the information contained in the standardized documentation can be transmitted to every other member of the team or the company in a transparent way, enabling homogeneous background information about the project to be shared by all team members.
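To make the idea concrete, the following sketch shows how one entry of such standardized documentation, a decision record carrying its rationale for traceability, might be modelled. It is only an illustration: the `DecisionRecord` class and all field names are hypothetical and do not come from any of the standards discussed in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """One project decision in a standardized, traceable format (hypothetical)."""
    identifier: str                     # unique key, e.g. "DEC-042"
    subject: str                        # what was decided
    rationale: str                      # why it was decided this way
    rejected_alternatives: List[str] = field(default_factory=list)
    affected_documents: List[str] = field(default_factory=list)  # traceability links

# A successor joining the team can reconstruct the decision history from records:
record = DecisionRecord(
    identifier="DEC-042",
    subject="Use a redundant sensor pair for speed measurement",
    rationale="A single sensor cannot meet the required failure rate",
    rejected_alternatives=["single high-grade sensor"],
    affected_documents=["System requirements", "Architectural design"],
)
```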

In the modern and globalized industrial market, where trends, products and technologies change very rapidly and companies worldwide compete fiercely for the same business niche, the reputation of a company frequently plays a determinant role. This reputation basically depends on the quality of the products they produce or the services they provide, which in turn greatly depends on the quality of the processes used during the whole product life-cycle. The definition of efficient and high-quality working methodologies and best practices is the result of an iterative learning process which refines itself using the lessons learned during past projects. All this know-how is considered a strategic business asset of every company and is therefore condensed into standard practices and regulations that become mandatory for every employee. Every time a new employee joins the entity, he or she must become acquainted with those internal regulations and assimilate them.

Nowadays, system development strategies based on the black-box approach, which uses in-house developed proprietary technologies, have been superseded by a white-box approach based on Commercial-off-the-Shelf technologies, where most of the system development workload is subcontracted to external entities. This subcontracting strategy has many associated advantages, such as the reduction of development costs and risks (derived from delegating the development of specific system parts to companies with more experience in that type of element), among others. However, this strategy also has associated risks that must be correctly managed so that they do not become drawbacks with highly negative effects. One of those risk factors is the higher communication flow between at least two different entities, which in general possess different working methodologies and tools. A standardized system development process makes the exchange of information effective and efficient since, on one side, there is no risk of misinterpretation of the transmitted information and, on the other side, the number of required transactions decreases because every party knows which documents with which specific content must be delivered in every phase of the development process.

All these aspects have also been considered by Sage and Armstrong (Sage & Armstrong, 2000), who stated that the development process of any system in general, and of complex systems in particular, should fulfil, amongst others, the following requirements:

- Systems engineering processes should be supportive of appropriate standards and management approaches that result in trustworthy systems.
- Systems engineering processes should support the use of automated aids for the engineering of systems, such as to result in production of high-quality trustworthy systems.
- Systems engineering processes should be based upon methodologies that are teachable and transferable and that make the process visible and controllable at all life-cycle phases.
- Systems engineering processes should be associated with appropriate procedures to enable definition and documentation of all relevant factors at each phase in the system life cycle.

In summary, standardized processes help to increase productivity in system development activities by improving the transparency of all team members' work, which eases and advances communication and collaboration. They also help to increase the quality of working methods and products, as well as to manage the company's know-how by enabling traceability of requirements, decisions, rationales and deliverables. This traceability makes all working steps reproducible and improves the consistency and integrity of all deliverables, contributing to the management of the knowledge created during the process. Additionally, standardized processes help to mitigate risks by enabling comparability with previous development projects, amongst others, which supports the monitoring and controlling of cost and schedule.

#### **3.3 Fundamental development disciplines**


The fundamentals of building and managing complex systems at the top level have been identified by Eisner in (Eisner, 2005). According to him, there are three areas which are critically important in building and managing complex systems: Systems engineering, project management and general management. The importance of these three areas has also been identified by Sage and Armstrong in (Sage & Armstrong, 2000), where they state that *Systems engineering processes should enable an appropriate mix of design, development and systems management approaches*.

Additionally, in the special case of developing systems whose failure could imply catastrophic consequences, such as large economic losses or human casualties, the concepts, methods and tools belonging to the safety engineering discipline must also be considered fundamental.

The area of general management is an extremely broad topic which is out of the scope of the current chapter; the chapter's contents will therefore concentrate on the other disciplines mentioned, i.e. Systems engineering, project management and safety engineering.

#### **3.3.1 Systems engineering**

*Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems* (Haskins, 2010). It is based on well-defined processes considering customer needs and all other stakeholders' requirements, and it always profits from providing a holistic view of all problems across the whole development life-cycle. It has progressively attracted attention in different fields of industry as a methodology for managing the design and development of complex systems in a successful, efficient and straightforward way. According to (Gibson et al., 2007), *it is a logical, objective procedure for applying in an efficient, timely manner new and/or expanded performance requirements to the design, procurement, installation, and operation of an operational configuration consisting of distinct modules (or subsystems), each of which may embody inherent constraints or limitations.* 

This conceptual definition of Systems engineering implicitly states that the development process is defensible against external critics and that all the decisions made inside it are objective and traceable. As reasoned previously, traceability is a fundamental characteristic that must be present in every development process because of the multiple benefits associated with it, e.g. project reproducibility, or the creation of know-how by stating the rationale behind the design decisions made and by listing and describing the risks found and resolved during the development process.

Additionally, the previous definition of Systems engineering also implicitly describes its holistic nature, by taking into consideration all the phases of a system's life-cycle and the interfaces and interactions between the system-of-interest and the systems related to it.

An international standard called *ISO/IEC 15288 – Systems and software engineering* (ISO 15288, 2008) has been published in the field of Systems engineering. It provides a *common framework for describing the life-cycle of systems* from conception up to retirement and defines associated processes and terminology. Processes related to project management are specified therein but, because the standard's scope focuses on Systems engineering, those processes do not cover the complementary domain of project management. The last update of the ISO 15288 standard was released in 2008¹, which indicates an active standard still undergoing iterative improvement. Nevertheless, the standard has been consolidated with the INCOSE Handbook (Haskins, 2010), which is broadly established worldwide.

¹ The references made to ISO 15288 relate to the 2008 version of the standard, if not explicitly mentioned otherwise.

#### **3.3.2 Project management**

*Project Management is the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements* (PMI, 2008). It is also based on well-defined processes regarding planning, executing, monitoring, and controlling of all working activities and the effective application of all assigned project resources. Project management profits from an always transparent status of all activities and deliverables and from the early identification of any risks.

It must be remarked that project management consists not only of applying the specific skills necessary for carrying out a project once it has been accepted, but also of managing the systems team itself in an effective manner.

Gibson et al. identify in (Gibson et al., 2007) some requirements for building an effective systems team. Aspects like having a leader, defining a goal, and using a common working methodology with a well-balanced set of skills among members who pull together towards the goals have been identified as critical for achieving the project's goals on schedule.

Sage and Armstrong (Sage & Armstrong, 2000) state in addition that systems engineering processes should possess the following characteristics from the point of view of project management: 1) *they should support the quality assurance of both the product and the process that leads to the product*, 2) *they should be associated with appropriate metrics and management controls* and 3) *they should support quality, total quality management, system design for human interaction, and other attributes associated with trustworthiness and integrity*. These statements support the idea of a holistic design process for developing complex systems.

The Project Management Institute (PMI) has published the guide to the *Project Management Body of Knowledge (PMBOK)* (PMI, 2008). This document is recognized as a standard by classical standardization entities like ANSI and IEEE. It covers all management topics completely, without taking engineering aspects into scope.

#### **3.3.3 Safety engineering**

Safety engineering can be seen as *a set of well-defined processes aiming at achieving freedom from unacceptable risk* (ISO 61508, 2009), together with the application of methodologies to quantify and to prove it. As not only the complexity of modern systems increases but also their capabilities, the number of functions performed by a system rises as well. Among those functions are safety-related functions, performed by specific systems, whose failure would lead to significant economic and material damage, severe injuries, or even fatalities. The increase in system capabilities, together with humankind's growing dependency on them, means that safety more and more often depends directly on the fail-safe operation of systems. Furthermore, the more safety-related equipment is integrated into a system, the bigger the probability that one system element fails. This, in turn, increases the concern about safety in society. Additionally, due to the overall complexity of systems, assessing the impact of single failures on them and setting up preventive or corrective actions is a very challenging task. All these facts have contributed to making safety considerations more and more essential to modern development processes.
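The observation that more integrated safety-related equipment raises the probability of some element failing can be backed with elementary reliability arithmetic: for n independent elements, each failing with probability p over a given interval, the probability that at least one fails is 1 − (1 − p)ⁿ. The following sketch illustrates this growth; the numeric values are arbitrary examples, not figures from any standard.

```python
def prob_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent elements fails,
    where each element fails with probability p over the same interval."""
    return 1.0 - (1.0 - p) ** n

# With p = 0.01 per element, the system-level probability grows quickly with n:
for n in (1, 10, 50, 100):
    print(f"{n:3d} elements -> {prob_any_failure(0.01, n):.3f}")
# 1 -> 0.010, 10 -> 0.096, 50 -> 0.395, 100 -> 0.634
```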

In the field of Safety engineering, a widely adopted international standard is *ISO/IEC 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems* (ISO 61508, 2008), which sets out a generic approach to all safety life-cycle activities for systems whose elements perform safety functions. This standard is field-independent and provides the basic framework on which additional, branch-specific industrial safety standards are built, e.g. ISO 26262, the new safety standard for the automotive domain, or EN 50128 for railway systems.

#### **3.4 Drawbacks of standards involved in complex-systems development**

The standards presented above have each evolved historically independently of the others. This implies that, even though the industry has recognized the participation of the three cross-disciplines and their combined use as critical for the development of complex systems, the different standards are poorly connected or not connected at all. This leads to a situation in which the standards overlap with each other in many processes and activities and, in the worst case, could even contain conflicting directives. Additionally, there is a lack of a consolidated set of terms used across the standards. Every standard makes its own definitions of terms, which creates confusion and misunderstandings and makes cross-disciplinary communication difficult.

Besides, the standards themselves possess some deficiencies that hinder their interpretation and understanding and, consequently, their implementation. On one side, the ISO 15288 standard does not provide any sequence diagrams showing the relationships between the processes and activities contained in it. On the other side, the ISO 61508 standard lacks a detailed description of the inputs and outputs associated with the different activities it describes.

#### **4. Systems engineering approach based on international standards**

#### **4.1 General description**


The holistic Systems engineering view described in this work takes the ISO 15288 standard as its core and tries to combine it with the other two standards introduced above. Some of the technical processes contained in ISO 15288 are also addressed by the safety and project management standards respectively, providing interfaces where information can be exchanged among them or even where processes can be merged together. This combination of standards can be seen in the case of the project-related processes of ISO 15288, which are completely replaced by those defined inside the PMBOK standard, because the latter considers them in a much more detailed way. The agreement processes defined by the ISO 15288 standard are also considered by the PMBOK standard inside the procurement area but, in this work, merging the agreement processes of both standards has been considered out of scope.

From the five organizational project-enabling processes defined by the ISO 15288 standard, only the *Human resource management* and *Quality management* processes are explicitly addressed by the PMBOK standard. The remaining three processes are not explicitly treated by the project management standard and therefore they are not considered inside the present work. Fig. 2 shows the process groups defined by the systems engineering standard together with an overview of the process groups also addressed by the project management and safety standards.

#### **4.2 Harmonization process**

The analysis and comparison of different items, such as the standards mentioned above, is logically impossible without a common reference framework in which all the items to be compared can be represented.

Fig. 2. Overlapping of considered standards regarding process groups

A detailed analysis of the three international standards has revealed that no common reference framework exists among them. This implies that, before any task of the merging process can be carried out, e.g. comparison and identification of interfaces among the standards, a reference framework must be defined. The PMBOK standard provides a clear overview of its management processes structured in a two-dimensional matrix, representing different process groups in its columns against specific knowledge areas in its rows. This kind of matrix-based representation has been considered by the authors a clear and valuable means for analysing, comparing and merging the different international standards and, consequently, it has been selected as the reference framework for the merging process.

None of the ISO standards analysed defines process groups or knowledge areas in the way that PMBOK does. The PMBOK standard defines process groups according to a temporal sequence, while the ISO 15288 standard defines its process groups on a purpose basis. As a consequence, the respective reference matrices of both ISO standards need to be created from scratch. Instead of the process groups used by the PMBOK standard, the different life-cycle stages named by the ISO 15288:2002 standard have been taken. In the case of the knowledge areas, where the ones from the PMBOK standard were not appropriate, new ones have been defined.

This approach showed that the matrices of both ISO standards can be merged into one single matrix, while the mapping of the management process groups of the PMBOK standard into the life-cycle stages of the ISO standards is not possible. This is due to the fact that project management activities are carried out during the whole life-cycle of the system-of-interest and not just during a specific stage. Besides, there are also several knowledge areas regarding management, e.g. procurement, which cannot be considered together with technical processes. In consequence, the management and life-cycle stages have to be considered as parallel stages, and two different process matrices have been created: one for management processes and another for technical processes.

Finally, the processes being assigned to the same stage and knowledge area inside the technical processes' matrix are good candidates for interfacing or merging. After the description of the two matrices, a detailed analysis of the processes follows based on the matrices.

#### **4.2.1 Management processes**


The matrix shown in Fig. 3 is taken from the PMBOK standard. The columns represent process groups, which can also be seen as project management stages starting with *Initiation* and ending with *Closing*. Each row represents a typical project management topic, further called a knowledge area. All of the forty-two management processes specified by the PMBOK standard are classified into the cells resulting from crossing the five process group columns with the nine knowledge area rows.

#### **4.2.2 Technical processes**

In the case of technical processes, the ISO 15288 standard does not define stages for the life-cycle of systems. However, a division of the life-cycle into various stages was provided in its previous version, ISO 15288:2002. These life-cycle stages have been assigned to the columns of the respective matrix. For the rows, the ISO 15288 standard defines four knowledge areas (as shown in Fig. 2) in which the life-cycle processes are grouped by their purpose. However, these knowledge areas are not useful for comparing the processes with those contained in the other standards. Therefore, the knowledge areas used in the project management matrix were considered. Only two of them, *Scope* and *Quality*, were found to be also relevant for technical processes. Two further knowledge areas have been defined by the authors: *Realisation* represents all activities which elaborate the outputs of the *Scope* area, which can then be quality-checked, while *Service* describes all the activities to be carried out during the operating life of a system.

The ISO 15288 standard does not explicitly assign any processes to any life-cycle stages. In fact, the processes are initiated in one or more stages and some can be executed sequentially or in parallel. In this work, an interpretation process has been carried out in which the processes of the standard have been assigned to the cells of the matrix described above. The aim of this interpretation work was to enable the comparison and analysis of the processes and activities of the three standards in order to facilitate the identification of possible interfaces and overlapping areas between the different standards.
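To illustrate the resulting structure, the sketch below represents the technical matrix as a simple mapping from (life-cycle stage, knowledge area) cells to assigned processes. The few entries shown anticipate the ISO 15288 assignments discussed in the following paragraphs; the representation itself is an illustrative assumption, not part of any standard.

```python
# Technical process matrix: (life-cycle stage, knowledge area) -> processes.
# The sample entries follow the ISO 15288 assignments described below.
technical_matrix = {
    ("Conception", "Scope"): ["Stakeholder requirements definition",
                              "Requirements analysis"],
    ("Conception", "Realisation"): ["Architectural design"],
    ("Development", "Realisation"): ["Implementation", "Integration"],
    ("Development", "Quality"): ["Verification"],
    ("Production", "Quality"): ["Validation"],
    ("Retirement", "Service"): ["Disposal"],
}

def cell(stage: str, area: str) -> list:
    """Processes sharing a cell are candidates for interfacing or merging."""
    return technical_matrix.get((stage, area), [])

print(cell("Development", "Quality"))  # ['Verification']
```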


Fig. 3. Project management processes assigned to process groups and knowledge areas


The eleven technical processes specified inside ISO 15288 have been spread over the matrix using a black font, as depicted in Fig. 4. Inside the *Conception* stage, three different processes have been assigned to two different knowledge areas. The two processes dealing with requirements have been assigned to the *Scope* area, while the *Architectural design* process has been assigned to the *Realisation* area. In the first case, requirements specify the scope of the system. In the second case, the process was assigned to that specific area because one of the process' activities is to evaluate different design candidates, which cannot be done in the development stage or later ones. Besides, the process generates a system design based on the requirements elicited in the scope area, which supports its assignment to the *Realisation* row.

The *Production* stage contains the *Transition* and *Validation* processes in two different knowledge areas. The *Transition* process has been assigned to the *Production* stage because development ends before the transition of the system (ISO 24748-1, 2010). As with *Verification*, *Validation* has been placed in the *Quality* area. It must be remarked that the *Validation* process has been considered by the authors to take place at the end of the transition, when the customer accepts the system delivered and installed in its operational environment. *Operation* and *Maintenance* belong to *Utilization* and *Support*, while *Disposal* can be found in the *Retirement* stage. All of them are assigned to the *Service* area, since even the activities of the *Disposal* process can be seen as a service in the widest sense.

The ISO 61508 standard defines sixteen so-called life-cycle phases. In this work, they are interpreted as activities because, for each of them, inputs and outputs are defined and, in the corresponding chapters of the standard, tasks that have to be carried out are indicated. The standard defines neither superior life-cycle stages comparable to those of ISO 15288:2002 nor knowledge areas. For this reason, and because the activities are also of a technical kind like the processes of ISO 15288, they have been assigned to the same matrix shown in Fig. 4.

The matrix contains all sixteen activities defined by ISO 61508, illustrated by a grey font. Six activities are assigned to the *Conception* stage, divided between two different knowledge areas. *Concept*, *Overall scope definition*, *Hazard and risk analysis* and *Overall safety requirements* have been assigned to the *Scope* area because they contribute to defining the scope and the safety-related requirements for the design. *Overall safety requirements allocation* and *System safety requirements specification* have been assigned to the *Realisation* area. This is due to the fact that both activities specify and allocate safety requirements to designed system elements during the *Architectural design* process.

Inside the *Development* stage, five different activities have been assigned to three different knowledge areas. First, *Realisation*, *Other risk reduction measures* and *Overall installation and commissioning planning* have been assigned to the *Realisation* area because they address questions related to the physical implementation of the system. The two remaining planning activities, i.e. *Overall safety validation planning* and *Overall operation and maintenance planning*, have been assigned to the *Quality* and *Service* knowledge areas respectively. The planning activities typically take place in parallel to the implementation, and they must be carried out before the system is installed and validated in its operational environment.


Fig. 4. ISO 15288 technical processes and ISO 61508 activities assigned to life-cycle stages and knowledge areas.

Inside the *Development* stage, another three processes have been assigned to the *Realisation* and *Quality* areas. On one hand, the *Implementation* and *Integration* processes have been assigned to the *Development* stage and *Realisation* area because the physical creation of the real system takes place inside them. On the other hand, the *Verification* process is part of the *Quality* area because it contributes to guaranteeing the quality of the system-of-interest under development.

The *Overall modification and retrofit* activity has been assigned to the *Support* stage because this activity is typically initiated when the system is under operation and its support is active. Due to the fact that the output of this activity can affect all knowledge areas including the scope, it has been assigned to this overall area. The last two activities of the ISO 61508 standard, i.e. *Overall operation, maintenance and repair* and *Decommissioning or disposal* can be found in the *Service* area, assigned to the corresponding life-cycle stage.

#### **4.3 Detailed standards interfacing and merging process**


Those processes which are in the same life-cycle stage or knowledge area, or both, bear potential for being harmonized. After an in-depth analysis of the three standards, eleven information interfaces and twelve process interfaces have been identified. On one hand, information interfaces represent some kind of information generated under one of the standards which is provided to the other standards for their use; e.g., safety requirements provided by ISO 61508 are merged into the *System requirements* document generated by the ISO 15288 standard. On the other hand, process interfaces represent similar activities that are carried out in at least two of the standards and which, in consequence, can be put together in order to avoid duplications that constitute a waste of resources.
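The two interface types can be pictured with a small data model: an information interface carries an artefact produced under one standard into another, while a process interface merges similar activities from two standards. The class and attribute names below are illustrative choices, mirroring the examples given in the text.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InformationInterface:
    """An artefact generated under one standard is consumed by another."""
    source_standard: str
    target_standard: str
    artefact: str

@dataclass
class ProcessInterface:
    """Similar activities in two standards are merged to avoid duplication."""
    standards: Tuple[str, str]
    merged_activity: str

# Examples taken from the harmonization described in this section:
info_if = InformationInterface("ISO 61508", "ISO 15288",
                               "Safety requirements merged into System requirements")
proc_if = ProcessInterface(("PMBOK", "ISO 15288"),
                           "Identify stakeholders / Identify the individual stakeholders")
```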

Because processes basically describe a sequence of activities, they are typically represented by some kind of flow diagram. For this reason, a standardized graphical notation for process diagrams has been selected to represent the relevant process parts and the outcome of their merging.

#### **4.3.1 Business Process Model and Notation (BPMN) specification**

The Object Management Group (OMG), a non-profit consortium dedicated to developing open computer industry specifications, took over the development of the BPMN specification in 2005. BPMN's primary goal is to provide a notation that is readily understandable by all business users, from the business analysts, to technical developers, and to managers who will manage and monitor those processes. (OMG, 2011)

The notation used in Fig. 5 to Fig. 8 corresponds to BPMN. The processes defined in the different standards (*activities* in BPMN) are represented as boxes, their outputs (*data objects*) are depicted by the leaf symbol, and the arrows illustrate the sequence flow. Circle symbols represent either the start or end event, or they describe an incoming or outgoing link to another diagram or (not depicted) process. In BPMN, a diamond symbol illustrates a gateway control type which marks the point where sequence flow paths are joined or divided. Gateways that initiate or merge a parallel sequence flow are expressed by a diamond containing a *plus* symbol. In the following diagrams, those gateways have been mostly omitted for the sake of simplicity and size. Gateways that introduce a conditional sequence flow are expressed by an empty diamond. Horizontal *pool lanes* represent a categorization of activities.
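As a reading aid for Fig. 5 to Fig. 8, the notation elements just listed can be reduced to a small graph structure: activities and data objects are nodes, and sequence flows are directed edges. The sketch below is a simplified stand-in for this purpose, not an implementation of the BPMN 2.0 metamodel; the element names are taken from the conception stage discussed in Section 4.4.1.

```python
# Nodes of the diagram: BPMN activities and data objects (simplified).
activities = {"Identify stakeholders", "Stakeholder requirements definition"}
data_objects = {"Stakeholder register"}

# Sequence flows as directed (source, target) edges; an edge crossing pool
# lanes corresponds to an information interface between two standards.
sequence_flows = [
    ("Identify stakeholders", "Stakeholder register"),
    ("Stakeholder register", "Stakeholder requirements definition"),
]

def successors(node: str) -> list:
    """Follow the sequence flow from a given node."""
    return [dst for src, dst in sequence_flows if src == node]

print(successors("Identify stakeholders"))  # ['Stakeholder register']
```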

#### **4.4 Harmonization result: The Holistic Systems Engineering view (HoSE)**

Fig. 5 to Fig. 8 represent the product life-cycle stages defined in ISO 15288:2002. Each figure contains the project management as well as the technical processes corresponding to the specific life-cycle stage. Due to length constraints, a complete in-depth representation of all the standards' levels is not possible, so only the top-level view is provided. The processes of every standard are contained in a pool lane; the ISO 15288 standard is depicted in the middle pool lane of each figure. In the case of the PMBOK standard, only the processes related to technical activities have been considered. Every sequence flow arrow crossing a lane represents an information interface between the corresponding standards.

#### **4.4.1 HoSE conception stage**

In Fig. 5, the corresponding processes of the three international standards for the *Conception* stage are shown. This includes eight of the technical processes already assigned to the conception stage as depicted in Fig. 4 as well as three related management processes.

In every standard, one initiating process is defined: for the PMBOK it is the *Identify stakeholders* process, for ISO 15288 it is the *Stakeholder requirements definition* process, and so on. In this case, the *Identify stakeholders* process has been selected as the starting point. Looking at the activities of the ISO 15288 *Stakeholder requirements definition* process and its outputs shows that it includes a sub-activity called *Identify the individual stakeholders*. This activity matches exactly the *Identify stakeholders* process from PMBOK, which identifies the related stakeholders and lists them in an output document called the *Stakeholder register*. As a consequence, the ISO 15288 sub-activity has been merged with the PMBOK process, and the *Stakeholder register* document it produces has been provided as an input to the remaining activities inside the *Stakeholder requirements definition* process of ISO 15288.

In the PMBOK lane in Fig. 5, the next process is the *Collect requirements* process. This can be merged with the *Elicit stakeholder requirements* activity of the *Stakeholder requirements definition* process from ISO 15288. At this point, a distinction between product and project requirements, as explicitly recommended by the PMBOK, helps to differentiate between the project's progress and the system-of-interest's advancements. In this way, PMBOK's activity of eliciting product requirements is merged into the ISO 15288 process, which also includes merging the techniques of facilitated workshops and prototypes into the ISO standard. In consequence, the output documents of the *Collect requirements* process are changed to project-only requirements, a project-only requirements traceability matrix, and an (unchanged) requirements management plan. The sequence flow of the documents is kept as defined in the PMBOK, as illustrated by the grey lines.
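The effect of this merge on the document flow can be sketched as a simple filter: product requirements move into the ISO 15288 process, while only project requirements remain in the PMBOK outputs. The tagging scheme used below is a hypothetical illustration, not prescribed by either standard.

```python
# Hypothetical requirements as tagged during elicitation.
collected = [
    {"id": "R1", "kind": "product", "text": "The system shall report its position"},
    {"id": "R2", "kind": "project", "text": "A prototype shall be delivered in Q3"},
]

# Product requirements flow into ISO 15288's Stakeholder requirements
# definition process; project requirements stay in the PMBOK documents.
iso15288_inputs = [r for r in collected if r["kind"] == "product"]
pmbok_documents = [r for r in collected if r["kind"] == "project"]
```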

The separation of the requirements into stakeholder and system requirements, as explicitly recommended by ISO 15288, enables the consideration of different views on the requirements. Stakeholder requirements define high-level functions from the point of view of the client's expectations, while system requirements define functions in more detail from a technical perspective. Both kinds of requirements belong to the problem domain and not to the solution domain; in other words, they specify what should be developed, not how it should be done. The stakeholder requirements constitute the input for the *Concept* activity of ISO 61508 and provide the level of understanding of the system-of-interest and its environment required by this task. The *Concept* activity includes performing a *Functional hazard analysis (FHA)*, which contributes safety-related requirements to the stakeholder requirements by identifying the likely sources of top-level hazards for the system. Those enhanced stakeholder requirements complement the requirements flowing to further PMBOK or ISO 15288 processes.


Fig. 5. Conception stage of the holistic systems engineering view

Fig. 6. Development stage of the holistic systems engineering view


Fig. 7. Production stage of the holistic systems engineering view

Fig. 8. Utilization, support, and retirement stages of the holistic systems engineering view

The *Requirements analysis* process of ISO 15288 refines stakeholder requirements into technical system requirements. In the holistic view, the technique of *Product analysis*, specified in PMBOK's *Define scope* task, and the product-related *Scope statement* are moved into this process. The complete system requirements are used by the *Overall scope definition* activity of ISO 61508 to refine the identified hazards and to specify the boundary and scope of the system-of-interest from the safety perspective. Both the *Requirements analysis* and *Overall scope definition* processes could disclose weaknesses in the stakeholder requirements, which enforce a revision of the requirements (not depicted in the figure for the sake of clarity). The resulting enhanced system requirements flow into the related PMBOK processes and into the *Architectural design* process of ISO 15288.

As shown in Fig. 5, the *Architectural design* process of ISO 15288 is split into several parts in order to accommodate the safety assessment related activities. First, a preliminary architectural design is created and passed to the *Hazard and risk analysis* activity of ISO 61508. In this activity, a *Failure Modes and Effects Analysis (FMEA)* together with a *Fault Tree Analysis (FTA)* is performed based on the provided design. The FMEA table and the fault trees are used in the *Overall safety requirements* activity to create safety-related requirements for the architecture, such as required reliability and redundancy levels.

Those requirements are fed back into the *Architectural design* process which provides a refined design where system elements are identified and all requirements are allocated to the related elements (*Allocated design*). The allocation activity also includes the allocation of safety requirements which means that the *Overall safety requirements allocation* activity of ISO 61508 standard can be merged into the *Architectural design* process. In the *System safety requirements specification* activity, safety requirements for the system elements are identified which again influence the design refined in the *Architectural design* process. Finally, an architectural design is created representing the whole system, its decomposition, and its interfaces. Additionally, all system elements are specified in detail to enable their realization in the next stage: the development stage.
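As one concrete illustration of the FMEA step feeding the *Overall safety requirements* activity: a common FMEA practice (though not mandated by ISO 61508) is to rank failure modes by a Risk Priority Number, RPN = severity × occurrence × detection, and to derive architectural requirements such as redundancy for the highest-ranked modes. The rating scales, values, and threshold below are arbitrary examples.

```python
# Minimal FMEA sketch: rank failure modes by Risk Priority Number (RPN).
# Severity (sev), occurrence (occ) and detection (det) use an arbitrary 1-10 scale.
failure_modes = [
    {"mode": "sensor drift",  "sev": 7, "occ": 4, "det": 6},
    {"mode": "power loss",    "sev": 9, "occ": 2, "det": 3},
    {"mode": "software hang", "sev": 8, "occ": 3, "det": 7},
]
for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Modes above a project-specific threshold trigger architectural safety
# requirements, e.g. added redundancy or monitoring.
THRESHOLD = 150
for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
    if fm["rpn"] > THRESHOLD:
        print(fm["mode"], fm["rpn"])   # sensor drift 168, software hang 168
```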

#### **4.4.2 HoSE development stage**


Fig. 6 shows the six technical processes assigned to the development stage in Fig. 4 as well as three related management processes. The system elements specified in the *Conception* stage are realized inside the *Implementation* process of ISO 15288. The *Realisation* and *Other risk reduction measures* activities of ISO 61508 have been merged into this process since both of them are related to the physical implementation of the system-of-interest. The realized system elements resulting from the *Implementation* process are passed to the *Integration* process for further development or, in case the production of the system-of-interest has been cancelled, to the *Disposal* process. On the sub-contractor side, verification, quality control, and validation tasks may also follow directly after or within the *Implementation* process.

During the *Integration* process, the physical system elements are assembled together according to the architectural design. This process ends with the physical implementation of the system-of-interest, including its configuration. During system integration, problems or non-conformances may arise, which lead to change requests.

Those requests are explicitly managed by PMBOK's *Perform integrated change control* process. Approved change requests enforce corrective actions to be carried out within the *Direct and manage project execution* process of the same standard. This may include revising the corresponding requirements, updating the project management plan, implementing an improved system element, or, in the worst case, cancelling the project. The *Overall modification and retrofit* activity of the ISO 61508 standard is also responsible for managing change requests with regard to safety aspects; thus it has been merged into the change control process of the PMBOK standard.

As shown in Fig. 6, a PMBOK process called *Perform quality control* follows a successful integration, but it can also be carried out after the *Implementation* and/or *Verification* processes. The goal is to check the quality of the output provided by the related process. Any non-conformances are managed as described in the previous paragraph. The *Verification* process of ISO 15288 checks whether the realized system meets the architectural design and the system requirements, which can also include quality requirements. Again, non-conformances may arise in this process; otherwise, the verified system can be transferred into the *Production* stage.

During the implementation of the system or its elements, safety related planning must be performed according to ISO 61508. The corresponding outputs are plans regarding installation, commissioning, safety validation, operation, and maintenance. Those plans have to be integrated into the project management plan.

#### **4.4.3 HoSE production stage**

In the *Transition* process of ISO 15288, the verified system is set up in its operational environment. This is done under consideration of the stakeholder and system requirements and the installation plan provided by ISO 61508, which contains a description of the operational environment. The *Overall installation and commissioning* activity of ISO 61508 also deals with the installation aspects of safety-critical systems; therefore, it has been merged into the *Transition* process of the ISO 15288 standard.

After the transition, the installed system is validated against the requirements and the safety validation plan during the ISO 15288 *Validation* process. PMBOK's *Verify scope* process and the *Overall safety validation* activity of ISO 61508 have been merged into this process due to their common goals. To enable the verification of the project's scope as required by PMBOK, the *Validation* process is enhanced by the project validation task from PMBOK, which requires the project scope statement as an input document. This additional task may lead to project document updates regarding the current state of the project or product.

Non-conformances during *Transition* or *Validation* are managed as already described. They can affect any requirements, designs, plans, or realized system elements, which leads to a reiteration of the corresponding process. After a successful *Validation* process, the system, including its operational configuration, can be passed to the *Utilization* and *Support* stage.

#### **4.4.4 HoSE utilization, support, and retirement stages**

Fig. 8. Utilization, support, and retirement stages of the holistic systems engineering view

The validated system and the safety-related operation and maintenance plan are the inputs for the next processes of ISO 15288. During the *Operation* process, the system is used to deliver the expected services that meet the stakeholder requirements. The *Maintenance* process is typically applied in parallel with *Operation* and sustains the system's capability to deliver those services. The *Overall operation, maintenance and repair* activity of ISO 61508 is split in two, and the corresponding parts are merged into the respective processes. *Operation* and *Maintenance* are carried out continuously until non-conformances arise or the end of service is reached.

During system operation and/or maintenance, change requests regarding the system or the services it delivers may arise. These must be evaluated through PMBOK's *Perform integrated change control* process. The *Overall modification and retrofit* activity of ISO 61508, which is responsible for guaranteeing the safe operation of the system, has been merged into this process. If the intended modification is unfeasible or the system's end of service is reached, the *Disposal* process organizes the retirement and disposal of the system. The *Decommissioning or disposal* activity of ISO 61508 has the same function; thus the two have been merged together.
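As a compact illustration, the sketch below encodes these stage transitions as a small state machine. The state names and event strings are assumptions chosen for this example; they do not appear in the standards.

```python
from enum import Enum, auto


class Stage(Enum):
    UTILIZATION_AND_SUPPORT = auto()  # Operation and Maintenance in parallel
    CHANGE_CONTROL = auto()           # PMBOK: Perform integrated change control
    RETIREMENT = auto()               # ISO 15288: Disposal


def next_stage(stage: Stage, event: str) -> Stage:
    """Transition rules distilled from the prose above."""
    if stage is Stage.UTILIZATION_AND_SUPPORT:
        if event == "change_request":
            return Stage.CHANGE_CONTROL
        if event == "end_of_service":
            return Stage.RETIREMENT
    if stage is Stage.CHANGE_CONTROL:
        # A feasible modification returns the system to service;
        # an unfeasible one leads to retirement and disposal.
        return (Stage.UTILIZATION_AND_SUPPORT
                if event == "modification_feasible" else Stage.RETIREMENT)
    return stage  # RETIREMENT is terminal


assert next_stage(Stage.UTILIZATION_AND_SUPPORT, "end_of_service") is Stage.RETIREMENT
assert next_stage(Stage.CHANGE_CONTROL, "modification_feasible") is Stage.UTILIZATION_AND_SUPPORT
```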

#### **4.5 Harmonization summary**

Fig. 9 illustrates a general overview of the harmonization work done. It shows the considered disciplines of project management, systems engineering, and safety engineering, together with their identified interfaces. There are two kinds of interfaces: on the one hand, *information interfaces* express a dependency between information or documents of different standards; an information interface results in a merge or change of the information or document flow. On the other hand, *process interfaces* represent a merge of whole processes or process parts of different standards.

Fig. 9. General overview of the holistic Systems Engineering view

It must be remarked that interfaces between the three standards are present in every life cycle stage. This reinforces the usefulness of consolidating the processes of those three standards into a holistic view.
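The distinction between the two interface kinds can be captured in a few lines of Python. The following data-model sketch is purely illustrative; the example documents and process names are assumptions based on the merges discussed in section 4.4.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class InformationInterface:
    """Dependency between information/documents of different standards;
    results in a merged or changed information or document flow."""
    source_document: str
    target_document: str


@dataclass(frozen=True)
class ProcessInterface:
    """Merge of whole processes or process parts of different standards."""
    merged_processes: Tuple[str, ...]


# Examples taken from the merges described in section 4.4 (names abridged).
interfaces = [
    ProcessInterface(("ISO 15288: Validation",
                      "PMBOK: Verify scope",
                      "ISO 61508: Overall safety validation")),
    InformationInterface(source_document="ISO 61508: installation plan",
                         target_document="PMBOK: project management plan"),
]

for interface in interfaces:
    print(interface)
```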

#### **4.6 Benefits of the holistic systems engineering view**

The use of standardized procedures during the development of complex systems has many associated advantages. As previously stated in section 3.2, these advantages arise in different aspects of a company. From a commercial point of view, standardized procedures contribute to increasing the efficiency of a company's processes and to improving communication with subcontractors and clients, and, as a result, to increasing the quality of the products or services a company offers. From a corporate point of view, standardized procedures provide the basis for traceability and for storing the rationale behind decisions, which are the fundamental factors for generating and managing a company's know-how.

Most of the systems development problems mentioned in section 2 can be solved, or at least reduced, by applying the mentioned standards. However, some of the problems can be solved more effectively by applying the presented harmonized view of the standards. This is especially true for those problems that concern knowledge management, risk management, communication, and systems thinking.

Using the classification of problems provided in Fig. 1, it can be stated that the use of the HoSE view contributes to solving problems in all problem areas homogeneously, thus reinforcing its holistic character.

In the case of *Human-related* problems, the *Bad customer* and *Erratic communication* problems are solved. In the bad customer case, a holistic approach based on standardized processes generates standardized documentation. One of those documents is the stakeholder requirements document, which must be approved by all the stakeholders. Using this document, later disputes about uncovered topics or unfulfilled objectives can be rejected. The *Systems-related* problem of *Scope arguments with customer* is solved in the same way. The problem of erratic communication is solved most effectively when the project manager and the systems engineer are different people: following the presented holistic view, the project manager and the systems engineer now follow the same processes, e.g. in the field of requirements definition, which avoids misunderstandings.

In the case of the remaining *Systems-related* problems, the *Insufficient funding* and *Insufficient schedule* problems are solved. All the different standards generate and store information during the whole life cycle of previous projects. A holistic view condenses information from many different sources, thus providing an extremely valuable information source for the planning of further projects. This accumulated information supports an accurate and realistic calculation of resources during project planning.

In the case of *Software-related* problems, the problems associated with risk management, performance, and quality management, namely *Cannot evaluate and mitigate software risks*, *Do not know how to deal with software warranties*, and *Cannot satisfy a critical customer requirement to software performance*, are solved due to the advantages provided by the HoSE view. Here, the cross-discipline of safety engineering provides means for assessing risks, for assessing the proper operation of the system, and for guaranteeing the satisfaction of critical requirements, none of which are present in a non-holistic approach. The same argument applies to the *Management-related* problem of *Quality of services and products inadequate*.

Finally, regarding the *Management-related* problems, the HoSE view contributes to achieving what Senge (1994) states to be one of the most important disciplines of a learning organization: Systems Thinking.

### **5. Conclusions**

The increasing complexity of contemporary technical systems has led to several problems, inefficiencies, and safety threats during their whole life cycle. The systems thinking philosophy, initiated as a consequence of the common need for a better understanding of multidisciplinary dependencies, brought to the surface the need for a holistic approach to the development of complex systems.

Standardized processes are critical to managing this complexity. In addition, they improve risk mitigation, productivity, and quality, and they serve as a basis for generating and managing the knowledge of a company.

Two disciplines are considered essential in the development of modern complex systems: systems engineering and project management. In a reality where more and more responsibilities are being delegated to technical systems, the safety engineering discipline has become essential as well. For each of the three cross-disciplines, one internationally accepted standard has been chosen. ISO 15288 is widely recognized as a means for managing complexity and coping with uncertainties. The PMI PMBOK standard comprises detailed project management processes and activities and has gained the widest support in industry worldwide. Finally, ISO 61508 is a basic industrial standard which sets out a generic approach for developing safety-critical functions; it has been used as a reference for domain-specific safety standards.

Despite the existing interdependencies regarding systems engineering, all three cross-disciplines have developed their corresponding standards with minimal consideration of each other in the form of cross-references. This leads to a situation in which the standards overlap in many processes and activities and, in the worst case, may even contain conflicting directives. Additionally, some deficiencies have been identified, such as missing sequence diagrams and the lack of a clear description of the inputs and outputs of the associated activities.

A unique kind of representation has been conceived in order to enable the comparison of the different standards. The processes belonging to the different cross-disciplines have been arranged together in a matrix form representing life-cycle stages and knowledge areas. Processes assigned to the same stage and knowledge area were identified as possible candidates for harmonization. Interacting processes and activities were either merged together or their information flows were adapted into a holistic view. The resulting view, called the HoSE view, has been illustrated using the standardized *Business Process Model and Notation (BPMN)*.
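The matrix arrangement lends itself to a simple grouping operation. The following Python sketch illustrates the idea under assumed stage and knowledge-area labels; the tuples shown are examples, not the complete process inventory of the three standards.

```python
from collections import defaultdict

# (standard, process or activity, life-cycle stage, knowledge area)
processes = [
    ("ISO 15288", "Validation",                "Production", "Quality"),
    ("PMBOK",     "Verify scope",              "Production", "Quality"),
    ("ISO 61508", "Overall safety validation", "Production", "Quality"),
    ("ISO 15288", "Transition",                "Production", "Integration"),
    ("ISO 61508", "Overall installation and commission",
                                               "Production", "Integration"),
]

# Group processes by matrix cell (stage, knowledge area).
cells = defaultdict(list)
for standard, name, stage, area in processes:
    cells[(stage, area)].append(f"{standard}: {name}")

# Cells occupied by more than one standard are harmonization candidates.
for (stage, area), candidates in sorted(cells.items()):
    if len(candidates) > 1:
        print(f"{stage} / {area}: " + "; ".join(candidates))
```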

The results of the work carried out show that several interfaces and synergies exist between the three standards. The holistic view arising from this work aims to provide a good basis for further harmonization and consolidation within standardisation activities. Furthermore, it also contributes to enhancing the systems engineering approach by further improving its capabilities regarding productivity, quality, and risk mitigation.

#### **6. References**


Eisner, H. (2005). *Managing Complex Systems: Thinking Outside the Box*, John Wiley & Sons, Inc., ISBN 978-0-471-69006-1, Hoboken, USA.

Gibson, J. E., Scherer, W. T., & Gibson, W. F. (2007). *How to Do Systems Analysis*, John Wiley & Sons, Inc., ISBN 978-0-470-00765-5, Hoboken, USA.

Haskins, C. (Ed.). (2010). *Systems Engineering Handbook: A Guide for System Life-Cycle Processes and Activities v. 3.2*, International Council on Systems Engineering (INCOSE), San Diego, USA.

*ISO/IEC 51, Safety Aspects: Guidelines for their Inclusion in Standards* (1999), International Organization for Standardization (ISO), Geneva, Switzerland.

*ISO/IEC 15288:2002, Systems and Software Engineering: System Life Cycle Processes* (2002), International Organization for Standardization (ISO), Geneva, Switzerland.

*ISO/IEC 15288:2008, Systems and Software Engineering: System Life Cycle Processes* (2008), International Organization for Standardization (ISO), ISBN 0-7381-5666-3, Geneva, Switzerland.

*ISO/IEC TR 24748-1, Systems and Software Engineering: Life cycle management -- Part 1: Guide for life cycle management* (2010), International Organization for Standardization (ISO), ISBN 978-0-7381-6603-2, Geneva, Switzerland.

*ISO/IEC 61508:2010, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems* (2010), International Organization for Standardization (ISO), ISBN 978-2-88910-524-3, Geneva, Switzerland.

Jackson, S. (2010). *Architecting Resilient Systems: Accident Avoidance and Survival and Recovery from Disruptions*, John Wiley & Sons, Inc., ISBN 978-0-470-40503-1, Hoboken, USA.

Object Management Group (2011). *Business Process Model and Notation (BPMN) v. 2.0*, Needham, USA.

Project Management Institute, Inc. (2008). *A Guide to the Project Management Body of Knowledge (PMBoK Guide)*, 4th ed., ISBN 978-1-933890-51-7, Newtown Square, USA.

Sage, A. P. & Armstrong, J. E. Jr. (2000). *Introduction to Systems Engineering*, John Wiley & Sons, Inc., ISBN 0-471-02766-9, Hoboken, USA.

Senge, P. M. (1994). *The Fifth Discipline: The Art & Practice of the Learning Organization*, Doubleday Business, ISBN 0-385-26095-4, New York, USA.
