**5. Measurement metrics for system evolution**

We need to be able to instrument a system so that it becomes sensitive to any changes in software characteristics. An appropriate measurement program is a catalyst for improvement: it helps advance the system and improves the system's bottom line (Dekkers & McQuaid, 2002). The users' behaviors and contributions in using the system's service(s) should be an integral part of the measurements collected. A software-driven system's characteristics, its usage and task-request history, and its surrounding environment are all useful in measuring the quality and maintainability of that system (Coleman et al., 1994).

Dynamics of System Evolution 33

*(Figure: a **System Workflow** panel — processes I, P1, P2, decision points D1, D2, and Begin/End/Error terminals — is mapped in parallel by probing stations onto a **Mapped Graph Model** panel with nodes NB, NI, NP1, NP2, ND1, ND2, NE, node weights WNB, WI, WP1, WP2, WD1, WD2, WE, and transition probabilities p on the edges.)*

Fig. 3. Mapping of system workflow into an equivalent graph model with transition probabilities and process weights
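In code, the mapped model can be represented as a small weighted digraph. The following sketch uses node names from Fig. 3, but the node weights and transition probabilities are illustrative placeholders, not values from the chapter:

```python
# Mapped graph model of Fig. 3 as a weighted digraph.
# The numeric weights and probabilities below are illustrative only.

node_weights = {
    "NB": 0.0,   # no components with features of concern, so weight zero
    "NI": 2.0, "NP1": 4.0, "ND1": 1.0,
    "NP2": 3.0, "ND2": 1.0,
    "NE": 0.0,
}

# Transition probabilities p, assigned from the desired usage patterns
# and branching factors.
transitions = {
    "NB":  {"NI": 1.0},
    "NI":  {"NP1": 1.0},
    "NP1": {"ND1": 1.0},
    "ND1": {"NP2": 0.9, "NE": 0.1},
    "NP2": {"ND2": 1.0},
    "ND2": {"NE": 0.7, "NP1": 0.3},   # some runs loop back for another pass
}

# Sanity check: outgoing probabilities of each non-terminal node sum to 1.
for node, out in transitions.items():
    assert abs(sum(out.values()) - 1.0) < 1e-9, node
```

A useful consistency check, shown at the end, is that the probabilities on the outgoing edges of every non-terminal node sum to one.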

Measurements of these characteristics can be incorporated into the system dynamics, which can then be properly utilized as input to the upcoming evolutionary phases. It is always beneficial to outline the purpose and scope of the measurements within the context of system evolution. Moreover, the metrics concerned with measuring an evolvable system's attributes will themselves evolve, as needed. In addition to measuring the software itself, we would like to measure how the system impacts its users, as well as how incremental additions to the system configuration influence performance over a longer period of time. In summary, the set of measurement metrics is not fixed; instead, it is up to the system's architect and stakeholders to pick and define any metrics deemed useful, based on careful observation or experimentation.

#### **5.1.2 Metrics of system usage**

From the mapped model, we can now introduce a set of system usage metrics to help assess system dynamics. As shown in Figure 3, each node (Ni) is assigned a weight (WNi) calculated from its constituent set of features. A node may or may not contain components with features of concern; if it does not, its weight is zero. The chance of traversing an edge from one node to another is based on the desired usage patterns and branching factors, from which the initial probabilities (PE) are assigned. The functionality of a process may be assigned a Feature Rank (FRi), based on the significance of its contribution to the service provided by the system, as defined by the stakeholders, and on its expected usage frequency. Feature Utilization (FUi) is the number of times a feature is actually used within a predefined usage cycle; a feature may turn out to be more under-utilized than expected, or over-utilized. The User Satisfaction Index, or USI (Mubin et al., 2009), is a quantitative indicator of the overall satisfaction of using the system during a predefined time frame. By combining the USIs of all groups of users, we may deduce a quantitative system value and compare it against other metrics for an overall assessment of system evolution. As mentioned earlier, the measurement metrics of an evolvable system should be configured in a way that allows them to be re-adjusted or fine-tuned as needed.

Suitable weights for the nodes (i.e., the different components or characteristics) can be assigned based on their importance in the overall system evaluation metrics. For example, the functionality, reliability, usability, and efficiency of a system can be assigned weights of 4, 4, 3, and 2, respectively, on a scale from 1 (poor) to 4 (high) (Jeanrenaud & Romanazzi, 1994). In a similar way, we may use ordinal values to generate indexes for any desired set of metrics. We need to identify whether a feature is being over-utilized or under-utilized relative to what the system architect or analyst expected. An increased Feature Rank value may be due to over-utilization, and will thus impact the USI and the system's overall value; similarly, a decreased Feature Rank value may result from under-utilization and will impact the USI and system value as well.
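The usage metrics above can be sketched in code. In this illustrative example the feature names, expected usage frequencies, and the over/under-utilization thresholds are all assumptions; the chapter defines FRi, FUi, and the USI, but not these particular values:

```python
# Illustrative feature-usage bookkeeping: Feature Rank (FRi) and
# Feature Utilization (FUi) per feature, with simple flags for
# over- and under-utilization relative to expected usage.

features = {
    # name: (FRi on a 1..4 scale, expected uses per cycle, actual uses FUi)
    "search":  (4, 100, 180),
    "reports": (3,  50,  12),
    "archive": (1,  10,  11),
}

def utilization_ratio(expected, actual):
    """Actual usage relative to the architect's expectation (1.0 = as expected)."""
    return actual / expected

def flag(ratio, low=0.5, high=1.5):
    """Classify a feature's usage against assumed tolerance thresholds."""
    if ratio > high:
        return "over-utilized"
    if ratio < low:
        return "under-utilized"
    return "as expected"

for name, (rank, expected, actual) in features.items():
    ratio = utilization_ratio(expected, actual)
    print(f"{name}: FR={rank}, FU={actual}, ratio={ratio:.2f} -> {flag(ratio)}")
```

Flags like these can feed back into the Feature Rank and, in turn, into the USI and the overall system value.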

## **5.1 Classification of metrics**

A system that meets the needs of its users will reinforce satisfaction with the major elements of that system (Ives et al., 1983); otherwise, the system's value will continue to degrade. Metrics can be used to specify what we want, to predict what we can expect to get, to measure what we have actually got, and to control variations between the desired and attained values of various attributes of software products (Sherif et al., 1988). Metrics-based metastructures help in fact-finding and process-selection decisions. At the component level, these models can be used to monitor changes to the system as they occur and to fine-tune their values; they also help predict fault-prone components (Coleman et al., 1994). In other words, we can analyze the values of the metrics to generate a trigger that indicates the inception phase of an upcoming system evolution.
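For instance, such a trigger could be a simple threshold test over a monitored metric's recent values. The following is a minimal sketch, with an assumed smoothing window and tolerance band rather than a rule taken from the chapter:

```python
# Minimal evolution-trigger sketch: fire when the recent average of a
# monitored metric drifts outside a tolerated band around its target.
# Window size and tolerance are assumed values for illustration.

def evolution_trigger(history, target, tolerance=0.2):
    """True when the mean of the last three measurements deviates from
    the target by more than the tolerated fraction."""
    recent = history[-3:]
    avg = sum(recent) / len(recent)
    return abs(avg - target) / target > tolerance

usi_history = [0.82, 0.78, 0.70, 0.60, 0.50]   # a declining satisfaction index
print(evolution_trigger(usi_history, target=0.80))  # -> True
```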

For the purpose of studying system dynamics, we need to make the measurement a part of the overall development process and investigate three possible categories of metrics: (1) metrics of workflow graph, (2) metrics of system usage, and (3) metrics of development activities. The following sections discuss these classes of metrics in detail. Later in the chapter we will provide case studies on applying some of these metrics and their outcomes.

#### **5.1.1 Metrics of workflow graph**

Given any system or process, we can draw its information flow graph. The major elements of the information flow analysis can be determined at system design time. The availability of this quantitative measure early in the system development process allows the system structure to be corrected at the least cost.

Also, by observing the communication patterns among the system components, we can define measurements for complexity, module coupling, level interactions, and stress points (Henry & Kafura, 1981). Figure 3 gives a simple example of mapping a system workflow into an equivalent graph model, with nodes (corresponding to processes or decision points) and edges (corresponding to control flow paths). Hence, the system management team can define a general rule for mapping workflow processes, decision points, and control paths, with the main objective of covering all areas with the nodes and edges of a graph model, and can then apply various graph-model metrics (such as McCabe's cyclomatic complexity index). At the junction points in the workflow, we may place data-loggers (Lehman, 1986) or, more specifically, probing stations or survey agents (Mubin & Luo, 2010a).
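As a concrete illustration of applying a graph metric to the mapped workflow, the sketch below computes McCabe's cyclomatic complexity, V(G) = E − N + 2P, for a hypothetical workflow graph; the edge list loosely follows the shape of Fig. 3 and is an assumption, not the chapter's data:

```python
# Hypothetical workflow mapped to a directed graph (edge list), with
# McCabe's cyclomatic complexity V(G) = E - N + 2P computed from it.

workflow_edges = [
    ("Begin", "I"), ("I", "P1"), ("P1", "D1"),
    ("D1", "P2"),      # decision D1: Yes
    ("D1", "Error"),   # decision D1: No
    ("P2", "D2"),
    ("D2", "End"),     # decision D2: Yes
    ("D2", "P1"),      # decision D2: No -> loop back
]

def cyclomatic_complexity(edges, components=1):
    """V(G) = E - N + 2P for a control-flow graph with P components."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * components

print(cyclomatic_complexity(workflow_edges))  # -> 2
```

Tracking this index across evolutionary phases gives one simple indicator of growing structural complexity, since a higher value means more independent paths through the workflow.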

