**5.1.2 Metrics of system usage**

From the mapped model, we can now introduce a set of system usage metrics to help assess system dynamics. As shown in Figure 3, each node (Ni) is assigned a weight (WNi) calculated from its constituent set of features. A node may or may not contain components with features of concern; if it does not, its weight is zero. The chance of traversing an edge from one node to another is based on the desired usage patterns and branching factors, from which the initial edge probabilities (PE) are assigned. The functionality of a process may be assigned a Feature Rank (FRi) based on the significance of its contribution to the service provided by the system, as defined by the stakeholders, and on its expected usage frequency. Feature Utilization (FUi) is the number of times a feature is actually used within a predefined usage cycle; a feature may be utilized less than expected, or more. The User Satisfaction Index, or USI (Mubin et al., 2009), is a quantitative indicator of the overall satisfaction of using the system during a predefined time frame. By combining the USIs of all groups of users, we may deduce a quantitative system value and compare it against other metrics for an overall assessment of system evolution. As mentioned earlier, for an evolvable system the measurement metrics should be configured so that they can be re-adjusted or fine-tuned as needed.
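As a minimal sketch of the node-weight bookkeeping described above (the feature names, ranks and counts are invented; the weight expression follows the WN definition summarized in Table 3):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    rank: float        # FR: significance assigned by the stakeholders
    utilization: int   # FU: observed uses in the current usage cycle

@dataclass
class Node:
    features: list     # an empty list means no features of concern

    def weight(self) -> float:
        # WN: utilization-weighted combination of feature ranks; zero
        # when the node has no features of concern, as the text states
        total_fu = sum(f.utilization for f in self.features)
        if total_fu == 0:
            return 0.0
        return sum(f.rank * f.utilization for f in self.features) / total_fu

node = Node([Feature("search", rank=4, utilization=30),
             Feature("export", rank=2, utilization=10)])
print(node.weight())   # (4*30 + 2*10) / 40 = 3.5
```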

Suitable weights for nodes (i.e. for different components or characteristics) can be assigned based on how important each is to the overall system evaluation. For example, the functionality, reliability, usability, and efficiency of a system can be assigned weights 4, 4, 3 and 2, respectively, where weights range from 1 (poor) to 4 (high) (Jeanrenaud & Romanazzi, 1994). In a similar way, we may use ordinal values to generate indexes for any desired set of metrics. We need to identify whether a feature is being utilized more or less than the system architect or analyst expected. An increased Feature Rank may be due to over-utilization; this will affect the USI and the system's overall value. Similarly, a decreased Feature Rank could result from under-utilization and will likewise affect the USI and system value.
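The rank adjustment described here can be illustrated with the Feature Rank recurrence summarized in Table 3 (the numbers below are invented):

```python
def next_feature_rank(fr_prev: float, fu_prev: int, fu_curr: int) -> float:
    """One step of the Feature Rank recurrence from Table 3:
    FR(i+1) = FR(i) + (FU(i+1) - FU(i)) / FU(i).
    Over-utilization raises the rank; under-utilization lowers it."""
    if fu_prev == 0:
        return fr_prev  # FU0 = 0: no usage history yet, keep initial rank
    return fr_prev + (fu_curr - fu_prev) / fu_prev

# A feature ranked 3 whose usage grows from 40 to 50 hits per cycle:
print(next_feature_rank(3.0, 40, 50))  # 3.25 (over-utilized, rank rises)
print(next_feature_rank(3.0, 40, 30))  # 2.75 (under-utilized, rank falls)
```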


**5.2 Indicator metrics**

Mapped nodes in the graph model are either processing nodes or decision nodes (Figure 3). Decision paths reflect how users make choices and thus directly impact the edge probabilities. Based on the variations in edge probabilities, the analysts may decide to rearrange the workflow in favor of the currently used paths, thereby changing the branching factors as well as the underlying graph model.

Probing stations can collect node visit frequencies, which identify the popularity of workflow locations and, more specifically, of certain features of the system. Feature Utilization (FU) frequency directly impacts a feature's rank. Variations in rank are major indicators for identifying redundant components (for elimination), under-utilized features (which may need advertisement) and over-utilized features (which need to be made efficient). As feature ranks change, each node's weight will vary. For certain nodes with no significant features, the weight will eventually be reduced to zero, and they will have no contribution to the system value.

The user satisfaction index (USI) reflects the outcome experience of a user or a group of users. A declining USI is a driving force to initiate major revisions of the system to meet newly emerged requirements. The overall system value is something that the stakeholders, architects, analysts and developers keep their eyes on. Based on the type of services provided, they will derive a composite formula or index that best describes the system's overall performance at a particular instant of time. For example, a highly significant feature may yield a lesser system value during low-usage activity for that cycle time.

**5.3 Dependencies of metrics**

It is beneficial to categorize system metrics into dependent and independent metrics. There should be clear precedence relationships among the dependent metrics in terms of the logical ordering of their causes, events or development activities. If necessary, such dependencies can be artificially manipulated by associating one or more attributes such as priority level, task complexity, urgency, service-queue re-ordering, etc. For example, a certain activity can be put on hold if a new, higher-priority activity arrives in the service queue. In general, we can establish a set of primary measures from which other attributes can be derived. That is, having recorded the primary measures associated with a given process, one could reconstruct the evolutionary behavior depicted by them and by secondary measures, where the latter can be obtained by combinations of the primary measures (Ramil & Lehman, 1999).
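The service-queue re-ordering described above can be sketched with a priority heap (a minimal model of our own; the task names and priority numbers are invented, with lower numbers meaning higher priority):

```python
import heapq

queue = []
heapq.heappush(queue, (3, "refactor module"))
heapq.heappush(queue, (2, "add feature"))
heapq.heappush(queue, (1, "hotfix crash"))   # urgent arrival jumps ahead

# Service order follows priority, not arrival order:
order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)  # ['hotfix crash', 'add feature', 'refactor module']
```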


**5.1.3 Metrics of development activities**

In addition to system usage metrics, we also need to focus on metrics that help track all kinds of system development activities, such as new change requests, requests to add new features to a system component, and task completion rates, based on the environment setup described in Section 3. Depending on the system's functionality, we need to deduce a specific set of metrics to drive the meta-structure (wrapper system) in the surrounding environment of the system, so that its knowledge base will be able to generate necessary supporting configurations to guide the subsequent evolutionary phases and provide appropriate suggestions for applying new changes and analyzing the changing patterns throughout the system lifecycle. Table 3 lists a summary of these sets of metrics.
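A minimal sketch of how the task-request and completion figures behind such metrics might be collected (the class and method names are our own, not from the chapter):

```python
class ActivityLog:
    """Tracks task requests (TR) and completions (TC) in one usage cycle."""

    def __init__(self):
        self.arrivals = []      # timestamps of change/task requests
        self.completions = 0    # tasks completed in this cycle

    def request(self, t: float) -> None:
        self.arrivals.append(t)

    def complete(self) -> None:
        self.completions += 1

    def mean_interarrival(self):
        # Average gap between consecutive request timestamps
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        return sum(gaps) / len(gaps) if gaps else None

log = ActivityLog()
for t in (0, 4, 10, 12):        # four requests arrive during the cycle
    log.request(t)
log.complete(); log.complete()  # two of them get serviced

print(log.mean_interarrival())              # (4 + 6 + 2) / 3 = 4.0
print(log.completions / len(log.arrivals))  # completion rate: 0.5
```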

| Class | List of Metrics | Description |
| --- | --- | --- |
| 1. Metrics from Graph Model | Work-flow Nodes & Interconnections (Mubin & Luo, 2010a) | • Decision Nodes {Di} have one or more output branches that end up in a sub-process {Pi}<br>• Probing Stations {PSi} mark each branching area and collect contextual data<br>• Interconnection rules: tree<br>• Branching Factor (%), based on rules (1) and (2) mentioned above<br>• Locality of Change-requests: map of change-request areas in the system work-flow |
| 2. System Usage Metrics | Weight of Node, WN | WN = ∑i=0..m {(FRNi × FUNi) / (∑ FU0..N)}, where *m* is the number of features in this node's (N) equivalent process in the system |
| | Weight of Edge, EN | Probability that the system user or control flow will visit this path while using the system: EN = p(path traversal) |
| | Feature Utilization, FU | FU is the usage frequency of system features in the workflow |
| | Feature Rank, FR | FRi+1 = FRi + {(FUi+1 − FUi) / FUi}, where FR0 is assigned as mentioned in Table 1, FU0 = 0, and *i* indicates the previous run cycle. {FRi} is an index and can be calculated in a way that is suitable for the architects to assess the utility of a certain feature or component of the system |
| | System's overall value, SV | SV = ∑i=1..n {(USIi × *f*i) / ∑j=1..n *f*j}, where *n* is the number of users in the system, and *f*j is the number of cycles completed by user j |
| | System State | A system's state can be viewed from different perspectives, and can give absolute, abstract or recurring metrics |
| 3. Development Activity Metrics | Task Requests, TR | Average inter-arrival time; frequency of task arrivals; task request priority; tasks in a multi-project environment (out of scope) |
| | Task Completion (or Service) Time, TC | Frequency of task completions; wait-time / idle-time / latent-time; service-time & deployment-time; allocated time for a task in a multi-project environment (out of scope) |
| | Task Complexity | Complexity = *f*(service-time, priority, USI-change, man-hour, nature of task) |

Table 3. System metrics necessary for managing evolvable systems
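As a worked example of the SV formula from Table 3 (the satisfaction values and cycle counts below are invented):

```python
def system_value(usi, cycles):
    """SV = sum_i (USI_i * f_i) / sum_j f_j: a cycle-weighted mean of
    per-user satisfaction indices, so heavier users influence SV more."""
    assert len(usi) == len(cycles)
    return sum(u * f for u, f in zip(usi, cycles)) / sum(cycles)

# Three users with USIs 0.9, 0.6, 0.8 over 10, 5 and 5 completed cycles:
print(system_value([0.9, 0.6, 0.8], [10, 5, 5]))  # (9 + 3 + 4) / 20 = 0.8
```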


In order to find comparative measurements of the metrics, so that we can visualize the impact of system dynamics, we apply normalization and then plot the normalized measurements against each other.
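The text does not fix a particular normalization scheme; assuming simple min-max scaling, heterogeneous metrics (WN, FU, USI, etc.) could be brought onto a common 0..1 scale for plotting:

```python
def normalize(values):
    """Min-max scaling of a metric series to [0, 1] (an assumed scheme,
    not prescribed by the chapter)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant series: no spread to scale
    return [(v - lo) / (hi - lo) for v in values]

print(normalize([2, 4, 6, 10]))  # [0.0, 0.25, 0.5, 1.0]
```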
