**6. Evolutionary phases**

An evolutionary phase is a transition from the current state of the system configuration to the next stage, with new system features applied to the system. The changes may be major or simple minor adjustments. System architects and stakeholders may pre-set thresholds that define the degree of "evolutionary phases" specific to a system's functionalities or the services provided to the end users. As a simple example of a one-dimensional measure, the receipt of X or more "feature-adding" task requests within one cycle will trigger an inception phase and prepare for an upcoming evolution. The number X can vary based on the type and service of the underlying system. Such measures can be made composite to incorporate other necessary measurement metrics.

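To make the trigger concrete, the following is a minimal sketch of such a one-dimensional measure: it counts the "feature-adding" task requests received in a cycle and compares the count against a pre-set threshold X. The request-log structure, field names and the threshold value are illustrative assumptions, not part of the framework itself.

```python
# Minimal sketch of a one-dimensional evolution trigger. The request-log
# format, the cycle grouping and the threshold value are illustrative
# assumptions; the actual policy is set by architects and stakeholders.
from collections import Counter

FEATURE_THRESHOLD_X = 25  # hypothetical per-cycle threshold

def should_trigger_inception(task_requests, cycle_id, threshold=FEATURE_THRESHOLD_X):
    """True if the number of "feature-adding" requests received in the given
    cycle meets or exceeds the pre-set threshold X."""
    per_cycle = Counter(
        req["cycle"] for req in task_requests if req["type"] == "feature-adding"
    )
    return per_cycle[cycle_id] >= threshold

# Toy usage with a hypothetical request log
requests = [
    {"cycle": "2010-Q3", "type": "feature-adding"},
    {"cycle": "2010-Q3", "type": "bug-fix"},
    {"cycle": "2010-Q3", "type": "feature-adding"},
]
print(should_trigger_inception(requests, "2010-Q3", threshold=2))  # True
```
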
**6.1 System evaluation**

A solid framework that collects all necessary metrics for system evaluation should be established as a prerequisite to the study of system evolution. According to ISO 14598 (1999), "Information Technology – Software Product Evaluation," the evaluation process should follow four phases: (1) requirements, (2) specification, (3) design and (4) execution. We can classify and rearrange system metrics to cover these areas in the right order, so that the underlying meta-structure will collect and generate system evaluations for any instant of time, as described in figure 1. System evaluation should have the flexibility to include new measures as the system passes through evolutionary phases.

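As one way to picture such a meta-structure, the sketch below stores time-stamped metric readings under the four ISO 14598 phases and produces an evaluation snapshot for any instant of time. The class names and schema are assumptions made purely for illustration; the chapter does not prescribe a concrete data model.

```python
# Sketch of a meta-structure that keeps time-stamped metric readings grouped
# by ISO 14598 evaluation phase and can be queried at any instant of time.
from dataclasses import dataclass, field
from datetime import datetime

PHASES = ("requirements", "specification", "design", "execution")

@dataclass
class MetricReading:
    phase: str           # one of PHASES
    name: str            # e.g. "task_completion_rate" (hypothetical metric)
    value: float
    timestamp: datetime

@dataclass
class EvaluationRepository:
    readings: list = field(default_factory=list)

    def record(self, reading: MetricReading) -> None:
        if reading.phase not in PHASES:
            raise ValueError(f"unknown evaluation phase: {reading.phase}")
        self.readings.append(reading)

    def evaluate(self, at: datetime) -> dict:
        """Latest value of each metric per phase, as of the given instant."""
        snapshot = {}
        for r in sorted(self.readings, key=lambda r: r.timestamp):
            if r.timestamp <= at:
                snapshot.setdefault(r.phase, {})[r.name] = r.value
        return snapshot
```
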
**6.2 Evolutionary factors**

It is critical to observe the system within its operational environment over a period of time to identify and derive possible causes or key evolutionary factors covering the system dynamics. For this purpose we need to capture emerging requirements, shifts in usage patterns, changing service retention policies, changes in satisfaction levels (or composite system values), changing influences from the surrounding environment, and so on. Collectively, the comparative analysis of the metrics within each cycle against the knowledge base will suggest necessary system evolution. It is the responsibility of the system architects or analysts (along with stakeholders) to devise their own policies and threshold measures for determining the trigger for the next evolutionary phase. Over a period of time the knowledge base will accumulate sufficient historical data for generating a system's dynamic specifications (in terms of configuration values, development stages, changing patterns, and timing) to address requests for newly desired features in the system. The process of system evolution itself may be generalized for the repeating evolutionary phases observed over a period of time. This will help to *predict* further evolutions.

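A minimal sketch of that per-cycle comparison is given below: each captured factor is compared against its history in the knowledge base, and factors that drift beyond a stakeholder-defined threshold are flagged as candidates for triggering the next evolutionary phase. The factor names, deviation measure and thresholds are assumptions for illustration only.

```python
# Compare the current cycle's evolutionary factors against historical values
# in the knowledge base; the deviation measure and default threshold (0.2)
# are illustrative assumptions, not the chapter's policy.
def suggest_evolution(current_cycle, knowledge_base, thresholds):
    """Flag factors whose relative deviation from the historical mean exceeds
    the threshold devised for that factor."""
    flagged = {}
    for factor, value in current_cycle.items():
        history = knowledge_base.get(factor, [])
        if not history:
            continue
        mean = sum(history) / len(history)
        deviation = abs(value - mean) / mean if mean else 0.0
        if deviation >= thresholds.get(factor, 0.2):
            flagged[factor] = deviation
    return flagged  # a non-empty result suggests preparing the next phase

# Toy example: the usage pattern shifted sharply relative to past cycles
kb = {"access_rate": [100, 110, 105], "satisfaction": [0.80, 0.82, 0.81]}
cycle = {"access_rate": 180, "satisfaction": 0.79}
print(suggest_evolution(cycle, kb, {"access_rate": 0.3}))  # flags access_rate
```
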
**7. Implications of updating or applying new changes**

Empirical evidence has suggested a relationship between the application of design measurements and future software maintainability (Grady, 1994). Establishing a meta-structure with a repository of historical measures of various metrics provides in-depth analytics for software maintainability. Such a meta-structure serves as a link between the system users' needs and the developer team's activities for both system development and process improvement. Understanding the reasons for variability in the data provides a powerful decision tool. Naturally, real data will have variations, and effort needs to be devoted to understanding the causes of those variations in the collected data sets. Since positive process improvement changes deliver better results, the objective here is to improve the process and thereby maintain system value at a higher level.

Newly added desirable features, whether requested by system users or added by the developer team, should nominally add value to the system. Therefore, such activities raise the level of overall satisfaction in user expectations. The track history of development activities within a system usage cycle can be captured through the changing rates of accumulating task completions compared with the cumulative system values. These cumulative system values may be a composite estimation based, for example, on the accumulation of changes in identified USIs superimposed with system access rates.

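One hedged reading of such a composite is sketched below: per-cycle changes in USI values are accumulated and superimposed with (weighted by) normalized access rates. The combination rule, the normalization by peak access count and the variable names are assumptions for illustration; the chapter does not fix a formula.

```python
# Illustrative composite system value (SV): accumulate per-cycle changes in
# identified USI values, weighted by normalized access rates. This particular
# weighting scheme is an assumption, not the chapter's definition.
def composite_system_value(usi_deltas, access_counts):
    """usi_deltas: per-cycle sums of changes in identified USI values.
    access_counts: per-cycle system access counts (same length)."""
    peak = max(access_counts) or 1
    sv, cumulative = [], 0.0
    for delta, accesses in zip(usi_deltas, access_counts):
        cumulative += delta * (accesses / peak)   # superimpose access rate
        sv.append(cumulative)
    return sv

print(composite_system_value([1.0, 0.5, 2.0], [120, 300, 240]))  # [0.4, 0.9, 2.5]
```
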
For example, figure 4 shows the *normalized* data plots (for side-by-side comparison) of the last several years for three development projects in a university administrative office. These represent different services: **ADMIN-ROLES** (to manage graduate administrative roles across the campus), **ITAP** (a registration and scoring system for the international teaching assistant program) and **APPTRACK** (a sub-system to manage graduate application data online). ADMIN-ROLES and APPTRACK are moderate-sized projects that run throughout the year. ITAP is a comparatively smaller project with seasonal usage patterns.

Referencing the state diagram in figure 2, we can compare the variations in the system value (as the state transitions Ti in the matrix) throughout the yearly cycle. We use the changing values of the accumulating Task Request or Task Completion rates as the underlying cause, because task completion rates have a more direct impact on the users and on overall system value. We have also generated trend lines to observe the significance of the intersecting points (see figure 4), as well as their regression analyses in table 4.

Fig. 4. Identifying possible state transitions in various systems from intersecting points within a yearly cycle

In comparing the accumulating rate of task completions (TC) with the rate of variation in system value (SV) in the normalized plots (as shown in fig. 4), we are interested first in the intersecting points, which indicate that TC and SV have the same rates. Second, for the segments between the intersections, our focus is on the transition of SV. As described in the following sections, we see wide variations in the patterns for the different projects in their respective plots.

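For readers who want to reproduce this kind of segmentation, the sketch below locates the points where normalized TC and SV series cross and labels each segment with the direction of TC and SV over that segment. The two-sign labels only mimic the T+-, T++ notation; their exact semantics are defined by the transition matrix in figure 2, and the series below are toy values, not the chapter's data.

```python
# Locate crossings of normalized TC and SV rate series and label the
# direction of each series over every inter-crossing segment. The T-labels
# are one possible reading of the notation used with figure 2.
import numpy as np

def sign(x):
    return "+" if x >= 0 else "-"

def intersections(tc, sv):
    """Indices after which the normalized TC and SV curves cross."""
    diff = np.asarray(tc) - np.asarray(sv)
    return np.where(np.diff(np.sign(diff)) != 0)[0]

def segment_transitions(tc, sv, crossings):
    bounds = [0, *(int(i) + 1 for i in crossings), len(tc) - 1]
    labels = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b > a:
            labels.append(f"T{sign(tc[b] - tc[a])}{sign(sv[b] - sv[a])}")
    return labels

tc = np.array([0.10, 0.30, 0.50, 0.40, 0.60])   # toy normalized rates
sv = np.array([0.20, 0.25, 0.45, 0.50, 0.30])
x = intersections(tc, sv)
print(list(x), segment_transitions(tc, sv, x))
```
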
**7.1.1 Assessment of ADMIN-ROLES**

Although ADMIN-ROLES is a yearly usage system (with roll-over), its higher activities are concentrated during the months of August and September. We noticed four intersecting points in 2008, where the SV transitions between the segments were T+-, T++, T-+ and T+-, respectively. The causes C1..n include the access patterns and the perceived changes in USI values over the year. At the intersecting points the rates are the same, and ideally we would expect the same rate of change in both TC and SV. However, due to various environmental factors both of these metrics vary, and their patterns change within the segmented time frames. Moreover, the projected trend lines intersecting in the 2008 and 2009 yearly cycles indicate instability in the system state. In 2010, however, there were no intersections, with an increasing disparity between TC and SV. The regression analyses in table 4 show close ties between TC and SV in 2008 and 2010 and indicate environmental disturbances in 2009.

**7.1.2 Assessment of ITAP**

ITAP has seasonal usage patterns and showed a scenario of rather parallel activities. Brief intersections occurred, where the transition pattern was T+- in that segment. During the last three years the trend lines did not intersect at all, and TC > SV was maintained across these years. The regression analyses in table 4 also support the consistent and stronger connection between TC and SV.

**7.1.3 Assessment of APPTRACK**

APPTRACK also has a rolling usage pattern. Again, the regression analyses show close ties between TC and SV in 2008 and 2010 and indicate either environmental disturbances or large changes in usage patterns in 2009.

Table 4. Regression Analyses: Task Completion (TC) vs. System Value (SV)

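A minimal sketch of the kind of per-year regression summarized in table 4 is shown below, using `scipy.stats.linregress`. The yearly arrays are placeholders; the chapter's actual data are not reproduced here.

```python
# Per-year linear regression of system value (SV) on task completion (TC).
# The yearly arrays are placeholder values, not the data behind table 4.
from scipy.stats import linregress

yearly_data = {
    2008: ([0.1, 0.3, 0.5, 0.8, 1.0], [0.2, 0.35, 0.5, 0.75, 0.95]),
    2009: ([0.1, 0.4, 0.5, 0.7, 1.0], [0.3, 0.30, 0.6, 0.55, 0.70]),
}

for year, (tc, sv) in yearly_data.items():
    fit = linregress(tc, sv)
    print(f"{year}: slope={fit.slope:.2f}, R^2={fit.rvalue ** 2:.2f}")
```
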
In summary, thorough analyses of a carefully designed set of relevant metrics, as well as of the underlying development track history, open up many possible experiments for closely studying system dynamics over a period of time. Upon maturity, these measurement metrics would form the foundation for developing further statistical models to drive a set of tools for managing evolvable systems.

**8. System usage patterns**

One of the basic patterns that we would like to know is how frequently a system is being used or accessed over each cycle (with sufficiently long periods between cycles). Then, if we can compare this for several consecutive cycles, we can capture more details on how the

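As a small illustration of that basic pattern, the sketch below counts accesses per cycle from a hypothetical access log and reports the relative change between consecutive cycles; the log format and cycle labels are assumptions made for the example.

```python
# Count system accesses per cycle and compare consecutive cycles. The access
# log (a list of cycle labels, one entry per access) is a hypothetical
# stand-in for real usage data.
from collections import Counter

def per_cycle_frequency(access_log):
    return Counter(access_log)

def cycle_over_cycle_change(freq, ordered_cycles):
    """Relative change in access counts between consecutive cycles."""
    changes = {}
    for prev, cur in zip(ordered_cycles[:-1], ordered_cycles[1:]):
        base = freq.get(prev, 0)
        changes[cur] = (freq.get(cur, 0) - base) / base if base else float("nan")
    return changes

log = ["2008"] * 120 + ["2009"] * 90 + ["2010"] * 150
freq = per_cycle_frequency(log)
print(freq, cycle_over_cycle_change(freq, ["2008", "2009", "2010"]))
```
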
