6. Implementation

carried for the same. The data compares multiple releases of a software product over multiple years. Outage data represents unplanned, customer-reported, and uncovered failures, including both full and partial outages. The outages were collected across a deployment of over 400 systems. The monthly outage count is annualized and normalized by the number of deployed systems as outages/year/system, which is equivalent to the failure rate. In the same way, the monthly outage downtime is annualized and normalized by deployed systems as downtime/year/deployed system. It should be noted that the downtime duration of each outage is discounted by percentage impact (i.e., 100% being a full outage), using the TL 9000 counting rule.

In Figure 15, we show the predicted software reliability as a function of failure rate (on the left) and software availability as a function of annual downtime (on the right). The following observations can be made:

• From one release to another, the actual data generally lies within the 90% confidence limits for both availability and reliability. This is a testament to the accuracy of the generated predictions.

• From one release to the next, we can observe a continuous improvement in software availability and reliability. This is not surprising, since it takes time and increased effort to enhance software design, development, and test practices.

• There is a slight deterioration in reliability and availability at R5, corresponding to a change in hardware, but both quickly improve again after that. This can be explained by the need to re-design the software, but it also demonstrates the important effect hardware can have on the quality of software; i.e., sometimes significant long-term improvements in software quality may only be achieved through changes in hardware.

• It is worth observing that predicting availability is generally more difficult than predicting reliability, because availability is affected by downtime while reliability is not. In addition, as software development teams become familiar with a product from one release to the next, they are normally able to significantly reduce the average system downtime.

Figure 15. Release-over release software reliability and availability prediction—Project A.
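As a concrete illustration of the normalization described above, the sketch below converts a month of hypothetical outage records into an annualized failure rate (outages/year/system) and downtime (minutes/year/system), discounting each outage's duration by its percentage impact in the spirit of the TL 9000 counting rule, and then derives availability and a one-year reliability figure assuming an exponential time-to-failure model. All numbers and field names are illustrative, not taken from the project data.

```python
from math import exp

# Hypothetical monthly outage records for a fleet of deployed systems.
# Each record: (downtime_minutes, impact_fraction), where 1.0 is a full
# outage and e.g. 0.25 is a partial outage affecting a quarter of service.
outages_this_month = [(30.0, 1.0), (120.0, 0.25), (10.0, 0.5)]
deployed_systems = 400

# Discount each outage's downtime by its percentage impact
# (TL 9000-style counting: a 25%-impact outage contributes 25% of its duration).
weighted_downtime = sum(mins * impact for mins, impact in outages_this_month)

# Annualize (x12 months) and normalize per deployed system.
failure_rate = len(outages_this_month) * 12 / deployed_systems    # outages/year/system
downtime_per_system = weighted_downtime * 12 / deployed_systems   # minutes/year/system

# Availability from annual downtime (365.25-day year assumed), and
# reliability over a one-year mission for an exponential failure model.
minutes_per_year = 365.25 * 24 * 60
availability = 1.0 - downtime_per_system / minutes_per_year
reliability_1yr = exp(-failure_rate * 1.0)
```

With these sample records the weighted downtime is 65 minutes, giving 0.09 outages/year/system and 1.95 minutes of downtime/year/system, i.e. an availability a little under six nines.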

Telecommunication Networks - Trends and Developments

Figure 17 shows the implementation of BRACE. The tool is made up of multiple application programming interfaces (APIs), each of which connects to a defect-logging database (such as JIRA). Defect data is collected from the defect databases in real time and pre-processed by a Python program before being stored in a cloud-based, shared database used by the system. The SRGM algorithm (also written in Python) then performs the core processing, providing a consistent, fast, flexible, robust, and statistically sound result. Using the output from core processing, we have also created a unified graphical user interface (GUI) on which a wealth of software quality metrics is presented to users. While in the current implementation all components of the tool are hosted in a virtual machine running in OpenStack, they could also run on a dedicated server if needed.
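The SRGM core is not reproduced here, but its role can be sketched. Assuming, as one common choice of software reliability growth model (not necessarily the exact model BRACE uses), a Goel-Okumoto NHPP with mean value function m(t) = a(1 − e^(−bt)), the defect discovery times pulled from the defect database can be fitted by maximum likelihood in pure Python; the function name and the data below are illustrative.

```python
from math import exp

def fit_goel_okumoto(times, horizon):
    """Fit m(t) = a * (1 - exp(-b*t)) to defect discovery times in (0, horizon]
    by maximum likelihood, solving the score equation for b with bisection.
    Requires mean(times) < horizon / 2, i.e. the data must show growth."""
    n, s = len(times), sum(times)
    if not 0 < s < n * horizon / 2:
        raise ValueError("no reliability growth detected; GO fit is unsuitable")

    def score(b):  # d(logL)/db after substituting the MLE for a
        e = exp(-b * horizon)
        return n / b - s - n * horizon * e / (1.0 - e)

    lo, hi = 1e-9, 1.0
    while score(hi) > 0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):          # bisection: score(lo) > 0 >= score(hi)
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2.0
    a = n / (1.0 - exp(-b * horizon))  # MLE for total defect content
    return a, b

# Illustrative defect discovery times (days since test start), front-loaded
# as is typical when reliability grows during a test phase.
times = [2, 4, 5, 8, 11, 15, 21, 30, 44, 70]
a, b = fit_goel_okumoto(times, horizon=100.0)
residual = a - len(times)  # expected defects still latent at the horizon
```

The fitted `a` estimates the total defect content, so `a - n` gives the expected number of residual defects, which is the kind of quantity the GUI layer can then turn into release-readiness metrics.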

As an example use case, for a given project, a number of input parameters are required by the tool. Such inputs include the project milestones, the required changes in defect rate before and after deployment, and a number of assumptions based on expert knowledge of both the product and the development process.
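For illustration, such an input set might be captured as a small structured record before being fed to the tool; every field name and value below is hypothetical, chosen only to show the kind of information involved, not the actual schema BRACE uses.

```python
# Hypothetical per-project inputs for an SRGM-based prediction run.
project_inputs = {
    "milestones": {                      # project milestones (ISO dates)
        "code_complete": "2018-03-01",
        "general_availability": "2018-06-15",
    },
    "defect_rate_change": {              # required change in defect rate
        "pre_deployment": 0.50,          # e.g. 50% reduction during test
        "post_deployment": 0.20,         # e.g. 20% reduction in the field
    },
    "expert_assumptions": {              # expert knowledge of product/process
        "test_coverage_factor": 0.8,
        "field_exposure_ratio": 0.3,
    },
}

# Basic sanity check before running a prediction: rate changes are fractions.
assert all(0.0 < v <= 1.0 for v in project_inputs["defect_rate_change"].values())
```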


Software Quality Assurance


http://dx.doi.org/10.5772/intechopen.79839


Figure 17. Design and implementation of a cloud-based BRACE.
