We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists.

4,000+ Open Access books available · 116,000+ international authors and editors · 120M+ downloads

Our authors are among the top 1% most cited scientists · 12.2% of contributors are from the top 500 universities

Selection of our books indexed in the Book Citation Index in Web of Science™ Core Collection (BKCI)

## Interested in publishing with us? Contact book.department@intechopen.com

Numbers displayed above are based on latest data collected. For more information visit www.intechopen.com

## **Meet the editor**

Dr. Massimiliano M. Schiraldi has been a tenured Assistant Professor in Operations Management at "Tor Vergata" University in Rome (Italy) since 2006. He received his PhD in Engineering and Management in 2003. Every year he teaches more than 200 students in the Engineering School and organizes an average of 15 internships in industrial companies. To date he has supervised more than 200 students in their thesis work, in addition to tutoring 11 PhD students. He lectures in several Operations Management courses in various MBA programs and is Guest Professor at the Guizhou University of Finance & Economics in Guiyang, China. He is the author of more than 60 scientific publications in Operations Management; his specialties include Lean Production, Logistics and Supply Chain Management, Production & Inventory Planning, Warehouse Optimization, Stock Reduction, and Business Process Management.

Contents

**Preface XI**

Chapter 1 **Improving Operations Performance with World Class Manufacturing Technique: A Case in Automotive Industry 1**
Fabio De Felice, Antonella Petrillo and Stanislao Monfreda

Chapter 2 **Managing OEE to Optimize Factory Performance 31**
Raffaele Iannone and Maria Elena Nenni

Chapter 3 **Using Overall Equipment Effectiveness for Manufacturing System Design 51**
Vittorio Cesarotti, Alessio Giuiusa and Vito Introna

Chapter 4 **Reliability and Maintainability in Operations Management 81**
Filippo De Carlo

Chapter 5 **Production Scheduling Approaches for Operations Management 113**
Marcello Fera, Fabio Fruggiero, Alfredo Lambiase, Giada Martino and Maria Elena Nenni

Chapter 6 **On Just-In-Time Production Leveling 141**
Francesco Giordano and Massimiliano M. Schiraldi

Chapter 7 **Enterprise Risk Management to Drive Operations Performances 163**
Giulio Di Gravio, Francesco Costantino and Massimo Tronci

Chapter 8 **The Important Role of Packaging in Operations Management 183**
Alberto Regattieri and Giulia Santarelli

Chapter 9 **An Overview of Human Reliability Analysis Techniques in Manufacturing Operations 221**
Valentina Di Pasquale, Raffaele Iannone, Salvatore Miranda and Stefano Riemma

Preface

Generally speaking, Operations Management is an area of business concerned with the production of goods and services. It involves the responsibility of ensuring that business operations are efficient in terms of using as few resources as needed, and effective in terms of meeting customer requirements. It is concerned with managing the process that converts inputs (in the form of materials, labour and energy) into outputs (in the form of goods and/or services). People involved in the operations function typically deal with capacity planning, inventory control, quality assurance, workforce scheduling, materials management, equipment maintenance, and whatever else it takes "to get product out the door". In this book, we view operations in the broad sense rather than as a specific function: increasingly complex business environments together with the recent economic swings and substantially squeezed margins put extra pressure on companies, and decision makers must cope with more difficult challenges. Thus, all organizations – not only manufacturing firms – are pushed to increase operations efficiency.

I am extremely proud to present the contributions of a selected group of researchers, reporting new ideas, original results and practical experiences as well as systematizing some fundamental topics in Operations Management: World Class Manufacturing is introduced and described along with a case analysis; Overall Equipment Effectiveness is explained both in the context of performance improvement and of system design; reliability and maintainability theory is discussed, supported by practical examples of interest in operations management; production scheduling meta-heuristics and Just-In-Time levelling techniques are presented; a specific focus on the importance of packaging is given, as well as insights on Risk Management and human behaviour in manufacturing activities.

Although it represents only a small sample of the research activity on Operations Management, people from diverse backgrounds (academia, industry and research) can take advantage of this volume. Specifically, the contents of this book should help students and managers in the field of industrial engineering to deepen their understanding of challenges in operations management, leading to efficient processes and effective decisions.

Finally, the editor would like to thank all the people who contributed to this book.

**Massimiliano Schiraldi**
University of Rome, Italy

**Chapter 1**

## **Improving Operations Performance with World Class Manufacturing Technique: A Case in Automotive Industry**

Fabio De Felice, Antonella Petrillo and Stanislao Monfreda

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/54450

### **1. Introduction**

Global competition has caused fundamental changes in the competitive environment of manufacturing industries. Firms must develop strategic objectives which, upon achievement, result in a competitive advantage in the marketplace. However, for almost all manufacturing industries, increased productivity and better overall efficiency of the production line are the most important goals. Most industries would like to find the formula for the ultimate productivity improvement strategy, yet they often suffer from the lack of a systematic and consistent methodology. In particular, the manufacturing world has faced many changes throughout the years and, as a result, the manufacturing industry is constantly evolving in order to stay ahead of competition [1]. Innovation is a necessary process of continuous change that contributes to economic growth in the manufacturing industry, especially when competing in the global market. In addition to innovation as a mode for continued growth and change, there are many other vehicles for growth in the manufacturing industry [2], [3]. One in particular that has been gaining momentum is the idea of World Class Manufacturing (WCM), developed in the 1980s by Richard J. Schonberger, who collected several cases, experiences and testimonies of companies that had embarked on the path of continuous "Kaizen" improvement towards excellence in production, trying to give a systematic framing to the various practices and methodologies examined. Some of the benefits of integrating WCM include increased competitiveness, development of new and improved technology and innovation, increased flexibility, increased communication between management and production employees, and an increase in work quality and workforce empowerment. This chapter takes you on the journey of the World Class Manufacturing System (WCMS) adopted by Fiat Group Automobiles, the most important automotive company in Italy. "World class" can be defined as a tool that enables a company to perform at a best-in-class level.

© 2013 De Felice et al.; licensee InTech. This is a paper distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The aim of this work is to present the basic model of World Class Manufacturing (WCM) quality management for the production system in the automotive industry, in order to make products of the highest quality, eliminating losses in all factory fields and improving work standards.

The chapter is organized as follows: Section 2 introduces World Class Manufacturing, reviewing the literature and illustrating the mission and principles of WCM; Section 3 describes tools for WCM, with particular attention to their features and to Key Performance Indicators and Key Activities Indicators; Section 4 describes the research methodology through a real case study in the largest Italian automotive company. Finally, results and conclusions are provided.
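Several chapters of this book revolve around Overall Equipment Effectiveness (OEE), one of the key performance indicators mentioned above. As quick background, OEE is conventionally the product of availability, performance and quality rates; the sketch below shows the arithmetic with invented shift figures (it is an illustration, not an excerpt from the book):

```python
# Conventional OEE calculation: OEE = Availability x Performance x Quality.
# All shift figures below are invented for illustration.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    # Performance: actual speed versus the ideal cycle time.
    performance = (ideal_cycle_time * total_count) / run_time
    # Quality: share of good pieces over total pieces produced.
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

# Example shift: 480 min planned, 60 min downtime, 0.8 min/piece ideal,
# 460 pieces produced, of which 437 are good.
a, p, q, value = oee(480, 60, 0.8, 460, 437)
print(f"A={a:.2f} P={p:.2f} Q={q:.2f} OEE={value:.2f}")  # A=0.88 P=0.88 Q=0.95 OEE=0.73
```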

### **2. Literature review**

Manufacturers in many industries face worldwide competitive pressures. These manufacturers must provide high-quality products with leading-edge performance capabilities to survive, much less prosper. The automotive industry is no exception: there is intense pressure to produce high-performance products at minimum cost [4]. Companies attempting to adopt WCM have developed a statement of corporate philosophy or mission to which operating objectives are closely tied. A general perception is that an organization considered world-class is also considered the best in the world, and recently many organizations have claimed to be world-class manufacturers. Indeed, we can define world class manufacturing as a set of different production processes and organizational strategies which all have flexibility as their primary concern [5]. For example, Womack et al. [6] defined a lead for quantifying world class, while Oliver et al. [7] observed that, to qualify as world class, a plant had to demonstrate outstanding performance on both productivity and quality measures. Summing up, we can state that the term World-Class Manufacturing (WCM) means the pursuit of best practices in manufacturing. One of the most important definitions is due to Schonberger, who coined the term "World Class Manufacturing" to cover the many techniques and technologies designed to enable a company to match its best competitors [8].

**Figure 1.** The growth of techniques associated with the WCM concept


**Figure 2.** WCM Model by Schonberger

When Schonberger first introduced the concept of "World Class Manufacturing", the term was seen to embrace the techniques and factors listed in Figure 1. The substantial increase in techniques can be related in part to the growing influence of the manufacturing philosophies and economic success of Japanese manufacturers from the 1960s onwards. What is particularly interesting from a review of the literature is that, while there is a degree of overlap in some of the techniques, relative to the elements that were seen as constituting WCM in 1986 the term has clearly evolved considerably.



These techniques have been known for a long time, but with Schonberger a perfectly integrated and flexible system was obtained, capable of achieving company competitiveness with products of high quality. The WCM model by Schonberger is illustrated in Figure 2.

### **2.1. Mission and principles**

According to Fiat Group Automobiles, "World Class Manufacturing (WCM)" is a structured and integrated production system that encompasses all the processes of the plant, from safety and the environment to maintenance, logistics and quality. The goal is to continuously improve production performance, seeking a progressive elimination of waste, in order to ensure product quality and maximum flexibility in responding to customer requests, through the involvement and motivation of the people working in the establishment.

The WCM program has been deployed at Fiat Group Automobiles since 2005 under the guidance of Prof. Hajime Yamashina. The program is shown in Figure 3.

**Figure 3.** World Class Manufacturing in Fiat Group Automobiles

Fiat Group Automobiles has customized the WCM approach to its needs together with Prof. Hajime Yamashina of Kyoto University (also a member of the Royal Swedish Academy of Engineering Sciences), redesigning and implementing the model through two lines of action: **10 technical pillars** and **10 managerial pillars**.

The definition proposed by Yamashina describes a manufacturing company that excels in applied research, production engineering, improvement capability and detailed shop-floor knowledge, and integrates those components into a combined system. In fact, according to Hajime Yamashina, the most important thing continues to be the ability to change, and to change quickly [9]. WCM is developed in seven steps for each pillar, and the steps are grouped into three phases: *reactive, preventive and proactive*. Figure 4 shows an example of a typical correlation between steps and phases, but this correlation can change for each technical pillar; in fact, each pillar can relate differently to these phases. The WCM approach needs to start from a "**model area**" and then extend to the entire company. WCM "attacks" the manufacturing area. WCM is based on a system of audits that assign a score, allowing progress towards the highest level, represented by "*the world class level*".

**Figure 4.** World Class Manufacturing steps
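The seven-steps, three-phases idea above can be sketched as a simple lookup. Note that the step boundaries below (1-3 reactive, 4-5 preventive, 6-7 proactive) are invented for illustration; as the text says, the actual grouping differs from pillar to pillar:

```python
# Hypothetical mapping of one pillar's 7 WCM implementation steps to the
# reactive / preventive / proactive phases. The boundaries are illustrative,
# not Fiat's actual scheme, which varies per technical pillar.

EXAMPLE_PILLAR_PHASES = {
    1: "reactive", 2: "reactive", 3: "reactive",
    4: "preventive", 5: "preventive",
    6: "proactive", 7: "proactive",
}

def phase_of(step: int) -> str:
    """Return the phase a given step belongs to for this example pillar."""
    if step not in EXAMPLE_PILLAR_PHASES:
        raise ValueError("WCM pillars are developed in steps 1..7")
    return EXAMPLE_PILLAR_PHASES[step]

print(phase_of(4))  # preventive
```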


The process to achieve "World Class Manufacturing" (WCM) has a number of philosophies and elements that are common to all companies; therefore, when applied to the manufacturing field, TQM and WCM are synonymous. We would like to observe that customer needs and expectations are a very important element in WCM, and the manufacturing strategy should be geared to support them. These needs could concern certification, market share, company growth, profitability or other global targets. The outcomes should be defined so that they are measurable and have a definite timetable; they are also a means of defining employee responsibilities and making employees feel involved. Employee education and training is an essential element in a World Class Manufacturing company: employees must understand the company's vision and mission and the consequent priorities. As introduced in World Class Manufacturing, well-known disciplines are taken into account, such as Total Quality Control, Total Productive Maintenance, Total Industrial Engineering, Just In Time and Lean Manufacturing. Thus, World Class Manufacturing is based on a few fundamental principles.

| Technical pillar | Why | Purpose |
|---|---|---|
| **SAF** Safety | Continuous improvement of safety | To drastically reduce the number of accidents. To develop a culture of prevention. To improve the ergonomics of the workplace. To develop specific professional skills. |
| **CD** Cost Deployment | Analysis of the losses and costs (losses within the costs) | To identify scientifically and systematically the main items of loss in the production-logistics system. To quantify the potential and expected economic benefits. To direct resources and managerial commitment to the tasks with the greatest potential. |
| **FI** Focused Improvement | Priorities of actions to manage the losses identified by the cost deployment | To drastically reduce the most important losses in the manufacturing plant, eliminating inefficiencies. To eliminate non-value-added activities, in order to increase the cost competitiveness of the product. To develop specific professional problem-solving skills. |
| **AA** Autonomous Activities | Continuous improvement of plant and workplace | It is constituted by two pillars. *AM, Autonomous Maintenance*: to improve the overall efficiency of the production system through maintenance policies carried out by the conductors (equipment specialists). *WO, Workplace Organization*: to improve the workplace, since materials and equipment often degrade and the process contains many losses (MUDA) to remove. |
| **PM** Professional Maintenance | Continuous improvement of downtime and failures | To increase the efficiency of the machines using failure analysis techniques. To facilitate cooperation between conductors (equipment specialists) and maintainers (maintenance people) to reach zero breakdowns. |
| **QC** Quality Control | Continuous improvement of customers' needs | To ensure quality products. To reduce non-compliance. To increase the skills of the employees. |
| **LOG** Logistics & Customer Service | Optimization of stocks | To minimize material handling, even with direct deliveries from suppliers to the assembly line. To significantly reduce stock levels. |
| **EEM** Early Equipment Management / **EPM** Early Product Management | Optimization of installation time and costs and of the features of new products | To put new plants in place as scheduled. To ensure a rapid and stable start-up. To reduce the Life Cycle Cost (LCC). To design systems that are easily maintained and inspected. |

**Table 1.** Features of the technical pillars
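The Cost Deployment and Focused Improvement pillars above amount to a simple discipline: value every loss in money, then direct improvement effort at the largest items first. A toy sketch of such a loss ranking, with invented loss names and figures:

```python
# Toy cost-deployment ranking: losses are valued in euros and sorted so
# that Focused Improvement projects target the largest items first.
# All loss names and figures are invented for illustration.

losses = {
    "minor stoppages": 120_000,
    "rework": 45_000,
    "changeover time": 80_000,
    "energy waste": 15_000,
}

def top_losses(losses, share=0.8):
    """Return the smallest set of losses covering `share` of total cost."""
    total = sum(losses.values())
    picked, covered = [], 0
    for name, cost in sorted(losses.items(), key=lambda kv: -kv[1]):
        picked.append(name)
        covered += cost
        if covered >= share * total:
            break
    return picked

print(top_losses(losses))  # ['minor stoppages', 'changeover time', 'rework']
```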


### **2.2. Pillars: Description and features**

WCM foresees 10 technical pillars and 10 managerial pillars. The levels of accomplishment in technical fields are indirectly affected by the level of accomplishment in administrative fields. The pillar structure represents the "Temple of WCM" (Figure 5) and points out that, to achieve the standard of excellence, a parallel development of all the pillars is necessary. Each pillar focuses on a specific area of the production system, using appropriate tools to achieve global excellence.

**Figure 5.** Temple of WCM

Table 1 illustrates the features of each technical pillar.

World Class Manufacturing is based on a few fundamental principles:

**•** the involvement of people is the key to change;
**•** it is not just a project, but a new way of working;
**•** accident prevention is a non-negotiable value;
**•** all forms of waste (MUDA) are intolerable;
**•** all faults must be made visible;
**•** eliminate the cause rather than treat the effect;
**•** all leaders must demand respect for the standards set;
**•** methods should be applied with consistency and rigor;
**•** the customer's voice should reach all departments and offices.


**Technical Pillar Why Purpose**

**PM Professional Maintenance** To develop the roles of maintenance workers, technologists and specialists through major staff training; continuous improvement.

**PD People Development** To ensure, through a structured system of training, correct skills and abilities for each workstation; continuous improvement of the skills of employees and workers.

**ENV Environment** To comply with the requirements and standards of environmental management; reduce environmental losses.

**ENE Energy** To develop an energy culture and to reduce energy costs and energy waste.

**Table 1.** Description of pillars

As regards the ten Managerial Pillars, these are: 1) Management Commitment; 2) Clarity of Objectives; 3) Route map to WCM; 4) Allocation of Highly Qualified People to Model Areas; 5) Organization Commitment; 6) Competence of Organization towards Improvement; 7) Time and Budget; 8) Detail Level; 9) Expansion Level; and 10) Motivation of Operators.

### **3. The main tools for World Class Manufacturing: Features and description**

WCM requires all decisions to be made on the basis of objectively measured data and its analysis. Therefore, all the traditional data analysis tools, such as scatter diagrams, histograms and checklists, are used. A literature survey shows that no single tool can achieve world-class performance and address all the manufacturing components; to address all the components of the manufacturing system, the following tools are necessary (see Table 2):

**Main Tools Description**

**5G** It is a methodology for the description and analysis of a loss phenomenon (defects, failures, malfunctions, ...). It is based on facts and on the use of the 5 senses.

**4M or 5M** It is used to list the possible factors (causes, sub-causes) that give rise to the phenomenon. For the 4M the causes are grouped into 4 categories: Methods, Materials, Machines and Man; for the 5M a fifth category is added.

**5S** It is used to achieve excellence through improvement of the workplace in terms of order, organization and cleanliness. The technique is based on: Seiri (separate and order); Seiton (arrange and organize); Seiso (clean); Seiketsu (standardize); Shitsuke (maintain and improve).

**5W + 1H** It is used to ensure a complete analysis of a problem in all its fundamental aspects. The questions corresponding to the 5 Ws and 1 H are: Who? What? Why? Where? When? How?

**5 Whys** It is used to analyze the causes of a problem through a consecutive series of questions. It is applied in failure analysis, analysis of sporadic anomalies and analysis of chronic losses arising from specific causes.

**Heinrich Pyramid** It is used to classify the events that have an impact on safety (fatalities, serious injuries, minor injuries, medications, near-accidents, accidents, dangerous conditions and unsafe practices) over time.

**Kaizen (Quick, Standard, Major, Advanced)** It is a daily process whose purpose goes beyond simple productivity improvement; when done correctly, it also humanizes the workplace and eliminates overly hard work.

**AM Tag** It is a sheet which, suitably completed, is applied on the machine in order to report any anomaly detected.

**WO Tag** It is a sheet which, suitably completed, is used to report any anomaly detected for Workplace Organization.

**PM Tag** It is a sheet which, suitably completed, is used to report any anomaly detected for Professional Maintenance.

**SAF Tag** It is a sheet which, suitably completed, is used to report any anomaly detected for Safety.

**Equipment ABC Prioritization** It is used to classify plants according to their priority of intervention in case of failure.

**Maintenance cycles** They are used for activities on Autonomous Maintenance and Professional Maintenance.

**Cleaning cycles** They are used for activities on Autonomous Maintenance, Workplace Organization and Professional Maintenance.

**Inspection cycles** They are used for activities on Autonomous Maintenance, Workplace Organization and Professional Maintenance.

**Control cycles** They are used for activities on Autonomous Maintenance, Workplace Organization and Professional Maintenance.

**Kanban** It is a tag used for programming and production scheduling.

**Two Videocamera Method** It is used to video-record operations in order to optimize them.

**FMEA (Failure Mode and Effect Analysis)** It is used to prevent the potential failure modes.

**Rhythmic operation analysis** Analysis of the dispersion during the work cycle.

**QA Network (quality assurance network)** It is used to ensure the quality of the process by eliminating rework.

**QuOA (quality operation analysis)** Preventive analysis of the work steps to ensure quality.

**SMED (Single Minute Exchange of Die)** It is a set of techniques to perform set-up operations with a duration of less than 10 minutes.

**Motion Economic Method** Analysis used to evaluate the efficiency of movements and optimize them.

**Golden Zone & Strike Zone Analysis** Analysis of work operations in the area that favors handling, in order to minimize movement and reduce fatigue.

**Material Matrix** Classification of materials according to three families (A, B, C) and subgroups.

**Value Stream Map** It highlights the waste of a business process, helping to represent the current flow of materials and information that, for a specific product, passes through the value stream between customer and suppliers.

**X Matrix** It is a tool for quality improvement which compares pairs of lists of items to highlight the correlations between a list and the two adjacent lists. The X matrix relates defect mode, phenomenon, equipment section and quality components.

**Spaghetti Chart** It is a graphical tool used to detail the actual physical flow and distances involved in a work process.

**OPL (One Point Lesson)** It is a technique that allows a simple and effective focus, in a short time, on the object of the training.

**Visual Aid** It is a set of signals that facilitates work and communication within the company.

**Poka Yoke** It is a prevention technique to avoid possible human errors in the performance of any productive activity.

**HERCA (Human Error Root Cause Analysis)** It is a technique for the investigation of events of interest, in particular accidents, which examines what happened while researching why it happened.

**TWTTP (The Way To Teach People)** It is an interview of 4 questions to test the level of training on the operation to be performed.

**RJA (Recognition Judgment Action Analysis)** Analysis of the judgment, recognition and action phases at work.

**5Q 0D (Five Questions for Zero Defects)** Analysis of the process or of the equipment (machine) through five questions to achieve zero defects.

**PPA (Processing Point Analysis)** It is used to restore, maintain and improve operational standards of work, ensuring zero defects.

**QM Matrix (Quality Maintenance Matrix)** It is a tool used to define and maintain the operating conditions of the machines which ensure performance of the desired quality.

**QA Matrix (Quality Assurance Matrix)** It is a set of matrices which shows the correlations between the anomalies of the product and the phases of the production system.

**DOE (Design of Experiments)** It is a technique that enables designers to determine simultaneously the individual and interactive effects of the many factors that could affect the output results of any design.

**ANOVA (Analysis of Variance)** It is a collection of statistical models, and their associated procedures, in which the observed variance of a particular variable is partitioned into components attributable to different sources of variation.

**MURI Analysis** Ergonomic analysis of workstations.

**MURA Analysis** Analysis of irregular operations.

**MUDA Analysis** Analysis of losses.

**SOP (Standard Operation Procedure)** Standard procedure for work.

**JES (Job Elementary Sheet)** Elementary instruction sheet.

**Table 2.** Main Tools and description
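As a minimal sketch (our illustration, not part of the chapter), the Kanban mechanism listed in Table 2 can be modeled as a fixed set of cards: emptying a bin releases its card, and each released card authorizes exactly one replenishment order. The class name and quantities are assumptions.

```python
class KanbanLoop:
    """Fixed set of kanban cards: an emptied bin releases its card,
    and each released card authorizes one replenishment order."""

    def __init__(self, n_cards: int, bin_size: int):
        self.n_cards = n_cards
        self.bin_size = bin_size       # parts per bin (assumed quantity)
        self.full_bins = n_cards       # start with every bin full
        self.open_orders = 0

    def consume_bin(self) -> None:
        """The line empties one bin: its card circulates back as an order."""
        if self.full_bins == 0:
            raise RuntimeError("stockout: no full bin at the line side")
        self.full_bins -= 1
        self.open_orders += 1          # the freed card is the pull signal

    def replenish(self) -> None:
        """The upstream process fills one bin against an open card."""
        if self.open_orders:
            self.open_orders -= 1
            self.full_bins += 1

loop = KanbanLoop(n_cards=3, bin_size=20)
loop.consume_bin(); loop.consume_bin()
print(loop.full_bins, loop.open_orders)  # 1 2
loop.replenish()
print(loop.full_bins, loop.open_orders)  # 2 1
```

Because the card count is fixed, work-in-process is capped at `n_cards * bin_size` parts regardless of demand, which is the essence of the pull scheduling the tag implements.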

### **3.1. Key Performance Indices and Key Activity Indicators**

In World Class Manufacturing the focus is on continuous improvement. As organizations adopt world class manufacturing, they need new methods of performance measurement to check their continuous improvement. Traditional performance measurement systems are not valid for the measurement of world class manufacturing practices as they are based on outdated traditional cost management systems, lagging metrics, not related to corporate strategy, inflexible, expensive and contradict continuous improvement [10]. To know the world class performance, measurement is important because "*if you can't measure it, you can't manage it and thus you can't improve upon it".*

Here below in Table 3 is shown a brief report on different indices and indicators defined by several authors in order to "measure" WCM.

However, some authors [15; 16] proposed only productivity as a measure of manufacturing performance. Kennerley and Neely [17] identified the need for a method that could be used for the development of measures able to span diverse industry groups. From this point of view we would like to note that it is necessary to develop a more systematic approach in order to improve a project and process. In particular, in WCM we can use two types of indi‐ cators: Key Performance Indicator (KPI) and Key Activity Indicator (KAI). KPI represents a result of project improvement, e.g. sales, profit, labor productivity, equipment performance


rate, product quality rate, Mean Time to Failure (MTBF) and Mean Time to Repair (MTTR) [18, 19]. KAI represents a process to achieve a purpose of project improvement, e.g. a total number of training cycles for employees who tackle performance improvement projects, a total number of employees who pass a public certification examination and an accumulative number of Kaizen cases [20]. A KAI & KPI overview applied step by step is seen in Figure 6.
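The equipment-related KPIs listed above can be computed directly from maintenance records. The sketch below uses the standard definitions of MTBF and MTTR; the function names and sample figures are our assumptions, not data from the case study.

```python
# Minimal sketch: two equipment KPIs mentioned above, computed
# with their standard definitions (illustrative data).

def mtbf(total_operating_time: float, n_failures: int) -> float:
    """Mean Time Between Failures = operating time / number of failures."""
    return total_operating_time / n_failures

def mttr(repair_times: list[float]) -> float:
    """Mean Time To Repair = total repair time / number of repairs."""
    return sum(repair_times) / len(repair_times)

# Hypothetical data: 1,600 h of operation, 4 failures,
# repairs lasting 2, 1, 3 and 2 hours.
print(mtbf(1600, 4))        # 400.0 hours between failures
print(mttr([2, 1, 3, 2]))   # 2.0 hours per repair
```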


**Figure 6.** A KAI & KPI overview applied step by step

**Indices/Indicators** (Kodali et al. [11]; Wee and Quazi [12]; Digalwar and Metri [13]; Utzig [14])

Cost/Price + + +
Cycle time +
Flexibility + + +
Engineering change notices +
Facility control +
Global competitiveness +
Inventory + +
Plant/Equipment/Tooling reliability +
Safety + +
Problem support +
Productivity + +
Speed/Lead Time + +
Quality + + +
Machine hours per part +
Green product/Process design + +
Innovation and Technology +
Broad management/Worker involvement +
Customer relations/Service + +
Competitive advantage +
Measurement and information management +
Morale +
Supplier management +
Total involvement of employees +
Top management commitment + +
Training +

**Table 3.** Main indices/indicators defined by different authors

### **4. Industrial case study**

The aim of this work is to present the basic model of World Class Manufacturing (WCM) quality management for the production system at Fiat Group Automobiles, in order to make products of the highest quality, eliminating losses in all the factory fields and improving work standards. In fact, World Class Manufacturing is a manufacturing system defined by 6 international companies, including Fiat Group Automobiles, with the intent of raising their performance and standards to world-class level with the cooperation of leading European and Japanese experts; it covers all the plant processes, including quality, maintenance, cost management and logistics, from a universal point of view. Thus, automotive manufacturing requires the ability to manage the product and its associated information across the entire fabricator. Systems must extend beyond their traditional role of product tracking to actively manage the product and its processing. This requires coordinating the information flow between process equipment and higher-level systems, supporting both manual and automatic interfaces. A case study methodology was used to collect detailed information on division and plant strategic objectives, performance measurement systems, and performance measurement system linkages. The result of this research was to develop principles on strategic objectives, performance measurement systems and performance measurement system linkages for improved organizational coordination. The purpose of this study is to examine the relationship between division and plant performance measurement systems designed to support the firm's strategic objectives and to improve organizational coordination. We will focus our attention on the Cost Deployment Pillar, the Autonomous Activities/Workplace Organization Pillar and the Logistics/Customer Service Pillar.


### **4.1. Company background**

Fiat Group Automobiles is an automotive-focused industrial group engaged in designing, manufacturing and selling cars for the mass market under the Fiat, Lancia, Alfa Romeo, Fiat Professional and Abarth brands, and luxury cars under the Ferrari and Maserati brands. It also operates in the components sector through Magneti Marelli, Teksid and Fiat Powertrain, and in the production systems sector through Comau. Fiat operates in Europe, North and South America, and Asia. Its headquarters is in Turin, Italy, and it employs 137,801 people [21]. Its 2008 revenues were almost €59 billion, 3.4% of which was invested in R&D. Fiat's Research Center (CRF) can be appropriately defined as the "innovation engine" of the Fiat Group, as it is responsible for the applied research and technology development activities of all its controlled companies [22]. The Fiat group has a diversified business portfolio, which shields it against demand fluctuations in certain product categories and also enables it to benefit from opportunities available in various divisions.

### **4.2. Statement of the problem and methodology**

The aim of the project is to increase flexibility and productivity in an ETU (Elementary Technology Unit) of Mechanical Subgroups, in a part of FGA's assembly process in the Cassino Plant, through the conventional Plan-Do-Check-Act approach using the WCM methodology:

**- PLAN -** Costs analysis and losses analysis starting from Cost Deployment (CD), for the manufacturing process using the items and tools of Workplace Organization (WO), and for the handling process the Logistics and Customer Service (LOG) applications.

**- DO -** Analysis of the non-value-added activities; analysis of line re-balancing and of the re-balancing of work activities in accordance with the analysis of the logistics flows, using the material matrix and the flows matrix; study and realization of prototypes to improve workstation ergonomics and to ensure minimum material handling; application of the countermeasures found in the production process and logistics (handling).

**- CHECK -** Analysis of results in order to verify the productivity improvement, the ergonomic improvement (WO) and the optimization of the internal handling (in the plant) and external logistics flows (LOG); check of the losses reduction according to Cost Deployment (CD).

**- ACT -** Extension of the methodology to other cases.

Here below is a description of the statement of the problem and the methodology.

### *4.2.1. PLAN: Costs analysis and losses analysis (CD) for the manufacturing process (WO) and for the handling process (LOG)*

In this first part (PLAN), the losses in the assembly process area were analyzed so as to organize the activities that reduce the losses identified in the second part of the analysis (DO). The object of the study was the Mechanical Subgroups ETU - Elementary Technology Unit (in a part of the Cassino Plant Assembly Shop). The aim of this analysis was to identify a program allowing the generation of savings policies based on Cost Deployment:

**•** identify the relationships between cost factors, the processes generating costs and the various types of waste and losses;

**•** find the relationships between waste and losses and their reductions.

In fact, in general a production system is characterized by several wastes and losses (MUDA), such as:

**•** non-value-added activities;

**•** delay in material procurement;

**•** machine troubleshooting;

**•** low balancing levels;

**•** handling losses;

**•** defects;

**•** setup;

**•** breakdown.

It is important to give a measure of all the losses identified in the process examination. Data collection is therefore the "*key element*" for the development of the Cost Deployment activities. Figure 7 below shows an example of the losses identified by CD in the Assembly Shop, and Figure 8 shows an example of CD data collection regarding NVAA (Non-Value-Added Activities) for WO (for this case study we excluded check and rework losses) in the Mechanical Subgroups area. Finally, Figure 9 shows the analysis of losses by Cost Deployment.
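The Pareto stratification used in Cost Deployment (as in Figures 8 and 9) can be sketched as follows; the loss categories echo the MUDA list above, but the euro values are invented for illustration and are not the plant's data.

```python
# Sketch of a Pareto analysis of losses (Cost Deployment style):
# rank loss categories by cost and report the cumulative share.
# Categories and euro values below are hypothetical.

losses = {"NVAA": 120_000, "Line balancing": 80_000,
          "Handling": 40_000, "Defects": 25_000, "Setup": 15_000}

total = sum(losses.values())
cumulative = 0.0
for cause, cost in sorted(losses.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += cost
    print(f"{cause:15s} {cost:8d} {cumulative / total:6.1%}")
```

Reading the cumulative column off such a ranking is what identifies the "critical few" loss causes (here, the first two categories already account for over 70% of the total) on which Kaizen activity is then focused.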

**Figure 7.** Analysis of losses Cost Deployment – Stratification of NVAA losses for Mechanical Subgroups ETU - Ele‐ mentary Technology Unit (figure highlights the most critical workstation)


**Figure 8.** Analysis of losses Cost Deployment – Pareto Analysis NVAA Mechanical Subgroups ETU - Elementary Tech‐ nology Unit


**Figure 9.** Analysis of losses Cost Deployment – Pareto Analysis Line Balancing Losses or Insaturation on Mechanical Subgroups ETU - Elementary Technology Unit


### *4.2.2. DO: Analysis of non-value-added activities, of line re-balancing and of the re-balancing of work activities*

According to Figure 9 and Figure 10, the losses regarding NVAA and insaturation were analyzed. All 4 critical workstations (those with the worst losses) were examined, and 41 types of non-value-added activities (walking, waiting, turning, picking, ...) were identified in the various sub-phases of the production process. Table 4 shows some examples of the non-value-added activities analyzed (MUDA Analysis).

Some examples of the standard tools used to analyze NVAA reduction (MUDA Analysis) for the 4 workstations are shown below in Figures 10, 11 and 12: 1) job stratification (VAA - Value-Added Activities; NVAA - Non-Value-Added Activities; LBL - Low Balancing Level; EAWS - European Assembly Work Sheet, ergonomics); 2) Spaghetti Chart; and 3) Kaizen Standard.
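The job stratification of point 1) splits each workstation's cycle into VAA, NVAA and insaturation (idle) time. A minimal sketch follows; the takt time, station names and time splits are hypothetical, chosen only to show how the shares are derived.

```python
# Sketch of job stratification per workstation (VAA / NVAA / insaturation),
# in the spirit of the Figure 10 analysis. Times are hypothetical, in seconds.
TAKT = 60.0  # assumed takt time

stations = {  # station: (value-added s, non-value-added s)
    "OP10": (38.0, 14.0),
    "OP20": (45.0, 9.0),
    "OP30": (30.0, 22.0),
}

for name, (vaa, nvaa) in stations.items():
    insaturation = TAKT - (vaa + nvaa)   # low balancing level (idle time)
    print(f"{name}: VAA {vaa/TAKT:.0%}, NVAA {nvaa/TAKT:.0%}, "
          f"insaturation {insaturation/TAKT:.0%}")
```

The stations with the largest NVAA and insaturation shares are the ones selected as "critical" for the re-balancing step.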


**N° Losses identified Solution Non-value-added activities identified**

Improving Operations Performance with World Class Manufacturing Technique: A Case in Automotive Industry

http://dx.doi.org/10.5772/54450

19

Automatic combination To check

the waste container Print labels directly onto sheet unified To trow

labels coupling Automatic reading To pick

11 Use of a single box Enabling a second workstation To walk

12 Pick hub Pick subgroup (hub+ damper To pick

through keyboard New air equipment without keyboard To arrange

through keyboard New air equipment without keyboard To wait

<sup>8</sup> Throw liner nameplate into

<sup>9</sup> Pick equipment for reading

Combination of manual

<sup>13</sup> Use of electrical equipment

<sup>14</sup> Use of air equipment

10

pallet

Improving Operations Performance with World Class Manufacturing Technique: A Case in Automotive Industry http://dx.doi.org/10.5772/54450 19

**N° Losses identified Solution Non-value-added activities identified**

Unification of the sheets from 3 to 1 To select

Unification of the sheets from 3 to 1 To pick

Print sticker To walk

sequencing Complete hub sequencing To pick

2 Pick the box for sequencing Complete hub sequencing To pick

5 Pick identification sheet Unification of the sheets from 3 to 1 To pick

7 Pick identification hub label Digital label with barcode To pick

<sup>1</sup> Pick picking list for

18 Operations Management

Select sheets for the different model

Pick sheets for the different

<sup>6</sup> Go to the printer to pick up

sticker

3

4

process



**Figure 10.** Details of the 4 workstations


**Figure 11.** Spaghetti Chart Example

**Table 4.** MUDA Analysis - NVAA


| N° | Losses identified | Solution | Non-value-added activities identified |
|---|---|---|---|
| 15 | Transport empty box hub sequencing to put the full box | Complete hub sequencing | To transport |
| 16 | Walk to the line side to pick damper | Complete hub sequencing | To walk |
| 17 | Remove the small parts to pair with damper | Complete hub sequencing | To pick |
| 18 | Transport empty box damper sequencing to put the full box | Complete damper sequencing | To transport |
| 19 | Pick the hub and put on the line | Pick subgroup (hub + damper) | To pick |
| 20 | Select the work program for the next workstation | Use a single workstation after the sequencing of the subgroup in order to press a button once | To select |
| 21 | Press the feed button for the … | Use a single workstation after the sequencing of the subgroup in order to press a button once | To push |
| 22 | Wait for the translational motion of the pallet | Use a single workstation after the sequencing of the subgroup and match processing activities during the translation of the pallet | To wait |

**Table 4.** MUDA Analysis - NVAA
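The job stratification step of the MUDA analysis can be sketched programmatically: tally how often each non-value-added activity category occurs across the workstations. The rows below are a small subset of the Table 4 entries, used only for illustration:

```python
from collections import Counter

# A few (loss, NVAA category) pairs taken from Table 4.
nvaa_entries = [
    ("Pick picking list for sequencing", "To pick"),
    ("Go to the printer to pick up the sticker", "To walk"),
    ("Throw liner nameplate into the waste container", "To throw"),
    ("Wait for the translational motion of the pallet", "To wait"),
    ("Select the work program for the next workstation", "To select"),
    ("Pick the hub and put on the line", "To pick"),
]

# Stratify: count how often each NVAA category occurs, most frequent first,
# so the biggest sources of waste surface at the top of the list.
stratification = Counter(activity for _, activity in nvaa_entries)
for activity, count in stratification.most_common():
    print(activity, count)
```

Sorting by frequency is what makes the stratification actionable: the team attacks the most common waste category ("To pick" in this subset) first.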




**Figure 12.** Standard Kaizen analysis Example

**Figure 13.** Details of the 4 workstations

**Figure 14.** Material matrix example

Figure 13 shows the initial scenario analyzed to identify problems and weaknesses.

At this point was assumed the new flow of the complete damper (corner) = damper + com‐ plete hub sequencing according to the material matrix considering losses relating to han‐ dling (material matrix classification – see figure 14). The material matrix classifies the commodities (number of drawings) in three main groups: A (bulky, multi-variations, expen‐ sive), B (normal) and C (small parts) and subgroups (a mixture of group A: bulky and multivariations or bulky and expensive etc.). For each of these groups was filled out the flow matrix that defines the correct flow associated: JIS (and different levels), JIT (and different levels) and indirect (and different levels). After identifying the correct flow, in the JIS case, was built a prototype of the box (bin) to feed the line that would ensure the right number of parts to optimize logistic handling. However, the new box (bin) for this new mechanical subgroup must feed the line in a comfortable and ergonomic manner for the worker in the workstation, for this reason was simulated the solution before the realization of the box (bin) (see figure 15).
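The material matrix classification described above can be sketched as a simple rule: a part with at least one "A" attribute (bulky, multi-variation, expensive) goes to group A, small parts go to group C, and the rest to group B; each group then maps to a line-feeding flow. The flags, thresholds and flow mapping below are simplified assumptions for illustration, not the plant's actual matrix (which also distinguishes levels within JIS, JIT and indirect flows):

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    bulky: bool
    multi_variation: bool
    expensive: bool
    small: bool

def material_group(p: Part) -> str:
    """Assign a part to group A (bulky / multi-variation / expensive),
    C (small parts) or B (normal), as in the material matrix."""
    if p.bulky or p.multi_variation or p.expensive:
        return "A"
    if p.small:
        return "C"
    return "B"

# Illustrative flow matrix: group A is fed Just In Sequence, B Just In
# Time, C through an indirect flow (sub-levels omitted for brevity).
FLOW = {"A": "JIS", "B": "JIT", "C": "indirect"}

corner = Part("complete damper (hub + damper)", bulky=True,
              multi_variation=True, expensive=False, small=False)
print(material_group(corner), FLOW[material_group(corner)])  # → A JIS
```

This reproduces the decision in the case study: the complete damper is bulky and multi-variation, so it falls in group A and is fed Just In Sequence.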

At the end of the Muda analysis (NVAA analysis) were applied all the solutions found to have a lean process (the internal target is to achieve 25% of average NVAA losses) and was reorganized the line through a new line balancing level (rebalancing) to achieve 5% of the average line balancing losses (internal target). Another important aspect was the logistics flows analysis (see figure 16) considering *advanced warehouses* (Figure 17). The simulation scenario was defined using trucks from the Cassino plant warehouses that also feed other commodities to achieve high levels of saturation to minimize handling losses.
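One common way to quantify the line balancing loss (LBL) targeted above is the idle share of each station relative to the line cycle time. The formula and the station times below are illustrative assumptions, not the plant's actual figures:

```python
def line_balancing_loss(station_times, cycle_time):
    """Average balancing loss: total idle time of the stations divided by
    the total available time (number of stations x cycle time)."""
    n = len(station_times)
    idle = sum(cycle_time - t for t in station_times)
    return idle / (n * cycle_time)

# Four stations before rebalancing (seconds per cycle, invented values).
before = [52.0, 44.0, 47.0, 39.0]

# The slowest station paces the line, so it sets the cycle time.
loss = line_balancing_loss(before, cycle_time=max(before))
print(f"LBL = {loss:.1%}")  # → LBL = 12.5%
```

Rebalancing redistributes work so that station times approach the cycle time, pushing the LBL toward the 5% internal target mentioned above.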

At the end of the handling analysis (flow, stock level…), thanks to this new "lean" organization of the material matrix, the correct line feed from the Just In Sequence warehouse was used. The internal warehouse (stock level), the space used for sequencing (square metres) and the indirect manpower used to feed the sequencing area were all reduced, and we obtained zero forklifts on the shopfloor because we used the ro-ro (roll in - roll out) system. Figure 18 shows the final scenario, in which we have 1 operator instead of 4 operators.

**Figure 15.** Simulation of an ergonomic workstation

**Figure 16.** Initial logistic flows

### *4.2.3. Check – Analysis of results to verify productivity and ergonomic improvement and optimization of logistics flows*

**Figure 17.** Logistic flows considering advanced warehouses

**Figure 18.** Details of the final workstation

In detail, the main results and savings can be summarized as follows:

**•** Productivity improvement +75% (Figure 19) direct labour;



**•** Ergonomics improvement +85% (Figure 20) according to the rest factor;

**•** Optimization of logistic flows (Figure 21) according to the flow matrix.

**Figure 19.** Productivity optimization

**Figure 20.** Ergonomics improvement

**Figure 21.** Optimization of logistic flows

### *4.2.4. Act - Extension of the methodology and other cases*

Future developments include the extension of the methodology to the entire plant. Table 5 below shows the activities, tools and status adopted to achieve the results shown in the "check" phase. We used traditional tools and methodologies for the analysis, and new tools to simulate the scenarios on the line; for the logistic problems we involved other resources outside the plant (ELASIS and CRF - FIAT Research Center, Fiat Central Department and Public Universities).

| Activities | Tool | Status |
|---|---|---|
| NVAA Reduction | NVAA Std analysis / NVAA Database | + + |
| LBL Reduction | Balance line (Excel) | + + |
| Ergonomics Improvement | Jack Software / Human Model | + + |
| Optimization of logistics flow | Value stream map | + + |
| Check Saturation | Flexim Software / Plant simulation | |

**Table 5.** Activities and status

### **5. Conclusions**

A key industrial policy conclusion is that intelligently designed selective policies can be effective in developing production systems. Intelligent industrial policies need to be shaped to respond to contingent factors which are specific to a sector, period and country. Fundamentally, it is not a question of whether these selective policies work, but under what circumstances they work.

### **References**

[1] De Felice F., Petrillo A., Silvestri A. Multi-criteria risk analysis to improve safety in manufacturing systems. International Journal of Production Research 2012; Vol. 50, No. 17, pp. 4806–4822.

[2] De Felice F., Petrillo A. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System. International Journal of Engineering and Technology 2011; Vol. 3(5), 341–353.

[3] De Felice F., Petrillo A. Hierarchical model to optimize performance in logistics policies: multi attribute analysis. The 8th International Strategic Management Conference, June 21-23, 2012, Barcelona, Spain. Elsevier Procedia Social and Behavioral Sciences.

[4] De Felice F., Petrillo A. Productivity analysis through simulation technique to optimize an automated assembly line. Proceedings of the IASTED International Conference Applied Simulation and Modelling (ASM 2012), June 25-27, 2012, Napoli, Italy. DOI: 10.2316/P.2012.776-048, pp. 35–42.

[5] Haynes A. Effect of world class manufacturing on shop floor workers. Journal of European Industrial Training 1999; 23(6) 300–309.

[6] Womack J.P., Jones D.T., Roos D. The Machine that Changed the World (Rawson Associates, New York, 1990).

[7] Oliver N., Delbridge R., Jones D., Lowe J. World class manufacturing: Further evidence in the lean production debate. British Journal of Management 5 (Special issue) (1994) S53–S63.

[8] Schonberger R.J. World class manufacturing: the lessons of simplicity applied. New York: Free Press, p. 205, 1986.

[9] Yamashina H. Japanese manufacturing strategy and the role of total productive maintenance. Journal of Quality in Maintenance Engineering, Volume 1, Issue 1, 1995, pages 27–38.

[10] Ghalayini A.M., Noble J.S. The changing basis of performance measurement. Int. J. Operations & Production Management 16(8) (1996) 63–80.

[11] Kodali R.B., Sangwan K.S., Sunnapwar V.K. Performance value analysis for the justification of world-class manufacturing systems. J. Advanced Manufacturing Systems 3(1) (2004) 85–102.

[12] Wee Y.S., Quazi H.A. Development and validation of critical factors of environmental management. Industrial Management & Data Systems 105(1) (2005) 96–114.

From this point of view, World Class Manufacturing is a "key" concept. This is the reason why the concept constituting "World Class Manufacturing" has received considerable attention in the academic literature, even though it has been developed principally in relation to the needs of larger-scale manufacturing organisations. Regarding our case study, we can conclude that WCM allows losses to be reduced and logistics flows to be optimized. Thus, the main results can be summarized as follows:

**1.** greater efficiency, because the inner product is cheaper: it is possible to use external warehouses or suppliers - outsourcing - that are specialized and more cost-effective for the company;

**2.** greater flexibility, because it is possible to work more models (in Cassino, with these sequencing and kitting logics, there are 4 different model brands on the same assembly line: *Alfa Romeo Giulietta, Chrysler, Lancia Delta and Fiat Bravo*);

**3.** no space constraint (in this example we get only 1 container, already sequenced, line side).

Definitely, the new process and the internal flows are very lean and efficient. In this case study a servo system using Low Cost Automation was implemented. This system ensures only one picking point, in order to have only one container at the side of the production line.

### **Acknowledgements**

We would like to express our gratitude to Fiat Group Automobiles S.p.A. - Cassino Plant, to the Plant Manager and his staff, and to the other partners and organizations who gave us the possibility to carry out the necessary research and to use their data for the research project "Infrastructures of advanced logistics for the lines with high flexibility", shown in tiny part, and briefly, in this case study.

### **Author details**

Fabio De Felice1, Antonella Petrillo1\* and Stanislao Monfreda2

1 University of Cassino, Department of Civil and Mechanical Engineering, Cassino, Italy

2 Fiat Group Automobiles EMEA WCM Cassino Plant Coordinator, Cassino, Italy

\*Address all correspondence to: a.petrillo@unicas.it


[13] Digalwar A.K., Metri B.A. Performance measurement framework for world class manufacturing. International Journal Applied Management & Technology 3(2) (2005) 83–102.

**Chapter 2**

**Managing OEE to Optimize Factory Performance**

Raffaele Iannone and Maria Elena Nenni

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55322

> © 2013 Iannone and Nenni; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **1. Introduction**



[14] Utzig L.J. CMS performance measurement. In: Cost Management for Today's Advanced Manufacturing: The CAM-I Conceptual Design, eds. C. Berliner and J.A. Brimson (Harvard Business School Press, Boston, 1988).

[15] Hayes R.H., Abernathy W.J. Managing our way to economic decline. Harvard Business Review 58(1) (1980) 67–77.

[16] Schmenner R.W. International factory productivity gains. J. Operations Management 10(2) (1991) 229–254.

[17] Kennerley M., Neely A., Adams C. Survival of the fittest: measuring performance in a changing business environment. Measuring Business Excellence 7(4) (2003) 37–43.

[18] Japan Institute of Plant Maintenance (JIPM) (ed.). A Report on Systemizing Indicators of Total Productive Maintenance (TPM) (in Japanese) (JIPM, Tokyo, 2007).

[19] Shirose K. (ed.). TPM New Implementation Program in Fabrication and Assembly Industries. Productivity Press, Portland, Oregon, 1996.

[20] Murata K., Katayama H. An evaluation of factory performance utilized KPI/KAI with data envelopment analysis. Journal of the Operations Research Society of Japan 2009, Vol. 52, No. 2, 204–220.

[21] Datamonitor. Fiat S.p.A. Company Profile. 12 July 2011.

[22] Di Minin A., Frattini F., Piccaluga A. Fiat: Open Innovation in a downturn (1993-2003). University of California, Berkeley, Vol. 52(3), Spring 2010.

"If you can not measure it, you can not improve it."(Lord Kelvin)

It is a common opinion that productivity improvement is nowadays the biggest challenge for companies that want to remain competitive in a global market [1, 2]. A well-known way of measuring effectiveness is the Overall Equipment Efficiency (OEE) index. It was first developed by the Japan Institute for Plant Maintenance (JIPM) and is widely used in many industries. Moreover, it is the backbone of quality improvement methodologies such as TQM and Lean Production.

The strength of the OEE index lies in making losses more transparent and in highlighting areas of improvement. OEE is often seen as a catalyst for change, and it is easy to understand why: a lot of articles and discussions have been generated about this topic over the last years.

The aim of this chapter is to answer general questions such as *what to measure?*, *how to measure?* and *how to use the measurements?* in order to optimize factory performance. The goal is to show that OEE is a good basis for optimizing factory performance, and moreover that OEE's evolutions are the right response even in advanced frameworks.

This chapter begins with an explanation of the difference between efficiency, effectiveness and productivity, as well as with a formal definition of the components of effectiveness. Mathematical formulas for calculating OEE are provided too.

After the introduction to the fundamentals of OEE, some interesting issues concerning the way to implement the index are investigated. The first is that, in calculating OEE, machines have to be considered as operating in a linked and complex environment; we therefore analyze a model for OEE calculation that allows a wider approach to the performance of the whole factory. The second issue concerns monitoring factory performance through OEE, which implies that information for decision-making has to be available in real time. This is possible only through automated systems for calculating OEE and through the capability to collect a large amount of data. We therefore propose an examination of the main automated OEE systems, from the simplest to high-level systems integrated into ERP software. Data collection strategies for rigorous measurement of OEE are screened as well.

The last issue deals with how OEE has evolved into tools like TEEP, PEE, OFE, OPE and OAE in order to fit different requirements.

At the end of the chapter, industrial examples of OEE application are presented and the results are discussed.

### **2. Fundamentals of OEE**

Overall equipment efficiency or effectiveness (OEE) is a hierarchy of metrics proposed by Seiichi Nakajima [3] to measure the performance of equipment in a factory. OEE is a really powerful tool that can also be used to perform diagnostics as well as to compare production units in differing industries. OEE was born as the backbone of Total Productive Maintenance (TPM) and then of other techniques employed in asset management programs, Lean Manufacturing [4], Six Sigma [5] and World Class Manufacturing [4].

By the end of the 1980s, the concept of Total Productive Maintenance became more widely known in the Western world [7] and, along with it, OEE implementation too. From then on an extensive literature [8-11] made OEE accessible and feasible for many Western companies.

### **3. Difference between efficiency, effectiveness and productivity**

Confusion exists as to whether OEE is an effectiveness or an efficiency measure. The traditional vision of TPM referred to Overall Equipment Efficiency, while now it is generally recognized as Overall Equipment Effectiveness. The difference is that effectiveness is the actual output over the reference output, while efficiency is the actual input over the reference input. Equipment Efficiency thus refers to the ability to perform well at the lowest overall cost; it is unlinked from output and company goals. The concept of Equipment Effectiveness, instead, relates to the ability to produce repeatedly what is intended to be produced, that is to say, to produce value for the company (see Figure 1).

Productivity is defined as the actual output over the actual input (e.g. number of final products per employee), and both effectiveness and efficiency can influence it. Regarding OEE, in a modern, customer-driven "lean" environment it is more useful to cope with effectiveness.

**Figure 1.** Efficiency versus Effectiveness versus Productivity.

### **4. Formal definition of OEE**

According to the previous remarks, a basic definition of OEE is:

$$\text{OEE} = \frac{\text{Valuable Operating Time}}{\text{Loading Time}} \tag{1}$$

where:

**•** Valuable Operating Time is the net time during which the equipment actually produces an acceptable product;

**•** Loading Time is the actual number of hours that the equipment is expected to work in a specific period (year, month, week, or day).


The formula indicates how much the equipment is doing what it is supposed to do, and it captures the degree of conformance to output requirements. It is clearly a measure of effectiveness.

OEE is not only a metric; it also provides a framework to improve the process. A model for OEE calculation aims to point out each aspect of the process that can be ranked for improvement. To maximize equipment effectiveness it is necessary to bring the equipment to peak operating conditions and then keep it there by eliminating, or at least minimizing, any factor that might diminish its performance. In other words, a model for OEE calculation should be based on the identification of any losses that prevent the equipment from achieving its maximum effectiveness.

The OEE calculation model is then designed to isolate losses that degrade the equipment effectiveness.
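The basic ratio of equation (1) can be sketched in a couple of lines; the shift durations below are illustrative values, not data from the text:

```python
def oee(valuable_operating_time: float, loading_time: float) -> float:
    """Basic OEE (equation 1): share of the loading time during which
    the equipment actually produced acceptable product."""
    return valuable_operating_time / loading_time

# Illustrative shift: 420 min planned work (loading time), of which only
# 300 min yielded acceptable product.
print(f"OEE = {oee(300, 420):.1%}")  # → OEE = 71.4%
```

The gap between the two time buckets is exactly what the loss model in the next sections decomposes.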

between start-up and completely stable throughput, yields products that do not conform to the quality demand, or not completely. They even happen because of an incorrect functioning of the machine or because process parameters are not tuned to standard.

The framework in which we have divided losses into down time, speed and quality losses completely fits with the Six Big Losses model proposed by Nakajima [3], which we summarize in Table 1:

| Category | Big losses |
|---|---|
| DOWNTIME | Breakdown |
| DOWNTIME | Set-up and adjustments |
| SPEED | Idling, minor stoppages |
| SPEED | Reduced speed |
| QUALITY | Quality losses |
| QUALITY | Reduced yield |

**Table 1.** Six Big Losses model proposed by Nakajima [3].

On the basis of the Six Big Losses model, it is possible to understand how the Loading Time decreases down to the Valuable Operating Time and how the effectiveness is compromised. Let's go through Figure 2: planned downtime reduces the Calendar Time to the Loading Time; breakdowns reduce it further to the Operating Time; minor stoppages and reduced speed leave the Net Operating Time; and quality losses finally leave the Valuable Operating Time.

**Figure 2.** Breakdown of the calendar time.

At this point we can define:

$$\text{Availability } (A) = \frac{\text{Operating Time}}{\text{Loading Time}} \tag{2}$$

$$\text{Performance } (P) = \frac{\text{Net Operating Time}}{\text{Operating Time}} \tag{3}$$

### **5. Losses analysis**

Losses are activities that absorb resources without creating value. Losses can be classified by their frequency of occurrence, by their cause, and by their type. The classification by type was developed by Nakajima [3] and is the well-known Six Big Losses framework; the other two classifications are useful for ranking losses correctly.

According to Johnson et al. [12], losses can be chronic or sporadic. Chronic disturbances are usually described as *"small, hidden and complicated"*, while sporadic ones occur quickly and with large deviations from the normal value. The loss frequency combined with the loss severity gives a measure of the damage, which is useful for establishing the order in which the losses have to be removed: losses are ranked and removed on the basis of their seriousness, i.e. their impact on the organization.
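As a small illustration of ranking by frequency times severity, the following Python sketch orders some hypothetical losses by the weekly time they destroy (all names and figures are invented for the example):

```python
# Rank losses by damage = frequency x severity (all data illustrative).
losses = [
    {"name": "stuck packaging", "kind": "chronic",  "per_week": 40,  "minutes_each": 1.5},
    {"name": "motor breakdown", "kind": "sporadic", "per_week": 0.5, "minutes_each": 180},
    {"name": "label misfeed",   "kind": "chronic",  "per_week": 25,  "minutes_each": 2.0},
]
for loss in losses:
    loss["damage"] = loss["per_week"] * loss["minutes_each"]  # lost minutes per week

ranked = sorted(losses, key=lambda l: l["damage"], reverse=True)
for loss in ranked:
    print(f"{loss['name']:16} {loss['kind']:9} {loss['damage']:6.1f} min/week")
```

In this invented data set the rare sporadic breakdown still outweighs the chronic disturbances, so it would be attacked first.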

Dividing losses by their causes, three different causes can be found:

**1.** machine malfunctioning: the equipment or a part of it does not fulfill the demands;

**2.** process: the way the equipment is used during production;

**3.** external: causes of losses that cannot be improved by the maintenance or production team.
External causes such as a shortage of raw materials, a lack of personnel or limited demand do not affect equipment effectiveness. They are of great importance for top management and should be examined carefully, because their reduction can directly increase revenues and profit. However, they are not the responsibility of the production or maintenance team, and so they are not taken into consideration in the OEE metric.

To improve equipment effectiveness, losses due to external causes have to be excluded; the losses caused by machine malfunctioning and process, which the daily organization can influence, can be further divided into:

**• Down time losses:** when the machine should run, but it stands still. The most common down time losses happen when a malfunction arises, when an unplanned maintenance task must be done in addition to the big revisions, or when a set-up/start-up time occurs.

**• Speed losses:** the equipment is running, but not at its maximum designed speed. The most common speed losses happen when the equipment speed decreases without reaching zero. They can depend on a malfunction, on small technical imperfections such as stuck packaging, or on the start-up of the equipment after a maintenance task, a setup or a stop for organizational reasons.

**• Quality losses:** the equipment is producing products that do not fully meet the specified quality requirements. The most common quality losses occur because, in the time between start-up and completely stable throughput, the equipment yields products that do not conform to quality demands, or do so only partly. They also happen because of an incorrect functioning of the machine or because process parameters are not tuned to standard.

The framework in which we have divided losses into down time, speed and quality losses fits completely with the Six Big Losses model proposed by Nakajima [3], which we summarize in Table 1:


| Category | Big losses |
|---|---|
| Down time | Breakdown |
| Down time | Set-up and adjustments |
| Speed | Idling and minor stoppages |
| Speed | Reduced speed |
| Quality | Quality losses |
| Quality | Reduced yield |

**Table 1.** Six Big Losses model proposed by Nakajima [3].


Based on the Six Big Losses model, it is possible to understand how the Loading Time decreases down to the Valuable Operating Time and how effectiveness is compromised. Let's go through Figure 2.


**Figure 2.** Breakdown of the calendar time: planned downtime reduces Calendar Time to Loading Time; breakdowns and set-ups reduce it to Operating Time; idling, minor stoppages and reduced speed reduce it to Net Operating Time; quality losses and reduced yield leave the Valuable Operating Time.

At this point we can define:

$$Availability\ \left(A\right) = \frac{Operating\ Time}{Loading\ Time} \tag{2}$$

$$Performance\ \left(P\right) = \frac{\text{Net Operating Time}}{\text{Operating Time}} \tag{3}$$

$$Quality\ \left(Q\right) = \frac{\text{Valuable Operating Time}}{\text{Net Operating Time}} \tag{4}$$
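To make the three ratios concrete, here is a minimal Python sketch of equations (2) to (4); the time figures are illustrative, not taken from the chapter:

```python
# Time figures in minutes, following the breakdown of Figure 2 (illustrative values).
loading_time = 480             # shift time minus planned downtime
operating_time = 420           # loading time minus breakdowns and set-ups
net_operating_time = 390       # operating time minus minor stoppages and reduced speed
valuable_operating_time = 370  # net operating time minus quality losses

availability = operating_time / loading_time            # Eq. (2)
performance = net_operating_time / operating_time       # Eq. (3)
quality = valuable_operating_time / net_operating_time  # Eq. (4)

oee = availability * performance * quality
print(f"A={availability:.1%}  P={performance:.1%}  Q={quality:.1%}  OEE={oee:.1%}")
```

Note that the intermediate times cancel in the product, so the OEE above equals Valuable Operating Time divided by Loading Time (370/480, about 77.1%).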


Please note that:

$$\text{OEE} = \frac{\text{Valuable Operating Time}}{\text{Loading Time}} \tag{5}$$

and

$$\text{OEE} = \frac{\text{Operating Time}}{\text{Loading Time}} \times \frac{\text{Net Operating Time}}{\text{Operating Time}} \times \frac{\text{Valuable Operating Time}}{\text{Net Operating Time}}\tag{6}$$

finally

$$\text{OEE} = \text{Availability} \times \text{Performance} \times \text{Quality} \tag{7}$$

So, through a bottom-up approach based on the Six Big Losses model, OEE breaks the performance of equipment into three separate and measurable components: Availability, Performance and Quality.

**• Availability:** it is the percentage of time that the equipment is available to run during the total possible Loading Time. Availability is different from Utilization: Availability only includes the time the machine was scheduled, planned or assigned to run, whereas Utilization regards all the hours of the calendar time. Utilization is more effective in capacity planning and in analyzing fixed cost absorption; Availability looks at the equipment itself and focuses more on variable cost absorption. Availability can also be calculated as:

$$\text{Availability} = \frac{\text{Loading Time - Downtime}}{\text{Loading Time}} \tag{8}$$

**• Performance:** it is a measure of how well the machine runs within the Operating Time. Performance can also be calculated as:

$$\text{Performance} = \frac{\text{Actual Output (units)} \times \text{theoretical Cycle Time}}{\text{Operating Time}} \tag{9}$$

**• Quality:** it is a measure of the number of parts that meet specification compared to how many were produced. Quality can also be calculated as:

$$\text{Quality} = \frac{\text{Actual output (units)} - \text{Defect amount (units)}}{\text{Actual output (units)}} \tag{10}$$

After the various factors are taken into account, all the results are expressed as a percentage that can be viewed as a snapshot of the current equipment effectiveness.
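Equations (8) to (10) can be sketched directly from shop-floor counts and downtime records; the numbers below are illustrative and are chosen so that the three factors reproduce the example in Table 2:

```python
# Illustrative shift data (not from the chapter).
loading_time = 480.0           # minutes scheduled to run
downtime = 64.0                # minutes lost to breakdowns and set-ups
operating_time = loading_time - downtime

actual_output = 900            # units produced
defects = 45                   # units out of specification
theoretical_cycle_time = 0.43  # minutes per unit at design speed

availability = (loading_time - downtime) / loading_time                # Eq. (8)
performance = actual_output * theoretical_cycle_time / operating_time  # Eq. (9)
quality = (actual_output - defects) / actual_output                    # Eq. (10)
oee = availability * performance * quality

print(f"A={availability:.1%}  P={performance:.1%}  Q={quality:.1%}  OEE={oee:.1%}")
```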

The value of the OEE is an indication of the size of the technical losses (machine malfunctioning and process) as a whole. The gap between the value of the OEE and 100% indicates the share of technical losses compared to the Loading Time.

The compound effect of Availability, Performance and Quality provides surprising results, as visualized by e.g. Louglin [13].

Let's go through a practical example in Table 2.


| Availability | Performance | Quality | OEE |
|---|---|---|---|
| 86.7% | 93% | 95% | 76.6% |

**Table 2.** Example of OEE calculation.
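The compounding effect in Table 2 is easy to verify step by step; each factor is individually respectable, yet the cumulative product falls quickly:

```python
# Factors from Table 2; watch the cumulative product shrink.
factors = {"Availability": 0.867, "Performance": 0.93, "Quality": 0.95}

oee = 1.0
for name, value in factors.items():
    oee *= value
    print(f"after {name:12} ({value:.1%}): cumulative = {oee:.1%}")
```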


The example in Table 2 illustrates the sensitivity of the OEE measure to low, combined performance values. Consequently, it is practically impossible to reach 100% OEE in an industrial context. Worldwide studies indicate that the average OEE rate in manufacturing plants is 60%. As pointed out by e.g. Bicheno [14], the world-class level of OEE is in the range of 85% to 92% for non-process industry. Clearly, there is room for improvement in most manufacturing plants! The challenge, however, is not to peak at those levels but to exhibit a stable OEE at world-class level [15].

### **6. Attacking the six big losses**

By providing a structured framework based on the Six Big Losses, OEE makes it possible to track underlying issues and root causes. Knowing what the Six Big Losses are and some of the causes that contribute to them, the next step is to focus on ways to monitor and correct them:

**• Breakdown:** eliminating unplanned downtime is critical to improving OEE. Other OEE factors cannot be addressed if the process is down. It is important to know not only how much and when equipment is down, but also to be able to link the lost time to the specific source or reason for the loss. With down time data tabulated, the most common approach is Root Cause Analysis, applied starting with the most severe loss categories.

**• Set-up and adjustments:** tracking setup time is critical to reducing this loss. The most common approach to reducing this time is the Single Minute Exchange of Dies (SMED) program.

**• Minor stoppages and Reduced speed:** minor stoppages and reduced speed are the most difficult of the Six Big Losses to monitor and record. Cycle Time analysis should be utilized to point out these loss types. In most processes, recording data for Cycle Time analysis needs to be automated, since the cycles are so quick that they do not leave adequate time for manual data logging. By comparing all cycles to the theoretical Cycle Time, the losses can be automatically clustered for analysis. It is important to analyze Minor stoppages and Reduced speed separately, because the root causes are typically very different.

**• Quality losses and Reduced yield:** parts that require rework of any kind should be considered rejects. Tracking when rejects occur and the type is critical to point out potential causes, and in many cases patterns will be discovered. Often a Six Sigma program, where a common metric is achieving a defect rate of less than 3.4 defects per million opportunities, is used to focus attention on a goal of "zero defects".
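The cycle-time clustering described above for minor stoppages and reduced speed can be sketched as follows; the theoretical cycle time and the threshold separating a slow cycle from a stoppage are assumptions for the example:

```python
THEORETICAL_CT = 10.0   # seconds per cycle at design speed (assumed)
STOPPAGE_FACTOR = 3.0   # a cycle at least 3x the theoretical CT counts as a minor stoppage

def classify_cycles(cycle_times):
    """Split the time lost in each cycle into reduced-speed and minor-stoppage buckets."""
    losses = {"reduced_speed": 0.0, "minor_stoppage": 0.0}
    for ct in cycle_times:
        excess = ct - THEORETICAL_CT   # seconds lost versus the design speed
        if excess <= 0:
            continue                   # cycle at or above design speed: no loss
        bucket = ("minor_stoppage"
                  if ct >= STOPPAGE_FACTOR * THEORETICAL_CT
                  else "reduced_speed")
        losses[bucket] += excess
    return losses

# Five recorded cycles: one clear stoppage (41 s) and a few slightly slow cycles.
print(classify_cycles([9.8, 10.5, 12.0, 41.0, 10.1]))
```

Keeping the two buckets separate mirrors the advice above: their root causes are typically very different.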

### **7. OEE evolution: TEEP, PEE, OAE, OFE, and OPE**

During the last decades, both practitioners and researchers have advanced the discussion about OEE in many ways. One of the most popular directions has led to modifications and enlargements of the original OEE tool to fit the broader perspective considered important for companies [16]. With the evolution of OEE, different definitions have also come up in the literature and in practice, coupled with changed formulations. Some of these formulations (TEEP and PEE) are still at the equipment level, while the others (OAE, OFE and OPE) extend OEE to the factory level. Let's go through the main features of each formulation.

TEEP stands for **Total Equipment Effectiveness Performance** and it was first proposed by Invancic [17]. TEEP is a performance metric that shows the total performance of equipment based on the total amount of time the equipment was present. While OEE quantifies how well a manufacturing unit performs relative to its designed capacity during the periods when it is scheduled to run, TEEP measures effectiveness against Calendar Time, i.e.: 24 hours per day, 365 days per year.

$$\text{TEEP} = \frac{\text{Valuable Operating Time}}{\text{Calendar Time}} = \text{OEE} \times \frac{\text{Loading Time}}{\text{Calendar Time}} \tag{11}$$

OEE and TEEP are thus two closely related measurements. Typically the equipment is on site, and thus TEEP is a metric that shows how well the equipment is utilized. TEEP is useful for business analysis and important to maximize before spending capital dollars for more capacity.
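A quick Python sketch of equation (11), with an assumed two-shift, 250-working-day loading pattern (both figures are illustrative):

```python
calendar_time = 24 * 365   # hours in a year
loading_time = 16 * 250    # two 8-hour shifts, 250 working days (assumed)
oee = 0.766                # OEE achieved while the equipment is scheduled to run

teep = oee * loading_time / calendar_time   # Eq. (11)
print(f"TEEP = {teep:.1%}")
```

Even a healthy OEE translates into a much lower TEEP here, because more than half of the calendar time is simply not loaded.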

PEE stands for **Production Equipment Efficiency** and it was first proposed by Raouf [18]. The main difference from OEE is that each item is weighted, so Availability, Performance and Quality do not have equal importance, as they do in OEE.

At the level of the factory we found **Overall Factory Effectiveness** (OFE), **Overall Production Effectiveness** (OPE), and **Overall Asset Effectiveness** (OAE) metrics. OFE is the most widespread and well known in literature. It covers the effort to export the OEE tool to the whole factory. The question is what kind of method should be applied to OEE values from all pieces of equipment, to derive the factory level metric. There is no standard method or metrics for the measurement or analysis of OFE [19]. Huang [20] stated that the factory level metric can be computed by synthesizing the subsystem level metrics, capturing their interconnectivity information.

OPE and OAE are extensively implemented in industry under different formulations. They involve a practical approach developed to fit the specific requirements of different industries.

### **8. OEE for the factory**

As mentioned in the previous section, equipment operates in a linked and complex environment, so it is necessary to look beyond the performance of individual tools towards the performance of the whole factory. According to Scott and Pisa [21], the answer to this requirement is the OFE metric, which is about combining activities and relationships between different parts of the equipment, and integrating information, decisions and actions across many independent systems and subsystems. The problem is that a specific and unique method to calculate OFE does not exist. There are many methodologies and approaches, with different levels of complexity, different information requirements and different shortcomings.

A first common-sense approach is to measure OEE at the end of the line or process. Following this approach we can see OEE as

$$\text{OEE} = \frac{\text{(Actual output - Defect amount)} \times \text{theoretical Cycle Time}}{\text{Loading Time}} \tag{12}$$

and


$$\text{OEE} = \frac{\text{Effective output (units)}}{\text{theoretical output (units)}} \tag{13}$$

Here OEE measures effectiveness in terms of output, which is easy to obtain at the factory level too. So OFE becomes:

$$\text{OFE} = \frac{\text{Effective output } \text{ from the factory (units)}}{\text{Theoretical output } \text{ from the factory (units)}} \tag{14}$$

This approach is not always ideal. The complexity of OEE measurement arises where single or multiple sub-cells are constrained by an upstream or downstream operation or by a bottleneck operation. The flow is always restricted or limited by a bottleneck, just as a chain is only as strong as its weakest link. So, according to Goldratt [22], we can measure OEE in real time at the bottleneck: any variation at the bottleneck correlates directly to upstream and downstream process performance. Huang et al. [23] proposed a manufacturing system modeling approach which captures the equipment interconnectivity information. It identifies four unique subsystems (series, parallel, assembly and expansion) as a basis for modeling a manufacturing system, as shown in Figure 3.
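A minimal sketch of the output-based view of equations (13) and (14); treating the slowest station of a series line as the factory's theoretical output is an assumption in the spirit of the bottleneck argument above, and all figures are invented:

```python
def effectiveness(effective_output: float, theoretical_output: float) -> float:
    """Output-based effectiveness ratio, as in Eq. (13)/(14)."""
    return effective_output / theoretical_output

# Three stations in series (illustrative design capacities, units per shift).
station_capacity = [1100, 1000, 1050]   # the 1000-unit station is the bottleneck
good_units_out = 880                    # effective output measured at the end of the line

# OFE per Eq. (14): end-of-line good output over the factory's theoretical output,
# here assumed to be set by the bottleneck station.
ofe = effectiveness(good_units_out, min(station_capacity))
print(f"OFE = {ofe:.1%}")
```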


**Figure 3.** Types of manufacturing subsystems.

Muthiah et al. [24] developed the approach to derive OTE metrics for these subsystems based on a "system constraint" approach that automatically takes into account equipment idle time.

Other methods are based on modeling the manufacturing systems. Some of these notable approaches are queuing analysis methods [25], Markovian methods [26], Petri net based methods [27], integrated computer-aided manufacturing definition (IDEF) method [28], and structured analysis and design technique (SADT) [29]. In addition to them there are several commercial tools that have been reviewed and categorized by Muthiah and Huang [30].

### **9. What is OEE for?**

OEE provides simple and consolidated formulas to measure the effectiveness of a piece of equipment or of a production system. Moreover, Dal et al. [31] point out that it can also be used as an indicator of process improvement activities, since OEE is directly linked to the losses; OEE can even be used to compare performance across the factory, highlighting poor line performance, or to quantify improvements made [31]. Improvement can be pursued by:

**•** Backtracking to determine what loss reduces effectiveness.

**•** Identifying bottlenecks as not only the slowest machine, but as the machine that is both slower and less effective.
All these goals need an approach based on the Deming Cycle [32], an improvement cycle applied to increase the plant OEE rating until the target goals and world-class manufacturing status are achieved (Figure 4).

**Figure 4.** Improvement approach to increase the plant OEE.

This approach requires a large amount of data, which can be provided in either a static or a dynamic way. In the first case, data are collected only at the end of a certain period and used in the Diagnosis & Analysis stage.

There is another way to use OEE: knowing exactly what is happening in real time through continuous monitoring, so as to immediately identify possible problems and react with appropriate corrective actions. Information on OEE items (maintenance and operational equipment effectiveness, product data accuracy, uptimes, utilization, bottlenecks, yield and scrap metrics, etc.) is really valuable in environments where making decisions in near real time is critical. This second approach then requires a completely automated data collection system, and moreover the Diagnosis & Analysis stage should be automatic.

Generally there are many companies in which manual data collection is convenient. In other companies, where each operator is responsible for a number of processing machines, timely and accurate data collection can be very challenging; a key goal should be fast and efficient data collection, with data put to use throughout the day and in real time. A more desirable approach would be realized if each machine could report data by itself. An automatic OEE data recording implies:

**•** better accuracy;

**•** less labor;

**•** traceability;

**•** integrated reporting and analysis;

**•** immediate corrective action;

**•** motivation for operators.


In any case, the implementation of data collection for OEE has limited value if it is not integrated into a continuous work procedure as part of the improvement initiative. Daily meetings and sharing information both cross-functionally and bottom-up in the organization hierarchy become a prerequisite. It is also useful to integrate OEE into an automated management system: OEE can be applied when using a total manufacturing information system providing the detailed historical information that allows thorough diagnoses and improvement plans.

### **11. Automating OEE and integration of OEE into automated management system**

Automating OEE gives a company the ability to collect and classify data from the shop floor into meaningful information that can help managers understand the root causes of production inefficiency, thereby giving greater visibility to make more informed decisions on process improvement. An automated OEE system addresses the three primary functions of OEE:

**• Acquisition:** it concerns data collection which, as discussed above, will be completely automatic.

**• Analysis:** it usually provides algorithms to calculate OEE and other related items. Moreover, it is often able to support downtime classification via reason trees and other technical analyses. The more sophisticated the package, the more analysis equipment is available.

**• Visualization:** OEE metrics are available through reports, or they can be displayed via a software interface directly to the operator; more importantly, this gives the summary signals.

There is a lot of commercial software providing automated OEE systems, but it is even possible to integrate OEE into general tools such as ERP ones. They usually offer a wide range of


In the next sections we will take into consideration different strategies to acquire data and we will illustrate main automated tool for the OEE integration.

### **10. Data collection strategies**

The OEE calculations should be based on correct input parameters from the production system as reported by Ericsson [33]. Data acquisition strategies range from very manual to very automated. The manual data collection method consists of a paper template, where the operators fill in the cause and duration of a breakdown and provide comments about minor stoppages and speed losses. It is a low-tech approach. On the contrary a high-tech approach runs through an automatic OEE calculation system that is governed by sensors connected to the equipment, automatically registering the start time and duration of a stoppage and prompting the operator to provide the system with information about the downtime cause. An automatic approach usually provides opportunities to set up lists of downtime causes, scheduling the available operating time and making an automatic OEE calculation for a time period. A variety of reports of production performance and visualization of the performance results are even possible to retrieve from the system.
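The automatic calculation for a time period described above can be sketched in a few lines. This is only an illustration, not the chapter's own system: the `Stoppage` record, the field names and the shift figures are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Stoppage:
    start_min: float      # minutes from the start of planned operating time
    duration_min: float
    cause: str            # picked by the operator from a predefined cause list

def oee(planned_time_min, stoppages, total_count, good_count, ideal_cycle_time_min):
    """Classic OEE = Availability x Performance x Quality over one period."""
    downtime = sum(s.duration_min for s in stoppages)
    operating_time = planned_time_min - downtime
    availability = operating_time / planned_time_min
    performance = (ideal_cycle_time_min * total_count) / operating_time
    quality = good_count / total_count
    return availability * performance * quality

# One shift: 480 min planned, two registered stoppages, 400 pieces (380 good)
shift_stops = [Stoppage(60, 30, "changeover"), Stoppage(300, 18, "breakdown")]
value = oee(480, shift_stops, total_count=400, good_count=380,
            ideal_cycle_time_min=0.9)
print(f"OEE = {value:.3f}")
```

With these figures availability is 0.90, performance 0.83 and quality 0.95, so the three factors already point at where the losses are before any drill-down.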

The two approaches have to be compared in terms of both opportunity and cost, in a quantitative as well as a qualitative way. Regarding cost, the main figures in the case of the manual approach derive from the hourly wage cost of operators multiplied by the time spent registering data on paper templates, feeding them into a computer system, generating reports and performing OEE calculations. In the case of the automatic approach, cost concerns a yearly license for an automatic OEE calculation system together with an investment cost for hardware. The introduction of both the manual and automatic data collection methods must be preceded and then accompanied by training of the operators on OEE as a performance measure and on the different parameters affecting the OEE outcome. The purpose of training the operators is twofold:

**1.** the quality of the input data is likely to increase in alignment with an increase in the competence of the staff;

**2.** the involvement of the operators in identifying performance loss factors is likely to create better engagement for providing the system with accurate information.

Another issue to overcome is the balance between the effort of providing adequate information and the level of detail needed in the improvement process. Although a critical success factor in an improvement project driven by OEE is the retrieval of detailed information about production losses, not all improvement projects require such a high, and really expensive, level of data precision.
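The cost comparison above is simple arithmetic and can be made explicit. The sketch below is a back-of-envelope model under stated assumptions; every figure (wages, minutes per shift, license and hardware costs, depreciation period) is an invented placeholder, not data from the chapter.

```python
def manual_yearly_cost(hourly_wage, minutes_per_shift, shifts_per_year):
    """Operator time spent registering data on templates, feeding it into a
    computer and producing reports, valued at the hourly wage."""
    return hourly_wage * (minutes_per_shift / 60.0) * shifts_per_year

def automatic_yearly_cost(license_per_year, hardware_investment, depreciation_years):
    """Yearly software license plus hardware investment spread linearly
    over its useful life."""
    return license_per_year + hardware_investment / depreciation_years

# Purely illustrative figures
manual = manual_yearly_cost(hourly_wage=25.0, minutes_per_shift=20,
                            shifts_per_year=450)
automatic = automatic_yearly_cost(license_per_year=3000.0,
                                  hardware_investment=8000.0,
                                  depreciation_years=5)
print(f"manual: {manual:.0f} per year, automatic: {automatic:.0f} per year")
```

Plugging in plant-specific figures turns the qualitative "manual versus automatic" discussion into a break-even comparison, to be weighed together with the qualitative benefits listed below.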

Generally there are many companies in which manual data collection is convenient. In other companies, where each operator is responsible for a number of processing machines, timely and accurate data collection can be very challenging; a key goal should then be fast and efficient data collection, with data put to use throughout the day and in real time. A more desirable approach would be realized if each machine could report its data by itself.

An automatic OEE data recording implies:

**•** better accuracy;

**•** less labor;

**•** immediate corrective action;

**•** motivation for operators;

**•** integrated reporting and analysis;

**•** traceability.

Such real-time information (operational equipment effectiveness, product data accuracy, uptimes, utilization, bottlenecks, yield and scrap metrics, etc.) is really valuable in environments where making decisions in near real-time is critical. This second approach then requires a completely automated data collection system and, moreover, an automatic Diagnosis & Analysis stage.
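The sensor-driven registration of stoppages mentioned earlier amounts to gap detection on a stream of machine cycle signals. The following sketch shows one possible shape of that logic; the function name, the pulse timestamps and the threshold are assumptions for illustration, and the operator prompt is reduced to filling in a `cause` field.

```python
def detect_stoppages(pulse_times, min_gap):
    """Turn a stream of machine cycle timestamps (in seconds) into stoppage
    records: any gap between consecutive pulses longer than `min_gap`
    becomes a (start, duration) pair the operator is asked to classify."""
    stoppages = []
    for prev, cur in zip(pulse_times, pulse_times[1:]):
        gap = cur - prev
        if gap > min_gap:
            stoppages.append({"start": prev, "duration": gap, "cause": None})
    return stoppages

# A pulse roughly every 10 s, with two long interruptions
pulses = [0, 10, 20, 95, 105, 115, 240, 250]
events = detect_stoppages(pulses, min_gap=30)
for e in events:
    e["cause"] = "unclassified"  # in a real system the operator picks a cause
print(events)
```

The start time and duration are captured automatically; only the downtime cause still needs human input, which is exactly the division of labor the high-tech approach described above relies on.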



In any case, the implementation of data collection for OEE has limited value if it is not integrated into a continuous work procedure as part of the improvement initiative. Daily meetings and information sharing, both cross-functionally and bottom-up in the organization hierarchy, become a prerequisite. It is likewise useful to integrate OEE into an automated management system. OEE can be applied when using a total manufacturing information system that provides the detailed historical information allowing thorough diagnoses and improvement plans but, more importantly, gives the summary signals.

### **11. Automating OEE and integration of OEE into automated management system**

Automating OEE gives a company the ability to collect and classify data from the shop floor into meaningful information that can help managers understand the root causes of production inefficiency, thereby giving greater visibility to make more informed decisions on process improvement. An automated OEE system addresses the three primary functions of OEE:

**• Acquisition:** it concerns data collection that, as discussed above, will be completely automatic.

**• Analysis:** it usually provides algorithms to calculate OEE and other related items. Moreover, it is often able to support downtime classification via reason trees and other technical analyses. The more sophisticated the package, the more analysis equipment is available.

**• Visualization:** OEE metrics are available through reports or they can even be displayed via a software interface directly to the operator.

There is a lot of commercial software that provides an automated OEE system, but it is also possible to integrate OEE into general tools such as ERP systems. These usually offer a wide range of capabilities: they are able to gather and coordinate the operations of a plant and provide measurable information. The advantage is that the databases are completely integrated, so coordination among the different functions involved is better. For example, manufacturing can see the upcoming planned maintenance and maintenance can see the production schedules. Automated management systems are naturally and inherently suited to providing feasible decision support on plant profitability and establish a foundation for addressing other manufacturing challenges in the future.
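The reason trees mentioned under the Analysis function can be represented very simply, for example as a nested mapping in which each level refines the classification. The tree contents and function below are invented for illustration; commercial packages implement this far more richly.

```python
# A small illustrative downtime reason tree: each level refines the cause.
REASON_TREE = {
    "planned": {"changeover": {}, "maintenance": {}},
    "unplanned": {
        "equipment failure": {"mechanical": {}, "electrical": {}},
        "material shortage": {},
        "operator unavailable": {},
    },
}

def classify(path, tree=REASON_TREE):
    """Validate a drill-down path against the reason tree and return the
    normalized classification label, or raise if a step is unknown."""
    node = tree
    for step in path:
        if step not in node:
            raise ValueError(f"unknown reason step: {step!r}")
        node = node[step]
    return " / ".join(path)

label = classify(["unplanned", "equipment failure", "electrical"])
print(label)
```

Because every downtime event is forced onto a path in the same tree, events become directly comparable across machines and plants, which is what makes the later Pareto-style analyses possible.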


### **12. OEE applications**

At the end of the chapter, industrial examples of OEE application are presented to show how different industries with different goals can all benefit from the OEE metric.

### **12.1. Case study 1**

Sigma/Q [34] is a leading manufacturer of quality packaging in North and Central America serving various markets across the globe. The company's primary goal was to improve plant performance and reduce operational costs.

The solution was to build a foundation for continuous improvement through OEE. The first step was to automate the data collection and analysis processes and introduce a real-time strategy. But the real key success factor was operator involvement in the performance improvement process. The company identified key contributors to reward them appropriately during performance reviews.

As a result, OEE increased by 40%; variability in run speed, due to frequent starts and stops in the manufacturing process, was dramatically reduced; and run speed increased by 23%. Last but not least, operators aspired to achieve higher levels of operational excellence, promoting a culture of continuous improvement across the various plants.

### **12.2. Case study 2**

A global pharmaceutical company [35] wanted to understand whether OEE could be used as an ongoing improvement metric. It chose an off-shore plant and, as pilot, a packaging line running a full 144-hour weekly cycle and handling more than 90 products, because this allowed the collection of data over both shifts. The line also had counters on most unit operations that could be easily utilized by the line operators for the collection of quality data. Twelve weeks of data were collected with operator buy-in. The test showed that many of the current metrics were too high-level to extract the causes of issues and therefore to target improvements at them. The more than 90 products routed through the test line were therefore divided into six groups based on the highest pack rates. The continuous real-time monitoring was able to account for 90% of available run time with little impact on running the line.

### **12.3. Case study 3**


A company providing a broad range of services to leading original equipment manufacturers in the information technology and communications industries [36] obtained three new plants from a major contract electronics manufacturer.

Each plant had distinct ways of identifying and determining downtime, as well as their own preferred techniques and practices. The goals were then:

**•** standardized downtime reporting among plants;

**•** a common metric to measure productivity across plants.

The manufacturer's issues were complicated by the fact that it makes about 30,000 different products out of 300,000 different parts, and adds an average of 2,000 new products into its manufacturing mix every month. With this number of products, frequent changeovers are necessary. It also becomes vital to have a scientific method to be able to compare all the different lines. The company was searching for a common framework in order to compare its three newest plants. The solution was the identification of factors leading to assembly line downtime. Companies utilizing this information can make comparisons across plants and assembly lines to improve effectiveness. The results were:

**•** an OEE increase of 45%;

**•** 25% more downtime identified that was not found with previous methods;

**•** reduced costs.

### **12.4. Case study 4**

The Whirlpool Corporation's Findlay Division manufactures dishwashers for many brands in the world [37]. The demand for product is at an all-time high. The goal was then to get more out of the facility and its equipment without making huge capital investments; more specifically, how could the maintenance department support the needs of manufacturing to achieve the company goals?

To make these improvements, the Division used OEE as a measure of their current equipment efficiency. As the company started tracking the OEE ratings of individual pieces of equipment, it became apparent that there was room for improvement. The combination of fundamental maintenance practices, such as Root Cause Failure Analysis and a preventive and predictive maintenance system, along with very strong support from Division leadership, enabled the Findlay Division to get its Total Productive Maintenance program off the ground. Again, "it was the people that made this change possible" (Jim Dray, TPM Facilitator). The Division has been able to increase production by 21%, without any significant capital costs.

### **13. Conclusion**

There are many challenges associated with the implementation of OEE for monitoring and managing production performance, for example:

**•** how it is defined, interpreted and compared;

**•** how the OEE data are collected and analyzed;

**•** how it is monitored and by whom;

**•** how it aligns with the overall production strategy;

**•** how it could be utilized for sustainability purposes.

The OEE measure is an excellent KPI for use on both strategic and operational levels, if it is used correctly. When an organization has people with knowledge and experience of the typical shortcomings of OEE and its common implementation challenges, the probability of achieving the intended benefits of OEE will certainly increase. Based on using OEE as an improvement driver at the case study company, some success factors have been identified:

**•** A standard definition of OEE must be clearly defined and communicated at all levels within the organization since this is the foundation for its utilization. It is especially important to determine how the ideal cycle time and planned and unplanned downtime should be interpreted.

**•** Involving the operators in the process of defining production loss causes and configuring the templates and lists to be used for monitoring promotes operator commitment, understanding of the procedure and awareness of the frequency of sporadic and chronic disturbances.

**•** Driving the OEE implementation as a project with a predefined organization, a structured working procedure promoting cross-functional and shop floor involvement, and practical guidance on what activities to execute and in what order, implies resource allocation that forces management attention and puts OEE on the agenda.

**•** Viewing and communicating OEE as a driver for improvements rather than a management measure for follow-up and control of performance (although this is also the case) is one of the cornerstones for a successful OEE implementation.

**•** Active involvement of the support functions, especially production engineering and maintenance, is required, otherwise the level of improvements to increase OEE will not be enough and the speed of change will consequently be too low.

**•** Separating improvement actions into those directly having an impact on process stability, i.e. OEE, from those with indirect impact is necessary, especially in the initial implementation phase, to show quick results.

**•** Including reporting OEE and prioritized daily actions in the routines of daily follow-up meetings (from team level to department/site level) is an excellent way to integrate OEE as a driver for improvements in the operations management system.

**•** Results should be communicated, e.g. by graphical visualization of the OEE improvements on the boards. Visualizing OEE and process output together is illustrative and motivating.

**•** Including production performance in the company's overall production strategy and managing this with a continuous follow-up of OEE as a KPI on different consolidation levels is the optimal driver for efficient management. When top management attention is continuously given to the process of achieving stable production processes, the possibilities of reaching good results certainly increase.

Moreover, it is remarkable that setting high OEE goals in an environment with excess capacity is of less value, since it is not possible to utilize the equipment full time. The OEE measure is less suitable as a target KPI, since OEE only measures efficiency during the time the equipment is planned to be operating, while equipment and personnel drive manufacturing costs both when they are in operation and during downtime.

The purpose of measuring OEE can be questioned in the light of the financial crisis. Some authors have reported the need for further research on linking OEE with financial measures. Dal et al. [31] assert that "there would appear to be a useful line of research in exploring the link between OEE and the popular business models such as balanced scorecard". Muchiri et al. [16] suggest: "Further research should explore the dynamics of translating equipment effectiveness or loss of effectiveness in terms of cost." The authors agree with these statements: there is clearly a missing link between OEE and manufacturing cost. Jonsson et al. [38] present a manufacturing cost model linking production performance with economic parameters. The utilization of this manufacturing cost model in developing industrially applicable productivity KPIs will be elaborated on in future research.

### **Author details**

Raffaele Iannone1 and Maria Elena Nenni2\*

1 Department of Industrial Engineering, University of Salerno, Italy

2 Department of Industrial Engineering, University of Naples Federico II, Italy

\*Address all correspondence to: menenni@unina.it


### **References**


[1] Fleischer, J, Weismann, U, & Niggeschmidt, S. Calculation and Optimisation Model for Costs and Effects of Availability Relevant Service Elements: proceedings of the CIRP International Conference on Life Cycle Engineering, LCE2006, 31 May-2 June 2006, Leuven, Belgium.

[2] Huang, S. H, Dismukes, J. P, Mousalam, A, Razzak, R. B, & Robinson, D. E. Manufacturing productivity improvement using effectiveness metrics and simulation analysis. International Journal of Production Research (2003), 41(3), 513-527.

[3] Nakajima, S. Introduction to TPM: Total Productive Maintenance. Productivity Press; (1988).

[4] Womack, J. P, Jones, D. T, & Roos, D. The Machine That Changed the World. Rawson Associates; (1990).

[5] Harry, M. J. Six Sigma: a breakthrough strategy for profitability. Quality Progress (1998).

[6] Todd, J. World-class Manufacturing. McGraw-Hill; (1995).

[7] Nakajima, S. TPM Development Program. Productivity Press; (1989).

[8] E79-98 Guideline for the Definition and Measurements of Overall Equipment Effectiveness: proceedings of the Advanced Semiconductor Manufacturing Conference and Workshop, IEEE/SEMI 1998, 23-25 September 1998, Boston, MA.

[9] Koch, A. OEE for Operators: Overall Equipment Effectiveness. Productivity Press; (1999).

[10] Hansen, B. Overall Equipment Effectiveness. Industrial Press; (2001).

[11] Stamatis, D. H. The OEE Primer. Understanding Overall Equipment Effectiveness, Reliability, and Maintainability. Taylor & Francis; (2010).

[12] Jonsson, P, & Lesshammar, M. Evaluation and improvement of manufacturing performance measurement systems- the role of OEE. International Journal of Operations & Production Management (1999), 19(1), 55-78.

[13] Louglin, S. A holistic approach to Overall Equipment Effectiveness. IEE Computing and Control Engineering Journal (2003), 14(6), 37-42.

[14] Bicheno, J. The New Lean Toolbox towards fast flexible flow. Moreton Press; (2004).

[15] Andersson, C, & Bellgran, M. Managing Production Performance with Overall Equipment Efficiency (OEE)- Implementation Issues and Common Pitfalls. http://msep.engr.wisc.edu/phocadownload/cirp44\_managing%20production%20performance.pdf (accessed 20 September 2012).

[16] Muchiri, P, & Pintelon, L. Performance measurement using overall equipment effectiveness (OEE): Literature review and practical application discussion. International Journal of Production Research (2008), 46(13), 1-45.

[17] Ivancic, I. Development of Maintenance in Modern Production: proceedings of the 14th European Maintenance Conference, EUROMAINTENANCE '98, 5-7 October 1998, Dubrovnik, Hrvatska.

[18] Raouf, A. Improving Capital Productivity Through Maintenance. International Journal of Operations & Production Management (1994), 14(7), 44-52.

[19] Oechsner, R. From OEE to OFE. Materials Science in Semiconductor Processing (2003).

[20] Huang, S. H, & Keskar, H. Comprehensive and configurable metrics for supplier selection. International Journal of Production Economics (2007).

[21] Scott, D, & Pisa, R. Can Overall Factory Effectiveness Prolong Moore's Law? Solid State Technology (1998).

[22] Goldratt, E. M, & Cox, J. The Goal: A Process of Ongoing Improvement. North River Press; (1992).

[23] Huang, S. H, Dimukes, J. P, Shi, J, Su, Q, Razzak, M. A, & Robinson, D. E. Manufacturing system modeling for productivity improvement. Journal of Manufacturing Systems (2002), 21, 249.

[24] Muthiah, K. M. N, Huang, S. H, & Mahadevan, S. Automating factory performance diagnostics using overall throughput effectiveness (OTE) metric. International Journal of Advanced Manufacturing Technology (2008).

[25] Bose, S. J. An Introduction to Queueing Systems. Kluwer/Plenum Publishers; (2002).

[26] Meyn, S. P, & Tweedie, R. L. Markov Chains and Stochastic Stability. Springer-Verlag; (1993).

[27] David, R, & Alla, H. Discrete, continuous, and hybrid Petri Nets. Springer; (2005).

[28] Cheng-leong, A, Pheng, K. L, & Leng, G. R. K. IDEF: a comprehensive modelling methodology for the development of manufacturing enterprise systems. International Journal of Production Research (1999), 37, 3839.

[29] Santarek, K, & Buseif, I. M. Modeling and design of flexible manufacturing systems using SADT and Petri nets tools. Journal of Material Process Technology (1998), 76, 212.

[30] Muthiah, K. M. N, & Huang, S. H. A review of literature on manufacturing systems productivity measurement and improvement. International Journal of Industrial Engineering (2006).

[31] Dal, B, Tugwell, P, & Greatbanks, R. Overall equipment effectiveness as a measure of operational improvement. A practical analysis. International Journal of Operations & Production Management (2000).

[32] Deming, W. E. Out of the Crisis. MIT Center for Advanced Engineering Study; (1986).

[33] Ericsson, J. Disruption Analysis- An Important Tool in Lean Production. PhD thesis. Department of Production and Materials Engineering, Lund University, Sweden; (1997).

[34] http://www.neustro.com/oeedocs/Sigma\_Q\_Neustro.pdf (accessed 3 October 2012).

[35] http://www.informance.com/download.aspx?id=pharma\_casestudy.pdf (accessed 12 October 2012).

[36] http://www.informance.com/download.aspx?id=CASE-STUDY\_HIGH-TECH.pdf (accessed 20 October 2012).

[37] http://www.leanexpertise.com/TPMONLINE/articles\_on\_total\_productive\_maintenance/tpm/whirpoolcase.htm (accessed 12 October 2012).

[38] Jonsson, M, Andersson, C, & Stahl, J. E. A general economic model for manufacturing cost simulation: proceedings of the 41st CIRP Conference on Manufacturing Systems, 26-28 May 2008, Tokyo.

> © 2013 Cesarotti et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **Chapter 3**

### **Using Overall Equipment Effectiveness for Manufacturing System Design**

Vittorio Cesarotti, Alessio Giuiusa and Vito Introna

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56089

### **1. Introduction**


Different metrics for measuring and analyzing the productivity of manufacturing systems have been studied for several decades. The traditional productivity metrics were *throughput* and *utilization rate*, which measure only part of the performance of manufacturing equipment and were not very helpful for *"identifying the problems and underlying improvements needed to increase productivity"* [1].

In recent years, several factors have raised interest in analyzing the phenomena underlying productive performance parameters such as capacity, production throughput, utilization, saturation, availability and quality.

This rising interest has highlighted the need for more rigorously defined and acknowledged productivity metrics that take into account a synthetic but important set of factors (availability, performance and quality) [1]. The most relevant causes identified in the literature are:

**•** The growing attention devoted by management to cost-reduction approaches [2] [3];

**•** The interest in successful Eastern production approaches, like *Total Productive Maintenance* [4], *World Class Manufacturing* [5] or *Lean production* [6];

**•** The need to go beyond the limits of traditional business management control systems [7].


For these reasons, a variety of new performance concepts have been developed. The total productive maintenance (TPM) concept, launched by Seiichi Nakajima [4] in the 1980s, provides probably the most acknowledged and widespread quantitative metric for measuring the productivity of any production equipment in a factory: the *Overall Equipment Effectiveness* (OEE). OEE is an appropriate measure for manufacturing organizations and has been used broadly in manufacturing industry, typically to monitor and control the performance (time losses) of an equipment or work station within a production system [8]. OEE quantifies and assigns all the time losses that affect an equipment during production to three standard categories. Being standard and widely acknowledged, OEE has become a powerful tool for benchmarking and characterizing the performance of production systems, as well as the starting point for several analysis techniques, continuous improvement efforts and research [9] [10]. Despite its diffusion and relevance, OEE has limitations. Its focus is on the single equipment, yet the performance of a single equipment in a production system is generally influenced by the performance of the other equipment to which it is interconnected. The propagation of time losses from one station to another may widely affect the performance of a single equipment. Since OEE measures the performance of the equipment within a specific system, a low OEE value for a given equipment can depend on poor performance of the equipment itself and/or on time losses propagated from the other interconnected equipments of the system.


This issue has been widely investigated in the literature through the introduction of a new metric, the Overall Throughput Effectiveness (OTE), which considers the production system as a whole. OTE embraces the performance losses of a production system due both to the equipments and to their interactions.
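To see why interconnection matters, consider a deliberately crude sketch: two unbuffered serial stations, each independently "up" with a given availability at every time step. The line produces only when both are up, so the observed line availability falls toward the product of the two stations' availabilities. This toy model and its figures are our own illustration, not the OTE formulation from the literature:

```python
import random

def line_availability(avail_a, avail_b, steps=100_000, seed=42):
    """Fraction of time steps in which an unbuffered two-station line can produce."""
    rng = random.Random(seed)
    up = sum(1 for _ in range(steps)
             if rng.random() < avail_a and rng.random() < avail_b)
    return up / steps

# Two stations that each stand at 90% availability yield a line near 81%:
# each station "sees" losses propagated from the other.
```

The same mechanism, compounded over many stations, is what OTE captures and what a single-equipment OEE cannot.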

Process designers usually need to identify the number of equipments necessary to realize each activity of the production process, considering the interactions and the consequent time losses a priori. Hence, for a proper design of the system, we believe that OEE provides the designer with better information on each equipment than OTE does. In this chapter we will show how OEE can be used to carry out a correct equipment sizing and an effective production system design, taking into account both equipment time losses and their propagation throughout the whole production system.

In the first paragraph we will show the approach that a process designer should follow when designing a new production system from scratch.

In the second paragraph we will investigate the typical time losses that affect a production system even though they are independent of the production system itself.

In the third part we will define all the internal time losses that need to be considered when assessing the OEE, along with a set of critical factors related to OEE assessment, such as buffer sizing and the choice of the plant layout.

In the fourth paragraph we will show and quantify how the time losses of a single equipment affect the whole system and vice versa.

Finally, we will show through simulation some real cases in which a process design has been fully completed, considering both equipment time losses and their propagation.

### **2. Manufacturing system design: Establish the number of production machines**

Each process designer, when starting the design of a new production system, must ensure that the number of equipments necessary to carry out a given process activity (e.g. metal milling) is sufficient to realize the required volume. At the same time, because of the high investment costs, the designer must generally ensure that the minimum number of equipments is bought. Clearly, performance inefficiencies and their propagation become critical when the purchase of an extra (set of) equipment(s) is required to offset the propagation of time losses. From a cost perspective, the process designer is generally requested to assure that the number of requested equipments is effectively the minimum possible for the requested volume. Any unnecessary over-sizing results in an extra investment cost for the company, compromising its economic performance.

Typically, the general equation to assess the number of equipments *n<sub>i</sub>* needed to process a demand of products (*D*) within a total calendar time *C<sub>t</sub>* (usually one year) can be written as follows (1):

$$n_i = \operatorname{int}\left(\frac{D \cdot ct_i}{C_t \cdot \vartheta \cdot \eta_i}\right) + 1 \tag{1}$$

Where:

**•** *D* is the number of products that must be produced;

**•** *C<sub>t</sub>* is the number of hours (or minutes) in one year;

**•** *ct<sub>i</sub>* is the theoretical cycle time for the equipment *i* to process a piece of product;

**•** ϑ is a coefficient that includes all the external time losses that affect a production system, precluding production;

**•** *η<sub>i</sub>* is the efficiency of the equipment *i* within the system.
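As a quick numerical sketch, equation (1) can be evaluated in a few lines of Python; all figures below are invented for illustration:

```python
def machines_needed(demand, cycle_time_h, calendar_time_h, theta, eta):
    """Equation (1): integer part of D*ct_i / (C_t * theta * eta_i), plus one."""
    ratio = (demand * cycle_time_h) / (calendar_time_h * theta * eta)
    return int(ratio) + 1

# Hypothetical line: 180,000 pieces/year, 0.05 h/piece, 8,760 calendar hours,
# theta = 0.60 (external losses), equipment efficiency 0.85.
print(machines_needed(180_000, 0.05, 8_760, 0.60, 0.85))  # -> 3
```

Note how sensitive the result is to ϑ and η<sub>i</sub>: underestimating either can push the integer part across a threshold and force the purchase of an extra machine.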



It is therefore possible to define the loading time *L<sub>t</sub>* as the percentage of the total calendar time *C<sub>t</sub>* that is actually scheduled for operation (2):

$$L_t = C_t \cdot \vartheta \tag{2}$$

Equation (1) shows that the process designer must consider in his/her analysis three parameters that are unknown a priori, which dramatically influence the sizing of the production system and play a key role in designing it to realize the desired throughput. These parameters affect the total time available for production and the real time each equipment requires to realize a piece [9]; they are, respectively:

**•** The external time losses, which are considered in the analysis through ϑ;

**•** The theoretical cycle time, which depends upon the selected equipment(s);

**•** The efficiency of the equipment, which depends upon the selected equipments and their interactions, in accordance with the specific design.


This list highlights the complexity implicitly involved in a process design; several forecasts and assumptions may be required. In this sense, it is good practice to ensure that the ratio in equation (3) is always respected for each equipment:

$$\frac{\left(\frac{D \cdot ct_i}{L_t \cdot \eta_i}\right)}{n_i} < 1 \tag{3}$$
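The feasibility check in (3) is easy to automate. A minimal sketch, with hypothetical function names and figures carried over from the sizing example:

```python
def sizing_is_safe(demand, cycle_time_h, loading_time_h, eta, n_machines):
    """Equation (3): required machine-equivalents over installed machines stays below 1."""
    required = (demand * cycle_time_h) / (loading_time_h * eta)
    return required / n_machines < 1

# With L_t = 5,256 h the demand requires about 2.01 machine-equivalents:
print(sizing_is_safe(180_000, 0.05, 5_256, 0.85, 3))  # -> True
print(sizing_is_safe(180_000, 0.05, 5_256, 0.85, 2))  # -> False
```

The margin between the ratio and 1 is exactly the slack available to absorb forecast error, as discussed next.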


As a good practice, keeping the ratio in (3) well below 1 makes it possible to absorb, among other things, the variability and uncertainty implicitly embedded in the demand forecast.

In the next paragraph we will analyze the External time losses that must be considered during the design.

### **3. External time losses**

### **3.1. Background**

For the design of a production system, several time losses of different nature need to be considered. The literature offers plenty of classifications in this sense, although they may diverge from one another in parameters, number, categorization, level of detail, etc. [11] [12]. Usually each classification is tailored to a set of sensible drivers, such as data availability, expected results, etc. [13].

One relevant classification of both external and internal time losses is provided by Grando et al. [14]. Starting from this classification and focusing on external time losses only, we briefly introduce a description of common time losses in Operations Management, highlighting which are most relevant and which are negligible, under certain hypotheses, for the design of a production system (Table 1).

The categories Lt1 and Lt2 neither affect the performance of a single equipment nor influence the propagation of time losses throughout the production system.

Still, it is important to notice that some causes, even though labeled as external, are complex to assess during the design. Although these causes are external and well known by the operations manager, the implicit complexity in assessing them means they are detected, through the OEE, only when the production system is working, with consequences on OEE values. For example, the lack of material feeding a production line does not depend on the OEE of the specific station/equipment; nevertheless, when a lack of material occurs, a station cannot produce, with consequences on equipment efficiency that are detected by the OEE.


| Symbol | Name | Description | Causes |
|---|---|---|---|
| Lt1 | Idle time | Idle times resulting from law regulations or corporate decisions: summer vacations, holidays, shifts, special events (earthquakes, flood) | System external causes |
| Lt2 | Unplanned time | Lack of demand; lack of orders in the production plan; lack of manpower (strikes, absenteeism); lack of material in stocks; lack of energy; training of workers; technical tests and manufacturing of non-marketable products | System external causes |
| Lt3 | Stand-by time | Micro-absenteeism, shift changes; physiological increases; man-machine interaction; lack of raw material stocks for single machines; unsuitable physical and chemical properties of the available material; lack of service vehicle; failure of other machines | Machine external causes |

**Table 1.** Adapted from Grando et al. 2005

### **3.2. Considerations**


The assessment of external time losses may vary in accordance with their categories, the available historical data and other exogenous factors. Some stops are established through internal policies (e.g. number of shifts, production system closure for holidays, etc.). Other macro-stops are assessed analytically (e.g. the opening time needed to satisfy the forecasted demand), whereas others are estimated as a lump allowance based on the Operations Manager's experience. It is not possible to provide a general order of magnitude, because the extent of these time losses depends on a variety of characteristic factors connected mainly to the specific process and the specific firm. Among the most common ways to assess these time losses we find: historical data, benchmarking with similar production systems, Operations Manager experience, and corporate policies.

The calendar time *C<sub>t</sub>* is reduced by the external time losses. The percentage of *C<sub>t</sub>* in which the production system does not produce is expressed by (1 − ϑ), consequently affecting *L<sub>t</sub>* (2).

These parameters should be considered carefully by system designers when assessing the loading time (2). Although these parameters do not propagate throughout the line, their consideration is fundamental to ensure the identification of a proper number of equipments.
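Under the simplifying assumption that the external losses above have been consolidated into a budget of planned non-operating hours, ϑ and the loading time of equation (2) can be sketched as:

```python
def theta_from_planned_stops(calendar_h, planned_stop_h):
    """Share of calendar time left for operation after external/planned stops."""
    return (calendar_h - planned_stop_h) / calendar_h

def loading_time(calendar_h, theta):
    """Equation (2): L_t = C_t * theta."""
    return calendar_h * theta

# Hypothetical budget: 3,504 planned stop hours out of 8,760 calendar hours
# gives theta = 0.6 and roughly 5,256 scheduled hours.
theta = theta_from_planned_stops(8_760, 3_504)
lt = loading_time(8_760, theta)
```

In practice the stop budget itself would come from the sources listed above (historical data, benchmarking, corporate policies) rather than a single number.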

### *3.2.1. Idle times*

There is a set of idle times that result from law regulations or corporate decisions. These stops are generally known a priori, since they are regulated by local law, and they usually contribute to the decision process for the localization of the production plant. Only causes external to the production system are responsible for their presence.


### *3.2.2. Unplanned times*

The unplanned times are generally generated by system-external causes connected with machinery, production planning and production risks.

A whole production system (e.g. a production line) or some equipment may be temporarily used for non-marketable products (e.g. prototypes), or may not be supposed to produce because of tests (e.g. for law requirements), installation of new equipments and the related activities (e.g. training of workers).

Similarly, a production system may face idle time because of lack of demand, absence of a production schedule (ineffectiveness of the marketing function or of production planning activities) or lack of material in stock due to ineffectiveness in managing orders. Clearly, the presence of a production schedule is independent of the Operations Manager and of the production system design as well. Yet the lack of stock material, although independent of the production system design, is one of the critical responsibilities of any Operations Manager (inventory management).

Among this set of time losses we also find other external factors that affect system availability, which companies usually manage as risks. In this sense, the occurrence of phenomena like lack of energy or strikes are risks that companies know well and usually manage according to one of the four risk management strategies (avoidance, transfer, mitigation, acceptance), depending on their impact and probability.

### *3.2.3. Stand by time*

Finally, the stand-by time losses are a set of losses due to system-internal causes that are nevertheless external to the equipment. These time losses may widely affect the OTE of the production line and depend on work organization losses, raw material and material handling.

Micro-absenteeism and shift changes may affect the performance of all the systems that are based on man-machine interaction, such as the production equipments or the transportation systems. This lack of performance may propagate throughout the whole system like any other equipment ineffectiveness. Even so, the Operations Manager cannot avoid these losses by designing a better production line; effective strategies in this sense come from the social sciences and aim to achieve employee engagement in the workplace [15].

Nonetheless, the Operations Manager can reduce the physiological allowances by choosing ergonomic workstations.

The production system can present other time losses because of the raw material, in terms of both lack and quality:

**•** Lack of raw material causes the interruption of the throughput. Since we have already considered the ineffective management of orders under "Unplanned Time", the other related causes of time losses depend on demand fluctuations or on the ineffectiveness of the suppliers. In both cases, the presence of safety stock allows the Operations Manager to reduce or eliminate their effects.


**•** Low raw material standard quality (e.g. physical and chemical properties), may affect dramatically the performance of the system. Production resource (time, equipment, etc) are used to elaborate a throughput without value (or with a lower value) because of little raw material quality. Also in this case, this time losses do not affect the design of a production system, under the hypothesis that Operations Manager ensures the raw material quality is respected (e.g. incoming goods inspection). The missed detection of low quality raw materials can lead the Operations Manager to attribute the cause of defectiveness to the equipment (or set of equipment) where the defect is detected.

Considering the Vehicle based internal transport, a broader set of considerations is requested. Given two consecutive stations *i-j,* the vehicles make available the output of station i to station j (figure 1).

**Figure 1.** Vehicle based internal transport: transport the output of station i to the station j

In this sense any vehicle can be considered as an equipment that is carrying out the transfor‐ mation on a piece, moving the piece itself from station i to station j (Figure 2).

**Figure 2.** Service vehicles that connect i-j can be represented as a station itself amid i-j

The activity of transporting the output from station *i* to station *j* is a transformation (of position) itself. Like the equipment, the service vehicles affect and are affected by the OTE. In this sense the successive considerations on the categorization of equipment losses, on the OEE and on their propagation throughout the system (OTE) can be extended to service vehicles. Hence, the design of service vehicles should be carried out according to the same guidelines we provide in the successive sections of this chapter.

### **4. The formulation of OEE**

In this paragraph we provide process designers with a set of topics that need to be addressed when considering the OEE during the design of a new production system. A proper a-priori assessment of the OEE, and the consequent design and sizing of the system, demand that process designers consider a variety of complex factors, all related to the OEE. It is important to notice that the OEE measures not only the internal efficiency losses but also detects time losses due to external causes (par. 2.1, par. 2.2). Hence, in this paragraph we will first define the OEE analytically. Secondly, through the analysis of the relevant literature, we will investigate the relation between the OEE of a single piece of equipment and the OEE of the production system as a set of interconnected equipments. Then we will describe how the different time-loss categories of an equipment affect both the OEE of the equipment and the OEE of the whole system. Finally we will debate how the OEE needs to be considered from different perspectives according to factors such as the way production is realized and the plant layout.

Using Overall Equipment Effectiveness for Manufacturing System Design, http://dx.doi.org/10.5772/56089

| Factor | Description |
|--------|-------------|
| *Aeff* | Availability efficiency. It considers failure and maintenance downtime and the time devoted to indirect production tasks (e.g. set-ups, changeovers). |
| *Peeff* | Performance efficiency. It considers minor stoppages and the time losses caused by speed reduction. |
| *Qeff* | Quality efficiency. It considers the loss of production caused by scraps and rework. |
| *Tt* | Total time of observation. |
| *Tu* | Equipment uptime during *Tt*. It is lower than *Tt* because of failures, maintenance and set-ups. |
| *Tp* | Equipment production time. It is lower than *Tu* because of minor stoppages, resets and adjustments following changeovers. |
| *Ravg(a)* | Average actual processing rate for the equipment in production for the actual product output. It is lower than the theoretical rate *Ravg(th)* because of speed/production-rate slowdowns. |
| *Ravg(th)* | Average theoretical processing rate for the actual product output. |
| *Pg* | Good product output from the equipment during *Tt*. |
| *Pa* | Actual product units processed by the equipment during *Tt*. We assume that each product rework requires the same cycle time. |

**Table 2.** OEE factors description

### **4.1. Mathematical formulation**

OEE is formulated as a function of a number of mutually exclusive components, such as *availability efficiency*, *performance efficiency*, and *quality efficiency* in order to quantify various types of productivity losses.

The OEE is a value ranging from 0 to 100%. A high value of OEE indicates that the machine is operating close to its maximum efficiency. Although the OEE does not diagnose the specific reason why a machine is not running as efficiently as possible, it does give some insight into the reason [16]. It is therefore possible to analyze these areas to determine where the lack of efficiency is occurring: breakdowns, set-up and adjustment, idling and minor stoppages, reduced speed, and quality defects and rework [1] [4].

In the literature there exists a meaningful set of time-loss classifications related to the three reported efficiencies (availability, performance and quality). Grando et al. [14], for example, provided a meaningful and comprehensive classification of the time losses that affect a single piece of equipment, considering its interaction within the production system. Waters et al. [9] and Chase et al. [17] showed a variety of acknowledged efficiency-loss schemes, while Nakajima [4] defined the most acknowledged classification, that of the "6 big losses".

In accordance with Nakajima's notation, the conventional formula for the OEE can be written as follows [1]:

$$\text{OEE} = \text{A}\_{\text{eff}} \text{ } \text{Pe}\_{\text{eff}} \text{ } \text{Q}\_{\text{eff}} \tag{4}$$

$$A\_{eff} = \frac{T\_u}{T\_t} \tag{5}$$

$$Pe\_{eff} = \frac{T\_p}{T\_u} \ast \frac{R\_{avg}^{(a)}}{R\_{avg}^{(th)}} \tag{6}$$


$$Q\_{eff} = \frac{P\_g}{P\_a} \tag{7}$$

Table 2 summarizes briefly each factor.
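Equations (4)–(7) and the factors of Table 2 can be turned into a few lines of code. The sketch below is our own illustration (the function and variable names are ours, and the sample figures are invented), not part of the chapter:

```python
def oee(Tt, Tu, Tp, R_actual, R_th, Pg, Pa):
    """Compute OEE per equations (4)-(7).

    Tt: total observation time; Tu: uptime; Tp: production time;
    R_actual / R_th: average actual / theoretical processing rates;
    Pg / Pa: good / actually processed product units.
    """
    A_eff = Tu / Tt                         # (5) availability efficiency
    Pe_eff = (Tp / Tu) * (R_actual / R_th)  # (6) performance efficiency
    Q_eff = Pg / Pa                         # (7) quality efficiency
    return A_eff * Pe_eff * Q_eff           # (4) OEE = A_eff * Pe_eff * Q_eff

# Invented example: 480 min observed, 420 min uptime, 390 min producing,
# 55 pcs/h actual vs 60 pcs/h theoretical, 940 good pieces out of 1000.
print(round(oee(480, 420, 390, 55, 60, 940, 1000), 3))  # → 0.7
```

Because each factor is a dimensionless ratio, times and rates may be expressed in any consistent units.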




An OEE analysis based on single-equipment data is not sufficient, since *no machine is isolated in a factory, but operates in a linked and complex environment* [18]. A set of inter-dependent relations between two or more equipments of a production system generally exists, which leads to the propagation of availability, performance and quality losses throughout the system.

Mutual influence between two consecutive stations occurs even if both stations are working ideally. In fact, if two consecutive stations (e.g. station A and station B) present different cycle times, the faster station (e.g. station A, at 100 pcs/hour) needs to reduce or stop its production rate in accordance with the production rate of the other station (e.g. station B, at 80 pcs/hour).


In this case, the detected OEE of station A would be 80%, even though no efficiency loss actually occurs. This loss propagation is due to the unbalanced cycle times.
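The blocking effect just described can be checked numerically. A minimal sketch (our own illustration, with names of our choosing): the detected OEE of a non-bottleneck station is capped by the ratio between the bottleneck rate and its own rate.

```python
def detected_oee(station_rate, bottleneck_rate, intrinsic_oee=1.0):
    """Apparent OEE of a station forced to follow the line bottleneck.

    Even with no intrinsic efficiency loss (intrinsic_oee = 1.0), the
    faster station can produce at most at the bottleneck rate.
    """
    return intrinsic_oee * min(1.0, bottleneck_rate / station_rate)

# Station A runs at 100 pcs/hour but station B accepts only 80 pcs/hour:
print(detected_oee(100, 80))  # → 0.8, i.e. the 80% detected OEE of the text
```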

Therefore, when considering the OEE of a piece of equipment in a given manufacturing system, the measured OEE is always the performance of the equipment within that specific system. This leads to practical consequences for the design of the system itself.


A comprehensive analysis of production system performance can be achieved by extending the concept of OEE, as the performance of an individual piece of equipment, up to the factory level [18]. In this sense the OEE metric is well accepted as an effective measure of manufacturing performance not only for a single machine but also for the whole production system [19], where it is known as *Overall Throughput Effectiveness* (OTE) [1] [20].

We refer to OTE as the OEE of the whole production system.

Therefore we can talk of:

**•** Equipment OEE, as the OEE of the single equipment, which measures the performance of the equipment in the given production system;

**•** System OEE (or OTE), which is the performance of the whole system and can be defined as the performance of the bottleneck equipment in the given production system.


### **4.2. An analytical formulation to study equipment and system OEE**

$$\text{System OEE} = \frac{\text{Number of good parts produced by system in total time}}{\text{Theoretical number of parts produced by system in total time}} \tag{8}$$

The System OEE measures the systemic performance of a manufacturing system (production line, floor, factory), which combines activities and relationships between different machines and processes, integrating information, decisions and actions across many independent systems and subsystems [1]. For its optimization it is necessary to improve many interdependent activities in a coordinated way. This will also increase the focus on the plant-wide picture.

Figure 3 clarifies the difference between Equipment OEE and System OEE, showing how the performance of each piece of equipment affects and is affected by the performance of the other connected equipments. This propagation of time losses results in an overall System OEE. Considering Figure 3 we can indeed argue that, given a set of *i=1,..,n* equipments, the *OEEi* of the *i*-th equipment depends on the process into which it has been introduced, due to the propagation of availability, performance and quality losses.

**Figure 3.** A production system composed of n stations

According to the model proposed by Huang et al. in [1], the System OEE (OTE) for a series of *n* connected subsystems is formulated as a function of the theoretical production rate *Ravg(F)(th)* relating to the slowest machine (the bottleneck), the theoretical production rate *Ravg(N)(th)* of the *n*th station and its *OEEn*, as shown in (9):


$$OTE = \frac{OEE\_n \times R\_{avg(N)}^{(th)}}{R\_{avg(F)}^{(th)}} \tag{9}$$

The *OEEn* computed in (9) is the OEE of the *n*th station as introduced in the production system (the *OEEn* when *n* is in the system and is influenced by the performance of the other *n-1* equipments).

According to (9), the sole measure of *OEEn* yields a measure of the performance of the whole system (OTE). This holds because the performance data on *n* are gathered when station *n* is already working in the system with the other *n-1* stations and, therefore, its performance is affected by the performance of the *n-1* prior stations. This means that the model proposed by Huang can be used *only when the system exists and is running*, so that *OEEn* can be directly measured on the field.

But during system design, when only the technical data of the single equipments are known, the formulation in (9) cannot be used, since without information on the system *OEEn* is unknown a priori. Hence, in this case (9) cannot provide a correct value of the OTE.
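When the system already exists and *OEEn* can be measured on the field, equation (9) is straightforward to apply. A minimal sketch with invented figures (the names are ours): the two rates are the theoretical processing rates of the *n*th station and of the bottleneck.

```python
def ote(oee_n, r_th_station_n, r_th_bottleneck):
    """Huang's series model (9): OTE = OEE_n * R_avg(N)(th) / R_avg(F)(th).

    oee_n is measured while station n runs inside the line, so the losses
    propagated from the n-1 upstream stations are already embedded in it.
    """
    return oee_n * r_th_station_n / r_th_bottleneck

# Invented data: last station runs at 120 pcs/h theoretical and shows a
# measured OEE of 0.60; the bottleneck's theoretical rate is 90 pcs/h.
print(ote(0.60, 120, 90))  # → 0.8
```

Note that the same function cannot be used at design time: there `oee_n` is exactly the unknown quantity.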

### **4.3. How equipment time-losses influence the system performance and vice-versa**

The OEE of each piece of equipment, *as an isolated machine* (independent of the other stations), is affected only by its theoretical intrinsic values in (5), (6) and (7). But once the equipment is part of a system, its performance also depends upon the interaction with the other *n-1* equipments and thus on their performance. It is now more evident why, for a correct estimate and/or analysis of equipment OEE and system OEE, it is necessary to take losses propagation into account. The differences between the single subsystem and the entire system need to be deeply analyzed to understand the real causes of system efficiency losses. In particular, their investigation is fundamental during the design process, for a correct evaluation of the OEE and for the study of effective loss-reduction actions (i.e. buffer capacity dimensioning, quality-control station positioning); but also during the normal execution of operations, because it leads to a correct evaluation of the causes of efficiency losses and of their real impact on the system.

Table 3 shows how the efficiency losses of a single subsystem (e.g. an equipment/machine), given by Nakajima [4], can spread to other subsystems (e.g. machines in series) and then to the whole system.

In accordance with Table 3, a relevant lack of coordination exists in deploying the available factory resources (people, information, materials, and tools) when using an OEE metric based on single equipment. Hence, a wider approach for a holistic production system design has to focus also *on the performance of the whole factory* [18], resulting from the interactions of its equipments.


| | Single subsystem | Entire system |
|---|---|---|
| **Availability** | Breakdown losses; set-up and adjustment | Downtime losses of the upstream unit can slacken the production rate of the downstream unit without fair buffer capacity; downtime losses of the downstream unit can slacken the production rate of the upstream unit without fair buffer capacity |
| **Performance** | Idling and minor stoppages; reduced speed | Minor stoppages and speed reductions can influence the production rate of the downstream and upstream units in the absence of buffers |
| **Quality** | Quality defects and rework; yield losses | Production scraps and rework are losses for the entire process, depending on where the scraps are identified, rejected or reworked in the process |

**Table 3.** Example of propagation of losses in the system

This issue has been widely debated and acknowledged in the literature [1] [18]. Several authors [8] [21] have recognized and analyzed the need for a coherent, systematic methodology for design at the factory level.

Furthermore, according to [18] [21], the following aspects have to be considered, as the OTE is also rooted at the factory design level:

**•** Quality (better equipment reliability, higher yields, less rework, no misprocessing);

**•** Speed (faster ramp-up, shorter cycle times, faster delivery);

**•** Agility and responsiveness (more customization, fast response to unexpected changes, simpler integration);

**•** Technology changes;

**•** Production cost (better asset utilization, higher throughput, less inventory, less setup, less idle time).


At present, there is no common, well-defined and proven methodology for the analysis of the System OEE [1] [19] *during the system design.* In any case, the effect of efficiency-loss propagation must be considered and deeply analyzed in order to understand and eliminate the causes before the production system is realized. In this sense *simulation* is considered the most reliable method to date for designing, studying and analyzing manufacturing systems and their dynamic performance [1] [19]. Discrete-event simulation and advanced process control are the most representative of such areas [22].

### **4.4. Layout impact on OEE**

Finally, it is important to consider how the focus of the design may vary according to the type of production system. In flow-shop production systems the design mostly focuses on the OTE of the whole production line, whereas in job-shop production systems the analysis may focus either on the OEE of a single piece of equipment or on that of a specific shop floor, rather than on that of the whole production system. This is due to the intrinsic factors that underlie a layout configuration choice.

*Flow shop production systems* are typical of high-volume and low-variety production. The equipments all present similar cycle times [23] and are usually organized in a product layout where inter-operation buffers are small or absent. Due to the similarity among the equipments that compose the production system, the saturation levels of the different equipments are likely to be similar to one another. The OEEs are similar as well. In this sense the focus of the analysis will be on the causes of loss-time propagation, with the aim of avoiding their occurrence so as to raise the OTE of the system.


On the other hand, in *job shop production systems*, due to the specific nature of the operations (multiple flows, different productive paths, need for process flexibility rather than efficiency), characterized by higher idle time and higher stand-by time, lower values of the performance indexes are pursued.

Different product categories usually require a different sequence of tasks within the same production system, so the equipment is organized in a process layout. In this case, rather than focusing on efficiency, the design focuses on production-system flexibility and on layout optimization, in order to ensure that different production processes can take place effectively.

Generally, different processes to produce different products imply that the bottleneck may shift from one station to another, due to the different production processes and the different processing times of each station for the specific processed product.

Due to the shifting of the bottleneck, the presence of buffers between the stations usually allows different stations to work in an asynchronous manner, consequently reducing or eliminating the propagation of low utilization rates.

Nevertheless, when the production mix is known and stable over time, the study of the plant layout can embrace bottleneck optimization for each product of the mix, since lower flexibility is demanded.

The analysis of quality propagation amid two or more stations should not be a relevant issue in job shop, since defects are usually detected and managed within the specific station.

Still, in several manufacturing systems, despite a flow-shop production, the equipment is organized in a process layout, due to the physical attributes of the equipment (e.g. the manufacturing of electrical cables shown in § 4) or to different operational conditions (e.g. the pharmaceutical sector). In this case buffers are usually present and their size can dramatically influence the OTE of the production system.

In an explicit attempt to avoid unmanageable models, we will now provide process designers and operations managers with useful hints and suggestions about the effect of the propagation of inefficiencies along a production line, together with the development of a set of simulation scenarios (§ 3.5).

### **4.5. OEE and OTE factors for production system design**

OEE is formulated as a function of a number of mutually exclusive components, such as availability efficiency, performance efficiency, and quality efficiency in order to quantify various types of productivity losses.

During the design of a production system, the use of intrinsic performance indexes for the sizing of each piece of equipment, although wrong, could seem the only rational approach to the design. However, this approach does not consider the interactions between the stations. One could argue that making each station independent of the others through buffers would simplify the design and increase the availability. Still, the interposition of a buffer between two or more stations may not be possible, for several reasons. The most relevant are:

During a set-up in station *i*, the availability loss can interest only the single equipment *i* or the whole production line, depending on the presence of buffers, their location and their dimension:

**•** If buffers are not present, the set-up of station *i* implies the stop of the whole line (Figure 4). This is a typical configuration of flow-shop processes realized by one or more production lines, as in food, beverage or pharmaceutical packaging.

**Figure 4.** Barely decoupled/Coupled Production System (buffer unimportant or null)

**•** If buffers are present (before and beyond station *i*) and their size is sufficient to decouple station *i* from the other *i-1* and *i+1* stations during the whole set-up, the line continues to work regularly (Figure 5).

**Figure 5.** Decoupled Production System

Hence, buffer design plays a key role in the phenomena of loss propagation throughout the line, not only for set-up losses but for other availability losses and performance losses as well. The degree of propagation ranges, according to the buffer size, between zero buffer (total dependence, maximum propagation) and maximum buffer size (total independence, no propagation). This will be debated in the following (§ 3.5.3) when considering the performance losses, although the same principles can be applied to avoid the propagation of minor set-up losses (mostly for short set-ups/changeovers, like adjustments and calibrations).

*4.5.2. Maintenance availability*

The availability of a piece of equipment [24] is defined as *Aeff* = *Tu* / *Tt*. The availability of the whole production system can be defined similarly. Nevertheless it depends upon the equipment configuration. The Operations Manager, through the choice of the equipment configuration, can increase the maintenance availability. This is a design decision, since different equipment must be bought and installed according to the desired availability level. The choice of the configuration usually results from a trade-off between equipment costs and system availability. The two main equipment configurations (non-redundant and redundant systems) are debated in the following.


In our model we will show how a production system can be defined considering the availability, performance and quality efficiency (5), (6), (7) of each station, along with their interactions. The method embraces a set of hints and suggestions (best practices) that guide designers in handling interactions and losses propagation, with the aim of raising the expected performance of the system. Furthermore, through the development of a simulation model of a real production system for electrical cable production, we provide readers with a clear understanding of how time losses propagate in a real manufacturing system.

The design process of a new production system should always include the simulation of the identified solution, since simulation provides the designer with a holistic understanding of the system. In this sense, this paragraph presents a method in which the design of a production system is an iterative process: the simulation output is the input of the successive design step, until the designed system meets the expected performance and that performance is validated by simulation. Each loss will first be described with reference to a single equipment; then its effect will be analyzed considering the whole system, also through the support of simulation tools.

### *4.5.1. Set up availability*

Availability losses due to set-up and changeover must be considered during the design of the plant. Given the production mix, the number of set-ups generally results from a trade-off between the set-up costs (loss of availability, substituted tools, etc.) and the warehouse costs.

During the design phase some relevant considerations connected with set-up time losses should be made. A production line is composed of n stations. The same line can usually produce more than one product type. Depending on the differences between product types, a changeover in one or more stations of the line can be required. Usually, the more negligible the differences between the products, the lower the number of equipments subjected to set-up (e.g. only the labelling machine needs a set-up to change the labels of a product according to the destination country). In a given line of n equipments, if a set-up is requested in station *i*, the availability loss can affect either the single equipment *i* or the whole production line, depending on the presence of buffers, their location and their size:


**•** If buffers are not present, the set-up of station *i* implies the stop of the whole line (figure 4). This is a typical configuration of flow-shop processes realized by one or more production lines, as in food, beverage or pharmaceutical packaging.

**•** If buffers are present (before and beyond station *i*) and their size is sufficient to decouple station *i* from stations *i-1* and *i+1* during the whole set-up, the line continues to work regularly (figure 5).

**Figure 4.** Barely decoupled/Coupled Production System (buffer unimportant or null)

**Figure 5.** Decoupled Production System


Hence, buffer design plays a key role in the propagation of losses throughout the line, not only for set-up losses but also for the other availability losses and for performance losses. The degree of propagation ranges, according to the buffer size, between zero buffer (total dependence, maximum propagation) and maximum buffer size (total independence, no propagation). This will be discussed in the following (§ 4.5.3) when considering the performance losses, although the same principles can be applied to avoid the propagation of minor set-up losses (mostly short set-ups/changeovers, like adjustments and calibrations).

### *4.5.2. Maintenance availability*

The availability of an equipment [24] is defined as $A_{eff} = T_u / T_t$. The availability of the whole production system can be defined similarly; nevertheless, it depends upon the equipment configuration. The Operations Manager, through the choice of the equipment configuration, can increase the maintenance availability. This is a design decision, since different equipments must be bought and installed according to the desired availability level. The choice of the configuration usually results from a trade-off between equipment costs and system availability. The two main equipment configurations (non-redundant system, redundant system) are discussed in the following.

### *Not redundant system*

When a system is composed of non-redundant equipment, each station produces only if its equipment is working.

Hence, if we consider a line of n equipments connected in series, the downtime of any single equipment causes the downtime of the whole system.

$$A_{system} = \prod_{i=1}^{n} A_i \tag{10}$$


$$A_{system} = \prod_{i=1}^{n} A_i = 0,7 \cdot 0,8 \cdot 0,9 = 0,504 \tag{11}$$

The availability of a system composed of equipments in series is always lower than the availability of each single equipment (figure 6).

**Figure 6.** Availability of not redundant System

### *Total redundant system*

Oppositely, to avoid failure propagation among stations, the designer can equip the line with a total redundancy of a given equipment. In this case, only the simultaneous downtime of both equipments causes the downtime of the whole system.

$$A_{system} = 1 - \prod_{i=1}^{n} (1 - A_i) \tag{12}$$

In the example in figure 7 we have two single equipments connected with a redundant system of two equipments (dotted-line system).

Hence, the availability of the redundant system (dotted-line system) rises from 0,8 (of the single equipment) up to:

$$A_{parallel} = 1 - \prod_{i=1}^{n} (1 - A_i) = 1 - (1 - 0,8) \cdot (1 - 0,8) = 0,96 \tag{13}$$

Consequently the availability of the whole system will be:


$$A_{system} = \prod_{i=1}^{n} A_i = 0,7 \cdot 0,96 \cdot 0,9 = 0,6048 \tag{14}$$

**Figure 7.** Availability of totally redundant equipments connected with not redundant equipments

To achieve this higher level of availability it has been necessary to buy two identical equipments (double cost). Hence, the higher availability of the system should be worth it economically.
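The availability computations (10)-(14) can be checked with a short script (a sketch; the function names are ours):

```python
from math import prod

def series_availability(avail):
    # Eq. (10): a failure of any equipment stops the whole line.
    return prod(avail)

def parallel_availability(avail):
    # Eq. (12): the redundant block is down only if every unit is down.
    return 1 - prod(1 - a for a in avail)

# Non-redundant line of three stations, eq. (11):
print(series_availability([0.7, 0.8, 0.9]))        # ≈ 0.504
# Total redundancy of the 0.8 station, eqs. (13)-(14):
a_par = parallel_availability([0.8, 0.8])          # ≈ 0.96
print(series_availability([0.7, a_par, 0.9]))      # ≈ 0.6048
```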

### *Partial redundancy*


An intermediate solution can be the partial redundancy of an equipment. This is named a *k/n* system, where *n* is the total number of equipments of the parallel system and *k* is the minimum number of the *n* equipments that must work properly to ensure the throughput is produced. Figure 8 shows an example.

The capacity of equipments *b'*, *b''* and *b'''* is 50 pieces in the reference time unit. If the system must ensure a throughput of 100 pieces, at least *k=2* of the *n=3* equipments must produce 50 pieces each. Table 4 shows the configuration states which ensure the output is produced, and the probability that each state occurs.

**Figure 8.** Availability of partially redundant equipments connected with not redundant equipments


| *b'* | *b''* | *b'''* | Probability of occurrence |
|---|---|---|---|
| UP | UP | UP | 0,8 · 0,8 · 0,8 = 0,512 |
| UP | UP | DOWN | 0,8 · 0,8 · (1-0,8) = 0,128 |
| UP | DOWN | UP | 0,8 · (1-0,8) · 0,8 = 0,128 |
| DOWN | UP | UP | (1-0,8) · 0,8 · 0,8 = 0,128 |
| | | **Total availability** | **0,896** |

**Table 4.** State Analysis Configuration
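The state analysis of table 4 can be reproduced by enumerating every up/down combination (our own sketch; unlike the binomial distribution (15), this enumeration also works when the reliabilities of the units differ):

```python
from itertools import product

def k_of_n_availability(avail, k):
    """Enumerate every up/down state (as in table 4) and add up the
    probability of the states in which at least k units are up."""
    total = 0.0
    for state in product([True, False], repeat=len(avail)):
        p = 1.0
        for a, up in zip(avail, state):
            p *= a if up else (1 - a)
        if sum(state) >= k:
            total += p
    return total

print(k_of_n_availability([0.8, 0.8, 0.8], k=2))  # ≈ 0.896
```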

In this example all equipments *b* have the same reliability (*0,8*); hence the probability that the system of three equipments ensures the output could also have been calculated, without the state analysis configuration (table 4), through the binomial distribution:

$$R_{k/n} = \sum_{j=k}^{n} \binom{n}{j} R^{j} (1 - R)^{n-j} \tag{15}$$


$$R_{2/3} = \binom{3}{2} 0,8^{2} (1 - 0,8) + \binom{3}{3} 0,8^{3} = 0,896 \tag{16}$$

Hence, the availability of the system (a, b'-b''-b''', c) will be:

$$A_{system} = \prod_{i=1}^{n} A_i = 0,7 \cdot 0,896 \cdot 0,9 = 0,56448 \tag{17}$$

In this case the investment in redundancy is lower than in the previous one. It is clear how the choice of the availability level is a trade-off between fixed costs (due to the equipment investment) and lack of availability.
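When all units share the same reliability R, equations (15)-(17) can be evaluated directly (a sketch, with our own function name):

```python
from math import comb

def k_of_n_reliability(R: float, k: int, n: int) -> float:
    # Eq. (15): probability that at least k of n identical units are up.
    return sum(comb(n, j) * R**j * (1 - R)**(n - j) for j in range(k, n + 1))

r_23 = k_of_n_reliability(0.8, k=2, n=3)
print(r_23)               # eq. (16) ≈ 0.896
print(0.7 * r_23 * 0.9)   # eq. (17) ≈ 0.56448
```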

In all the above cases we considered the buffers as null.

When the reliability of the equipments (*b* in our example) is not identical, the binomial distribution (15) is not applicable; therefore the state analysis configuration (table 4) is required.

### *Redundancy with modular capacity*

Another configuration is possible.

The production system can be designed with two equipments whose individual capacity is lower than the requested one but whose sum is higher. In this case, if it is possible to modulate the production capacity of the previous and successive stations, the expected throughput will be higher than the output of a single equipment.

Considering the example in figure 9: when *b'* and *b''* are both up, the throughput of the subsystem *b'-b''* is 100, since the capacity of *a* and *c* is 100. Supposing that the capacity of *a* and *c* is modular, when *b'* is down the subsystem can produce 60 pieces in the time unit; similarly, when *b''* is down it can produce 70. Hence, the expected amount of pieces produced by *b'-b''* is 84,8 pieces (table 5).

When considering the whole system, if either *a* or *c* is down the system cannot produce. Hence, the expected throughput in the considered time unit must be reduced by the availability of these two equipments:

**Figure 9.** Availability of partially redundant equipments connected with not redundant equipments at modular capacity


| *b'* | *b''* | Maximum throughput | Probability of occurrence | Expected pieces produced |
|---|---|---|---|---|
| UP | UP | 100 | 0,8 · 0,8 = 0,64 | 64 |
| UP | DOWN | 70 | 0,8 · (1-0,8) = 0,16 | 11,2 |
| DOWN | UP | 60 | (1-0,8) · 0,8 = 0,16 | 9,6 |
| | | | **Expected number of pieces produced** | **84,8** |

**Table 5.** State Analysis Configuration
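The expected throughput of table 5 can be recomputed as a probability-weighted sum (a sketch; the dictionaries below simply transcribe the table):

```python
# State analysis for the two modular-capacity units b' and b'' (table 5).
a_b1, a_b2 = 0.8, 0.8
throughput = {  # maximum throughput of the b'-b'' subsystem in each state
    ("UP", "UP"): 100,
    ("UP", "DOWN"): 70,
    ("DOWN", "UP"): 60,
}
prob = {
    ("UP", "UP"): a_b1 * a_b2,
    ("UP", "DOWN"): a_b1 * (1 - a_b2),
    ("DOWN", "UP"): (1 - a_b1) * a_b2,
}
expected = sum(throughput[s] * prob[s] for s in throughput)
print(expected)  # ≈ 84.8 pieces per time unit
```

Following the text, this value must then be reduced by the availability of stations *a* and *c*; with the availabilities of the running example (0,7 and 0,9, our reading of figure 9) that would be roughly `expected * 0.7 * 0.9 ≈ 53.4`.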


*4.5.3. Minor stoppages and speed reduction*

OEE theory includes in the performance losses both cycle-time slowdowns and minor stoppages. The time losses of this category also propagate, as stated before, throughout the whole production process.

A first type of performance-loss propagation is due to the propagation of minor stoppages and reduced speed among machines in a series system. From a theoretical point of view, between two machines with the same cycle time<sup>1</sup> and without a buffer, minor stoppages and reduced speed propagate exactly like major stoppages. Obviously, even a small buffer can mitigate the propagation.


Several models to study the role of buffers in avoiding the propagation of performance losses are available in the *Buffer Design for Availability* literature [22]. The problem is of scientific relevance, since the lack of an opportune buffer between two stations can indeed dramatically affect the availability of the whole system. To briefly introduce this problem we refer to a production system composed of two consecutive equipments (or stations) with an interposed buffer (figure 10).

**Figure 10.** Station-Buffer-Station system. Adapted from [23]

Under the likely hypothesis that the ideal cycle times of the two stations are identical [23], the speed variability that affects the stations is not necessarily of the same magnitude, due to its dependence on several factors. Furthermore, the performance index is an average over the total time *Tt*; therefore the same machine can sometimes perform at a reduced speed and sometimes at a higher speed<sup>2</sup>. These effects in two consecutive equipments can mutually compensate or add up. Once again, within the propagation analysis for production system design, the role of the buffer is dramatically important.

When the buffer size is null, the system is in series. Hence, as for availability, the speed losses of each equipment affect the performance of the whole system:

$$P_{system} = \prod_{i=1}^{n} P_i \tag{18}$$

Therefore, for the two-station system we can posit:

<sup>1</sup> As shown in par. 3.1. When two consecutive stations present different cycle times, the faster station works with the same cycle time as the slower station, with consequences on equipment OEE, even if no time loss has occurred. On the other hand, when two consecutive stations are balanced (same cycle time), if no time loss occurs the OEE of both stations will be 100%. Ideally, the highest value of the performance rate can be reached when the two stations are balanced.

<sup>2</sup> These time losses are typically caused by yield reduction (the actual process yield is lower than the design yield). This effect is more likely to be relevant in production processes where the equipment saturation level affects its yield, such as furnaces, chemical reactors, etc.


$$P_{system} = \prod_{i=1}^{2} P_i \tag{19}$$

But when the buffer is properly designed, it does not allow minor stoppages and speed losses to propagate from one station to another. We define this buffer size as Bmax. When, in a production system of n stations, for every couple of consecutive stations the interposed buffer size is Bmax (calculated for that specific couple of stations), then we have:

$$P_{system} = \text{Min}_{i=1}^{n} \{P_i\} \tag{20}$$

which, for the considered two-station system, is:


$$P_{system} = \text{Min}(P_1, P_2) \tag{21}$$

Hence, the extent of the propagation of performance losses depends on the buffer size (*j*) interposed between the two stations. Generally, a bigger buffer increases the performance of the system, since it increases the decoupling degree between two consecutive stations, until *j = Bmax* is reached (*j = 0,...,Bmax*).

We can therefore introduce the parameter

$$Rel.P(j) = \frac{P(j)}{P(B_{max})} \tag{22}$$

Considering the model with two stations (figure 11), we have that:

$$\text{When } j = 0: \quad Rel.P(0) = \frac{P(0)}{P(B_{max})} = \frac{P_1 \cdot P_2}{\min(P_1; P_2)} \tag{23}$$

$$\text{When } j = B_{max}: \quad Rel.P(B_{max}) = \frac{P(B_{max})}{P(B_{max})} = 1 \tag{24}$$
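The limiting cases (23) and (24) can be verified numerically (the performance rates below are illustrative, not taken from the chapter):

```python
P1, P2 = 0.90, 0.80  # illustrative performance rates of the two stations

p_no_buffer = P1 * P2           # eq. (19): j = 0, losses fully propagate
p_bmax = min(P1, P2)            # eq. (21): j = Bmax, stations decoupled

rel_p_0 = p_no_buffer / p_bmax  # eq. (23)
rel_p_bmax = p_bmax / p_bmax    # eq. (24)

print(p_no_buffer, p_bmax, rel_p_0, rel_p_bmax)  # ≈ 0.72 0.8 0.9 1.0
```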

Figure 11 shows the trend of *Rel.P(j)* depending on the buffer size (*j*), when the performance rate of each station is modeled with an exponential distribution [23] in a flow-shop environment. The two curves represent the minimum and the maximum simulation results; all the other simulation results lie between these two curves. The maximum curve represents the configuration with the lowest difference in performance index between the two stations, the minimum curve the configuration with the highest difference.

By analyzing figure 11 it is clear how an inopportune buffer size affects the performance of the line and how an increase in buffer size improves the production line OEE. However, once an opportune buffer size is reached, no further improvement derives from a larger buffer. These considerations on the performance index trend are fundamental for an effective design of a production system.
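The propagation and its mitigation by a buffer can be illustrated with a deliberately simplified time-step simulation (our own sketch, not the authors' simulation model): each station completes a cycle in a time step with a probability equal to its performance rate, and a finite buffer decouples them. With no buffer the line throughput tends to P1·P2 per step (eq. 19); with a large buffer it tends to min(P1, P2) (eq. 21).

```python
import random

def line_throughput(p1, p2, buffer_size, steps=20000, seed=42):
    """Fraction of time steps in which a finished piece leaves station 2."""
    rng = random.Random(seed)
    buf = 0
    done = 0
    for _ in range(steps):
        s1_ready = rng.random() < p1   # station 1 completes a cycle this step
        s2_ready = rng.random() < p2   # station 2 completes a cycle this step
        took_direct = False
        if s2_ready:
            if buf > 0:                # feed station 2 from the buffer
                buf -= 1
                done += 1
            elif s1_ready:             # or directly from station 1
                took_direct = True
                done += 1
        if s1_ready and not took_direct and buf < buffer_size:
            buf += 1                   # store station 1's piece if there is room
    return done / steps

p1, p2 = 0.8, 0.7                      # illustrative performance rates
print(line_throughput(p1, p2, buffer_size=0))   # ≈ p1 * p2 = 0.56
print(line_throughput(p1, p2, buffer_size=30))  # ≈ min(p1, p2) = 0.70
```

As in figure 11, increasing `buffer_size` beyond the point of full decoupling yields no further improvement.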


**Figure 11.** Rel OEE depending on buffer size in system affected by variability due to speed losses

### *4.5.4. Quality losses*

In this paragraph we analyze how quality losses propagate through the system and whether it is possible to assess the effect of quality control on OEE and OTE.

First of all, we have to consider that the quality rate of a station is usually calculated considering only the time spent manufacturing products that are rejected in that same station. This traditional approach focuses on the stations that cause defects, but it does not fully capture the effect of machine defectiveness on the system. To do so, the total time wasted by a station due to quality losses should also include the time spent manufacturing good products that will later be rejected because of defectiveness caused by other stations. In this sense, quality losses depend on where scraps are identified and rejected. For example, scraps detected in the last station should be counted as lost time for the upstream stations, both to estimate the real impact of the loss on the system and to estimate the theoretical production capacity needed upstream. In conclusion, the authors propose to calculate the quality rate of a station by considering as quality loss all the time spent manufacturing products that will not complete the whole process successfully.

From a theoretical point of view, we can consider the following cases for the calculation of the quality rate of a station, which depend on the type of rejection (scrap or rework) and on the positioning of quality controls. If we consider two stations with an assigned defectiveness *s<sub>j</sub>* and each station reworks its own scraps with a rework cycle time equal to the theoretical cycle time, the quality rates can be formulated as in case 1 of figure 12: each station has quality losses (time spent reworking products) due to its own defectiveness only. If we consider two stations with an assigned defectiveness *s<sub>j</sub>* and a quality control station downstream of each station, the quality rates can be formulated as in case 2 of figure 12: station 1, the upstream station, has quality losses (time spent working products that will be discarded) due to both its own and station 2's defectiveness. If the quality control station is only at the end of the line, the quality rates can be formulated as in case 3 of figure 12: both stations have quality losses due to the propagation of defectiveness along the line. Cases 2 and 3 point out that quality losses may not be simple to evaluate for a long process, both in system design and in system management. In particular, the quality rate of station 1 includes the time lost for rejects in station 2.

Case 1) Q<sub>1</sub> = 1 − s<sub>1</sub> ; Q<sub>2</sub> = 1 − s<sub>2</sub>

Case 2) Q<sub>1</sub> = (1 − s<sub>1</sub>)(1 − s<sub>2</sub>) ; Q<sub>2</sub> = 1 − s<sub>2</sub>

Case 3) Q<sub>1</sub> = Q<sub>2</sub> = (1 − s<sub>1</sub>)(1 − s<sub>2</sub>)

**Figure 12.** Different cases of quality rate calculation
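The three cases can also be checked numerically. The following is a minimal sketch; the function name and the 5% defectiveness values are our own illustrative assumptions, not the chapter's notation:

```python
def quality_rates(s1, s2, case):
    """Quality rates of two stations in series, counting as quality loss
    all time spent on products that will not complete the whole process.
    case 1: each station reworks its own scraps
    case 2: a quality control downstream of each station (scraps discarded)
    case 3: a single quality control at the end of the line
    """
    if case == 1:
        return 1 - s1, 1 - s2               # only own rework time is lost
    if case == 2:
        return (1 - s1) * (1 - s2), 1 - s2  # station 1 also pays for parts
                                            # later scrapped at station 2
    if case == 3:
        q = (1 - s1) * (1 - s2)             # both stations pay for the
        return q, q                         # whole line's defectiveness
    raise ValueError("case must be 1, 2 or 3")

for case in (1, 2, 3):
    q1, q2 = quality_rates(0.05, 0.05, case)
    print(f"Case {case}: Q1 = {q1:.4f}, Q2 = {q2:.4f}")
```

With s1 = s2 = 0.05, cases 2 and 3 reproduce the 0.95² figures discussed later for the drawing and bunching stations.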


Finally, it is important to highlight the different roles that quality efficiency plays during the design phase and during production.

When the system is producing, the Operations Manager focuses on the causes of defectiveness with the aim of reducing it. When designing the production system, the Operations Manager focuses on the expected quality efficiency of each station, on the location of quality controls and on the process (rework or scrap), in order to identify the correct number of pieces of equipment or stations for each activity of the process.

In this sense, the analysis is vertical during the production phase, while it follows the whole process during design (figure 13).

**Figure 13.** Two approaches for quality efficiency

### **5. The simulation model**

To study losses propagation and to show how these dynamics affect OEE in a complex system [25], this chapter presents some examples taken from an OEE study of a real manufacturing system, carried out by the authors through a process simulation analysis [19].

The simulation is run for each kind of time loss (availability, performance and quality) to clearly show how each equipment ineffectiveness may compromise the performance of the whole system.


The simulation model concerns a manufacturing plant for the production of electrical cable. In particular, we focus on the production of unipolar electrical cable, which takes place through a flow-shop process. On the plant floor, the production equipment is grouped in production areas arranged according to their functions (process layout). The different production areas are located along the line of product flow (product layout). Buffers are present between the production areas to stock the work in process. This particular plant, due to its complexity, allows a deep analysis of the OEE-OTE investigation problem.

In terms of layout, the production system was realized as a job-shop system, although the flow of material from one station to another was continuous and typical of a flow-shop process. As stated in (§2), the reason lies in the huge size of the products that pass from one station to another. For this reason the buffers between stations, although present, could not contain large amounts of material.

The process implemented in the simulation model is shown in figure 14. Entities are unit quantities of cable, which have different masses across stations. The input parameters of the model are equipment speed, defectiveness, equipment failure rate and mean time to repair. Each parameter is described by a statistical distribution in order to simulate random conditions. In particular, equipment speed has been simulated with a triangular distribution in order to simulate performance losses due to speed reduction.

The model evaluates OTE and OEE for each station as usually measured in a manufacturing plant. The model has been validated through a plan of tests, and its OEE results have been compared with those obtained from an analytic evaluation.

**Figure 14.** ASME representation of the manufacturing process (Roughing → Drawing → Bunching → Insulating → Packaging)

### **5.1. Example of availability losses propagation**

In accordance with the proposed method (§3.5), we show how availability losses propagate through the system and assess the effect of buffer capacity on OEE through the simulation. We focus on the insulating and packaging working stations. The availability data of the equipment are: mean time between failures of 20000 sec for insulating and 30000 sec for packaging; mean time to repair of 10000 sec for insulating and 30000 sec for packaging. The cycle times of the two working stations are the same, equal to 2800 sec per coil. The quality rates are set to 1. Idling, minor stoppages and reduced speed are not considered and are set to 0.

Considering each piece of equipment isolated from the system, the OEE of a single machine is equal to its availability; in particular, with the previous data, the machines have an OEE equal to 0.67 for insulating and 0.50 for packaging. The case points out how the losses due to major stoppages spread to the other station as a function of buffer capacity.
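These availabilities follow directly from the MTBF/MTTR data above; a quick check, as a sketch with the chapter's figures hard-coded:

```python
def availability(mtbf, mttr):
    # steady-state availability = uptime / (uptime + downtime)
    return mtbf / (mtbf + mttr)

a_insulating = availability(20000, 10000)   # 20000/30000 ≈ 0.67
a_packaging = availability(30000, 30000)    # 30000/60000 = 0.50

# With no intermediate buffer every stoppage propagates, so the
# series OEE is the product of the two availabilities:
oee_no_buffer = a_insulating * a_packaging  # ≈ 0.33
print(a_insulating, a_packaging, oee_no_buffer)
```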


A simulation has been run to study the effect of buffer capacity in this case. The capacity of the buffer downstream of insulating has been varied from 0 to 30 coils over different simulation runs. The results are shown in figure 15a. With no buffer capacity, the OEE of both machines is equal to 0.33. This result is the product of the availabilities of insulating and packaging (0.67 × 0.50), as expected. The OEEs increase with buffer size, which prevents the propagation of major stoppages and availability losses. The OTE is also equal to 0.33, which, according to formulation (1) and as previously explained, is equal to the OEE of the last station assessed within the system.

The OEEs of insulating and packaging increase rapidly up to a structural limit of buffer capacity of 15 coils; beyond this value the OEEs of the two stations converge to 0.5. The upstream insulating station, which has a greater availability than packaging, has to adapt itself to the real cycle time of packaging, which is the bottleneck station.

It is important to point out that, in the performance monitoring of a manufacturing plant, the propagation of the previous losses is often recorded as performance losses (reduced speed or minor stoppages) in the absence of specific data collection on major stoppages due to the absence of material flow. So, if we also consider all the other efficiency losses ignored in this example, we can understand how difficult it can be to identify the real impact of this kind of efficiency loss by monitoring the real system. Moreover, simulation supports system design in dimensioning buffer capacity (e.g. in this case the structural limit for OEE is reached at 16 coils). Through simulation it is also possible to point out that the positive effect of the buffer is reduced with a higher machine cycle time, as shown in figure 15b.

**Figure 15.** OEE as a function of buffer size (a) and cycle time (b)
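The decoupling effect of the buffer can be reproduced with a very small Monte Carlo sketch. The per-cycle failure and repair probabilities below are illustrative assumptions chosen to give availabilities near 0.67 and 0.50; this is not the chapter's simulation model:

```python
import random

def throughput(buffer_cap, steps=200_000, seed=42):
    """Two stations in series with an intermediate buffer; one time step
    is one cycle. Up/down durations are geometric with assumed rates."""
    rng = random.Random(seed)
    p_fail = (0.01, 0.01)
    p_repair = (0.02, 0.01)  # availabilities 0.02/0.03 ≈ 0.67 and 0.01/0.02 = 0.50
    up = [True, True]
    buf = 0
    done = 0
    for _ in range(steps):
        for i in (0, 1):                    # update machine up/down states
            if up[i]:
                up[i] = rng.random() >= p_fail[i]
            else:
                up[i] = rng.random() < p_repair[i]
        if up[1] and buf > 0:               # downstream consumes one coil
            buf -= 1
            done += 1
        if up[0] and buf < buffer_cap:      # upstream refills the buffer
            buf += 1
    return done / steps                     # coils per cycle ≈ system OEE

print(throughput(1), throughput(30))        # larger buffer -> higher OEE
```

With a large buffer the downstream station is rarely starved, so the throughput approaches the bottleneck availability; with a minimal buffer it stays close to the product of the two availabilities.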

### **5.2. Minor stoppages and speed reduction**

We also ran the simulation for the case study (§4). The simulation shows that two stations with the same theoretical cycle time (200 sec/coil), each affected by a triangular distribution giving a performance rate of 52% as a single machine, have a performance rate of 48% with a buffer capacity of 1 coil and of 50% with a buffer capacity of 2 coils. If instead we consider two stations with the same theoretical cycle time but affected by different triangular distributions, so that their theoretical performance rates differ, the simulation shows that the performance rates of the two stations converge towards the lowest one, as expected (19), (20).
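A 52% single-machine performance rate can be illustrated by sampling the actual cycle time from a triangular distribution. The parameters below are our own assumptions, chosen only because they give a mean actual cycle time near 200/0.52 ≈ 385 sec; they are not the chapter's data:

```python
import random

rng = random.Random(0)
theoretical_ct = 200.0  # sec/coil

# assumed triangular parameters: low=150, mode=200, high=800 sec
samples = [rng.triangular(150, 800, 200) for _ in range(100_000)]

# mean of a triangular distribution = (low + mode + high) / 3 ≈ 383 sec
mean_ct = sum(samples) / len(samples)
performance_rate = theoretical_ct / mean_ct  # ≈ 0.52 with these assumptions
print(round(performance_rate, 3))
```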


Through the same simulation model we also considered the second type of performance loss propagation, due to the propagation of reduced speed caused by an unbalanced line. Figure 16 shows the effect of unbalanced station cycle times for insulating and packaging. The stations have the same P as single machines, equal to 67%, but different theoretical cycle times; in particular insulating, the upstream station, is faster than packaging. The availability and quality rate of both stations are set to 1, and the buffer capacity is set to 1 coil. A simulation has been run to study the effect of unbalancing the stations: the theoretical cycle time of insulating has been increased up to the theoretical cycle time of packaging, which is held fixed. The simulation points out that insulating has to adapt itself to the cycle time of packaging, the bottleneck station. In the model, this results in a lower performance rate for the insulating station. The same often happens in real systems, where the result is influenced by all the efficiency losses at the same time. The effect disappears gradually with a better balancing of the two stations, as shown in figure 16.

**Figure 16.** Performance rate of insulating and packaging as a function of insulating cycle time
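The adaptation of the upstream station to the bottleneck pace can be expressed directly. A minimal sketch, with illustrative cycle-time values rather than the chapter's data:

```python
def upstream_measured_perf(ct_upstream, ct_bottleneck):
    """With full availability and quality and a small buffer, the upstream
    station can release parts no faster than the bottleneck consumes them,
    so its measured performance rate collapses to the ratio of the
    theoretical cycle times."""
    return min(1.0, ct_upstream / ct_bottleneck)

# insulating assumed faster (2000 sec) than packaging (2800 sec):
print(upstream_measured_perf(2000.0, 2800.0))  # ≈ 0.71
# a perfectly balanced line shows no such loss:
print(upstream_measured_perf(2800.0, 2800.0))  # 1.0
```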

### **5.3. Quality losses**

In relation to the model, this example focuses on the drawing and bunching working stations, which have a defectiveness set to 5%, the same cycle times and no other efficiency losses. The quality control positioning has been changed, simulating cases 2 and 3. The results of the simulation for the two cases are shown in table 6, in which the proposed method is compared with the traditional one. The proposed method makes it possible to identify the correct efficiency, for example to dimension the drawing station, because it considers the time wasted manufacturing products that are rejected in the bunching station. The difference between the values of Q2 and OTE is explained by the value of P2 = 0.95, which is due to the propagation of the quality losses of the upstream station into performance losses of the downstream station. Moreover, regarding the positioning of the quality control, case 2 is to be preferred, because the simulation shows a positive effect on the OTE when the bunching station is the system bottleneck (as happens in the real system).


| | **Proposed method** | | | **Traditional method** | | |
|---|---|---|---|---|---|---|
| | Q1 | Q2 | OTE | Q1 | Q2 | OTE |
| Case 2) | 0.95² | 0.95 | 0.95² | 0.95 | 0.95 | 0.95² |
| Case 3) | 0.95² | 0.95² | 0.95² | -- | 0.95² | 0.95² |

**Table 6.** Comparison of quality rate calculation and evaluation of the impact of quality control positioning on quality rates and on OTE

### **6. Conclusions**


The evaluation of Overall Equipment Effectiveness (OEE) and Overall Throughput Effectiveness (OTE) can be critical for the correct estimation of the number of workstations needed to realize the desired throughput (production system design), as well as for the analysis and continuous improvement of system performance (during system management).

The use of OEE as a performance improvement tool has been widely described in the literature. It has been less frequently applied in system design for a correct evaluation of system efficiency (OTE), in order to study losses propagation, the overlapping of efficiency losses and effective actions for loss reduction.

In this chapter, starting from the available literature on time losses, we identified a simplified set of relevant time losses that need to be considered during the design phase. Then, through simulation, we showed how the OEE of each single machine and the OTE of the whole system are interconnected and mutually influence each other, due to the propagation of availability, performance and quality losses throughout the system.

For each category of time losses we described the effects of efficiency loss propagation from a station to the system, for a correct estimation and analysis of OEE and OTE during manufacturing system design. We also showed how to avoid losses propagation through adequate technical solutions which can be defined during system design, such as buffer sizing, equipment configuration and the positioning of control stations.

The simulation model shown in this chapter was based on a real production system and used real data to study losses propagation in a manufacturing plant for the production of electrical

cable. The validation of the model ensures the soundness of the approach and of the identified set of possible solutions and hints.

By analyzing each type of time loss, we also showed how the choices taken during the design of the production system to increase the OTE (e.g. buffer sizing, maintenance configuration, etc.) affect the subsequent management of operations.

### **Acknowledgements**

The realization of this chapter would not have been possible without the support of a person who cooperated with the chair of Operations Management of the University of Rome "Tor Vergata" over the last years, producing valuable research. The authors wish to express their gratitude to Dr. Bruna Di Silvio, without whose knowledge, diligence and assistance this work would not have been successful.

### **Author details**

Vittorio Cesarotti<sup>1</sup>, Alessio Giuiusa<sup>1,2</sup> and Vito Introna<sup>1</sup>

1 University of Rome "Tor Vergata", Italy

2 Area Manager Inbound Operations at Amazon.com

### **References**

[1] H. H. S., «Manufacturing productivity improvement using effectiveness metrics and simulation analysis,» 2002.

[2] B. I., «Effective measurement and successful elements of company productivity: the basis of competitiveness and world prosperity,» *International Journal of Production Economics,* vol. 52, 1997, 203-213.

[3] Jeong K.Y., Phillips D.T., «Operational efficiency and effectiveness measurement,» *International Journal of Operations and Production Management,* 21(11), 2001, 1404-1416.

[4] Nakajima S., Introduction to TPM: Total Productive Maintenance, Productivity Press, 1988.

[5] Schonberger R.J., World Class Manufacturing: The Lessons of Simplicity Applied, The Free Press, 1987.

[6] Womack J.P., Jones D.T., Lean Thinking, Simon & Schuster, 1996.

[7] Dixon J.R., Nanni A.J., Vollmann T.E., The New Performance Challenge: Measuring Operations for World-Class Competition, Dow Jones-Irwin, 1990.

[8] S. D., «Can CIM improve overall factory effectiveness?,» in *Pan Pacific Microelectronics Symposium*, Kauai, HI, 1999.

[9] Waters D., Operations Management, Kogan Page Publishers, 1999.

[10] Chase R.B., Jacobs F.R., Aquilano N.J., Operations Management, McGraw-Hill, 2008.

[11] A. V. A., Semiconductor Manufacturing Productivity: Overall Equipment Effectiveness (OEE) Guidebook, SEMATECH, 1995.

[12] de Ron A.J., Rooda J.E., «Equipment effectiveness: OEE revisited,» *IEEE Transactions on Semiconductor Manufacturing,* 18(1), 2005, 189-196.

[13] Gamberini R. et al., «Alternative approaches for OEE evaluation: some guidelines directing the choice,» in *XVII Summer School Francesco Turco*, Venice, 2012.

[14] Grando A., Turco F., «Modelling Plant Capacity and Productivity,» *Production Planning and Control,* 16(3), 2005, 209-322.

[15] Cesarotti V., Spada C., «The Impact of Cultural Issues and Interpersonal Behavior on Sustainable Excellence and Competitiveness: An Analysis of the Italian Context,» *Contributions to Management Science,* 2008, 95-113.

[16] Badiger A., Gandhinathan R., «A proposal: evaluation of OEE and impact of six big losses on equipment earning capacity,» *International Journal of Process Management & Benchmarking,* 2008, 235-247.

[17] Jacobs F.R., Chase R.B., Operations and Supply Chain Management, McGraw-Hill, 2010.

[18] Oechsner R. et al., «From overall equipment efficiency (OEE) to overall fab effectiveness (OFE),» *Materials Science in Semiconductor Processing,* 5, 2003, 333-339.

[19] Cesarotti V., Di Silvio B., Introna V., «Flow-shop process OEE calculation and improvement using simulation analysis,» in *MITIP*, Florence, 2007.

[20] R. M.A., «Factory Level Metrics: Basis for Productivity Improvement,» in *Proceedings of the International Conference on Modeling and Analysis of Semiconductor Manufacturing*, Tempe, Arizona, USA, 2002.

[21] Scott D., Pisa R., «Can overall factory effectiveness prolong Moore's Law?,» *Solid State Technology,* 41, 1998, 75-82.

[22] B. D., «Buffer size design linked to reliability performance: A simulative study,» *Computers & Industrial Engineering,* vol. 56, 2009, 1633-1641.

[23] Introna V. et al., «Increasing Availability of Production Flow Lines through Optimal Buffer Sizing: a Simulative Study,» in *The 23rd European Modeling & Simulation Symposium (Simulation in Industry)*, Rome, 2011.

[24] O'Connor P.D.T., Practical Reliability Engineering (Fourth Ed.), New York: John Wiley & Sons, 2002.

**Chapter 4**

**Reliability and Maintainability in Operations**

The study of component and process reliability is the basis of many efficiency evaluations in the Operations Management discipline. For example, the calculation of the Overall Equipment Effectiveness (OEE) introduced by Nakajima [1] requires the estimation of a crucial parameter called availability, which is strictly related to reliability. As a further example, consider how, in the study of service level, it is important to know the availability of machines, which again depends on reliability.

Reliability is defined as the probability that a component (or an entire system) will perform its function for a specified period of time, when operating in its design environment. The elements necessary for the definition of reliability are, therefore, an unambiguous criterion for judging whether something is working or not and the exact definition of environmental conditions and usage. Then, reliability can be defined as the time dependent probability of correct operation if we assume that a component is used for its intended function in its design environment and if we clearly define what we mean with "failure". For this definition, any discussion on the

A broader definition of reliability is that "reliability is the science to predict, analyze, prevent and mitigate failures over time." It is a science, with its theoretical basis and principles. It also has sub-disciplines, all related - in some way - to the study and knowledge of faults. Reliability is closely related to mathematics, and especially to statistics, physics, chemistry, mechanics and electronics. In the end, given that the human element is almost always part of the systems,

In addition to the prediction of system durability, reliability also tries to give answers to other questions. Indeed, we can try to derive from reliability also the availability performance of a

> © 2013 De Carlo; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,

© 2013 De Carlo; licensee InTech. This is a paper distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

distribution, and reproduction in any medium, provided the original work is properly cited.

reliability basics starts with the coverage of the key concepts of probability.

Additional information is available at the end of the chapter

**Management**

Filippo De Carlo

**1. Introduction**

http://dx.doi.org/10.5772/54161

on their reliability and maintainability.

it often has to do with psychology and psychiatry.



### **1. Introduction**

The study of component and process reliability is the basis of many efficiency evaluations in Operations Management discipline. For example, in the calculation of the Overall Equipment Effectiveness (OEE) introduced by Nakajima [1], it is necessary to estimate a crucial parameter called availability. This is strictly related to reliability. Still as an example, consider how, in the study of service level, it is important to know the availability of machines, which again depends on their reliability and maintainability.

Reliability is defined as the probability that a component (or an entire system) will perform its function for a specified period of time, when operating in its design environment. The elements necessary for the definition of reliability are, therefore, an unambiguous criterion for judging whether something is working or not, and the exact definition of environmental and usage conditions. Reliability can then be defined as the time-dependent probability of correct operation, if we assume that a component is used for its intended function in its design environment and if we clearly define what we mean by "failure". Given this definition, any discussion of reliability basics starts with the coverage of the key concepts of probability.

A broader definition of reliability is that "reliability is the science to predict, analyze, prevent and mitigate failures over time." It is a science, with its theoretical basis and principles. It also has sub-disciplines, all related - in some way - to the study and knowledge of faults. Reliability is closely related to mathematics, and especially to statistics, physics, chemistry, mechanics and electronics. In the end, given that the human element is almost always part of the systems, it often has to do with psychology and psychiatry.

In addition to the prediction of system durability, reliability also tries to give answers to other questions. Indeed, we can also derive from reliability the availability performance of a system. In fact, availability depends on the time between two consecutive failures and on how long it takes to restore the system. Reliability studies can also be used to understand how faults can be avoided: one can try to prevent potential failures by acting on the design, materials and maintenance.


Reliability involves almost all aspects related to the possession of a property: cost management, customer satisfaction, the proper management of resources, the ability to sell products or services, and the safety and quality of the product.

This chapter presents a discussion of reliability theory, supported by practical examples of interest in operations management. Basic elements of probability theory, such as the sample space, random events and Bayes' theorem, should be reviewed for a deeper understanding.

### **2. Reliability basics**

The period of regular operation of a piece of equipment ends when a chemical-physical phenomenon, called a fault, occurs in one or more of its parts and alters its nominal performance. This makes the behavior of the device unacceptable, and the equipment passes from the operating state to a non-functioning state.

In Table 1 faults are classified according to their origin. For each failure mode an extended description is given.


| **Failure cause** | **Description** |
|---|---|
| Stress, shock, fatigue | Function of the temporal and spatial distribution of the load conditions and of the response of the material. The structural characteristics of the component play an important role, and should be assessed as broadly as possible, incorporating also possible design errors, embodiments, material defects, etc. |
| Temperature | Operational variable that depends mainly on the specific characteristics of the material (thermal inertia), as well as the spatial and temporal distribution of heat sources. |
| Wear | State of physical degradation of the component; it manifests itself as a result of aging phenomena that accompany the normal activities (friction between the materials, exposure to harmful agents, etc.). |
| Corrosion | Phenomenon that depends on the characteristics of the environment in which the component is operating. These conditions can lead to material degradation or chemical and physical processes that make the component no longer suitable. |

**Table 1.** Main causes of failure. The table shows the main causes of failure with a detailed description.

To study reliability you need to transform reality into a model, which allows the analysis by applying laws and analyzing its behavior [2]. Reliability models can be divided into static and dynamic ones. **Static models** assume that a failure does not result in the occurrence of other faults. **Dynamic reliability**, instead, assumes that some failures, so-called primary failures, promote the emergence of secondary and tertiary faults, with a cascading effect. In this text we will only deal with static models of reliability.

In the traditional paradigm of static reliability, individual components have a binary status: either working or failed. Systems, in turn, are composed of an integer number *n* of components, all mutually independent. Depending on how the components are configured in creating the system, and according to the operation or failure of individual components, the system either works or does not work.

Let's consider a generic system *X* consisting of *n* elements. The static reliability modeling implies that the operating status of the *i* - *th* component is represented by the state function *Xi* defined as:

$$\mathbf{X}\_i = \begin{cases} 1 & \text{if the } i \text{ - } th \text{ component works} \\ 0 & \text{if the } i \text{ - } th \text{ component fails} \end{cases} \tag{1}$$

The state of operation of the system is modeled by the state function Φ(*X* )

$$\Phi(\mathbf{X}) = \begin{cases} 1 & \text{if } \text{the system works} \\ 0 & \text{if the system fails} \end{cases} \tag{2}$$

The most common configuration of the components is the series system. A series system works if and only if all components work. Therefore, the status of a series system is given by the state function:

$$\Phi(X) = \prod\_{i=1}^{n} X\_i = \min\_{i \in \{1,2,\ldots,n\}} X\_i \tag{3}$$

where the symbol ∏ indicates the product of the arguments.

System configurations are often represented graphically with Reliability Block Diagrams (RBDs) where each component is represented by a block and the connections between them express the configuration of the system. The operation of the system depends on the ability to cross the diagram from left to right only by passing through the elements in operation. Figure 1 contains the RBD of a four components series system.

**Figure 1.** Reliability block diagram for a four components (1,2,3,4) series system.
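As a minimal sketch, the series state function of Eq. (3) can be written in a few lines of Python; the component state vectors below are arbitrary illustrative values:

```python
# State function of a series system, Eq. (3): Phi(X) = prod(X_i) = min(X_i).
# The system works (returns 1) only if every component works.
from math import prod

def phi_series(states):
    return prod(states)  # equivalently: min(states)

# Hypothetical state vector for the four components of Figure 1.
assert phi_series([1, 1, 1, 1]) == 1   # all up -> system up
assert phi_series([1, 0, 1, 1]) == 0   # one failure brings a series down
```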

The second most common configuration of the components is the parallel system. A parallel system works if and only if at least one component is working; it does not work if and only if all components do not work. So, if $\bar{\Phi}(X)$ is the function that represents the non-functioning state of the system and $\bar{X}\_i$ indicates the non-functioning of the *i* - *th* element, you can write:

$$\bar{\Phi}(X) = \prod\_{i=1}^{n} \bar{X}\_i \tag{4}$$


Accordingly, the state of a parallel system is given by the state function:

$$\Phi(X) = 1 - \prod\_{i=1}^{n} \left(1 - X\_i\right) = \coprod\_{i=1}^{n} X\_i = \max\_{i \in \{1,2,\ldots,n\}} X\_i \tag{5}$$

where the symbol ∐ indicates the complement of the product of the complements of the arguments. Figure 2 contains a RBD for a system of four components arranged in parallel.

**Figure 2.** Parallel system. The image represents the RBD of a system of four elements (1,2,3,4) arranged in a reliability parallel configuration.
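A companion sketch for the parallel state function of Eq. (5), again with arbitrary sample states:

```python
# State function of a parallel system, Eq. (5):
# Phi(X) = 1 - prod(1 - X_i) = max(X_i).
from math import prod

def phi_parallel(states):
    return 1 - prod(1 - x for x in states)  # equivalently: max(states)

# Hypothetical state vector for the four components of Figure 2.
assert phi_parallel([0, 0, 0, 0]) == 0   # all failed -> system down
assert phi_parallel([0, 1, 0, 0]) == 1   # one survivor keeps it up
```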

Another common configuration of the components is the series-parallel system. In these systems, components are configured using combinations of series and parallel configurations. An example of such a system is shown in Figure 3.

State functions for series-parallel systems are obtained by decomposition of the system. With this approach, the system is broken down into subsystems or configurations that are in series or in parallel. The state functions of the subsystems are then combined appropriately, depending on how they are configured. A schematic example is shown in Figure 4.


**Figure 3.** Series-parallel system. The picture shows the RBD of a system following the series-parallel model, with 9 elementary units.

**Figure 4.** Calculation of the state function of a series-parallel system. Referring to the configuration of Figure 3, the state function of the system is calculated by first computing the state functions of the parallels {1,2}, {3,4,5} and {6,7,8,9}. Then we evaluate the state function of the series of the three groups just obtained.

A particular component configuration, widely recognized and used, is the **parallel** *k* **out of** *n*. A system *k* out of *n* works if and only if at least *k* of the *n* components work. Note that a series system can be seen as a system *n* out of *n*, and a parallel system is a system 1 out of *n*. The state function of a system *k* out of *n* is given by the following algebraic system:

$$\Phi(X) = \begin{cases} 1 & \text{if } \sum\_{i=1}^{n} X\_i \ge k \\ 0 & \text{otherwise} \end{cases} \tag{6}$$
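Eq. (6) can be sketched as follows; the state vector is an arbitrary illustrative value:

```python
# State function of a k-out-of-n system, Eq. (6): the system works
# if and only if at least k of the n components work.
def phi_k_out_of_n(states, k):
    return 1 if sum(states) >= k else 0

states = [1, 1, 0, 1]                    # hypothetical 4-component system
assert phi_k_out_of_n(states, 2) == 1    # 3 working components >= 2
assert phi_k_out_of_n(states, 4) == 0    # 4-out-of-4 behaves as a series
assert phi_k_out_of_n(states, 1) == 1    # 1-out-of-n behaves as a parallel
```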


The RBD for a system *k* out of *n* has an appearance identical to the RBD of a parallel system of *n* components, with the addition of a label "*k* out of *n*". For other more complex system configurations, such as the bridge configuration (see Figure 5), we may use more intricate techniques, such as the minimal path set and the minimal cut set, to construct the system state function.

A Minimal Path Set - MPS is a subset of the components of the system such that the operation of all the components in the subset implies the operation of the system. The set is minimal because the removal of any element from the subset eliminates this property. An example is shown in Figure 5.

**Figure 5.** Minimal Path Set. The system on the left contains the minimal path set indicated by the arrows and shown in the right part. Each of them represents a minimal subset of the components of the system such that the operation of all the components in the subset implies the operation of the system.

A Minimal Cut Set - MCS is a subset of the components of the system such that the failure of all the components in the subset implies the failure of the system. Again, the set is called minimal because the removal of any component from the subset clears this property (see Figure 6).

**Figure 6.** Minimal Cut Set. The system on the left contains the minimal cut sets, indicated by the dashed lines, shown in the right part. Each of them represents a minimal subset of the components of the system such that the failure of all the components in the subset implies the failure of the system.

MCS and MPS can be used to build equivalent configurations of more complex systems, not referable to the simple series-parallel model. The first equivalent configuration is based on the consideration that the operation of all the components in at least one MPS entails the operation of the system. This configuration is, therefore, constructed by creating a series subsystem for each minimal path, using only the components of that set. Then, these subsystems are connected in parallel. An example of an equivalent system is shown in Figure 7.

**Figure 7.** Equivalent configurations with MPS. You build a series subsystem for each MPS. Then such subsystems are connected in parallel.

The second equivalent configuration is based on the logical principle that the failure of all the components of any MCS implies the fault of the system. This configuration is built by creating a parallel subsystem for each MCS, using only the components of that group. Then, these subsystems are connected in series (see Figure 8).

After examining the components and the status of the system, the next step in the static modeling of reliability is that of considering the probability of operation of the component and of the system.

**Figure 8.** Equivalent configurations with MCS. You build a subsystem in parallel for each MCS. Then the subsystems are connected in series.
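The two equivalent constructions can be sketched as follows. The path and cut sets below are illustrative placeholders for a hypothetical 2-out-of-3 system, not the sets shown in the figures:

```python
# Evaluating a system state from its Minimal Path Sets (MPS) or
# Minimal Cut Sets (MCS). Component states: dict mapping name -> 0/1.

def works_by_mps(states, mps_list):
    # Parallel of series subsystems: the system works if, in at least
    # one minimal path set, every component works.
    return int(any(all(states[c] == 1 for c in mps) for mps in mps_list))

def works_by_mcs(states, mcs_list):
    # Series of parallel subsystems: the system fails if all the
    # components of at least one minimal cut set have failed.
    return int(not any(all(states[c] == 0 for c in mcs) for mcs in mcs_list))

# Hypothetical 2-out-of-3 example: both the MPS and the MCS are all
# the pairs of components.
mps = [{'1', '2'}, {'1', '3'}, {'2', '3'}]
mcs = [{'1', '2'}, {'1', '3'}, {'2', '3'}]

states = {'1': 1, '2': 0, '3': 1}
assert works_by_mps(states, mps) == 1   # path {1,3} is fully working
assert works_by_mcs(states, mcs) == 1   # no cut set has failed entirely
```

Both evaluations agree for every state vector, which is exactly what the equivalence of the two constructions asserts.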

The reliability *Ri* of the *i* - *th* component is defined by:

$$R\_i = P\left(X\_i = 1\right) \tag{7}$$


while the **reliability of the system** *R* is defined as:

$$R = P\left(\Phi(X) = 1\right) \tag{8}$$

The methodology used to calculate the reliability of the system depends on the configuration of the system itself. For a series system, the reliability of the system is given by the product of the individual reliability (law of Lusser, defined by German engineer Robert Lusser in the 50s):

$$R = \prod\_{i=1}^{n} R\_i \qquad \text{since} \qquad R = P\left(\bigcap\_{i=1}^{n} \left(X\_i = 1\right)\right) = \prod\_{i=1}^{n} P\left(X\_i = 1\right) = \prod\_{i=1}^{n} R\_i \tag{9}$$

For an example, see Figure 9.

**Figure 9.** Serial system consisting of 4 elements with reliability equal to 0.98, 0.99, 0.995 and 0.975. The reliability of the whole system is given by their product: *R* = 0.98 · 0.99 · 0.995 · 0.975 = 0.941
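The computation in Figure 9 can be reproduced directly from Eq. (9):

```python
# Series system of Figure 9: reliability is the product of the
# component reliabilities (Eq. 9).
from math import prod

R = prod([0.98, 0.99, 0.995, 0.975])
assert round(R, 3) == 0.941
```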

For a parallel system, reliability is:


$$R = 1 \ -\prod\_{i=1}^{n} \left(1 \ -R\_i\right) = \coprod\_{i=1}^{n} R\_i \tag{10}$$

In fact, from the definition of system reliability and by the properties of event probabilities, it follows:

$$R = P\left(\bigcup\_{i=1}^{n} \left(X\_i = 1\right)\right) = 1 - P\left(\bigcap\_{i=1}^{n} \left(X\_i = 0\right)\right) = 1 - \prod\_{i=1}^{n} P\left(X\_i = 0\right) = 1 - \prod\_{i=1}^{n} \left[1 - P\left(X\_i = 1\right)\right] = 1 - \prod\_{i=1}^{n} \left(1 - R\_i\right) = \coprod\_{i=1}^{n} R\_i \tag{11}$$

In many parallel systems, components are identical. In this case, the reliability of a parallel system with *n* elements is given by:

$$R = 1 - \left(1 - R\_i\right)^n \tag{12}$$


**Figure 10.** A parallel system consisting of 4 elements with the same reliability of 0.85. The system reliability is given by their co-product: 1 - (1 - 0.85)<sup>4</sup> =0.9995.
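The parallel case of Figure 10 follows the same pattern, using the co-product of Eq. (12):

```python
# Parallel system of Figure 10: four identical components, R_i = 0.85.
R = 1 - (1 - 0.85) ** 4
assert round(R, 4) == 0.9995
```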

For a series-parallel system, system reliability is determined using the same approach of decomposition used to construct the state function for such systems. Consider, for instance, the system drawn in Figure 11, consisting of 9 elements with reliability *R*<sup>1</sup> =*R*<sup>2</sup> =0.9; *R*<sup>3</sup> =*R*<sup>4</sup> =*R*<sup>5</sup> =0.8 and *R*<sup>6</sup> =*R*<sup>7</sup> =*R*<sup>8</sup> =*R*<sup>9</sup> =0.7. Let's calculate the overall reliability of the system.

**Figure 11.** The system consists of three groups of blocks arranged in series. Each block is, in turn, formed by elements in parallel. First we must calculate *R*1,2 =1 - (1 - 0.9)<sup>2</sup> =0.99. Then it is possible to estimate *R*3,4,5 =1 - (1 - 0.8)<sup>3</sup> =0.992. Next we calculate the reliability of the last parallel block *R*6,7,8,9 =1 - (1 - 0.7)<sup>4</sup> =0.9919. Finally, we proceed to the series of the three blocks: *R* = *R*1,2 ∙ *R*3,4,5 ∙ *R*6,7,8,9 =0.974.
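The decomposition of Figure 11 can be verified numerically; the helper below reduces each parallel block before multiplying the blocks in series:

```python
# Decomposition of the series-parallel system of Figure 11:
# reduce each parallel block, then multiply the blocks in series.
def parallel(*rs):
    p = 1.0
    for r in rs:
        p *= 1 - r
    return 1 - p

R12 = parallel(0.9, 0.9)               # 1 - (1 - 0.9)^2 = 0.99
R345 = parallel(0.8, 0.8, 0.8)         # 1 - (1 - 0.8)^3 = 0.992
R6789 = parallel(0.7, 0.7, 0.7, 0.7)   # 1 - (1 - 0.7)^4 = 0.9919
R = R12 * R345 * R6789                 # series of the three blocks
assert round(R, 3) == 0.974
```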

To calculate the overall reliability of all other types of systems, which cannot be brought back to a series-parallel scheme, a more intensive calculation approach [3] must be adopted, normally with the aid of special software.

Reliability functions of the system can also be used to calculate measures of **reliability importance**.

These measurements are used to assess which components of a system offer the greatest opportunity to improve the overall reliability. The most widely recognized definition of the reliability importance *Ii* of a component is the **reliability marginal gain**: the rise in overall system functionality obtained by a marginal increase of the component reliability:

$$I\_i = \frac{\partial R}{\partial R\_i} \tag{13}$$
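Eq. (13) can be checked numerically with a finite-difference approximation; the three-component series system used here is an illustrative assumption, for which the marginal gain reduces to the product of the other component reliabilities:

```python
# Reliability importance I_i = dR/dR_i (Eq. 13), approximated by a
# finite difference for a hypothetical three-component series system.
def series_R(rs):
    p = 1.0
    for r in rs:
        p *= r
    return p

rs = [0.9, 0.8, 0.7]
eps = 1e-6
bumped = rs.copy()
bumped[0] += eps
I1 = (series_R(bumped) - series_R(rs)) / eps
# In a series system the marginal gain of component 1 is the product
# of the other reliabilities: R_2 * R_3 = 0.56.
assert abs(I1 - 0.8 * 0.7) < 1e-6
```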


Reliability and Maintainability in Operations Management

http://dx.doi.org/10.5772/54161


For other system configurations, an alternative approach facilitates the calculation of the reliability importance of the components. Let *R*(1<sub>i</sub>) be the reliability of the system modified so that *R*<sub>i</sub> =1, and *R*(0<sub>i</sub>) the reliability of the system modified so that *R*<sub>i</sub> =0, keeping all other components unchanged. In this context, the reliability importance *I*<sub>i</sub> is given by:

$$I\_i = R(1\_i) - R(0\_i) \tag{14}$$

In a series system, this formulation is equivalent to writing:

$$I\_i = \prod\_{j=1 \atop j \neq i}^{n} R\_j \tag{15}$$

Thus, the most important component (in terms of reliability) in a series system is the least reliable one. For example, consider three elements with reliability *R*<sub>1</sub> =0.9, *R*<sub>2</sub> =0.8 and *R*<sub>3</sub> =0.7. It follows that *I*<sub>1</sub> =0.8∙0.7=0.56, *I*<sub>2</sub> =0.9∙0.7=0.63 and *I*<sub>3</sub> =0.9∙0.8=0.72, which is the highest value.

If the system is arranged in parallel, the reliability importance becomes as follows:

$$I\_i = \prod\_{j=1 \atop j \neq i}^{n} (1 - R\_j) \tag{16}$$

It follows that the most important component in a parallel system is the most reliable one. With the same data as the previous example, this time with a parallel arrangement, we can verify Eq. 16 for the first item: *I*<sub>1</sub> =*R*(1<sub>1</sub>) - *R*(0<sub>1</sub>) = [1 - (1 - 1)∙(1 - 0.8)∙(1 - 0.7)] - [1 - (1 - 0)∙(1 - 0.8)∙(1 - 0.7)] = 1 - 0 - 1 + (1 - 0.8)∙(1 - 0.7) = (1 - 0.8)∙(1 - 0.7) = 0.06.
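Eq. 14 gives a mechanical recipe that works for any block diagram whose reliability we can evaluate. The sketch below (helper names are illustrative, not from the chapter) recomputes both worked examples, the series importances 0.56, 0.63, 0.72 and the parallel importances:

```python
# Eq. 14 as a generic recipe: I_i = R(1_i) - R(0_i).
# system_rel maps a list of component reliabilities to the system reliability.
def importance(system_rel, rel, i):
    hi = list(rel); hi[i] = 1.0   # component i forced to work
    lo = list(rel); lo[i] = 0.0   # component i forced to fail
    return system_rel(hi) - system_rel(lo)

def series_rel(rel):
    out = 1.0
    for r in rel:
        out *= r
    return out

def parallel_rel(rel):
    fail = 1.0
    for r in rel:
        fail *= (1.0 - r)
    return 1.0 - fail

R = [0.9, 0.8, 0.7]
series_I = [importance(series_rel, R, i) for i in range(3)]     # [0.56, 0.63, 0.72]
parallel_I = [importance(parallel_rel, R, i) for i in range(3)] # [0.06, 0.03, 0.02]
```

As the text states, the least reliable component dominates the series ranking, while the most reliable one dominates the parallel ranking.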

To calculate the reliability importance of components belonging to complex systems, which cannot be reduced to the simple series-parallel scheme, the reliability of several modified systems must be evaluated. For this reason the calculation is often performed with automated algorithms.

### **3. Fleet reliability**


Suppose we have studied the reliability of a component and found it to be 80% for a mission duration of 3 hours. Knowing that 5 identical items are simultaneously active, we might be interested in the overall reliability of the group. In other words, we want to know the probability of having a certain number of items functioning at the end of the 3-hour mission. This issue is best known as fleet reliability.

Consider a set of *m* identical and independent systems at a given instant, each having reliability *R*. The group may represent a set of systems in use, or a set of devices under test. A discrete random variable of great interest in reliability is *N*, the number of functioning items. Under the assumptions specified, *N* is a binomial random variable, which expresses the probability of a Bernoulli process. The corresponding probabilistic model is the one that describes the extraction of balls from an urn filled with a known number of red and green balls, where the percentage *R* of green balls coincides with the reliability after 3 hours. After each extraction, the ball is put back in the container. The extraction is repeated *m* times, and we look for the probability of finding *n* green balls. The sequence of random variables thus obtained is a Bernoulli process in which each extraction is a trial. Since the probability of obtaining *N* successes in *m* extractions with replacement follows the binomial distribution B(*m*, *R*), the probability mass function of *N* is the well-known:

$$P(N = n) = \frac{m!}{n!\,(m-n)!} R^{\,n} (1-R)^{m-n} \tag{17}$$


The expected value of *N* is given by *E*(*N*) = *μ*<sub>N</sub> = *m*∙*R* and the standard deviation is *σ*<sub>N</sub> = (*m*∙*R*∙(1 - *R*))<sup>1/2</sup>.
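Applied to the opening example of this section (*m* = 5 items, *R* = 0.8 after 3 hours), these formulas give the fleet's expected number of survivors and its dispersion. A short sketch, with `comb` used for the pmf of Eq. 17:

```python
from math import comb, sqrt

# Fleet of the opening example: 5 identical, independent items,
# each with reliability R = 0.8 for the 3-hour mission.
m, R = 5, 0.8

mean_working = m * R                   # E(N) = mu_N = m * R
std_working = sqrt(m * R * (1 - R))    # sigma_N = sqrt(m * R * (1 - R))

def pmf(n):
    # Eq. 17: probability that exactly n items survive the mission
    return comb(m, n) * R**n * (1 - R)**(m - n)

p_all_working = pmf(5)   # all five items complete the mission
```

Here *E*(*N*) = 4 items, *σ*<sub>N</sub> ≈ 0.89, and the chance that the whole fleet survives is 0.8<sup>5</sup> ≈ 0.33.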

Let's consider, for example, a corporate fleet consisting of 100 independent and identical systems. All systems have the same mission, independent from the other missions, and each system has a mission reliability of 90%. We want to calculate the average number of completed missions and the probability that at least 95% of the systems complete their mission. This involves analyzing the distribution of the binomial random variable characterized by *R* = 0.90 and *m* = 100. The expected value is given by *E*(*N*) = *μ*<sub>N</sub> = 100∙0.9 = 90.

The probability that at least 95% of the systems complete their mission can be calculated as the sum of the probabilities that exactly 95, 96, 97, 98, 99 or 100 elements of the fleet complete their mission:

$$P(N \ge 95) = \sum\_{n=95}^{100} \frac{m!}{n!\,(m-n)!} R^{\,n} (1-R)^{m-n} = 0.058 \tag{18}$$
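The value 0.058 can be reproduced by summing the binomial pmf directly; a quick check (Python assumed):

```python
from math import comb

# Eq. 18: fleet of m = 100 identical, independent systems,
# each with mission reliability R = 0.90.
m, R = 100, 0.90

# P(N >= 95): sum the binomial pmf over n = 95..100
p_at_least_95 = sum(comb(m, n) * R**n * (1 - R)**(m - n) for n in range(95, 101))
```

This evaluates to about 0.0576, which the chapter rounds to 0.058.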

### **4. Time dependent reliability models**

When reliability is expressed as a function of time, the continuous, non-negative random variable of interest is *T*, the instant of failure of the device. Let *f*(*t*) be the probability density function of *T*, and let *F*(*t*) be the cumulative distribution function of *T*. *F*(*t*) is also known as the failure function or unreliability function [4].

In the context of reliability, two additional functions are often used: the **reliability function** and the **hazard function**. Let's define the **reliability** *R*(*t*) as the survival function:

$$R(t) = P(T \ge t) = 1 - F(t) \tag{19}$$

The **Mean Time To Failure - MTTF** is defined as the expected value of the failure time:

$$MTTF = E(T) = \int\_0^{\infty} t \bullet f(t) \bullet dt \tag{20}$$

Integrating by parts, we can prove the equivalent expression:

$$MTTF = E(T) = \int\_0^{\infty} R(t) \bullet dt \tag{21}$$
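Eqs. 20 and 21 can be verified numerically for any concrete lifetime model. The sketch below uses an assumed Weibull distribution (shape β = 2, scale η = 1000 h, both illustrative, not from the chapter) and checks that the two integrals agree:

```python
import math

# Numerical check that Eq. 20 and Eq. 21 give the same MTTF,
# for an assumed Weibull time-to-failure (beta = 2, eta = 1000 h).
beta, eta = 2.0, 1000.0

def f(t):
    # Probability density of failure
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def R(t):
    # Survival (reliability) function
    return math.exp(-((t / eta) ** beta))

dt = 0.5
grid = [i * dt for i in range(1, 40000)]          # 0.5 h .. ~20000 h
mttf_from_f = sum(t * f(t) * dt for t in grid)    # Eq. 20
mttf_from_R = sum(R(t) * dt for t in grid)        # Eq. 21
```

Both Riemann sums approximate the analytic value η·Γ(1 + 1/β) ≈ 886 h.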

### **5. Hazard function**

Another very important function is the **hazard function**, denoted by *λ*(*t*), defined as the trend of the instantaneous failure rate at time *t* of an element that has survived up to that time *t*. The failure rate is the ratio between the instantaneous probability of failure in a neighborhood of *t*, conditioned on the element being healthy at *t*, and the amplitude of that neighborhood.

The hazard function *λ*(*t*) [5] coincides with the intensity function *z*(*t*) of a Poisson process. The hazard function is given by:

$$\lambda(t) = \lim\_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t} \tag{22}$$

Thanks to Bayes' theorem, it can be shown that the relationship between the hazard function, density of probability of failure and reliability is the following:

$$
\lambda\left(t\right) = \frac{f\left(t\right)}{R\left(t\right)}\tag{23}
$$

Thanks to the previous equation, with some simple mathematical manipulations, we obtain the following relation:

$$R(t) = e^{-\int\_0^t \lambda(u) \bullet du} \tag{24}$$

In fact, since ln *R*(0) =ln 1 =0, we have:


$$\lambda(t) = \frac{f(t)}{R(t)} = \frac{1}{R(t)} \bullet \frac{dF(t)}{dt} = -\frac{1}{R(t)} \bullet \frac{dR(t)}{dt} \;\rightarrow\; \frac{dR(t)}{R(t)} = -\lambda(t)\,dt \;\rightarrow\; \ln R(t) - \ln R(0) = -\int\_0^t \lambda(u)\,du \tag{25}$$

From equation 24 we derive the other two fundamental relations:

$$F(t) = 1 - e^{-\int\_0^t \lambda(u) \bullet du} \qquad f(t) = \lambda(t) \bullet e^{-\int\_0^t \lambda(u) \bullet du} \tag{26}$$
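The consistency of Eqs. 23 through 26 is easy to check for a non-constant hazard. The sketch below assumes a linearly increasing hazard λ(*t*) = *a*·*t* (the Rayleigh case; the value of *a* is arbitrary), for which the cumulative hazard integrates in closed form:

```python
import math

# Consistency check of Eqs. 23-26 for an assumed linearly
# increasing hazard lambda(t) = a*t (Rayleigh case).
a = 0.02

def lam(t):
    # Hazard function
    return a * t

def cum_hazard(t):
    # Integral of lambda(u) du from 0 to t, in closed form
    return a * t**2 / 2

def R(t):
    # Eq. 24
    return math.exp(-cum_hazard(t))

def F(t):
    # Eq. 26, first relation
    return 1 - math.exp(-cum_hazard(t))

def f(t):
    # Eq. 26, second relation
    return lam(t) * math.exp(-cum_hazard(t))

t = 7.0
ratio = f(t) / R(t)   # Eq. 23 says this must equal lam(t)
```

At any *t*, `ratio` matches the hazard and *F*(*t*) + *R*(*t*) = 1, as the definitions require.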

The most popular conceptual model of the hazard function is the **bathtub curve**. According to this model, the failure rate of the device is relatively high and descending in the first part of the device life, due to the potential manufacturing defects, called **early failures**. They manifest themselves in the first phase of operation of the system and their causes are often linked to structural deficiencies, design or installation defects. In terms of reliability, a system that manifests infantile failures improves over the course of time.

Later, at the end of the life of the device, the failure rate increases due to wear phenomena. They are caused by alterations of the component for material and structural aging. The beginning of the period of wear is identified by an increase in the frequency of failures which continues as time goes by. The **wear-out failures** occur around the average age of operating; the only way to avoid this type of failure is to replace the population in advance.

Between the period of early failures and of wear-out, the failure rate is about constant: failures are due to random events and are called **random failures**. They occur in non-nominal operating conditions, which put a strain on the components, resulting in the inevitable changes and the consequent loss of operational capabilities. This type of failure occurs during the useful life of the system and corresponds to unpredictable situations. The central period with constant failure rate is called **useful life**. The juxtaposition of the three periods in a graph which represents the trend of the failure rate of the system, gives rise to a curve whose characteristic shape recalls the section of a bathtub, as shown in Figure 12.


**Figure 12.** Bathtub curve. The hazard function shape allows us to identify three areas: the initial period of the early failures, the middle time of the useful life and the final area of wear-out.

The most common mathematical classifications of the hazard curve are the so-called **Constant Failure Rate - CFR**, **Increasing Failure Rate - IFR** and **Decreasing Failure Rate - DFR**.

The CFR model is based on the assumption that the failure rate does not change over time. Mathematically, this is the simplest model, built on the principle that faults are purely random events. The IFR model assumes that the failure rate grows over time: faults become more likely because of wear, as is frequently found in mechanical components. The DFR model assumes that the failure rate decreases over time, so failures become less likely as time goes by, as occurs in some electronic components.
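The three regimes can be illustrated with a Weibull-type hazard λ(*t*) = (β/η)(*t*/η)<sup>β-1</sup>, a standard parametric family not introduced in this section: β < 1 gives DFR, β = 1 gives CFR, and β > 1 gives IFR. A minimal sketch:

```python
# DFR / CFR / IFR illustrated with a Weibull-type hazard
# lambda(t) = (beta/eta) * (t/eta)**(beta - 1); eta is an arbitrary scale.
def hazard(t, beta, eta=100.0):
    return (beta / eta) * (t / eta) ** (beta - 1)

t_early, t_late = 10.0, 50.0
dfr = hazard(t_late, 0.5) < hazard(t_early, 0.5)   # beta < 1: failure rate decreases
cfr = hazard(t_late, 1.0) == hazard(t_early, 1.0)  # beta = 1: failure rate constant
ifr = hazard(t_late, 2.0) > hazard(t_early, 2.0)   # beta > 1: failure rate increases
```

A full bathtub curve is then just a DFR phase, a CFR phase and an IFR phase joined end to end.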

Since the failure rate may change over time, one can define a reliability parameter that behaves as if there were a kind of counter accumulating hours of operation. The **residual reliability** function *R*(*t* + *t*<sub>0</sub> | *t*<sub>0</sub>), in fact, measures the reliability of a given device which has already survived a determined time *t*<sub>0</sub>. The function is defined as follows:


$$R\left(t + t\_0 \mid t\_0\right) = P\left(T > t + t\_0 \mid T > t\_0\right) \tag{27}$$

Applying Bayes' theorem we have:


$$P\left(T > t + t\_0 \mid T > t\_0\right) = \frac{P\left(T > t\_0 \mid T > t + t\_0\right) \bullet P\left(T > t + t\_0\right)}{P\left(T > t\_0\right)}\tag{28}$$

And, given that *P*(*T* >*t*<sup>0</sup> |*T* >*t* + *t*0) =1, we obtain the final expression, which determines the residual reliability:

$$R\left(t+t\_0 \mid t\_0\right) = \frac{R\left(t+t\_0\right)}{R\left(t\_0\right)}\tag{29}$$

The **residual Mean Time To Failure** – **residual MTTF** measures the expected value of the residual life of a device that has already survived a time *t*0:

$$MTTF(t\_0) = E(T - t\_0 \mid T > t\_0) = \int\_0^{\infty} R(t + t\_0 \mid t\_0) \bullet dt \tag{30}$$

For an IFR device, the residual reliability and the residual MTTF decrease progressively as the device accumulates hours of operation. This behavior explains the use of preventive actions to avoid failures. For a DFR device, both the residual reliability and the residual MTTF increase while the device accumulates hours of operation. This behavior motivates the use of an intense initial run-in (burn-in) to avoid failures in the field.
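The IFR behavior can be made concrete with a numerical version of Eqs. 29 and 30. The sketch below assumes a Rayleigh reliability *R*(*t*) = exp(-(*t*/η)<sup>2</sup>), an IFR model with an illustrative η, and shows the residual MTTF shrinking as *t*<sub>0</sub> grows:

```python
import math

# Residual MTTF (Eqs. 29-30) for an assumed IFR device with
# Rayleigh reliability R(t) = exp(-(t/eta)^2); eta is illustrative.
eta = 100.0

def R(t):
    return math.exp(-((t / eta) ** 2))

def residual_mttf(t0, dt=0.01, horizon=1000.0):
    # Numerical integral of R(t + t0)/R(t0) dt over t in (0, horizon]
    steps = int(horizon / dt)
    return sum(R(i * dt + t0) / R(t0) * dt for i in range(1, steps + 1))

fresh = residual_mttf(0.0)    # MTTF of a new device
aged = residual_mttf(50.0)    # residual MTTF after surviving t0 = 50
```

For this model `aged` comes out well below `fresh`, which is exactly why preventive replacement pays off for IFR items.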

The **Mean Time To Failure** –**MTTF**, measures the expected value of the life of a device and coincides with the residual time to failure, where *t*<sup>0</sup> =0. In this case we have the following relationship:

$$MTTF = MTTF(0) = E(T \mid T > 0) = \int\_0^{\infty} R(t) \bullet dt \tag{31}$$

The **characteristic life** of a device is the time *t*<sub>C</sub> corresponding to a reliability *R*(*t*<sub>C</sub>) equal to 1/*e*, that is, the time for which the area under the hazard function is unitary:

$$R(t\_C) = e^{-1} = 0.368 \;\rightarrow\; \int\_0^{t\_C} \lambda(u) \bullet du = 1 \tag{32}$$

Let us consider a CFR device with a constant failure rate *λ*. The time-to-failure is an exponential random variable; in fact, the probability density function of failure is that of an exponential distribution:

$$f(t) = \lambda(t) \bullet e^{-\int\_0^t \lambda(u) \bullet du} = \lambda e^{-\lambda \bullet t} \tag{33}$$

The corresponding cumulative distribution function *F* (*t*)is:

$$F(t) = \int\_0^t f(z)\, dz = \int\_0^t \lambda e^{-\lambda \bullet z}\, dz = 1 - e^{-\lambda \bullet t} \tag{34}$$

The reliability function *R*(*t*)is the survival function:

$$R(t) = 1 - F(t) = e^{-\lambda \bullet t} \tag{35}$$

*<sup>P</sup>*(*<sup>t</sup>* <sup>&</sup>lt;*<sup>T</sup>* <sup>&</sup>lt;*<sup>t</sup>* <sup>+</sup> *dt* <sup>|</sup>*<sup>T</sup>* <sup>&</sup>gt;*t*)= *<sup>P</sup>*(*<sup>T</sup>* <sup>&</sup>gt; *<sup>t</sup>* <sup>|</sup> *<sup>t</sup>* <sup>&</sup>lt; *<sup>T</sup>* <sup>&</sup>lt; *<sup>t</sup>* <sup>+</sup> *dt*) <sup>∙</sup> *<sup>P</sup>*(*<sup>t</sup>* <sup>&</sup>lt; *<sup>T</sup>* <sup>&</sup>lt; *<sup>t</sup>* <sup>+</sup> *dt*)

with λ =1.

**Figure 13.** Probability density function and cumulative distribution of an exponential function. In the figure is seen

*<sup>P</sup>*(*<sup>T</sup>* <sup>&</sup>gt; *<sup>t</sup>*) <sup>=</sup> *<sup>f</sup>* (*t*)*dt*

As can be seen, this probability does not depend on *t*, i.e. it is not function of the life time already elapsed. It is as if the component does not have a memory of its own history and it is

The use of the constant failure rate model, facilitates the calculation of the characteristic life of


Therefore, the characteristic life, in addition to be calculated as the time value *tC* for which the

*<sup>λ</sup> <sup>e</sup>*-*λ*∙*t*<sup>|</sup> <sup>∞</sup>

<sup>0</sup> <sup>=</sup> - <sup>0</sup> *<sup>λ</sup>* + 1 *<sup>λ</sup>* <sup>=</sup> <sup>1</sup>

reliability is 0.368, can more easily be evaluated as the reciprocal of the failure rate.

<sup>∞</sup>*e*-*λ*∙*<sup>t</sup>* <sup>∙</sup>*dt* =- <sup>1</sup>

In the CFR model, then, the MTTF and the characteristic life coincide and are equal to 1

The definition of MTTF, in the CFR model, can be integrated by parts and give:

0

*<sup>e</sup>* -*λ*∙*<sup>t</sup>* <sup>=</sup> *<sup>λ</sup><sup>e</sup>* -*λ*∙*<sup>t</sup>*

*dt*

Since *P*(*T* >*t* |*t* <*T* <*t* + *dt*)=1, being a certainty, it follows:

and of *f* (*t*)=λ ∙*e* -λ∙*<sup>t</sup>*

the trend of *f* (*t*)=λ ∙*e* -λ∙*<sup>t</sup>*

*<sup>P</sup>*(*<sup>t</sup>* <sup>&</sup>lt;*<sup>T</sup>* <sup>&</sup>lt;*<sup>t</sup>* <sup>+</sup> *dt* <sup>|</sup>*<sup>T</sup>* <sup>&</sup>gt;*t*)= *<sup>P</sup>*(*<sup>t</sup>* <sup>&</sup>lt; *<sup>T</sup>* <sup>&</sup>lt; *<sup>t</sup>* <sup>+</sup> *dt*)

for this reason that the exponential distribution is called **memoryless**.

*R*(*tC*) =*e*

<sup>∞</sup>*R*(*t*)∙*dt* =*∫*

*MTTF* =*∫* 0

a device. In fact for a CFR item, *tC*is the reciprocal of the failure rate. In fact:

*<sup>P</sup>*(*<sup>T</sup>* <sup>&</sup>gt; *<sup>t</sup>*) (40)

Reliability and Maintainability in Operations Management

http://dx.doi.org/10.5772/54161

97

*<sup>e</sup>* -*λ*∙*<sup>t</sup>* <sup>=</sup>*<sup>λ</sup>* <sup>∙</sup>*dt* (41)

*<sup>λ</sup>* (42)

*<sup>λ</sup>* (43)

*λ* .

For CFR items, the residual reliability and the residual MTTF both remain constant when the device accumulates hours of operation. In fact, from the definition of residual reliability, ∀*t*0∈ 0, *∞* , we have:

$$R\left(t + t\_0 \mid t\_0\right) = \frac{R\left(t + t\_0\right)}{R\left(t\_0\right)} = \frac{e^{-\lambda \bullet \left(t + t\_0\right)}}{e^{-\lambda \bullet t\_0}} = e^{-\lambda \bullet \left(t + t\_0\right) + \lambda \bullet t\_0} = e^{-\lambda \bullet t} = R\left(t\right) \tag{36}$$

Similarly, for the residual MTTF, is true the invariance in time:

$$MTTF\left(t\_0\right) = \left\| \begin{pmatrix} t \ \left(t + t\_0 \mid t\_0\right) \bullet dt = \emptyset \end{pmatrix} \bullet \begin{pmatrix} t \ \left(\bullet \ \left.dt\right\rangle \end{pmatrix} \begin{pmatrix} 0 \ \left.\begin{pmatrix} 0 \ \left.\begin{pmatrix} 0 \ \end{pmatrix}\right\} \end{pmatrix} \end{pmatrix} \tag{37}$$

This behavior implies that the actions of prevention and running are useless for CFR devices. Figure 13 shows the trend of the function *f* (*t*)=*λ* ∙*e*-*λ*∙*<sup>t</sup>* and of the cumulative distribution function *F* (*t*)=1 - *e*-*λ*∙*<sup>t</sup>* for a constant failure rate *λ* =1. In this case, since *λ* =1, the probability density function and the reliability function, overlap: *f* (*t*)=*R*(*t*)=*e*-*<sup>t</sup>* .

The probability of having a fault, not yet occurred at time *t*, in the next *dt*, can be written as follows:

$$P\{t \le T \le t + dt \mid T > t\}\tag{38}$$

Recalling the Bayes' theorem, in which we consider the probability of an hypothesis H, being known the evidence E:

$$P(H \mid E) = \frac{P(E \mid H) \bullet P(H)}{P(E)} \tag{39}$$

we can replace the evidence E with the fact that the fault has not yet taken place, from which we obtain *P*(*E*)→*P*(*T* >*t*). We also exchange the hypothesis H with the occurrence of the fault in the neighborhood of *t*, obtaining *P*(*H* )→ *P*(*t* <*T* <*t* + *dt*). So we get:

**Figure 13.** Probability density function and cumulative distribution function of the exponential distribution. The figure shows the trend of *f* (*t*)=λ ∙*e*<sup>-λ∙*t*</sup> and of *F* (*t*)=1 - *e*<sup>-λ∙*t*</sup> with λ =1.

$$P\left\{t \le T < t + dt \mid T > t\right\} = \frac{P\left\{T > t \mid t < T < t + dt\right\} \bullet P\left\{t < T < t + dt\right\}}{P\left\{T > t\right\}}\tag{40}$$

Since *P*(*T* >*t* |*t* <*T* <*t* + *dt*)=1, being a certainty, it follows:


$$P\left(t < T < t + dt \mid T > t\right) = \frac{P\left(t < T < t + dt\right)}{P(T > t)} = \frac{f\left(t\right)dt}{e^{-\lambda \bullet t}} = \frac{\lambda e^{-\lambda \bullet t}dt}{e^{-\lambda \bullet t}} = \lambda \bullet dt \tag{41}$$

As can be seen, this probability does not depend on *t*, i.e. it is not function of the life time already elapsed. It is as if the component does not have a memory of its own history and it is for this reason that the exponential distribution is called **memoryless**.
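The memoryless property is easy to verify numerically. A minimal sketch in Python (the failure rate value is arbitrary, chosen only for illustration):

```python
import math

def reliability(t: float, lam: float) -> float:
    """Reliability of a CFR item: R(t) = exp(-lambda * t) (equation 35)."""
    return math.exp(-lam * t)

def residual_reliability(t: float, t0: float, lam: float) -> float:
    """Residual reliability R(t + t0 | t0) = R(t + t0) / R(t0) (equation 36)."""
    return reliability(t + t0, lam) / reliability(t0, lam)

lam = 0.001  # arbitrary failure rate [failures/h]

# Memoryless: the residual reliability does not depend on the accumulated age t0.
for t0 in (0.0, 500.0, 5000.0):
    assert abs(residual_reliability(1000.0, t0, lam) - reliability(1000.0, lam)) < 1e-12
```

The loop confirms that the probability of surviving a further 1000 hours is the same whether the item is new or has already run 5000 hours.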

The use of the constant failure rate model facilitates the calculation of the characteristic life of a device: for a CFR item, *t<sub>C</sub>* is simply the reciprocal of the failure rate. In fact:

$$R\left(t_C\right) = e^{-\lambda \bullet t_C} = e^{-1} \to t_C = \frac{1}{\lambda} \tag{42}$$

Therefore, the characteristic life, in addition to being calculated as the time value *t<sub>C</sub>* for which the reliability is 0.368, can more easily be evaluated as the reciprocal of the failure rate.

In the CFR model, the MTTF can be obtained by integrating the reliability function:

$$MTTF = \int_0^{\infty} R\left(t\right) \bullet dt = \int_0^{\infty} e^{-\lambda \bullet t} \bullet dt = -\frac{1}{\lambda} e^{-\lambda \bullet t} \Big|_0^{\infty} = 0 + \frac{1}{\lambda} = \frac{1}{\lambda} \tag{43}$$

In the CFR model, then, the MTTF and the characteristic life coincide and are equal to 1 / *λ*. Let us consider, for example, a component with a constant failure rate *λ* =0.0002 failures per hour. We want to calculate the MTTF of the component and its reliability after 10,000 hours of operation. We'll then calculate the probability that the component survives another 10,000 hours. Assuming, finally, that it has worked without failure for the first 6,000 hours, we'll calculate the expected value of the remaining life of the component.

From equation 43 we have:

$$MTTF = \frac{1}{\lambda} = \frac{1}{0.0002\left[\frac{\text{failures}}{\text{h}}\right]} = 5000\left[\text{h}\right] \tag{44}$$


From the reliability law *R*(*t*)=*e*<sup>-*λ*∙*t*</sup>, we get the reliability at 10,000 hours:

$$R\,\text{(10000)} = e^{-0.0002 \cdot 10000} = 0.135\tag{45}$$

The probability that the component survives another 10,000 hours is calculated with the residual reliability. Since this is independent of time in the CFR model, we have:

$$R\left(t + t_0 \mid t_0\right) = R\left(t\right) \to R\left(20000 \mid 10000\right) = R\left(10000\right) = 0.135 \tag{46}$$

Suppose now that it has worked without failure for 6,000 hours. The expected value of the residual life of the component is calculated using the residual MTTF, which is invariant. In fact:

$$MTTF\left(t_0\right) = \int_0^{\infty} R\left(t + t_0 \mid t_0\right) \bullet dt \to MTTF\left(6000\right) = \int_0^{\infty} R\left(t + 6000 \mid 6000\right) \bullet dt = \int_0^{\infty} R\left(t\right) \bullet dt = MTTF = 5000 \text{ h} \tag{47}$$
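The worked example (equations 44-47) can be reproduced in a few lines; a sketch under the same assumptions (*λ* = 0.0002 failures per hour):

```python
import math

lam = 0.0002                       # constant failure rate [failures/h]
mttf = 1 / lam                     # equation 44: MTTF = 1/lambda = 5000 h
r_10000 = math.exp(-lam * 10000)   # equation 45: R(10000)

# Equation 46: residual reliability is time-invariant, so the probability of
# surviving another 10,000 hours equals R(10000).
r_residual = r_10000

# Equation 47: the residual MTTF after 6000 failure-free hours is still 1/lambda.
mttf_6000 = mttf

assert mttf == 5000
assert round(r_10000, 3) == 0.135
assert mttf_6000 == 5000
```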

### **6. CFR in series**

Let us consider *n* different elements, each with its own constant failure rate *λ<sub>i</sub>* and reliability *R<sub>i</sub>* = *e*<sup>-*λ<sub>i</sub>*∙*t*</sup>, arranged in series, and let us evaluate the overall reliability *R<sub>S</sub>*. From equation 9 we have:

$$R_S = \prod_{i=1}^{n} R_i = \prod_{i=1}^{n} e^{-\lambda_i \bullet t} = e^{-\sum_{i=1}^{n} \lambda_i \bullet t} \tag{48}$$

Since the reliability of the overall system takes the form *R<sub>S</sub>* = *e*<sup>-*λ<sub>s</sub>*∙*t*</sup>, we can conclude that:

$$R_S = e^{-\sum_{i=1}^{n} \lambda_i \bullet t} = e^{-\lambda_s \bullet t} \to \lambda_s = \sum_{i=1}^{n} \lambda_i \tag{49}$$

In a system of CFR elements arranged in series, then, the failure rate of the system is equal to the sum of failure rates of the components. The MTTF can thus be calculated using the simple relation:


$$MTTF = \frac{1}{\lambda_s} = \frac{1}{\sum_{i=1}^{n} \lambda_i} \tag{50}$$

Consider, for example, a system consisting of a pump and a filter, used to separate two parts of a mixture: the concentrate and the juice. Knowing that the failure rate of the pump is constant, *λ<sub>P</sub>* =1.5∙10<sup>-4</sup> failures per hour, and that the failure rate of the filter is also CFR, *λ<sub>F</sub>* =3∙10<sup>-5</sup> failures per hour, let's assess the failure rate of the system, the MTTF and the reliability after one year of continuous operation.

To begin, we compare the physical arrangement with the reliability one, as represented in the following figure:

**Figure 14.** Physical and reliability modeling of a pump and a filter producing orange juice.

As can be seen, it is a simple series, for which we can write:

$$
\lambda\_s = \sum\_{i=1}^{n} \lambda\_i = \lambda\_P + \lambda\_F = 1.8 \bullet 10^{-4} \left[ \frac{\text{failures}}{\text{h}} \right] \tag{51}
$$

*MTTF* is the reciprocal of the failure rate and can be written:

$$MTTF = \frac{1}{\lambda_s} = \frac{1}{1.8 \cdot 10^{-4}} \approx 5{,}556 \text{ [h]} \tag{52}$$

As a year of continuous operation is 24 · 365 = 8,760 hours, the reliability after one year is:

$$R_S = e^{-\lambda_s \bullet t} = e^{-1.8 \cdot 10^{-4} \cdot 8760} = 0.2066$$
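The series calculation (equations 50-52) can be sketched with the pump and filter rates from the example:

```python
import math

lam_pump = 1.5e-4    # pump failure rate [failures/h]
lam_filter = 3e-5    # filter failure rate [failures/h]

lam_s = lam_pump + lam_filter          # equation 51: series rates add
mttf = 1 / lam_s                       # equation 52: about 5,556 h
r_one_year = math.exp(-lam_s * 8760)   # reliability after one year

assert abs(lam_s - 1.8e-4) < 1e-18
assert 5555 < mttf < 5556
assert round(r_one_year, 4) == 0.2066
```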

### **7. CFR in parallel**

If two components arranged in parallel are identical and have constant failure rate λ, the reliability of the system *R<sub>P</sub>* can be calculated with equation 10, wherein *R<sub>C</sub>* = *e*<sup>-*λt*</sup> is the reliability of each component:

$$R_P = 1 - \prod_{i=1}^{2} \left(1 - R_i\right) = 1 - \left(1 - R_C\right)^2 = 2R_C - R_C^2 = 2e^{-\lambda \bullet t} - e^{-2\lambda \bullet t} \tag{53}$$

The calculation of the MTTF leads to *MTTF* = 3 / (2*λ*). In fact we have:

$$MTTF = \int_0^{\infty} R\left(t\right) \bullet dt = \int_0^{\infty} \left(2e^{-\lambda \bullet t} - e^{-2\lambda \bullet t}\right) \bullet dt = \left[-\frac{2}{\lambda}e^{-\lambda \bullet t} + \frac{1}{2\lambda}e^{-2\lambda \bullet t}\right]_0^{\infty} = \frac{2}{\lambda} - \frac{1}{2\lambda} = \frac{3}{2\lambda} \tag{54}$$

Therefore, the MTTF increases compared to the single component CFR. The failure rate of the parallel system *λP*, reciprocal of the MTTF, is:

$$
\lambda\_P = \frac{1}{MTTF} = \frac{2}{3}\lambda \tag{55}
$$

As you can see, the failure rate is not halved, but reduced by one third.
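The parallel MTTF result (equation 54) can be cross-checked by numerically integrating *R<sub>P</sub>*(*t*); a sketch with an arbitrary rate value:

```python
import math

lam = 1e-3  # arbitrary failure rate [failures/h]

def r_parallel(t: float) -> float:
    """Reliability of two identical CFR units in parallel (equation 53)."""
    return 2 * math.exp(-lam * t) - math.exp(-2 * lam * t)

# Trapezoidal integration of R_P(t); the tail beyond the horizon is negligible
# because 40,000 h is many multiples of 1/lambda.
dt = 0.5
steps = int(40000 / dt)
mttf_numeric = sum(
    (r_parallel(i * dt) + r_parallel((i + 1) * dt)) / 2 * dt for i in range(steps)
)

mttf_exact = 3 / (2 * lam)  # equation 54: 1500 h
assert abs(mttf_numeric - mttf_exact) < 1.0
```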

For example, let us consider a safety system consisting of two batteries, each able to compensate for the lack of electric power from the grid. The two batteries are identical and have a constant failure rate *λ<sub>B</sub>* =9∙10<sup>-6</sup> failures per hour. We'd like to calculate the failure rate of the system, the MTTF and the reliability after one year of continuous operation.

As in the previous case, we start with a reliability block diagram of the problem, as visible in Figure 15.

**Figure 15.** Physical and reliability modeling of an energy supply system.

It is a parallel arrangement, for which the following equation is applicable:


$$\lambda_P = \frac{2}{3}\lambda = \frac{2}{3} \cdot 9 \bullet 10^{-6} = 6 \bullet 10^{-6} \left[\frac{\text{failures}}{\text{h}}\right] \tag{56}$$

The MTTF is the reciprocal of the failure rate and is:


$$MTTF = \frac{1}{\lambda_P} = \frac{1}{6 \bullet 10^{-6}} = 166{,}666 \text{ [h]} \tag{57}$$

As a year of continuous operation is 24 · 365=8,760 hours, the reliability after one year is:

$$R_P = e^{-\lambda_P \bullet t} = e^{-6 \bullet 10^{-6} \cdot 8760} = 0.9488 \tag{58}$$

It is interesting to calculate the reliability of a system of identical elements arranged in a parallel configuration *k* out of *n*. The system is partially redundant since a group of *k* elements is able to withstand the load of the system. The reliability is:

$$R_{k \text{ out of } n} = P\left\{k \le j \le n\right\} = \sum_{j=k}^{n} \binom{n}{j} R^{j} \bullet \left(1 - R\right)^{n-j} \tag{59}$$

Let us consider, for example, three electric generators, arranged in parallel, each with failure rate *λ* =9 · 10<sup>-6</sup> failures per hour. In order for the system to be active, it is sufficient that only two items are in operation. Let us calculate the reliability after one year of operation.

We'll have: *n* =3, *k* =2. So, after a year of operation (*t* =8760 *h* ), reliability can be calculated as follows:

$$\begin{split} R_{2 \text{ out of } 3} &= \sum_{j=2}^{3} \binom{3}{j} R^{j} \bullet \left(1 - R\right)^{3-j} = \binom{3}{2} e^{-2\lambda t}\left(1 - e^{-\lambda t}\right)^{3-2} + \binom{3}{3} e^{-3\lambda t}\left(1 - e^{-\lambda t}\right)^{3-3} = \\ &= \frac{3!}{2!\left(3-2\right)!} e^{-2\lambda t} \bullet \left(1 - e^{-\lambda t}\right) + \frac{3!}{3!\left(3-3\right)!} e^{-3\lambda t} = \\ &= 3 \bullet e^{-2\lambda t} \bullet \left(1 - e^{-\lambda t}\right) + e^{-3\lambda t} = 0.984 \end{split}$$
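A sketch of the *k*-out-of-*n* calculation (equation 59); note that with *λ* = 9∙10<sup>-6</sup> failures per hour and *t* = 8,760 h the 2-out-of-3 system yields R ≈ 0.984:

```python
import math
from math import comb

def r_k_out_of_n(k: int, n: int, r: float) -> float:
    """Reliability of a k-out-of-n system of identical units (equation 59)."""
    return sum(comb(n, j) * r**j * (1 - r) ** (n - j) for j in range(k, n + 1))

lam = 9e-6              # failure rate of each generator [failures/h]
t = 8760                # one year of operation [h]
r_unit = math.exp(-lam * t)

r_system = r_k_out_of_n(2, 3, r_unit)
assert round(r_system, 3) == 0.984
```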

A particular arrangement of components is that of the so-called parallel with stand-by: the second component comes into operation only when the first fails. Otherwise, it is idle.

**Figure 16.** RBD diagram of a parallel system with stand-by. When component 1 fails, the switch S activates component 2. For simplicity, it is assumed that S is not affected by faults.

If the components are similar, then *λ*<sup>1</sup> =*λ*2. It's possible to demonstrate that for the stand-by parallel system we have:

$$MTTF = \frac{2}{\lambda} \tag{60}$$

Thus, in parallel with stand-by, the MTTF is doubled.
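Equation 60 can be checked with a small Monte Carlo simulation of the stand-by arrangement, assuming an ideal switch as in Figure 16; the rate value is arbitrary:

```python
import random

random.seed(42)
lam = 1e-3          # failure rate of each unit [failures/h]
n_trials = 200_000

# With an ideal switch, the stand-by system fails only after both units have
# failed in sequence: system life = T1 + T2, each exponential with rate lambda.
mean_life = sum(
    random.expovariate(lam) + random.expovariate(lam) for _ in range(n_trials)
) / n_trials

mttf_exact = 2 / lam  # equation 60
assert abs(mean_life - mttf_exact) / mttf_exact < 0.02
```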

### **8. Repairable systems**

The devices on which it is possible to perform operations that restore their functionality deserve special attention. A repairable system [6] is a system that, after failure, can be restored to a functional condition by some maintenance action, including replacement of the entire system. Maintenance actions performed on a repairable system can be classified into two groups: **Corrective Maintenance - CM** and **Preventive Maintenance - PM**. Corrective maintenance is performed in response to system failures and may correspond to a specific activity of either repair or replacement. Preventive maintenance actions, on the other hand, are not performed in response to a failure, but are intended to delay or prevent system failures. Note that preventive activities are not necessarily cheaper or faster than corrective actions.

Like corrective actions, preventive activities may correspond to either repair or replacement activities. Finally, note that actions of operational maintenance (servicing) such as, for example, putting gas in a vehicle, are not considered PM [7].

Preventative maintenance can be divided into two subcategories: **scheduled** and **on-condition**. Scheduled maintenance (hard-time maintenance) consists of routine maintenance operations, scheduled on the basis of precise measures of elapsed operating time.

**Condition-Based Maintenance - CBM** [8] (also known as predictive maintenance) is one of the most widely used tools for monitoring of industrial plants and for the management of maintenance policies. The main aim of this approach is to optimize maintenance by reducing costs and increasing availability. In CBM it is necessary to identify, if it exists, a measurable parameter, which expresses, with accuracy, the conditions of degradation of the system. What is needed, therefore, is a physical system of sensors and transducers capable of monitoring the parameter and, thereby, the reliability performance of the plant. The choice of the monitored parameter is crucial, as is its time evolution that lets you know when maintenance action must be undertaken, whether corrective or preventive.

Adopting a CBM policy requires investment in instrumentation, prediction and control systems: a thorough feasibility study must be run to verify that the cost of implementing the apparatus is truly offset by the reduction in maintenance costs.

The CBM approach consists of the following steps:

**•** group the data from the sensors;

**•** diagnose the condition;

**•** estimate the Remaining Useful Life – RUL;

**•** decide whether to maintain or to continue to operate normally.



The CBM schedule is modeled with algorithms aiming at high effectiveness, in terms of cost minimization, subject to constraints such as, for example, the maximum time for the maintenance action, the periods of high production rate, the timing of supply of spare parts, the maximization of availability and so on.

In support of the prognosis, diagrams are now widely used to show, even graphically, when the sensor outputs reach alarm levels. They also set out the alert thresholds that identify ranges of values for which maintenance action must be taken [9].
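As an illustration, the threshold logic can be sketched as a small decision function; the threshold names and numeric values below are hypothetical, not taken from the chapter:

```python
def cbm_decision(measurement: float, alert_threshold: float, alarm_threshold: float) -> str:
    """Hypothetical CBM decision rule based on alert/alarm thresholds.

    Below the alert threshold the condition is acceptable; between the two
    thresholds preventive maintenance should be scheduled; above the alarm
    threshold the task must be stopped for maintenance.
    """
    if measurement < alert_threshold:
        return "continue to operate"
    if measurement < alarm_threshold:
        return "schedule preventive maintenance"
    return "stop the task"

# Hypothetical vibration readings [mm/s] against assumed thresholds:
assert cbm_decision(2.0, alert_threshold=4.5, alarm_threshold=7.1) == "continue to operate"
assert cbm_decision(5.0, alert_threshold=4.5, alarm_threshold=7.1) == "schedule preventive maintenance"
assert cbm_decision(9.0, alert_threshold=4.5, alarm_threshold=7.1) == "stop the task"
```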

Starting from a state of degradation, detected by a measurement at time *t<sub>k</sub>*, we calculate the likelihood that the system will still be functioning at the next inspection instant *t<sub>k+1</sub>*. The choice to act with preventive maintenance is based on comparing the expected cost of unavailability with the costs associated with the repair. Therefore, there are two scenarios:

**•** continue to operate: if we are in the area of non-alarming values. It is also possible that, being in the area of preventive maintenance, we opt for a postponement of maintenance because a replacement intervention has already been scheduled within a short interval of time;

**•** stop the task: if we are in the area of values above the threshold established for preventive maintenance of the condition.


The modeling of repairable systems is commonly used to evaluate the performance of one or more repairable systems and of the related maintenance policies. The information can also be used in the initial phase of design of the systems themselves.

In the traditional paradigm of modeling, a repairable system can only be in one of two states: working (up) or inoperative (down). Note that a system may be non-functioning not only because of a fault, but also because of preventive or corrective maintenance.

### **9. Availability**

Availability may be generically defined as the percentage of time that a repairable system is in an operating condition. However, in the literature, there are four specific measures of repairable system availability. We consider only the **limit availability**, defined as the limit of the probability *A*(*t*) that the system is working at time *t*, as *t* tends to infinity.

$$A = \lim\_{t \to \infty} A(t) \tag{61}$$

The limit availability just seen is also called **intrinsic availability**, to distinguish it from the **technical availability**, which also includes the logistics cycle times incidental to maintenance actions (such as waiting for the maintenance, waiting for spare parts, testing...), and from the **operational availability** that encompasses all other factors that contribute to the unavailability of the system such as time of organization and preparation for action in complex and specific business context [10].
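As a numerical sketch of limit availability, one can simulate alternating operation/repair cycles and compare the long-run up-time fraction with MTTF / (MTTF + MTTR); the MTTF and MTTR values are illustrative:

```python
import random

random.seed(7)
mttf, mttr = 1000.0, 10.0   # mean time to failure and mean time to repair [h]

up_time = down_time = 0.0
for _ in range(100_000):    # many operation/repair cycles
    up_time += random.expovariate(1 / mttf)    # operating interval T_i
    down_time += random.expovariate(1 / mttr)  # repair interval D_i

a_simulated = up_time / (up_time + down_time)
a_limit = mttf / (mttf + mttr)   # limit availability, about 0.990

assert abs(a_simulated - a_limit) < 0.005
```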

Reliability and Maintainability in Operations Management

http://dx.doi.org/10.5772/54161

The models of the impact of preventive and corrective maintenance on the age of the component distinguish between perfect, minimal and imperfect maintenance. **Perfect maintenance** (perfect repair) returns the system **as good as new** after maintenance. **Minimal repair** restores the system to a working condition but does not reduce the actual age of the system, leaving it **as bad as old**. **Imperfect maintenance** refers to maintenance actions that have an intermediate impact between perfect maintenance and minimal repair.

The average duration of a maintenance activity is the expected value of the probability distribution of the repair time; it is called **Mean Time To Repair - MTTR** and is closely connected with the concept of **maintainability**, i.e. the probability that a system, under assigned operating conditions, can be restored to a state in which it can perform the required function.

Figure 17 shows the state functions of two repairable systems with increasing failure rate, maintained with perfect and minimal repair.

**Figure 17.** Perfect maintenance vs minimal repair. The figure shows the state functions of two systems, both with IFR. *Y* (*t*) is equal to 1 when the system works, 0 otherwise. The left system is subject to a policy of perfect repair and shows homogeneous durations of the periods of operation. The right system adopts minimal repair, so the durations of the periods of operation shrink as time goes by.

### **10. The general substitution model**

The general substitution model states that the failure time of a repairable system is an unspecified random variable. The duration of corrective maintenance (perfect) is also a random variable. In this model it is assumed that preventive maintenance is not performed.

Let's denote by *Ti* the duration of the *i* - *th* interval of operation of the repairable system. Under the assumption of perfect maintenance (as good as new), {*T*1, *T*2, …, *Ti* , …, *Tn*} is a sequence of independent and identically distributed random variables.

Let us now designate with *Di* the duration of the *i* - *th* corrective maintenance action and assume that these random variables are independent and identically distributed. Therefore, each cycle (whether an operating cycle or a corrective maintenance action) has identical probabilistic behavior, and the completion of a maintenance action coincides with the time when the system returns to the operating state.

Regardless of the probability distributions governing *Ti* and *Di* , the fundamental result of the general substitution model is as follows:

$$A = \frac{E\{T\_i\}}{E\{T\_i\} + E\{D\_i\}} = \frac{MTTF}{MTTF + MTTR} = \frac{MTTF}{MTBF} \tag{62}$$

### **11. The substitution model for CFR**


Let us consider the special case of the general substitution model where *Ti* is an exponential random variable with constant failure rate *λ*. Let also *Di* be an exponential random variable with constant repair rate *μ*. Since the repairable system has a constant failure rate (CFR), we know that aging and the impact of corrective maintenance are irrelevant to reliability performance. For this system it can be shown that the limit availability is:

$$A = \frac{\mu}{\lambda + \mu} \tag{63}$$

Let us analyze, for example, a repairable system, subject to a replacement policy, with failure and repair times distributed according to a negative exponential distribution, with MTTF=1000 hours and MTTR=10 hours.

Let's calculate the limit availability of the system. The formulation of the limit availability in this system is given by eq. 63, so we have:

$$A = \frac{\mu}{\lambda + \mu} = \frac{\frac{1}{10}}{\frac{1}{1000} + \frac{1}{10}} = \frac{0.1}{0.101} = 0.990\tag{64}$$

This means that the system is available for 99% of the time.
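As a cross-check of eqs. 63-64, the limit availability can be computed from the rates and verified with a small alternating-renewal simulation. This is a sketch in Python; the parameter values are those of the example above:

```python
import random

MTTF, MTTR = 1000.0, 10.0          # hours, from the example
lam, mu = 1.0 / MTTF, 1.0 / MTTR   # failure rate and repair rate

# Analytic limit availability, eq. 63
A = mu / (lam + mu)
print(f"A = {A:.3f}")              # 0.990

# Verification: simulate alternating exponential up/down cycles
random.seed(1)
up = down = 0.0
for _ in range(200_000):
    up += random.expovariate(lam)    # operating interval T_i
    down += random.expovariate(mu)   # repair interval D_i
print(f"simulated A = {up / (up + down):.3f}")
```

The simulated ratio of total up-time to total time converges to the analytic value as the number of cycles grows.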

### **12. General model of minimal repair**

After examining the substitution model, we now consider a second model for repairable systems: the general model of minimal repair. According to this model, the time of system failure is a random variable. Corrective maintenance is instantaneous, the repair is minimal, and no preventive activity is performed.

The fault arrival times in a repairable system corresponding to the general model of minimal repair form a stochastic point process. As is known, once the repair time is neglected, the number of faults detected by time *t*, {*N* (*t*), *t* ≥0}, is a non-homogeneous Poisson process, described by the Poisson distribution.

### **13. Minimal repair with CFR**

A well-known special case of the general model of minimal repair is obtained when the failure time *T* is a random variable with exponential distribution, with failure rate *λ*.

In this case, the general model of minimal repair simplifies, because the number of faults *N* (*t*) that occur within time *t*, {*N* (*t*), *t* ≥0}, is described by a homogeneous Poisson process with intensity *z*(*t*)=*λ*, and its expected value is:

$$E[N(t)] = \mu\_{N(t)} = Z(t) = \int\_0^t z(u)\,du = \int\_0^t \lambda\,du = \lambda t \tag{65}$$


If, for example, we consider *λ* =0.1 faults/hour, we obtain the following values at times 100, 1000 and 10000:

*E*[*N* (100)] =0.1∙100=10; *E*[*N* (1000)] =0.1∙1000=100; *E*[*N* (10000)] =0.1∙1000=100; *E*[*N* (10000)] =0.1∙10000=1000. Note the linear growth of the expected number of failures with the width of the interval considered.

Finally, we can obtain the probability mass function of *N* (*t*), which is a Poisson distribution:

$$P[N(t) = n] = \frac{Z(t)^{n}}{n!}e^{-Z(t)} = \frac{(\lambda t)^{n}}{n!}e^{-\lambda t} \tag{66}$$

Also, the probability mass function of *N* (*t* + *s*) - *N* (*s*), that is, the number of faults in an interval of width *t* shifted forward by *s*, is identical:

$$P\{N\left(t+s\right)-N\left(s\right)=n\}=\frac{(\lambda t)^{n}}{n!}e^{-\lambda t}\tag{67}$$

Since the two mass functions are equal, the conclusion is that in the homogeneous Poisson process (CFR) the number of faults in a given interval depends only on the interval width.

The behavior of a Poisson mass probability distribution, with rate equal to 5 faults each year, representing the probability of having *n* ∈N faults within a year, is shown in Figure 18.
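The mass function of eq. 66 can be tabulated directly for the case in Figure 18 (a rate of 5 faults per year); a minimal Python sketch:

```python
from math import exp, factorial

def poisson_pmf(n: int, rate: float) -> float:
    """P{N(t) = n} for a homogeneous Poisson process: eq. 66 with rate = lambda*t."""
    return rate**n / factorial(n) * exp(-rate)

rate = 5.0  # expected faults in one year (lambda * t)
for n in range(9):
    print(f"P{{N = {n}}} = {poisson_pmf(n, rate):.3f}")

# Stationary increments (eq. 67): the same pmf governs any interval
# of the same width, wherever it starts; the pmf sums to 1.
print(abs(sum(poisson_pmf(n, rate) for n in range(100)) - 1.0) < 1e-9)
```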


**Figure 18.** Poisson distribution. The diagram shows the probability of having *N* faults within a year, for a homogeneous Poisson process with a rate of 5 faults per year.

Since in the model of minimal repair with CFR, repair time is supposed to be zero (MTTR = 0), the following relation applies:

$$\text{MTBF} = \text{MTTF} \, + \, \text{MTTR} = \text{MTTF} = \frac{1}{\lambda} \, \tag{68}$$

Suppose that a system, subject to a minimal repair policy, shows failures according to a homogeneous Poisson process with failure rate *λ* = 0.0025 failures per hour. We'd like to estimate the average number of failures that the system will have during 5000 hours, and then determine the probability of having no more than 15 faults in an operation period of 5000 hours.

The estimate of the average number of failures in 5000 hours, can be carried out with the expected value function:

$$E[N(t)] = \lambda \cdot t \to E[N(5000)] = 0.0025 \left[\frac{\text{failures}}{\text{h}}\right] \cdot 5000 \left[\text{h}\right] = 12.5 \text{ failures} \tag{69}$$

The probability of having not more than 15 faults in a period of 5000 hours of operation, is calculated with the sum of the probability mass function evaluated between 0 and 15:

$$P[N(5000) \le 15] = \sum\_{n=0}^{15} \frac{(\lambda t)^{n}}{n!}e^{-\lambda t} = \sum\_{n=0}^{15} \frac{12.5^{n}}{n!}e^{-12.5} = 0.806 \tag{70}$$
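Eqs. 69-70 can be reproduced in a few lines of Python with the standard library (a sketch of this specific calculation, not a general tool):

```python
from math import exp, factorial

lam = 0.0025   # failures per hour
t = 5000.0     # hours of operation

expected_failures = lam * t                        # eq. 69
p_at_most_15 = sum((lam * t)**n / factorial(n) * exp(-lam * t)
                   for n in range(16))             # eq. 70
print(expected_failures)        # 12.5
print(round(p_at_most_15, 3))   # 0.806
```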

### **14. Minimal repair: Power law**

A second special case of the general model of minimal repair is obtained when the failure time *T* is a random variable with a Weibull distribution, with shape parameter *β* and scale parameter *α*.

In this case the sequence of failure times is described by a **Non-Homogeneous Poisson Process - NHPP** with intensity *z*(*t*) equal to the hazard (instantaneous failure) rate of the Weibull distribution:

$$z(t) = \frac{\beta}{\alpha^{\beta}} t^{\beta - 1} \tag{71}$$


Since the cumulative intensity of the process is defined by:

$$Z(t) = \int\_0^t z(u)\,du \tag{72}$$

the cumulative function is:

$$Z(t) = \int\_0^t \frac{\beta}{\alpha^{\beta}} u^{\beta - 1}\,du = \frac{\beta}{\alpha^{\beta}} \cdot \frac{u^{\beta}}{\beta} \Big|\_0^t = \frac{t^{\beta}}{\alpha^{\beta}} = \left(\frac{t}{\alpha}\right)^{\beta} \tag{73}$$

As can be seen, the average number of faults occurring within time *t* ≥0 of this non-homogeneous Poisson process, *E*[*N* (*t*)] =*Z*(*t*), follows the so-called **power law**.

If *β* >1, the intensity function *z*(*t*) increases and, since this is the rate at which failures occur, faults tend to occur more frequently over time. Conversely, if *β* <1, faults become less frequent over time.

In fact, if we take *α* =10 hours (*λ* =0.1 failures/h) and *β* =2, we have: *E*[*N* (100)] =(0.1∙100)<sup>2</sup> =10<sup>2</sup> =100; *E*[*N* (1000)] =(0.1∙1000)<sup>2</sup> =100<sup>2</sup> =10000; *E*[*N* (10000)] =(0.1∙10000)<sup>2</sup> =1000<sup>2</sup> =1000000. The trend is no longer linear, but grows as a power of the width of the interval considered.
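The three values above come straight from the cumulative intensity *Z*(*t*) = (*t*/*α*)^*β*; a quick Python check with the example parameters (*α* = 10 h, *β* = 2):

```python
alpha, beta = 10.0, 2.0   # scale [h] and shape of the example above

def Z(t: float) -> float:
    """Cumulative intensity, i.e. expected number of faults by time t (eq. 73)."""
    return (t / alpha)**beta

for t in (100, 1000, 10000):
    print(t, Z(t))   # 100 -> 100.0, 1000 -> 10000.0, 10000 -> 1000000.0
```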

The probability mass function of *N* (*t*) thus becomes:

$$P[N(t) = n] = \frac{Z(t)^{n}}{n!}e^{-Z(t)} = \frac{\left(\frac{t}{\alpha}\right)^{\beta n}}{n!}e^{-\left(\frac{t}{\alpha}\right)^{\beta}} \tag{74}$$

For example, let us consider a system that fails, according to a power law, having *β* =2.2 and *α* =1500 hours. What is the average number of faults occurring during the first 1000 hours of operation? What is the probability of having two or more failures during the first 1000 hours of operation? Which is the average number of faults in the second 1000 hours of operation?

The average number of failures that occur during the first 1000 hours of operation, is calculated with the expected value of the distribution:


$$E\left[N\left(t\right)\right] = \mu\_{N\left(t\right)} = Z\left(t\right) = \left(\frac{t}{\alpha}\right)^{\beta} \to E\left[N\left(1000\right)\right] = \left(\frac{1000}{1500}\right)^{2.2} = 0.41\tag{75}$$

The probability of two or more failures during the first 1000 hours of operation can be calculated as complementary to the probability of having zero or one failure:

$$P[N(1000) \ge 2] = 1 - P[N(1000) < 2] = 1 - \sum\_{n=0}^{1} \frac{\left(\frac{t}{\alpha}\right)^{\beta n}}{n!}e^{-\left(\frac{t}{\alpha}\right)^{\beta}} = 1 - \frac{0.41^{0}}{0!}e^{-0.41} - \frac{0.41^{1}}{1!}e^{-0.41} = 1 - 0.663 - 0.272 = 0.064 \tag{76}$$

The average number of faults in the succeeding 1000 hours of operation is calculated using the equation:

$$E[N(t + s) - N(s)] = Z(t + s) - Z(s) \tag{77}$$

that, in this case, is:


$$E[N(2000) - N(1000)] = Z(2000) - Z(1000) = 1.47 \tag{78}$$
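The whole worked example (eqs. 75-78, with *β* = 2.2 and *α* = 1500 h) can be verified with a short Python sketch based on the power-law formulas above:

```python
from math import exp, factorial

alpha, beta = 1500.0, 2.2   # scale [h] and shape of the Weibull power law

def Z(t: float) -> float:
    """Cumulative intensity E[N(t)] = (t/alpha)^beta, eq. 73."""
    return (t / alpha)**beta

# eq. 75: average number of faults in the first 1000 h
print(round(Z(1000), 2))            # 0.41

# eq. 76: probability of two or more faults in the first 1000 h
p_lt_2 = sum(Z(1000)**n / factorial(n) * exp(-Z(1000)) for n in range(2))
print(round(1 - p_lt_2, 3))         # 0.064

# eq. 78: average number of faults in the second 1000 h
print(round(Z(2000) - Z(1000), 2))  # 1.47
```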

### **15. Conclusion**

After seeing the main definitions of reliability and maintenance, let us finally see how reliability knowledge can also be used to carry out an economic optimization of replacement activities.

Consider a process that follows the power law with *β* >1. As time goes by, faults begin to take place more frequently and, at some point, it will be convenient to replace the system.

Let us define *τ* as the time at which the replacement (here assumed instantaneous) takes place. We can build a cost model to determine the optimal preventive maintenance time *τ* \* which minimizes reliability costs.

Let's denote by *C<sub>f</sub>* the cost of a failure and by *C<sub>r</sub>* the cost of replacing the repairable system.

If the repairable system is replaced every *τ* time units, over that horizon we incur the replacement cost *C<sub>r</sub>* plus one failure cost *C<sub>f</sub>* for each fault expected in the interval (0; *τ*]. The latter quantity coincides with the expected value of the number of faults, *E*[*N* (*τ*)].

The average cost per unit of time *c*(*τ*), in the long term, can then be calculated using the following relationship:

$$c(\tau) = \frac{C\_f \cdot E[N(\tau)] + C\_r}{\tau} \tag{79}$$

It then follows:

$$c(\tau) = \frac{C\_f \cdot Z(\tau) + C\_r}{\tau} \tag{80}$$
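The cost rate of eq. 80 can also be minimized numerically; the sketch below uses the chapter's example values (*β* = 2.2, *α* = 1500 h, *C<sub>f</sub>* = 2500 €, *C<sub>r</sub>* = 18000 €) and a simple grid search, which agrees with the closed-form optimum of eq. 81:

```python
# Grid search for the optimal replacement interval tau*, minimizing
# c(tau) = (C_f * Z(tau) + C_r) / tau  (eq. 80), with Z(tau) = (tau/alpha)^beta.
alpha, beta = 1500.0, 2.2      # Weibull power-law parameters
C_f, C_r = 2500.0, 18000.0     # failure cost and replacement cost [EUR]

def cost_rate(tau: float) -> float:
    return (C_f * (tau / alpha)**beta + C_r) / tau

# Closed form, eq. 81
tau_star = alpha * (C_r / (C_f * (beta - 1)))**(1 / beta)
print(round(tau_star))         # ~3387 h

# Numeric check: the grid minimum agrees with the closed form
grid = [tau / 10 for tau in range(10_000, 100_000)]   # 1000 h .. 10000 h, 0.1 h steps
tau_num = min(grid, key=cost_rate)
print(round(tau_num))          # ~3387 h
```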

Differentiating *c*(*τ*) with respect to *τ* and setting the derivative equal to zero, we can find the minimum of the costs, that is, the optimal preventive maintenance time *τ* \*. Manipulating algebraically, we obtain the following final result:

$$\tau^\* = \alpha \cdot \left[\frac{C\_r}{C\_f \cdot (\beta - 1)}\right]^{\frac{1}{\beta}} \tag{81}$$

Consider, for example, a system that fails according to a Weibull distribution with *β* =2.2 and *α* =1500 hours. Knowing that the system is subject to instantaneous replacement, and that the cost of a failure is *C<sub>f</sub>* =2500 € and the cost of replacement is *C<sub>r</sub>* =18000 €, we want to evaluate the optimal replacement interval.

The application of eq. 81 provides the answer to the question:

$$\tau^\* = \alpha \cdot \left[\frac{C\_r}{C\_f \cdot (\beta - 1)}\right]^{\frac{1}{\beta}} = 1500 \cdot \left[\frac{18000}{2500 \cdot (2.2 - 1)}\right]^{\frac{1}{2.2}} = 1500 \cdot 2.257 = 3387 \text{ h} \tag{82}$$

### **Nomenclature**

RBD: Reliability Block Diagram

CBM: Condition-Based Maintenance

CFR: Constant Failure Rate

CM: Corrective Maintenance

DFR: Decreasing Failure Rate

IFR: Increasing Failure Rate

MCS: Minimal Cut Set

MPS: Minimal Path Set

MTTF: Mean Time To Failure

MTTR: Mean Time To Repair

NHPP: Non-Homogeneous Poisson Process

PM: Preventive Maintenance

### **Author details**

Filippo De Carlo

Address all correspondence to: filippo.decarlo@unifi.it

Industrial Engineering Department, University of Florence, Florence, Italy

### **References**

[1] Nakajima S. Introduction to TPM: Total Productive Maintenance. Productivity Press, Inc., 1988:129.

[2] Barlow RE. Engineering Reliability. SIAM; 2003.

[3] De Carlo F. Impianti industriali: conoscere e progettare i sistemi produttivi. New York: Mario Tucci; 2012.

[4] O'Connor P, Kleyner A. Practical Reliability Engineering. John Wiley & Sons; 2011.

[5] Meyer P. Understanding Measurement: Reliability. Oxford University Press; 2010.

[6] Ascher H, Feingold H. Repairable systems reliability: modeling, inference, misconceptions and their causes. M. Dekker; 1984.

[7] De Carlo F, Borgia O, Adriani PG, Paoli M. New maintenance opportunities in legacy plants. 34th ESReDA Seminar, San Sebastian, Spain: 2008.

[8] Gertler J. Fault detection and diagnosis in engineering systems. Marcel Dekker; 1998.

[9] Borgia O, De Carlo F, Tucci M. From diagnosis to prognosis: A maintenance experience for an electric locomotive. Safety, Reliability and Risk Analysis: Theory, Methods and Applications - Proceedings of the Joint ESREL and SRA-Europe Conference, vol. 1, 2009, pp. 211–8.

[10] Racioppi G, Monaci G, Michelassi C, Saccardi D, Borgia O, De Carlo F. Availability assessment for a gas plant. Petroleum Technology Quarterly 2008;13:33–7.
**Chapter 5**

### **Production Scheduling Approaches for Operations Management**

Marcello Fera, Fabio Fruggiero, Alfredo Lambiase, Giada Martino and Maria Elena Nenni

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55431

### **1. Introduction**

Scheduling is essentially the short-term execution plan of a production planning model. Production scheduling consists of the activities performed in a manufacturing company in order to manage and control the execution of a production process. A schedule is an assignment problem that describes in detail (in terms of minutes or seconds) which activities must be performed and how the factory's resources should be utilized to satisfy the plan. Detailed scheduling is essentially the problem of allocating machines to competing jobs over time, subject to the constraints. Each work centre can process one job at a time and each machine can handle at most one task at a time. A scheduling problem typically assumes a fixed number of jobs, and each job has its own parameters (i.e., tasks, the necessary sequential constraints, the time estimates for each operation, the required resources, no cancellations). All scheduling approaches require some estimate of how long it takes to perform the work. Scheduling affects, and is affected by, the shop floor organization. All scheduling changes can be projected over time, enabling the identification and analysis of starting times, completion times, idle time of resources, lateness, etc.

A good scheduling plan can drive the forecast, anticipating the completion date for each released part and providing data for deciding what to work on next. Questions such as "Can we do it?" and/or "How are we doing?" presume the existence of approaches for optimisation. The aim of a scheduling study is, in general, to perform the tasks in order to comply with priority rules and to respond to strategy. An optimal short-term production planning model aims at gaining time and saving opportunities. It starts from the execution orders and tries to allocate, in the best possible way, the production of the different items to the facilities. A good schedule starts from planning and springs from respecting resource conflicts, managing the release of jobs to

© 2013 Fera et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

a shop and optimizing completion time of all jobs. It defines the starting time of each task and determines whether and how delivery promises can be met. The minimization of one or more objectives has to be accomplished (e.g., the number of jobs that are shipped late, the minimization of set-up costs, the maximum completion time of jobs, maximization of throughput, etc.). Criteria range from applying simple rules to determine which job has to be processed next at which work-centre (i.e., dispatching) to the use of advanced optimizing methods that try to maximize the performance of the given environment. Fortunately, many of these objectives are mutually supportive (e.g., reducing manufacturing lead time reduces work in process and increases the probability of meeting due dates). To identify the exact sequence among a plethora of possible combinations, the final schedule needs to apply rules in order to quantify the urgency of each order (e.g., the assigned order's due date, defined as a global exploited strategy; the amount of processing that each order requires, generally the basis of a local visibility strategy). It is up to operations management to optimize the use of limited resources. Rules combined into *heuristic*<sup>1</sup> approaches and, more generally, into upper-level multi-objective methodologies (i.e., *meta-heuristics*<sup>2</sup>) become the only methods for scheduling when the dimension and/or complexity of the problem is overwhelming [1]. In the past few years, metaheuristics have received much attention from the hard optimization community as a powerful tool, since they have been demonstrating very promising results from experimentation and practice in many engineering areas. Therefore, much recent research on scheduling problems has focused on these techniques. Mathematical analyses of metaheuristics have been presented in the literature [2, 3].


This research examines the main characteristics of the most promising meta-heuristic approaches for the general process of Job Shop Scheduling Problems (i.e., JSSP). Being an NP-complete and highly constrained problem, the resolution of the JSSP is recognized as a key point for the factory optimization process [4]. The chapter examines the soundness and key contributions of the seven meta-heuristics (i.e., Genetic Algorithms, Ant Colony Optimization, the Bees Algorithm, the Electromagnetism-like Algorithm, Simulated Annealing, Tabu Search and Neural Networks) that have improved the production scheduling vision. It reviews their accomplishments and discusses the perspectives of each meta approach. The work represents a practitioner's guide to the implementation of these meta-heuristics in scheduling job shop processes. It focuses on the logic, the parameters, the representation schemata and the operators they need.

### **2. The job shop scheduling problem**

The two key problems in production scheduling are "priorities" and "capacity". Wight (1974) described *scheduling* as "establishing the timing for performing a task" and observed that, in manufacturing firms, there are multiple types of scheduling, including the detailed scheduling of a shop order that shows when each operation must start and be completed [5]. Baker (1974) defined scheduling as "a plan that usually tells us when things are supposed to happen" [6]. Cox *et al.* (1992) defined *detailed scheduling* as "the actual assignment of starting and/or completion dates to operations or groups of operations to show when these must be done if the manufacturing order is to be completed on time" [7]. Pinedo (1995) listed a number of important surveys on production scheduling [8]. For Hopp and Spearman (1996), "scheduling is the allocation of shared resources over time to competing activities" [9]. Makowitz and Wein (2001) classified production scheduling problems based on attributes: the presence of setups, the presence of due dates, and the type of products.

<sup>1</sup> The etymology of the word heuristic derives from a Greek word *heurìsco (єΰρισκω)* - it means "to find"- and is considered the art of discovering new strategy rules to solve problems. Heuristics aims at a solution that is "good enough" in a computing time that is "small enough".

<sup>2</sup> The term metaheuristc originates from union of prefix *meta (μєτα)* - it means "behind, in the sense upper level methodology" – and word *heuristic* - it means "to find". Metaheuristcs' search methods can be defined as upper level general methodologies guiding strategies in designing heuristics to obtain optimisation in problems.


Practical scheduling problems, although more highly constrained, are highly difficult to solve due to the number and variety of jobs, tasks and potentially conflicting goals. Recently, many Advanced Production Scheduling tools have entered the market (e.g., Aspen PlantTM Scheduler family, Asprova, R2T – Resourse To Time, DS APS – DemandSolutions APS, DMS – Dynafact Manufacturing System, i68Group, ICRON-APS, JobPack, iFRP, Infor SCM, SchedulePro, Optiflow-Le, Production One APS, MQM – Machine Queue Management, MOM4, JDA software, Rob-ex, Schedlyzer, OMP Plus, MLS and MLP, Oracle Advanced Scheduling, Ortec Schedule, ORTEMS Productionscheduler, Outperform, AIMMS, Planet Together, Preactor, Quintiq, FactoryTalk Scheduler, SAP APO-PP/DS, and others). Each of these automatically produces reports and graphs. Their goal is to drive the scheduling of assigned manufacturing processes. They implement rules and optimise an isolated sub-problem, but none of them optimises a multi-stage resource assignment and sequencing problem.

In a Job Shop (i.e., JS) problem, a classic and most general factory environment, different tasks or operations must be performed to complete a job [10]; moreover, priorities and capacity problems are faced for different jobs, multiple tasks and different routes. In this context, each job has its own individual flow pattern through assigned machines, each machine can process only one operation at a time and each operation can be processed by only one machine at a time. The purpose of the procedure is to obtain a schedule which aims to complete all jobs and, at the same time, to minimize (or maximize) the objective function. Mathematically, the JS Scheduling Problem (i.e., JSSP) can be characterized as a combinatorial optimization problem. It has been generally shown to be NP-hard<sup>3</sup>, belonging to the most intractable problems considered [4, 11, 12]. This means that the computation effort may grow too fast and there are no universal methods that make it possible to solve all cases effectively. Just to understand what the technical term means, consider the single-machine sequencing problem with three jobs. How many ways of sequencing three jobs exist? Only one of the three jobs could be in the first position, which leaves two candidates for the second position and only one for the last position; therefore the number of permutations is 3! = 6. Thus, if we want to optimize, we need to consider six alternatives. This means that as the number of jobs to be sequenced becomes larger (i.e., *n* > 80), the number of possible sequences becomes enormous and an exponential function dominates the amount of time required to find the optimal solution [13]. Scheduling, however,

<sup>3</sup> A problem is NP-complete if no algorithm is known that solves it in polynomial time. A problem is NP-hard if it can be shown that an algorithm solving it could also solve an NP-complete problem.

performs the definition of the optimal sequence of *n* jobs on *m* machines. If a set of *n* jobs is to be scheduled on *m* machines, there are (*n*!)*<sup>m</sup>* possible ways to schedule the jobs.
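The combinatorial growth described above is easy to make concrete; a minimal sketch:

```python
import math

def single_machine_sequences(n):
    """Number of distinct orderings of n jobs on one machine: n!."""
    return math.factorial(n)

def job_shop_bound(n, m):
    """Upper bound on candidate schedules for n jobs on m machines: (n!)^m."""
    return math.factorial(n) ** m

single_machine_sequences(3)   # 6: the six alternatives discussed in the text
job_shop_bound(10, 5)         # already greater than 10**32 candidate schedules
```

Even a modest 10-job, 5-machine shop defeats exhaustive enumeration, which is why the heuristic and meta-heuristic methods discussed in this chapter are needed.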


Each job has to undergo a discrete number of operations (i.e., *tasks*) on different resources (i.e., *machines*). Each product has a fixed route defined in the planning phase, following processing requirements (i.e., precedence constraints). Other constraints, e.g. zoning, which binds the assignment of a task to a fixed resource, are also taken into consideration. Each machine can process only one operation at a time with no interruptions (pre-emption). The schedule we must derive aims to complete all jobs with minimization (maximization) of an objective function on the given production plant.

Let:

**•** *J* ={*J*1, *J*2, ......., *Jn*} the set of the job orders existing inside the system;

**•** *M* ={*M*1, *M*2, ......., *Mm*} the set of machines that make up the system.

The JSSP, marked as *Π<sub>j</sub>*, consists of a finite set *J* of *n* jobs {*Ji*}, *i* = 1, …, *n*. Each *Ji* is characterized by a manufacturing cycle *CLi*, regarded as a finite set *M* of *m* machines {*Mk*}, *k* = 1, …, *m*, with an uninterrupted processing time *τik*. *Ji*, ∀ *i* = 1, …, *n*, is processed on a fixed machine *mi* and requires a chain of tasks *Oi*1, *Oi*2, ......., *Oimi*, scheduled under precedence constraints. *Oik* is the task of job *Ji* which has to be processed on machine *Mk* for an uninterrupted processing time period *τik*, and no operation may be pre-empted.
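A compact way to encode such an instance is a mapping from each job to its ordered chain of (machine, processing time) tasks; the jobs, machines and times below are hypothetical:

```python
# Hypothetical two-job instance: each job J_i is an ordered chain of tasks
# O_ik = (machine, uninterrupted processing time tau_ik).
jobs = {
    "J1": [("M1", 3), ("M2", 2), ("M3", 2)],
    "J2": [("M2", 2), ("M3", 4), ("M1", 3)],
}
n = len(jobs)                                            # number of jobs
machines = {mk for ops in jobs.values() for mk, _ in ops}
m = len(machines)                                        # number of machines
```

The chain order of each list encodes the precedence constraints of the manufacturing cycle *CLi*.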

To accommodate extreme variability in different parts of a job shop, schedulers separate workloads in each work-centre rather than aggregating them [14]. Of the more than 100 different rules proposed by researchers and applied by practitioners, some have become common in Operations Management systems: First Come, First Served; Shortest Processing Time; Earliest Due Date; Slack Time Remaining; Slack Time Remaining For each Operation; Critical Ratio; Operation Due Date; etc. [15]. Besides these, makespan is often the performance feature in the study of resource allocation [16]. Makespan represents the time elapsed from the start of the first task to the end of the last task in the schedule. The minimisation of makespan arranges tasks in order to level the differences between the completion times of each work phase. It tries to smooth peaks in work-centre occupancy to obtain batching in load assignment per time. Although direct time constraints, such as minimization of processing time or earliest due date, are sufficient to optimize industrial scheduling problems, for the reasons above the minimization of the makespan is preferable for general/global optimization performance, because it enhances the overall efficiency in the shop floor and reduces manufacturing lead time variability [17].
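Several of the dispatching rules listed above reduce to simple sort keys; a sketch over a hypothetical queue of waiting jobs:

```python
# Hypothetical queue of waiting jobs: (name, processing_time, due_date).
queue = [("A", 5, 20), ("B", 2, 12), ("C", 8, 15)]

fcfs = list(queue)                         # First Come, First Served
spt = sorted(queue, key=lambda j: j[1])    # Shortest Processing Time
edd = sorted(queue, key=lambda j: j[2])    # Earliest Due Date
```

Here SPT orders the queue B, A, C while EDD orders it B, C, A; which rule performs best depends on the objective (flow time, lateness, etc.) being pursued.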

Thus, in the JSSP optimization variant of *Π<sub>j</sub>*, the objective of a scheduling problem is typically to assign the tasks to time intervals in order to minimise the makespan, referred to as:

$$C_{\max}(t) = f \left( CL_i, \tau_{ik}, s_{ik} \right), \quad \forall i = 1 \ldots n; \ \forall k = 1 \ldots m \tag{1}$$

where *t* represents time (i.e., iteration steps)


$$C^{*}_{\max}(t) = \min \left\{ C_{\max}(t) \right\} = \min \left\{ \max_{i,k} \left\{ \tau_{ik} + s_{ik} \right\} : \forall J_i \in J, \ \forall M_k \in M \right\} \tag{2}$$

and *sik* ≥ 0 represents the starting time of the *k*-th operation of the *i*-th job. *sik* is the time value that we would like to determine in order to establish the suited order of schedule activities.
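For fixed starting times *sik*, the inner maximisation of Eq. (2) is a one-liner; the start times and durations below are hypothetical:

```python
# Hypothetical starting times s_ik and durations tau_ik for two 3-task jobs;
# the makespan is C_max = max over all tasks of (s_ik + tau_ik), as in Eq. (2).
schedule = {
    # (job i, task k): (s_ik, tau_ik)
    (1, 1): (0, 3), (1, 2): (3, 2), (1, 3): (5, 2),
    (2, 1): (0, 2), (2, 2): (5, 4), (2, 3): (9, 3),
}
c_max = max(s + tau for s, tau in schedule.values())  # 12 for this data
```

The outer minimisation, over all feasible assignments of the *sik*, is the hard combinatorial part that the meta-heuristics of this chapter address.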

### **3. Representation of scheduling instances**


A JS problem can be represented through a Gantt chart or through a network representation.

Gantt (1916) created innovative charts for visualizing planned and actual production [18]. According to Cox *et al*. (1992), a *Gantt chart* is "the earliest and best known type of control chart especially designed to show graphically the relationship between planned performance and actual performance" [19]. Gantt designed his charts so that foremen or other supervisors could quickly know whether production was on schedule, ahead of schedule or behind schedule. A Gantt chart, or bar chart as it is usually named, measures activities by the amount of time needed to complete them and uses the space on the chart to represent the amount of the activity that should have been done in that time [7].

A network representation was first introduced by Roy and Sussman [20]. The representation is based on the "*disjunctive graph model*" [21]. This representation starts from the concept that a feasible and optimal solution of the JSP can originate from a permutation of the tasks' order. Tasks are defined in a network representation through a probabilistic model, observing the precedence constraints, characterized in a machine occupation matrix *M* and considering the processing time of each task, defined in a time occupation matrix *T*.

$$M = \begin{pmatrix} M\_{11} & \dots & M\_{1n} \\ \vdots & \ddots & \vdots \\ M\_{n1} & \dots & M\_{nn} \end{pmatrix}; \ T = \begin{pmatrix} \tau(M\_{11}) & \dots & \tau(M\_{1n}) \\ \vdots & \ddots & \vdots \\ \tau(M\_{n1}) & \dots & \tau(M\_{nn}) \end{pmatrix}$$
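For a hypothetical 3×3 instance, the two matrices can be written down directly; summing each machine's entries across *T* also gives the classical machine-load lower bound on the makespan:

```python
# Hypothetical 3x3 instance: M[i][k] is the k-th machine visited by job i,
# T[i][k] the corresponding uninterrupted processing time tau_ik.
M = [[1, 2, 3],
     [2, 3, 1],
     [3, 1, 2]]
T = [[3, 2, 2],
     [2, 4, 3],
     [2, 2, 1]]

# No schedule can beat the busiest machine's total load: a simple lower
# bound on the makespan.
load = {}
for i in range(3):
    for k in range(3):
        load[M[i][k]] = load.get(M[i][k], 0) + T[i][k]
machine_bound = max(load.values())
```

Such bounds are useful for judging how close a heuristic schedule is to the optimum without solving the problem exactly.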

JS processes are mathematically described as a disjunctive graph *G = (V, C, E)*. The descriptions and notations that follow are due to Adams et al. [22], where:

**•** *V* is a set of nodes representing tasks of jobs. Two additional dummy tasks are to be considered: a *source (0)* node and a *sink (\*)* node, which stand respectively for the source (S) task *τ*0 = 0, necessary to specify which job will be scheduled first, and an end fixed sink (T), where the schedule ends, *τ\** = 0;

**•** *C* is the set of conjunctive arcs or direct arcs that connect two consecutive tasks belonging to the same job chain. These represent the technological sequences of machines for each job;
**•** *E=*<sup>⋃</sup> *r*=1 *m Dr*, where *Dr* is a set of disjunctive arcs, or undirected arcs, representing pairs of operations that must be performed on the same machine *Mr*.

Each job-task pair *(i, j)* is to be processed on a specified machine *M(i, j)* for *T(i, j)* time units, so each node of the graph is weighted with the operation's processing time. In this representation all nodes are weighted, with the exception of the source and sink nodes. This procedure always yields feasible schedules which don't violate hard constraints<sup>4</sup>. A graph representation of a simple instance of the JSP, consisting of 9 operations partitioned into 3 jobs and 3 machines, is presented in fig. 1. Here the nodes correspond to operations numbered with consecutive ordinal values, adding two fictitious additional ones: *S* = "source node" and *T* = "sink node". The processing time for each operation is the weighted value *τij* attached to the corresponding node, *v* ∈ *V*, and for the special nodes, *τ*0 = *τ\** = 0.

Let *sv* be the starting time of the operation at node *v*. Using the disjunctive graph notation, the JSSP can be formulated as a mathematical programming model as follows:

Minimize s\* subject to:

$$s_w - s_v \ge \tau_v \quad (v,w) \in C \tag{3}$$

$$s_v \ge 0 \quad v \in V \tag{4}$$

$$s_w - s_v \ge \tau_v \ \vee \ s_v - s_w \ge \tau_w \quad (v,w) \in D_r,\ 1 \le r \le m \tag{5}$$


Production Scheduling Approaches for Operations Management

http://dx.doi.org/10.5772/55431

119


**Figure 1.** Disjunctive graph representation. There are disjunctive arcs between every pair of tasks that have to be processed on the same machine (dashed lines) and conjunctive arcs between every pair of tasks that are in the same job (dotted lines). Omitting processing times, the problem specification is *O* = {*oij*, (*i*, *j*) ∈ {1, 2, 3}²}, *J* = {*Ji* = {*oij*}, (*i*, *j*) = 1, 2, 3}, *M* = {*Mj* = {*oij*}, (*i*, *j*) = 1, 2, 3}. Job notation is used.

<sup>4</sup> Hard constraints are physical ones, while soft constraints are generally those related to human factors, e.g., relaxation, fatigue, etc.

s\* is equal to the completion time of the last operation of the schedule, which is therefore equal to *Cmax*. The first inequality ensures that when there is a conjunctive arc from a node *v* to a node *w*, *w* must wait at least *τv* time after *v* is started, so that the predefined technological constraints on the sequence of machines for each job are not violated. The second condition ensures that starting times are non-negative. The third condition states that, when there is a disjunctive arc between a node *v* and a node *w*, either *v* is processed prior to *w* (and *w* waits for at least *τv* time) or the other way around; this avoids overlap in time due to contemporaneous operations on the same machine.

In order to obtain a scheduling solution and to evaluate the makespan, we have to collect all feasible permutations of tasks, transforming the undirected arcs into directed ones in such a way that there are no cycles.
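Once every disjunctive arc has been oriented without creating cycles, the makespan *Cmax* equals the longest source-to-sink path of the resulting DAG. A minimal Python sketch on a hypothetical 2-job, 2-machine instance (node names, orientation and processing times are invented for illustration):

```python
# Evaluate the makespan of one feasible orientation of the disjunctive graph
# as the longest source-to-sink path of the resulting DAG.
from collections import defaultdict

def makespan(nodes, arcs, tau):
    """nodes: operation ids incl. source 'S' and sink 'T'; arcs: directed
    (u, v) pairs; tau: processing time per node (0 for 'S' and 'T').
    Returns the longest 'S' -> 'T' path length (= Cmax)."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    start = {v: 0 for v in nodes}
    queue = [v for v in nodes if indeg[v] == 0]   # topological sweep
    while queue:
        u = queue.pop()
        for v in succ[u]:
            # s_v >= s_u + tau_u: v cannot start before u starts and finishes
            start[v] = max(start[v], start[u] + tau[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return start['T']

# Jobs: J1 = o11 -> o12, J2 = o21 -> o22; o11/o22 share machine 1,
# o21/o12 share machine 2. Chosen orientation: o11 < o22 and o21 < o12.
tau = {'S': 0, 'T': 0, 'o11': 3, 'o12': 2, 'o21': 2, 'o22': 4}
arcs = [('S', 'o11'), ('S', 'o21'),        # source arcs
        ('o11', 'o12'), ('o21', 'o22'),    # conjunctive (job) arcs
        ('o11', 'o22'), ('o21', 'o12'),    # oriented disjunctive arcs
        ('o12', 'T'), ('o22', 'T')]        # sink arcs
print(makespan(tau.keys(), arcs, tau))     # 7
```

Here o22 must wait for o11 on machine 1 (it starts at 3 and ends at 7), which is the critical path.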

The total number of nodes, *n* = (|*O*| + 2), is fixed by the total number of tasks |*O*|: it is the total number of operations plus the two fictitious ones. The total number of arcs, in job notation, is fixed by the number of tasks and jobs of the instance:

$$n_{arcs} = \binom{|O|}{2} + 2 \times |J| = \frac{|O| \times (|O| - 1)}{2} + 2 \times |J| \tag{6}$$
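As a worked example of the formula above, for the instance of fig. 1, with |*O*| = 9 tasks and |*J*| = 3 jobs:

$$n_{arcs} = \frac{9 \times 8}{2} + 2 \times 3 = 36 + 6 = 42$$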

The number of arcs defines the possible combination paths. Each path from source to sink is a candidate solution for the JSSP. The routing graph is reported in figure 2:

**Figure 2.** Problem routing representation.


### **4. Meta-heuristics for solving the JSSP**

A logic has to be implemented in order to translate the scheduling problem into an algorithm structure. Academic research on scheduling problems has produced countless papers [23]. Scheduling has been approached from many perspectives, using formulations and tools of various disciplines such as control theory, physical science and artificial intelligence systems [24]. Optimization criteria range from applying simple priority rules to determine which job has to be processed next at the work-centres (i.e., dispatching) to the use of advanced optimizing methods that try to maximize the performance of the given environment [25]. Their way to a solution is generally approximate (heuristics), but it constitutes a promising alternative to exact methods and becomes the only possible one when the dimension and/or complexity of the problem is outstanding [26].


**Input:** Instance *x* ∈ *I* of *Пopt*; set algorithm parameters()
*i* ← 0
*Pop0* ← Initial\_population()
**Evaluate**\_fitness(*Pop0*)
**while** not termination condition **do**
  *i* ← *i*+1
  **Selection** *Popi* from *Popi-1*
  **Crossover**(*Popi*)
  **Mutation**(*Popi*)
  **Fitness**(*Popi*)
  **Replacement**\_procedure
**end while**
*Sbest* ← optimal solution in *Popi*
**Output:** *Sbest*, "candidate" to be the best found solution

Guidelines on using heuristics in combinatorial optimization can be found in Hertz (2003) [27]. A classification of heuristic methods was proposed by Zanakis et al. (1989) [28]. Heuristics are generally classified into *constructive heuristics* and *improvement heuristics*. The first ones focus on producing a solution starting from an initial proposal, extending it until all the jobs are assigned to a machine, regardless of the size of the problem [29]. The second ones are iterative algorithms which explore solutions by moving step by step from one solution to another. The method starts with an arbitrary solution and moves from one solution to another according to a series of basic modifications defined on a case-by-case basis [30].
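To illustrate the constructive class, the sketch below builds a job-shop schedule with a simple shortest-processing-time dispatching rule; this is a generic greedy construction on a hypothetical 2-job, 2-machine instance, not a specific method from the cited works:

```python
# Constructive-heuristic sketch: greedy SPT (shortest processing time)
# dispatching that builds a job-shop schedule task by task.
# The 2-job, 2-machine instance below is hypothetical.

# jobs[i] = ordered list of (machine, processing_time) tasks of job i
jobs = {1: [(1, 3), (2, 2)],
        2: [(2, 2), (1, 4)]}

next_task = {i: 0 for i in jobs}     # index of each job's next unscheduled task
job_ready = {i: 0 for i in jobs}     # completion time of each job's last task
machine_ready = {1: 0, 2: 0}         # time each machine becomes free
schedule = []                        # (job, machine, start, end) records

while any(next_task[i] < len(jobs[i]) for i in jobs):
    # Among jobs with work left, dispatch the candidate task with the
    # shortest processing time (ties broken by job id).
    candidates = [(jobs[i][next_task[i]][1], i) for i in jobs
                  if next_task[i] < len(jobs[i])]
    _, i = min(candidates)
    machine, p = jobs[i][next_task[i]]
    start = max(job_ready[i], machine_ready[machine])
    job_ready[i] = machine_ready[machine] = start + p
    schedule.append((i, machine, start, start + p))
    next_task[i] += 1

print(schedule)
print(max(job_ready.values()))   # makespan of the constructed schedule: 7
```

The rule always produces a feasible schedule in one pass, which is exactly the constructive behaviour described above; an improvement heuristic would instead start from this schedule and perturb it.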

Relatively simple rules guiding a heuristic, combining exploitation and exploration, are capable of producing better quality solutions than other algorithms from the literature for some classes of instances. These variants originate the class of meta-heuristic approaches [31]. Metaheuristics<sup>5</sup>, and heuristics in general, do not ensure optimal results, but they usually tend to work well [32]. The purpose of the paper is to illustrate the most promising optimization methods for the JSSP.

As optimization techniques, metaheuristics are stochastic algorithms aiming to solve a broad range of hard optimization problems for which no more effective traditional method is known. Often inspired by analogies with reality, such as physics (Simulated Annealing [33] and Electromagnetism-like Methods [34]), biology (Genetic Algorithms [35], Tabu Search [36]), ethology (Ant Colony [37], Bees Algorithm [38]) and human science (Neural Networks [39]), they are generally of discrete origin but can be adapted to other types of problems.

### **4.1. Genetic Algorithms (GAs)**

The methodology of GAs, based on the evolutionary strategy, transforms a population (set) of individual objects, each with an associated *fitness* value, into a new *generation* of the population by applying genetic operations such as *crossover* (*sexual recombination*) and *mutation* (fig. 3).

The theory of evolutionary computing was formalized by Holland in 1975 [40]. GAs are stochastic search procedures for combinatorial optimization problems based on the Darwinian principles of natural reproduction, survival and environmental adaptability [41]. The theory of evolution is biologically explained: the individuals with a stronger fitness are considered better able to survive. Cells, with one or more strings of DNA (i.e., a chromosome), make up an individual. The gene (i.e., a bit of chromosome located at its particular locus) is responsible for encoding traits (i.e., alleles). Physical manifestations arise from the genotype (i.e., the disposition of genes). Each genotype has its physical manifestation in a phenotype. According to these parameters it is possible to define a fitness value. Combining individuals through a

<sup>5</sup> The term metaheuristics was introduced by F. Glover in his paper on Tabu Search.

crossover (i.e., recombination of the genetic characteristics of the parents) through sexual reproduction, the chromosomal inheritance process is passed on to the offspring. In each epoch a stochastic mutation procedure occurs. The implemented algorithm is able to simulate the natural process of evolution, coupling solutions of scheduling routes in order to determine an optimal task assignment. Generally, a GA has several basic components: representation, initial population, evaluation function, reproduction selection scheme, genetic operators (mutation and crossover) and stopping criteria. Central to the success of any GA is the suitability of its representation to the problem at hand [42]. This is the encoding from the solution of the problem domain to the genetic representation.


During the last decades, different representation schemata for the JS have been proposed, such as *permutation with repetition*. It uses a sequence of repeated job identifiers (e.g., the job's cardinal number) to represent solutions [43]. For the instance at issue, each of the *N* job identifiers is repeated *M* times, once for each task. The first appearance of a job's identifier, reading from left to right, denotes the first task of that job; in this way, precedence constraints are satisfied. Redundancy is the most common caveat of this representation. A proposal of permutation with repetition applying a Generalized Order crossover (GOX) with band |2 3 1 1| of parent 1 moves from PARENT1 [3 2 3 1 1 1 3 2 2] and PARENT2 [2 3 2 1 3 3 2 1 1] to CHILD1 [2 3 1 1 3 2 3 2 1] and CHILD2 [3 2 1 3 2 1 1 3 2].
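The decoding step of this representation can be sketched as follows (the `decode` helper is a hypothetical illustration, not a function from the cited works): the *k*-th occurrence of job identifier *i*, reading left to right, denotes the *k*-th task of job *i*, so precedence within each job holds by construction.

```python
# Decode a permutation-with-repetition chromosome into its task sequence.
# The k-th occurrence of job id i (left to right) is the k-th task of job i.
from collections import defaultdict

def decode(chromosome):
    seen = defaultdict(int)
    sequence = []
    for job in chromosome:
        seen[job] += 1
        sequence.append((job, seen[job]))   # (job id, task index within job)
    return sequence

# The chapter's PARENT1 chromosome:
print(decode([3, 2, 3, 1, 1, 1, 3, 2, 2]))
# [(3, 1), (2, 1), (3, 2), (1, 1), (1, 2), (1, 3), (3, 3), (2, 2), (2, 3)]
```

Any reordering of the chromosome still decodes to a feasible task order, which is why crossover and mutation can operate on it freely.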

**Figure 3.** The Genetic Algorithms (GAs) model; 3a. the pseudo-code of a GA; 3b. the flow chart of a general GA.
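The GA loop of fig. 3a can be made concrete in a short runnable sketch; it is applied here to a toy bit-string problem, and the fitness function, population size and rates are illustrative choices, not values from the chapter:

```python
# Runnable sketch of the generic GA loop (selection, crossover, mutation,
# replacement) on a toy "maximise the number of 1-bits" problem.
import random

def genetic_algorithm(fitness, length=20, pop_size=30, epochs=60,
                      cx_rate=0.9, mut_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(epochs):                      # "while not termination condition"
        def select():                            # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < cx_rate:           # one-point crossover
                cut = rng.randrange(1, length)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                   # stochastic mutation: bit flips
                offspring.append([g ^ 1 if rng.random() < mut_rate else g
                                  for g in c])
        pop = offspring                          # replacement procedure
    return max(pop, key=fitness)                 # S_best

best = genetic_algorithm(sum)   # fitness = number of 1-bits in the chromosome
print(sum(best))
```

For a JSSP the bit-string would be replaced by the permutation-with-repetition chromosome and the fitness by the (negated) makespan; the loop itself is unchanged.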

A mutation operator is applied changing genes within the same genotype (in order to generate only feasible solutions, i.e., without a rejection procedure). Mutation allows diversifying the search over a broader solution domain and is needed when there is a low level of crossover. Among solutions, the allocations with favourable fitness have a higher probability of being selected through the selection mechanisms.

Another important issue for the GA is the selection mechanism (e.g., the Tournament Selection procedure and the Roulette Wheel, as commonly used [44]; their performances are quite similar with respect to convergence time). The *tournament selection* procedure is based on an analogy with a competition: among the genotypes in the tournament, the individual which wins (e.g., the one with the best fitness value) is placed in the mating pool. Likewise, in the *roulette wheel selection* mechanism each individual of the population has a selection likelihood proportional to its objective score (in analogy with a real roulette wheel) and, with a probability equal to that of a ball in a roulette, one of the solutions is chosen.
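Both selection mechanisms can be sketched over a toy population; the individuals and fitness values below are illustrative:

```python
# Tournament and roulette-wheel selection over a toy (solution, fitness)
# population; values are illustrative.
import random

population = [('A', 5.0), ('B', 3.0), ('C', 1.5), ('D', 0.5)]
rng = random.Random(42)

def tournament(pop, k=2):
    """Pick k contenders at random; the best fitness wins the tournament."""
    return max(rng.sample(pop, k), key=lambda ind: ind[1])

def roulette_wheel(pop):
    """Selection likelihood proportional to fitness (the objective score)."""
    total = sum(f for _, f in pop)
    spin, acc = rng.uniform(0, total), 0.0
    for ind, f in pop:
        acc += f
        if acc >= spin:
            return ind
    return pop[-1][0]

# 'A' holds 5.0 of the 10.0 total fitness, so it should win about half
# of the roulette spins.
picks = [roulette_wheel(population) for _ in range(10000)]
print(picks.count('A') / len(picks))   # close to 0.5
print(tournament(population))
```

The empirical frequency converges to the fitness share, which is the proportionality property described above.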


It is very important for the success of GAs to select the correct ratio between crossover and mutation, because the first allows diversifying the search field, while a mutation modifies a solution.

### **4.2. Ant Colony Optimization (ACO) algorithms**

If, during a picnic, we peer at our cake bitten by a colony of ants moving in a tidy way along a layout that is the optimal one in view of obstacles and length, we discover how remarkable nature is, and we find its evolution to be the inspiring source for investigations on intelligent operation scheduling techniques [45]. Natural ants are capable of establishing the shortest route from their colony to the feeding sources, relying on the phenomenon of *swarm intelligence* for survival. They make decisions that seemingly require a high degree of cooperation, smelling and following a chemical substance (i.e., pheromone<sup>6</sup>) laid on the ground and proportional to the goodness of the load they carry (i.e., in a scheduling approach, the goodness of the objective function, reported to the makespan in this applicative case).

The same behaviour of natural ants can be reproduced in an artificial system with an artificial communication strategy, regarded as a direct metaphoric representation of natural evolution. The essential idea of an ACO model is that "good solutions are not the result of a sporadic good approach to the problem but the incremental output of good partial solutions". Artificial ants are quite different from their natural progenitors, maintaining a memory of the steps before the last one [37]. Computationally, ACO [46] is a population-based approach built on stochastic solution construction procedures with retroactive control improvement, which builds the solution route with a probabilistic approach and through a suitable selection procedure by taking into account: *(a) heuristic information* on the problem instance being solved; *(b)* (man-made) *pheromone amounts*, different from ant to ant, which are stored up and evaporate dynamically at run-time to reflect the agents' acquired search training and the elapsed-time factor.

The initial schedule is constructed by taking into account heuristic information and the initial pheromone setting; if several routes are applicable, a self-created selection procedure chooses the task to process. The same process is followed during the whole run time. The probabilistic approach is focused on pheromone. A path's attractiveness rises as it is chosen, and its probability increases with the number of times that the same path was chosen before [47]. At the same time, the employment of heuristic information can guide the ants towards the most promising solutions and, additionally, the use of an agents' colony can give the algorithm: *(i)* robustness on a fixed solution; *(ii)* flexibility between different paths.

The approach focuses on co-operative ant colony food retrieval applied to scheduling routing problems. Colorni et al., building on studies by Dorigo *et al.* [48], were the first to apply the Ant System

<sup>6</sup> A highly volatile organic compound that acts on the central nervous system as a releaser of actions.

(AS) to the job scheduling problem [49] and dubbed this approach **A**nt **C**olony **O**ptimization (ACO). They iteratively create routes, adding components to partial solutions, by taking into account heuristic information on the problem instance being solved (i.e., visibility) and "artificial" pheromone trails (with their storing and evaporation criteria). Through the representation of the scheduling problem as an acyclic graph, see fig. 2, the ants' routing from source to food is assimilated to the scheduling sequence. Think of ants as agents, nodes as tasks and arcs as the release of a production order. According to the constraints, the ants perform a path from the raw material warehouse to the final products one.


**Figure 4.** The Ant Colony Optimization (ACO) model; 4a. the pseudo-code of an ACO algorithm; 4b. the flow chart of a general ACO procedure.

Constraints are introduced depending on jobs and resources. Fitness is introduced to translate how good the explored route was. Artificial ants live in a computer-realized world. They have an overview of the problem instance they are going to solve through a visibility factor. In the Job Shop side of the ACO implementation, the visibility has been chosen tied to the run time of the task (Eq. 7). The information concerns the inquired task's (i.e., *j*) completion time *Ctimej* and idle time *Itimej* from the previous position (i.e., *i*):

$$\eta_{ij}(t) = \frac{1}{Ctime_j - Itime_j} = \frac{1}{Rtime_j} \tag{7}$$

The colony is composed of a fixed number of agents *ant=1,…, n*. A probability is associated to each feasible movement (*Sant(t)*) and a selection procedure (generally based on *RWS* or *Tournament* procedure) is applied.

0 ≤ *Pij ant*(*t*) ≤ 1 is the probability that at time *t* the generic agent *ant* chooses edge *i* → *j* as the next routing path; at time *t* each *ant* chooses the next operation, where it will be at time *t+1*. This value is evaluated through the visibility (*η*) and pheromone (*τ*) information. The probability value (Eq. 8) is associated with a fitness in the selection step.

$$P_{ij}^{ant}(t) = \begin{cases} \dfrac{\left[\tau_{ij}(t)\right]^{\alpha}\left[\eta_{ij}(t)\right]^{\beta}}{\sum_{(i,l)\in S_{ant}(t)}\left[\tau_{il}(t)\right]^{\alpha}\left[\eta_{il}(t)\right]^{\beta}} & \text{if the } ant\text{-th ant can follow edge } (i,j) \\[2ex] 0 & \text{otherwise} \end{cases} \tag{8}$$
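The transition rule of Eq. (8) can be sketched as follows; the edge names, pheromone levels, visibility values and the exponents α and β below are illustrative, with visibility computed as in Eq. (7):

```python
# Sketch of the ACO transition rule (Eq. 8): the probability of taking edge
# (i, j) combines pheromone tau (exponent alpha) and visibility eta
# (exponent beta), normalised over the currently feasible moves.
def transition_probabilities(tau, eta, feasible, alpha=1.0, beta=2.0):
    """tau, eta: dicts edge -> pheromone / visibility; feasible: edges the
    ant may follow from its current node. Returns edge -> probability."""
    weight = {e: (tau[e] ** alpha) * (eta[e] ** beta) for e in feasible}
    total = sum(weight.values())
    return {e: w / total for e, w in weight.items()}

# Visibility as in Eq. (7): the reciprocal of the remaining time Rtime_j.
rtime = {('i', 'j'): 2.0, ('i', 'k'): 4.0}
eta = {e: 1.0 / t for e, t in rtime.items()}
tau = {('i', 'j'): 1.0, ('i', 'k'): 1.0}   # uniform initial pheromone

probs = transition_probabilities(tau, eta, feasible=list(rtime))
print(probs)   # edge ('i','j') gets probability 0.8, ('i','k') gets 0.2
```

With uniform pheromone and β = 2, the shorter move is four times more attractive, illustrating how visibility dominates early iterations before pheromone accumulates.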

Production Scheduling Approaches for Operations Management

http://dx.doi.org/10.5772/55431



Where *τij*(*t*) represents the intensity of the trail on connection (*i*, *j*) at time *t*. The intensity of pheromone at iteration *t=0*, *τij(0)*, is set to a small positive constant in order to help avoid local optima; α and β are user-defined values tuning the relative importance of the pheromone vs. the heuristic time-distance coefficient. They have to be chosen in 0<*α*, *β* ≤10 (in order to ensure the right selection pressure).

At each cycle the agents of the colony leave the source in search of food. When all colony agents have constructed a complete path, i.e. a feasible sequence of visited nodes, a pheromone update rule is applied (Eq. 9):

$$\tau_{ij}(t+1) = (1-\lambda)\,\tau_{ij}(t) + \Delta\tau_{ij}(t) \tag{9}$$

Besides the ants' activity, *pheromone trail evaporation* has been included through a coefficient representing pheromone vanishing as time elapses. This parameter imitates the natural decrease of pheromone trail intensity over time and implements a useful form of forgetting. A simple decay coefficient (i.e., 0<*λ* <1) is considered, which acts on the total laid pheromone level between time *t* and *t+1*.

The pheromone laid on the inquired path is evaluated by considering how many agents chose that path and what the objective value of that path was (Eq. 10). The weight of the solution goodness is the makespan (i.e., *Lant*). A pheromone-updating constant (i.e., *Q*), equal for all ants and user-defined according to the tuning of the algorithm, is introduced as the quantity of pheromone per unit of time (Eq. 11). The algorithm works as follows. The makespan value is computed for each agent of the colony (*Lant(0)*), following the visibility and the pheromone initially defined by the user (*τij(0)*), equal for all connections. The amount of pheromone on each arc is then evaluated and laid according to the disjunctive graph representation of the instance at issue (the evaporation coefficient is applied to design the environment at the next step).

$$\Delta\tau_{ij}(t) = \sum_{ant=1}^{ants} \Delta\tau_{ij}^{ant}(t) \tag{10}$$

$$\Delta\tau_{ij}^{ant}(t) = \begin{cases} \dfrac{Q}{L_{ant}} & \text{if the } ant\text{-th ant followed edge } (i,j) \\ 0 & \text{otherwise} \end{cases} \tag{11}$$

Visibility and the updated pheromone trail fix the probability (i.e., the fitness value) of each node (i.e., task) at each iteration; for each cycle, the output of the objective function (*Lant(t)*) is evaluated, and the objective function value is optimised according to the partial good solutions. In this improvement, relative importance is given to the parameters α and β. Good criteria for choosing these two parameters are: *α* / *β* ≅0 (which means a low level of *α*), with a small value of α (0<*α* ≤2), while *β* ranges in a larger interval (0<*β* ≤6).
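The interplay of visibility, pheromone and evaporation (Eqs. 7–11) can be sketched in a few lines of Python. The tiny task graph, the runtimes and all parameter values below are illustrative assumptions, not data from the chapter; one colony cycle is simulated and the pheromone trail is then updated.

```python
import random

# Hypothetical illustration of one ACO cycle over a tiny task graph.
# Edge runtimes stand in for Rtime_j = Ctime_j - Itime_j (Eq. 7).
ALPHA, BETA = 1.0, 2.0   # low alpha, beta in a larger range, as suggested above
LAMBDA, Q = 0.1, 1.0     # evaporation coefficient and pheromone constant

runtime = {('s', 'a'): 3.0, ('s', 'b'): 5.0, ('a', 'e'): 2.0, ('b', 'e'): 2.0}
tau = {edge: 0.01 for edge in runtime}                 # small positive tau_ij(0)
eta = {edge: 1.0 / r for edge, r in runtime.items()}   # visibility (Eq. 7)

def choose_next(node):
    """Pick the next edge with the probability of Eq. 8 (roulette-wheel style)."""
    feasible = [e for e in runtime if e[0] == node]
    weights = [(tau[e] ** ALPHA) * (eta[e] ** BETA) for e in feasible]
    r = random.random() * sum(weights)
    cum = 0.0
    for e, w in zip(feasible, weights):
        cum += w
        if r <= cum:
            return e
    return feasible[-1]

# One cycle: each ant builds a complete path s -> e
paths = []
for ant in range(4):
    node, path, length = 's', [], 0.0
    while node != 'e':
        edge = choose_next(node)
        path.append(edge)
        length += runtime[edge]
        node = edge[1]
    paths.append((path, length))

# Pheromone update (Eqs. 9-11): deposit Q/L_ant on used edges, then evaporate
deposit = {edge: 0.0 for edge in runtime}
for path, L_ant in paths:                 # L_ant plays the role of the makespan
    for edge in path:
        deposit[edge] += Q / L_ant        # Eq. 11
for edge in tau:
    tau[edge] = (1 - LAMBDA) * tau[edge] + deposit[edge]   # Eq. 9
```

Repeating the cycle makes the shorter path accumulate more pheromone and therefore attract more ants at the next iterations.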

### **4.3. Bees Algorithm (BA) approach**



A colony of bees exploits, in multiple directions simultaneously, food sources in the form of anthers with plentiful amounts of nectar or pollen, and bees are able to cover distances of kilometres to reach good foraging fields [50]. Flower paths are covered based on a stigmergic approach – places with more nectar should be visited by more bees [51].

The foraging strategy in colonies of bees starts with scout bees – a percentage of the beehive population – which wander randomly from one patch to another. Returning to the hive, the scout bees deposit their nectar or pollen and, for sources rated above a certain quality threshold of stored nectar, start a recruiting mechanism [52]. The recruiting mechanism consists of a vigorous dance performed over the honeycomb; this natural process is known as the "waggle dance" [53]. Recruiting bees perform from one to one hundred circuits, each with a waggle phase and a return phase. The waggle phase carries information about the direction and distance of flower patches: a waggle phase run upwards on the vertical honeycomb indicates a flower patch in line with the sun, and the dance can be developed to the right or to the left. So, through this dance, it is possible to understand the distance to the flower, the presence of nectar and the side of the sun to choose [54].

The waggle dance is used as a guide or a map to evaluate the merits of the different explored patches and to exploit the better solutions. After waggle dancing on the dance floor, the dancer (i.e. the scout bee) goes back to the flower patch with follower bees that were waiting inside the hive, and the squadron moves into the patches. More follower bees are sent to the more promising patches, while poorer paths keep being explored but are not harvested in the long term. This constitutes a swarm-intelligence approach [55], which allows the colony to gather food quickly and efficiently through a recursive recruiting mechanism [56].

The Bees Algorithm (i.e., BA) is a population-based search inspired by this natural process [38]. In its basic version, the algorithm performs a kind of neighbourhood search combined with random search. Advanced mechanisms could be guided by genetic [57] or tabu operators [58]. The standard Bees Algorithm, first developed by Pham and Karaboga in 2006 [59, 60], requires a set of parameters: no. of scout bees (*n*), no. of sites selected out of *n* visited sites (*m*), no. of best sites out of *m* selected sites (*e*), no. of bees recruited for the best *e* sites (*nep*), no. of bees recruited for the other *m-e* selected sites (*nsp*), and the initial size of patches (*ngh*). The standard BA starts with random search.


The honey bees' effective foraging strategy can be applied to operations management problems such as the JSSP. For each solution, a complete schedule of the operations in the JSP is produced; the makespan of the solution is analogous to the profitability of the food source in terms of distance and sweetness of the nectar. The *n* scout bees explore the *m* patch sites – initially one scout bee per path can be set over the total ways (*ngh*), chosen randomly at the first stage according to the disjunctive graph of fig. 2 – selecting, after the first iteration, the shorter makespan, i.e., the more profitable solution path.

**Figure 5.** The Bees Algorithm model; 5a. the BA pseudo-code; 5b. the flow chart of a general BA procedure.

Together with scouting, this differential recruitment is the key operation of the BA. Once a feasible solution is found, each bee returns to the hive to perform a waggle dance. The output of the waggle dance is represented by a list of "elite solutions": the *e* best selected sites, for which the *nep* recruited bees are chosen for exploitation from the population in the hive. Searches of patches are conducted by the other *nsp* bees in the neighbourhood of the remaining *m-e* selected sites. The system repeats for *imax* iterations, in each of which every bee of the colony traverses a potential solution. Flower patches (*e* sites) with a better fitness (makespan) have a higher probability of entering the "elite solutions", promoting the exploitation of an optimal solution.
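A minimal numeric sketch of the loop just described – scouting, selection of the best sites, differential recruitment and random re-seeding – is reported below. It minimises a toy one-dimensional function instead of a JSSP makespan; the fitness function and all parameter values are illustrative assumptions.

```python
import random

# Minimal Bees Algorithm sketch on a toy 1-D minimisation problem.
# Parameter names follow the text; the fitness function is assumed.
n, m, e = 10, 5, 2        # scout bees, selected sites, elite sites
nep, nsp = 4, 2           # recruits for the elite and for the other m-e sites
ngh = 0.5                 # initial patch size
i_max = 50                # stopping criterion: number of iterations

def fitness(x):
    return (x - 3.0) ** 2  # stands in for the makespan: lower is better

random.seed(1)
scouts = [random.uniform(-10, 10) for _ in range(n)]
for _ in range(i_max):
    scouts.sort(key=fitness)
    sites = scouts[:m]                    # select m sites for neighbourhood search
    new_population = []
    for rank, site in enumerate(sites):
        recruits = nep if rank < e else nsp   # more bees for the best e sites
        patch = [site + random.uniform(-ngh, ngh) for _ in range(recruits)]
        new_population.append(min(patch + [site], key=fitness))  # fittest bee per patch
    # the remaining n - m bees keep searching at random
    new_population += [random.uniform(-10, 10) for _ in range(n - m)]
    scouts = new_population

best = min(scouts, key=fitness)
print(round(best, 2))  # close to the minimum at 3.0
```

The elitist step (the incumbent site is kept if no recruit improves on it) guarantees that the best solution never worsens across iterations.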

### **4.4. Electromagnetism like Method (EM)**

The Electromagnetism-like Algorithm is a population-based meta-heuristic proposed by Birbil and Fang [61] to tackle combinatorial optimisation problems. The algorithm is based on the natural law of attraction and repulsion between charges (Coulomb's law) [62] and simulates electromagnetic interaction [63]. It evaluates the fitness of solutions by considering the charge of particles, where each particle represents a solution. Two points in the space have different charges in relation to the electromagnetic field that acts on them [64]. An electrostatic force, of repulsion or attraction, manifests between two point charges; it is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them. The fixed charge at iteration (*t*) of particle *i* is shown as follows:


$$q_i(t) = \exp\left(-n\,\frac{f(x_i,t) - f(x_{best},t)}{\sum_{k=1}^{m}\left(f(x_k,t) - f(x_{best},t)\right)}\right), \quad \forall i = 1,\ldots,m \tag{12}$$

Where *t* represents the iteration step; *qi(t)* is the charge of particle *i* at iteration *t*; *f(xi,t)*, *f(xbest,t)* and *f(xk,t)* denote the objective values of particle *i*, of the best solution, and of particle *k* out of the *m* particles at time *t*; finally, *n* is the dimension of the search space. The charge of each point *i*, *qi(t)*, determines the point's power of attraction or repulsion. Each point (*xi*) can be evaluated as a task in the graph representation (fig. 2).

Each particle moves according to the total force exerted on it, and diversified solutions are thereby generated. The resultant force on particle *i* is formulated as follows:

$$F_i(t) = \sum_{j \neq i}^{m} \begin{cases} \left(x_j(t) - x_i(t)\right) \dfrac{q_i(t)\, q_j(t)}{\left\| x_j(t) - x_i(t) \right\|^2} & \text{if } f(x_j,t) < f(x_i,t) \\[2ex] \left(x_i(t) - x_j(t)\right) \dfrac{q_i(t)\, q_j(t)}{\left\| x_j(t) - x_i(t) \right\|^2} & \text{if } f(x_j,t) \ge f(x_i,t) \end{cases}, \quad \forall i = 1,\ldots,m \tag{13}$$

The following notes describe an adapted version of EM for the JSSP. In this application, the initial population is obtained by choosing particles' paths randomly from the list of pending tasks, subject to solution feasibility. The generic pseudo-code for the EM is reported in figure 6. Each particle is initially located in the source node (see the disjunctive graph of figure 2) and is uniquely defined by a charge and a location in the node space; the particle's position in each node is defined in a multigrid discrete set. While moving, a particle jumps to a node based on its attraction force, defined in magnitude, direction and sense. If the force from the starting line to the arrival satisfies a positive inequality, the particles are located in a plane position in linear dependence on the force intensity. A selection mechanism can be set in order to decide where a particle is directed, based on the force intensity at the node; the force is therefore the resultant of the particles acting in the node. A solution for the JS is obtained only after a complete path from the source to the sink, and the resulting force is updated according to the normalized makespan of the different solutions.

**Figure 6.** The Electromagnetic like Method; **6a**. the EM pseudo code; **6b**. the flow chart of a general EM procedure.
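To make Eqs. 12 and 13 concrete, the following sketch computes the charges and the resultant forces for a toy one-dimensional population; the objective function and the particle positions are illustrative assumptions and do not reproduce the JSSP adaptation described above.

```python
import math

# Sketch of the EM charge (Eq. 12) and resultant force (Eq. 13) for a toy
# 1-D population; objective function and positions are assumptions.
def f(x):
    return x * x  # lower is better

particles = [-2.0, 0.5, 3.0]
m, n_dim = len(particles), 1
best = min(particles, key=f)

# Eq. 12: the best particle gets the maximum charge, exp(0) = 1
denominator = sum(f(x) - f(best) for x in particles)
charges = [math.exp(-n_dim * (f(x) - f(best)) / denominator) for x in particles]

# Eq. 13: better particles attract, worse particles repel
forces = []
for i, xi in enumerate(particles):
    total = 0.0
    for j, xj in enumerate(particles):
        if i == j:
            continue
        coulomb = charges[i] * charges[j] / abs(xj - xi) ** 2
        if f(xj) < f(xi):
            total += (xj - xi) * coulomb   # attraction toward the better point
        else:
            total += (xi - xj) * coulomb   # repulsion from the worse point
    forces.append(total)
```

In this example the best particle carries a charge of exactly 1, and the resultant forces push the two outer particles toward the better region of the space.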


### **4.5. Simulated Annealing (SA)**


Simulated annealing was presented by Scott Kirkpatrick *et al*. in 1983 [65] and by Vlado Černý in 1985 [66]. This optimization method is based on the work of Metropolis *et al*. [67], which allows describing the behaviour of a system in thermodynamic equilibrium at a certain temperature. It is a generic probabilistic metaheuristic used to find a good approximation to the global optimum of a given objective function, and it is mostly used with discrete problems, such as the majority of operations management problems.

Its name and inspiration come from annealing in metallurgy, a technique that, through heating and a controlled process of cooling, can increase the size of the crystals inside the fused piece and reduce the defects inside the crystal structure. The technique deals with the minimization of the global energy *E* inside the material, using a control parameter called temperature to evaluate the probability of accepting an uphill move inside the crystal structure. The procedure starts with an initial temperature level *T*, and a new random solution is generated at each iteration; it is accepted if it improves the objective function, i.e., if the *E* of the system is lower than the previous one. Otherwise, the new random solution is accepted with a likelihood given by the probability *exp(-ΔE/T)*, where *ΔE* is the variation of the objective function. Afterwards a new iteration of the procedure is implemented.

The pseudo-code of a general simulated annealing procedure is as follows:

**Figure 7.** The Simulated Annealing model; 7a. the SA pseudo code; 7b. the flow chart of a general SA procedure

For scheduling issues, the application of SA techniques requires the fitness of the solutions generated at each iteration, which is generally associated with the cost of a specific scheduling solution; the control parameter is the temperature, which is reduced at each iteration [68]. The acceptance probability can be measured as follows:

$$A_{ij} = \min\left\{1,\; \exp\left(-\frac{C(j) - C(i)}{c}\right)\right\} \tag{14}$$

Another facet to be analysed is the stopping criterion, which can be fixed as the total number of iterations of the procedure to be computed.
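The acceptance rule of Eq. 14 and the iteration-count stopping criterion can be sketched as follows; the cost function, the neighbourhood move, the cooling rate and all numeric parameters are illustrative assumptions.

```python
import math
import random

# Minimal simulated-annealing sketch for a toy cost function;
# all parameters here are illustrative assumptions.
def cost(x):
    return (x - 7) ** 2  # stands in for the cost C(i) of a schedule

def accept(c_current, c_new, temperature):
    """Acceptance probability of Eq. 14: min{1, exp(-(C(j) - C(i)) / c)}."""
    if c_new <= c_current:
        return True                      # downhill moves are always accepted
    return random.random() < math.exp(-(c_new - c_current) / temperature)

random.seed(2)
state, temperature, max_iterations = 0.0, 10.0, 500
for _ in range(max_iterations):          # stopping criterion: iteration count
    neighbour = state + random.uniform(-1, 1)   # random move in the neighbourhood
    if accept(cost(state), cost(neighbour), temperature):
        state = neighbour
    temperature *= 0.99                  # the temperature is reduced each iteration

print(round(state, 1))  # should settle close to the minimum at 7
```

At high temperature almost any move is accepted (broad exploration); as the temperature decays, uphill moves become exponentially rarer and the search freezes near a minimum.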

#### **4.6. Tabu Search (TS)**



Tabu search (Glover, 1986) is an iterative search approach characterised by the use of a flexible memory [69]. The process by which tabu search overcomes local optimality is based on an evaluation function that chooses the highest-evaluation solution at each iteration. The evaluation function selects the move, in the neighbourhood of the current solution, that produces the most improvement or the least deterioration in the objective function. Since movements are accepted based on a probability function, a tabu list is employed to store the characteristics of accepted moves, so as to classify them as taboo (i.e., to be avoided) in later iterations; this is used to avoid cycling movements. A strategy called forbidding is employed to control and update the tabu list. This method was formalized by Glover [69]. An algorithm based on tabu search requires some elements: (i) the move, (ii) the neighbourhood, (iii) an initial solution, (iv) a search strategy, (v) a memory, (vi) an objective function and (vii) a stop criterion. The logic of TS is based on the definition of a first feasible solution S, which is stored as the current seed and the best solution; at each iteration the set of neighbours is selected among the possible solutions deriving from the application of a movement, the value of the objective function is evaluated for all the possible movements, and the best one is chosen. The new solution is accepted even if its value is worse than the previous one, and the movement is recorded in a list, named the tabu list.
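The elements listed above – move, neighbourhood, initial solution, search strategy, memory (tabu list), objective function and stop criterion – can be sketched on a toy single-machine sequencing problem; the jobs, processing times, due dates and tabu tenure below are illustrative assumptions.

```python
# Minimal tabu-search sketch on a toy 4-job single-machine sequencing problem;
# processing times, due dates and tabu tenure are assumptions.
processing = {'A': 4, 'B': 2, 'C': 6, 'D': 3}
due = {'A': 5, 'B': 4, 'C': 13, 'D': 14}

def tardiness(sequence):
    """Objective function: total tardiness of the sequence."""
    time, total = 0, 0
    for job in sequence:
        time += processing[job]
        total += max(0, time - due[job])
    return total

def neighbours(sequence):
    """Moves: swaps of two adjacent jobs; the move is the swapped pair."""
    for i in range(len(sequence) - 1):
        s = list(sequence)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield tuple(s), frozenset((sequence[i], sequence[i + 1]))

current = ('C', 'D', 'A', 'B')      # initial feasible solution (the seed)
best = current
tabu_list, tenure = [], 3           # flexible memory: recent swaps are taboo
for _ in range(20):                 # stop criterion: fixed number of iterations
    candidates = [(tardiness(s), s, move) for s, move in neighbours(current)
                  if move not in tabu_list]
    if not candidates:
        break
    value, current, move = min(candidates)   # best move, even if it worsens
    tabu_list.append(move)
    tabu_list = tabu_list[-tenure:]          # bounded memory
    if tardiness(current) < tardiness(best):
        best = current

print(best, tardiness(best))  # -> ('B', 'A', 'C', 'D') 2
```

Note how the search accepts worsening moves once a local optimum is reached, while the tabu list forbids immediately undoing a recent swap and thus prevents cycling.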


Production Scheduling Approaches for Operations Management

http://dx.doi.org/10.5772/55431



For the job shop scheduling problem, generally a row of assignments of *n* jobs to *m* machines is randomly generated and the associated *cost* is calculated to define the fitness of the solution [70]. Movement rules can then be defined, such as the crossover of some jobs to different machines, producing new solutions and generating new values of the objective function. The best among the new solutions is chosen and the movement is recorded in a specific file named the taboo list. The stopping criterion can be defined in many ways, but the simplest is to set a maximum number of iterations [71].

Figure 8 reports the pseudo-code and the flowchart for the application of TS to the JSSP.

**Figure 8.** The Tabu Search approach; 8a. the TS pseudo code; 8b. the flow chart of a general TS procedure.
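The TS loop described above can be sketched as follows, assuming (as an illustration, not the chapter's own implementation) that a solution is a job permutation, a move is a pair swap, and the tabu list is a fixed-length queue of recent moves:

```python
from collections import deque
from itertools import combinations

def tabu_search(initial, cost, iterations=100, tenure=7):
    """Best-admissible-move tabu search over job permutations (moves are pair swaps)."""
    current = list(initial)
    best, best_cost = list(current), cost(current)
    tabu = deque(maxlen=tenure)  # flexible memory: recently applied swap moves
    for _ in range(iterations):
        candidates = []
        for i, j in combinations(range(len(current)), 2):
            if (i, j) in tabu:
                continue  # tabu moves are skipped (no aspiration criterion in this sketch)
            neighbour = list(current)
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            candidates.append((cost(neighbour), (i, j), neighbour))
        if not candidates:
            break  # whole neighbourhood is taboo
        c, move, neighbour = min(candidates)
        current = neighbour          # accepted even if worse than the previous solution
        tabu.append(move)            # record the movement in the taboo list
        if c < best_cost:
            best, best_cost = neighbour, c
    return best, best_cost
```

For example, minimising the total completion time of three jobs with processing times 3, 1 and 2 leads the search to the shortest-processing-time order even though worse intermediate moves are accepted along the way.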

### **4.7. Neural Networks (NNs)**

Neural networks are a technique based on models of the biological brain structure. Artificial Neural Networks (NNs), first developed by McCulloch and Pitts in 1943, are a mathematical model that aims to reproduce the learning process of the human brain [72]. They are used to simulate and analyse complex systems starting from known input/output examples. An algorithm processes data through its interconnected network of processing units, comparable to neurons. Consider the neural network procedure to be a "black box": for any particular set of inputs (a particular scheduling instance), the black box gives a set of outputs that are suggested actions to solve the problem, even though the output cannot be generated by a known mathematical function. NNs are an adaptive system, constituted by several artificial neurons interconnected to form a complex network, which change their structure depending on internal or external information. In other words, this model is not programmed to solve a problem but learns how to do so by performing a *training* (or *learning*) *process* which uses a record of examples. This data record, called the *training set*, is constituted by inputs with their corresponding outputs. This process reproduces almost exactly the behaviour of the human brain learning from previous experience.


The basic architecture of a neural network, starting from the taxonomy of the problems that can be faced with NNs, consists of three layers of neurons: the *input layer*, which receives the signal from the external environment and is constituted by a number of neurons equal to the number of input variables of the problem; the *hidden layer* (one or more, depending on the complexity of the problem), which processes data coming from the input layer; and the *output layer*, which gives the results of the system and is constituted by as many neurons as the output variables of the system.
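A forward pass through the three layers just described can be sketched as follows; the sigmoid activation and the weight layout are illustrative assumptions, not the chapter's formulation:

```python
import math

def sigmoid(x):
    """Smooth activation squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """One pass through input -> hidden -> output layers.

    w_hidden[j] holds the weights from every input neuron to hidden neuron j;
    w_output[k] holds the weights from every hidden neuron to output neuron k.
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
              for weights in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(weights, hidden)))
            for weights in w_output]
```

With two input variables, two hidden neurons and one output neuron, `forward` takes a two-element input vector and returns a single value between 0 and 1.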

The error of NNs is set according to a testing phase (to confirm the actual predictive power of the network while adjusting the weights of the links). After having built a training set of examples coming from historical data and having chosen the kind of architecture to use (among feedforward networks, recurrent networks), the most important step of the implementation of NNs is the learning process. Through training, the network can infer the relation between input and output, defining the "strength" (weight) of the connections between single neurons. This means that, from a very large number of extremely simple processing units (neurons), each of them performing a weighted sum of its inputs and then firing a binary signal if the total input exceeds a certain level (activation threshold), the network manages to perform extremely complex tasks. It is important to note that different categories of learning algorithms exist: (i) supervised learning, with which the network learns the connection between input and output thanks to known examples coming from historical data; (ii) unsupervised learning, in which only input values are known and similar stimulations activate close neurons while different stimulations activate distant neurons; and (iii) reinforcement learning, a retro-activated algorithm capable of defining new values of the connection weights starting from the observation of the changes in the environment. Supervised learning by the back error propagation (BEP) algorithm has become the most popular method of training NNs. Applications of BEP neural networks to production scheduling appear in Dagli et al. (1991) [73], Cedimoglu (1993) [74], Sim et al. (1994) [75] and Kim et al. (1995) [76].

The NN architectures most used for the JSSP are the searching network (Hopfield net) and the *error correction network* (Multi-Layer Perceptron). The Hopfield network (a content-addressable memory system with weighted threshold nodes) dominates neural-network-based scheduling systems [77]. It is the only structure that reaches adequate results on benchmark problems [78], and it is also the best NN method for other machine scheduling problems [79]. In Storer *et al.* (1995) [80] this technique was combined with several iterated local search algorithms, among which space genetic algorithms clearly outperform other implementations [81]. The technique's objective is to minimize the energy function *E* that corresponds to the makespan of the schedule. The values of the function are determined by the precedence and resource constraints, whose violation increases a penalty value. The Multi-Layer Perceptron (MLP) consists of a black box of several layers allowing inputs to be added together, strengthened, stopped, non-linearized [82], and so on [83]. The black box has a large number of knobs on the outside which can be tuned to adjust the output. For a given input problem, the training (the network data set is used to adjust the weights of the neural network) is set towards the optimum target. Training an MLP is NP-complete in general.
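The idea of an energy function equal to the makespan plus penalties for violated precedence and resource constraints can be sketched as follows; the schedule encoding and the penalty weight are illustrative assumptions, not the Hopfield formulation itself:

```python
def energy(schedule, jobs, penalty=1000.0):
    """Energy = makespan + penalty * number of constraint violations.

    schedule: {(job, op_index): start_time}
    jobs:     {job: [(machine, duration), ...]} in technological order
    """
    makespan, violations = 0.0, 0
    busy = {}  # machine -> list of (start, end) intervals already placed
    for job, ops in jobs.items():
        prev_end = 0.0
        for idx, (machine, dur) in enumerate(ops):
            start = schedule[(job, idx)]
            end = start + dur
            if start < prev_end:           # precedence constraint violated
                violations += 1
            for s, e in busy.get(machine, []):
                if start < e and s < end:  # resource constraint (overlap) violated
                    violations += 1
            busy.setdefault(machine, []).append((start, end))
            prev_end = end
            makespan = max(makespan, end)
    return makespan + penalty * violations
```

A feasible schedule returns its makespan, while any precedence or machine-overlap violation adds a large penalty, so minimising this energy drives the search towards feasible, short schedules.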



Figure 9 shows the pseudo-code and the flow chart for the neural networks.

**Figure 9.** The NNs model; 9a. the implemented NNs pseudo code; 9b. the flow chart of generic NNs.

### **5. Discussion and conclusions**

In this chapter, the most intricate problem (i.e., the Job Shop) was faced in order to explain approaches for scheduling in manufacturing. The JSP is one of the most formidable issues in the domain of optimization and operational research. Many methods have been proposed, but only the application of approximate methods (metaheuristics) has allowed large scheduling instances to be solved efficiently. Most of the best-performing metaheuristics for the JSSP were described and illustrated.

The likelihood of solving the JSP can be greatly improved by finding an appropriate problem representation in the computer domain. The acyclic graph representation is a quite good way to model alternatives in scheduling. How to fit approaches to the problem domain (the industrial manufacturing system) is generally a case-by-case issue. Approaches are obviously affected by data, and the results are subject to the tuning of the algorithm's parameters. A common rule is: fewer parameters generate more stable performances but local-optimum solutions. Moreover, the problem has to be concisely encoded such that the job sequence will respect zoning and sequence constraints. All the proposed approaches use probabilistic transition rules and fitness information as a function of the payoff (i.e., the objective function).
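In the acyclic-graph representation mentioned above, once every disjunctive (machine) arc has been oriented, the makespan can be read off as the longest source-to-sink path. A sketch of that evaluation, with an illustrative encoding (operation names and the dictionary layout are assumptions of this sketch):

```python
from collections import defaultdict, deque

def makespan(nodes, arcs):
    """Longest-path length in an oriented (acyclic) disjunctive graph.

    nodes: {operation: duration}
    arcs:  list of (u, v) pairs, meaning v may start only after u completes
           (both job-precedence arcs and oriented machine arcs).
    """
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    finish = {op: nodes[op] for op in nodes}  # earliest finish times
    queue = deque(op for op in nodes if indeg[op] == 0)
    while queue:  # process operations in topological order
        u = queue.popleft()
        for v in succ[u]:
            finish[v] = max(finish[v], finish[u] + nodes[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(finish.values())
```

For two jobs on two machines (job 1: a1 on M1 then a2 on M2; job 2: b1 on M2 then b2 on M1), orienting the machine arcs as a1→b2 and b1→a2 yields a schedule whose makespan is the longest path through the graph.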


ACO and BE manifest comparable performances on the JSSP. They do not need a coding system, a factor that makes these approaches more reactive to the particular problem instance at issue. Nonetheless, too many parameters have to be controlled in order to assure diversification of the search. GAs surpass their cousins in robustness. The matching between genotype and phenotype across the schemata must be investigated in GAs in order to obtain promising results: the difficulty of a GA is to translate a correct phenotype from a starting genotype, and a right balance between crossover and mutation effects can control the performance of this algorithm. The EM approach is generally affected by local stability that prevents global exploration and global performance; it is, moreover, subject to infeasibility in solutions because of its way of approaching the problem. SA and TS, as quite simpler approaches, dominate the panorama of metaheuristic proposals for JS scheduling. They manifest simplicity of implementation and reduced computational effort but suffer from falls into local optima. These approaches are generally used to improve the performances of previous methodologies, and they enhance their initial score. The influence of initial solutions on the results is marked for all approaches. The performances of NNs are generally affected by the learning process and by overfitting: too much data slows down the learning process without improving the optimal solution. Neural networks are, moreover, affected by difficulties in including job constraints in the network representation; the activating signal needs to be subordinated to the constraints analysis.

Based on the authors' experience and the preceding paragraphs, it is difficult to definitively choose any one of these techniques as outstanding in comparison with the others. Measurement of output and cost justification (computational time and complexity) are vital to making a good decision about which approach has to be implemented; they are vital for good scheduling in operations management. In many cases there are not enough data to compare those methods thoroughly (benchmark instances, as available in the scheduling literature, could be useful). In most cases it is evident that the efficiency of a given technique is problem dependent. It is possible that the parameters may be set in such a way that the results of the algorithms are excellent for those benchmark problems but would be inferior for others. Thus, the comparison of methods creates many problems and usually leads to the conclusion that there is no single best technique. There is, however, a group of several methods that dominates, both in terms of quality of solutions and of computational time; but this definition is case dependent.

What is important to notice here is that performance is usually not improved by scheduling algorithms alone; it is improved by supporting the human scheduler and creating a direct (visual) link between scheduling actions and performances. It is reasonable to expect that humans will intervene in any schedule: humans are smarter and more adaptable than computers, and even if users do not intervene, other external changes will happen that impact the schedule. Contingent maintenance plans and product quality may affect scheduling performance. An algorithmic approach is obviously helpful, but it has to be used as a computerised support to the scheduling decision (the evaluation of a large number of paths), where computational tractability is high. So it makes sense to see what the optimal configuration is before committing to the final answer.

### **Author details**

Marcello Fera1, Fabio Fruggiero2, Alfredo Lambiase1, Giada Martino1 and Maria Elena Nenni3

1 University of Salerno – Dpt. of Industrial Engineering, Fisciano (Salerno), Italy

2 University of Basilicata – School of Engineering, Potenza, Italy

3 University of Naples Federico II – Dpt. of Economic Management, Napoli, Italy

### **References**

[1] Stutzle, T. G. Local Search Algorithms for Combinatorial Problems: Analysis, Algorithms and New Applications; (1998).

[2] Reeves, C. R. Heuristic search methods: A review. In D. Johnson and F. O'Brien (eds.), Operational Research: Keynote Papers, Operational Research Society, Birmingham, UK, (1996), 122-149.

[3] Trelea, I. C. The Particle Swarm Optimization Algorithm: convergence analysis and parameter selection. Information Processing Letters, (2003), 85(6), 317-325.

[4] Garey, M. R., & Johnson, D. S. Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, (1979).

[5] Wight, O. W. Production Inventory Management in the Computer Age. Van Nostrand Reinhold Company, Inc., New York, (1974).

[6] Baker, K. R. Introduction to Sequencing and Scheduling, John Wiley, New York, (1974).

[7] Cox, J. F., Blackstone, J. H., Jr., & Spencer, M. S., editors. APICS Dictionary, American Production & Inventory Control Society, Falls Church, Virginia, (1992).

[8] Pinedo, M. Scheduling: Theory, Algorithms, and Systems, Prentice Hall, Englewood Cliffs, New Jersey, (1995).

[9] Hopp, W., & Spearman, M. L. Factory Physics: Foundations of Manufacturing Management. Irwin/McGraw-Hill, Boston; (1996).

[10] Muth, J. F., & Thompson, G. L. Industrial Scheduling. Prentice-Hall, Englewood Cliffs, N.J., (1963).

[11] Garey, M. R., Johnson, D. S., & Sethi, R. The Complexity of Flow Shop and Job Shop Scheduling. Mathematics of Operations Research, (1976), 1(2), 117-129.

[12] Blazewicz, J., Domschke, W., & Pesch, E. The job shop scheduling problem: conventional and new solution techniques. European Journal of Operational Research, 1996; 93: 1-33.

[13] Aarts, E. H. L., Van Laarhoven, P. J. M., Lenstra, J. K., & Ulder, N. L. J. A computational study of local search algorithms for job shop scheduling. ORSA Journal on Computing, 1994; 6(2): 118-125.

[14] Brucker, P. Scheduling Algorithms. Springer-Verlag, Berlin, (1995).

[15] Panwalkar, S. S., & Iskander, W. A survey of scheduling rules. Operations Research, (1977), 25(1), 45-61.

[16] Carlier, J., & Pinson, E. An algorithm for solving the job-shop problem. Management Science, (1989), 35(2), 164-176.

[17] Bertrand, J. W. M. The use of workload information to control job lateness in controlled and uncontrolled release production systems. Journal of Operations Management, (1983), 3(2), 79-92.

[18] Gantt, H. L. Work, Wages, and Profits, second edition, Engineering Magazine Co., New York, (1916). Reprinted by Hive Publishing Company, Easton, Maryland, 1973.

[19] Cox, J. F., Blackstone, J. H., Jr., & Spencer, M. S., editors. APICS Dictionary, American Production & Inventory Control Society, Falls Church, Virginia, (1992).

[20] Roy, B., & Sussman, B. Les problèmes d'ordonnancement avec contraintes disjonctives, (1964).

[21] Fruggiero, F., Lovaglio, C., Miranda, S., & Riemma, S. From Ants Colony to Artificial Ants: A Nature Inspired Algorithm to Solve Job Shop Scheduling Problems. In Proc. ICRP-18; (2005).

[22] Adams, J., Balas, E., & Zawack, D. The shifting bottleneck procedure for job shop scheduling. Management Science, (1988), 34.

[23] Reeves, C. R. Modern Heuristic Techniques for Combinatorial Problems. John Wiley & Sons, Inc; (1993).

[24] Giffler, B., & Thompson, G. L. Algorithms for solving production scheduling problems. Operations Research, 1960; Vol. 8: 487-503.

[25] Gere, W. S., Jr. Heuristics in Job Shop Scheduling. Management Science, (1966), 13(1), 167-175.

[26] Rajendran, C., & Holthaus, O. A comparative study of dispatching rules in dynamic flowshops and job shops. European Journal of Operational Research, (1991), 116(1), 156-170.

[27] Hertz, A., & Widmer, M. Guidelines for the use of meta-heuristics in combinatorial optimization. European Journal of Operational Research, (2003), 151.

[28] Zanakis, H. S., Evans, J. R., & Vazacopoulos, A. A. Heuristic methods and applications: a categorized survey. European Journal of Operational Research, (1989), 43.

[29] Gondran, M., & Minoux, M. Graphes et algorithmes, Eyrolles Publishers, Paris, (1985).

[30] Hubscher, R., & Glover, F. Applying tabu search with influential diversification to multiprocessor scheduling. Computers & Operations Research, 1994; 21(8): 877-884.

[31] Blum, C., & Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys, (2003), 35, 268-308.

[32] Glover, F., & Kochenberger, G. A. Handbook of Metaheuristics, Springer, (2003).

[33] Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. Optimization by Simulated Annealing. Science, (1983), 220(4598), 671-680.

[34] Birbil, S. I., & Fang, S. An Electromagnetism-like Mechanism for Global Optimization. Journal of Global Optimization, (2003), 25(3), 263-282.

[35] Mitchell, M. An Introduction to Genetic Algorithms. MIT Press, (1999).

[36] Glover, F., & Laguna, M. Tabu Search. Norwell, MA: Kluwer Academic Publishers, (1997).

[37] Dorigo, M., Di Caro, G., & Gambardella, L. M. Ant algorithms for discrete optimization. Artificial Life, (1999), 5(2), 137-172.

[38] Pham, D. T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., & Zaidi, M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, (2005).

[39] Zhou, D. N., Cherkassky, V., Baldwin, T. R., & Olson, D. E. A neural network approach to job-shop scheduling. IEEE Transactions on Neural Networks, (1991).

[40] Holland, J. H. Adaptation in Natural and Artificial Systems, University of Michigan, (1975).

[41] Darwin, C. On the Origin of Species, (1859).

[42] Beck, J. C., Prosser, P., & Selensky, E. Vehicle Routing and Job Shop Scheduling: What's the difference? Proc. of the 13th Int. Conf. on Automated Planning and Scheduling (ICAPS03); (2003).

[43] Bierwirth, C., Mattfeld, D. C., & Kopfer, H. On Permutation Representations for Scheduling Problems. PPSN, (1996), 310-318.

[44] Moon, I., & Lee, J. Genetic Algorithm Application to the Job Shop Scheduling Problem with Alternative Routing. Industrial Engineering, Pusan National University; (2000).

[45] Goss, S., Aron, S., Deneubourg, J. L., & Pasteels, J. M. Self-organized shortcuts in the Argentine ant. Naturwissenschaften, (1989), 76, 579-581.

[46] Van Der Zwaan, S., & Marques, C. Ant colony optimization for job shop scheduling. In Proc. of the 3rd Workshop on Genetic Algorithms and Artificial Life (GAAL'99), (1999).

[47] Dorigo, M., Maniezzo, V., & Colorni, A. The Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, (1996), 26, 1-13.

[48] Corne, D., Dorigo, M., & Glover, F., editors. New Ideas in Optimization. McGraw-Hill International, (1999).

[49] Colorni, A., Dorigo, M., Maniezzo, V., & Trubian, M. Ant system for Job-shop Scheduling. JORBEL - Belgian Journal of Operations Research, Statistics and Computer Science, (1994).

[50] Gould, J. L. Honey bee recruitment: the dance-language controversy. Science, (1975).

[51] Grosan, C., Ajith, A., & Ramos, V. Stigmergic Optimization: Inspiration, Technologies and Perspectives. Studies in Computational Intelligence, Springer, Berlin/Heidelberg, (2006), 31.

[52] Camazine, S., Deneubourg, J., Franks, N. R., Sneyd, J., Theraula, G., & Bonabeau, E. Self-Organization in Biological Systems. Princeton: Princeton University Press, (2003).

[53] Von Frisch, K. Bees: Their Vision, Chemical Senses and Language. (Revised edn.) Cornell University Press, Ithaca, N.Y., (1976).

[54] Riley, J. R., Greggers, U., Smith, A. D., Reynolds, D. R., & Menzel, R. The flight paths of honeybees recruited by the waggle dance. Nature, (2005), 435, 205-207.

[55] Eberhart, R., & Shi, Y. Swarm Intelligence. Morgan Kaufmann, San Francisco, 2001.

[56] Seeley, T. D. The Wisdom of the Hive: The Social Physiology of Honey Bee Colonies. Harvard University Press, Cambridge, Massachusetts, (1996).

[57] Tuba, M. Artificial Bee Colony (ABC) with crossover and mutation. Advances in Computer Science, (2012), 157-163.

[58] Chong, C. S., Low, M. Y. H., Sivakumar, A. I., & Gay, K. L. Using a Bee Colony Algorithm for Neighborhood Search in Job Shop Scheduling Problems. In 21st European Conference on Modelling and Simulation ECMS (2007).

[59] Karaboga, D., & Basturk, B. On the performance of Artificial Bee Colony (ABC) algorithm. Applied Soft Computing, (2008).

[60] Pham, D. T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., & Zaidi, M. The Bees Algorithm: A Novel Tool for Complex Optimisation Problems. Proceedings of IPROMS (2006) Conference, 454-461.

[75] Sim, S. K., Yeo, K. T., & Lee, W. H. An expert neural network system for dynamic job shop scheduling. International Journal of Production Research, (1994), 32(8), 1759-1773.

[76] Kim, S. Y., Lee, Y. H., & Agnihotri, D. A hybrid approach for sequencing jobs using heuristic rules and neural networks. Production Planning and Control, (1995), 6(5), 445-454.

[77] Hopfield, J. J., & Tank, D. W. Neural computation of decisions in optimization problems. Biological Cybernetics, (1985), 52, 141-152.

[78] Foo, S. Y., & Takefuji, Y. Stochastic neural networks for solving job-shop scheduling: Part 1. Problem representation. In: Kosko, B., IEEE International Conference on Neural Networks, San Diego, CA, USA, (1988), 275-282.

[79] Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, New Jersey; (2001).

[80] Storer, R. H., Wu, S. D., & Vaccari, R. Problem and heuristic space search strategies for job shop scheduling. ORSA Journal on Computing, (1995), 7(4), 453-467.

[81] Van Hulle, M. M. A goal programming network for mixed integer linear programming: A case study for the job shop scheduling problem. International Journal of Neural Systems, (1991).

[82] Leshno, M., Lin, V. Y., Pinkus, A., & Schocken, S. Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, (1993), 6(6), 861-867.

[83] Karlik, B., & Olgac, A. V. Performance analysis of various activation functions in generalized MLP architectures of neural networks. Int. J. Artif. Int. Expert Syst., (2011), 1(4), 111-122.


[75] Sim, S. K, Yeo, K. T, & Lee, W. H. An expert neural network system for dynamic jobshop scheduling, International Journal of Production Research, (1994). , 32(8), 1759-1773.

[60] Pham, D. T, Ghanbarzadeh, A, Koc, E, Otri, S, Rahim, S, & Zaidi, M. The Bees Algo‐ rithm- A Novel Tool for Complex Optimisation Problems, Proceedings of IPROMS

[61] Birbil, S. I. Fang S An Electromagnetism-like Mechanism for Global Optimization.

[62] Coulomb Premier mémoire sur l'électricité et le magnétismeHistoire de l'Académie

[63] Durney Carl H. and Johnson, Curtis C. Introduction to modern electromagnetics.

[65] Kirkpatrick, S, Gelatt, C. D, & Vecchi, M. P. Optimization by Simulated Annealing.

[66] Cerný, V. Thermodynamical approach to the traveling salesman problem: An effi‐ cient simulation algorithm. J. of Optimization Theory and Applications. (1985). , 45,

[67] Metropolis Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Au‐ gusta H.; Teller, Edward. Equation of State Calculations by Fast Computing Ma‐

[68] Van Laarhoven P.J.M, Aarts E.H.L., and J. K. Lenstra. Job shop scheduling by simula‐

[69] Glover, F. Future paths for Integer Programming and Links to Artificial Intelligence.

[70] Dell'Amico M and Trubian M. Applying tabu search to the job-shop scheduling

[71] Taillard, E.D. Parallel taboo search techniques for the job-shop scheduling problem.

[72] Marquez, L, Hill, T, Connor, O, & Remus, M. W., Neural network models for forecast a review. In: IEEE Proc of 25th Hawaii International Conference on System Sciences.

[73] Dagli, C. H, Lammers, S, & Vellanki, M. Intelligent scheduling in manufacturing us‐ ing neural networks, Journal of Neural Network Computing Technology Design and

[74] Cedimoglu, I. H. Neural networks in shop floor scheduling, Ph.D. Thesis, School of

Industrial and Manufacturing Science, Cranfield University, UK. (1993).

tedannealing. Operations Research, Vol. 40, No. 1, pp. 113-125, 1992.

Computers and Operations Research (1986). , 5(5), 533-549.

problem. Annals of Operations Research, (1993). , 41, 231-252.

[64] Griffiths David J.Introduction to Electrodynamics(3rd ed.). Prentice Hall; (1998).

Journal of Global Optimization. (2003). , 25(3), 263-282.

(2006). Conference, , 454-461.

Royale des Sciences, , 569-577.

Science. (1983). , 220(4598), 671-680.

chines. The Journal of Chemical Physics. (1953).

ORSAJournal on Computing, 1994; 6(2): 108-117.

McGraw-Hill. (1969).

(1992). , 4, 494-498.

Applications, (1991). , 2(4), 4-10.

41-51.

138 Operations Management


**Chapter 6**

**On Just-In-Time Production Leveling**

Francesco Giordano and Massimiliano M. Schiraldi


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/54994

### **1. Introduction**

Since the 1980s, Japanese production techniques and philosophies have spread among Western manufacturing companies. This was possible because the Toyota Motor Company experience was indeed a success: the so-called "Toyota Production System" (TPS) seemed to be the "one best way" to manage a manufacturing production site.

On the other hand, it is also well known that not every implementation of Lean Production was a success, especially in Western companies: some enterprises – together with the consultancy firms that should have supported them – overlooked the main hypotheses and conditions that must be complied with in order to achieve Toyota-like results. On top of this, certain requisites are not related to a mere managerial approach but depend on exogenous conditions, e.g. market behavior or supplier location; thus, not every company can successfully implement a TPS system.

One critical requirement for a TPS approach to be effective is that the production plan should be leveled both in quantity and in mix. This is indicated by the Japanese term *heijunka* (平準化), which stands for "leveling" or "smoothing". Here, we will focus our attention on why leveled production is a key factor for JIT implementation, and specifically we will describe and analyze some approaches to deal with the leveling problem.

At first, the original Toyota Production System is briefly recalled, with specific regard to the *Just In Time* (JIT) approach to manage inventories in production. JIT is a stock replenishment policy that aims to reduce final product stocks and work-in-process (WIP); it coordinates requirements and replenishments in order to minimize stock-buffer needs, and it has reversed the old make-to-stock production approach, leading most companies to adopt "pull" instead of "push" policies to manage material and finished product flows. However, in case of unleveled demand, stock levels in JIT may grow uncontrolled.

© 2013 Giordano and Schiraldi; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Secondly, *kanban*-based production is described: *kanban*, a Japanese word meaning "visual record", is a card that contains information on a product in a given stage of the manufacturing process, and details on its path of completion. It is acknowledged as one of the most famous techniques for material management in the JIT approach. Here we will present some common algorithms for managing *kanban* queues, along with their criticalities in terms of production smoothing requirements and reduced demand stochasticity. Some of the JIT-derivative approaches will be recalled as well: CONWIP, Quick Response Manufacturing, Theory of Constraints and the Just-In-Sequence approach.

Then, a review of the mixed-model JIT scheduling problem (MMJIT), along with the related solving approaches, is presented. Despite the huge literature on MMJIT mathematical programming approaches, here it will be described why real-world production systems still prefer the simpler kanban approach and the old (1983) Goal Chasing Method algorithm. In the end, an overview of the advantages of simulators for testing alternative heuristics to manage JIT production is presented.

### **2. Managing Just-In-Time production systems**

Just-in-Time was first proposed within the *Toyota Production System* (TPS) by Taiichi Ohno after the 50's, when he conceived a more convenient way to manage inventory and control production systems [1]. *Lean Production* – the un-branded name of TPS – is a mix of a philosophy for production systems management and a collection of tools to improve enterprise performance [2]. Its cornerstones are the reduction of *muda* (wastes), *mura* (unevenness) and *muri* (overburden). Ohno identified seven wastes [3] that should be reduced to maximize the return on investment of a production site:

**•** transportation;

**•** over-processing;

**•** over-producing;

**•** inventory;

**•** motion;

**•** waiting;

**•** defects.

The TPS catchphrase emphasizes the "zero" concept: zero machine changeovers ("set-ups"), zero defects in the finished products, zero inventories, zero production stops, zero bureaucracy, zero misalignments. This result may be reached through a continuous improvement activity, which takes its cue from Deming's *Plan-Do-Check-Act* cycle [1]: the *kaizen* approach.

Just-In-Time is the TPS solution to reduce inventory and waiting times. Its name, according to [4], was coined by Toyota managers to indicate a method aimed to ensure "the right products, in the right quantities, *just in time*, where they are needed". Differently from Orlicky's Material Requirement Planning (MRP) – which schedules the production run in advance compared to the moment in which a product is required [5] – the JIT approach will replenish a stock only after its depletion. Among its pillars are:

**•** one-piece flow;

**•** *takt* time;

**•** mixed-model production;

**•** demand-pull production.

Indeed, generally speaking, processing a 10-product batch requires one tenth of the time needed for a 100-product batch. Thus, reducing the batch size (down to "one piece") would generate benefits by reducing either time-to-market or inventory level. This rule must come along with mixed-model production, which is the ability to manufacture different products alternating very small batches on shared resources. Demand-pull production indicates that the system is activated only after an order receipt; thus, no semi-finished product is processed if no downstream workstation asks for it. On top of this, in order to smooth out the material flow, the process operations should be organized to let each workstation complete different jobs in similar cycle times. The base reference is, thus, the *takt* time, a term derived from the German word *taktzeit* (cycle time), which is computed as the ratio between the net operating time available for production and the demand in terms of units required. These are the main differences between the *look-ahead* MRP and the *look-back* JIT system. For example, the MRP algorithm includes a lot-sizing phase, which results in product batching; this tends to generate higher stock levels compared to the JIT approach. Several studies have been carried out on MRP lot-sizing [6] and on trying to improve the algorithm performance [7, 8, 9]; however, it seems that JIT can outperform MRP given the *heijunka* condition, i.e. in case of leveled production both in quantity and in mix. The traditional JIT technique to manage production flow is named *kanban*.
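As a toy illustration of the ratio just described (the shift length and demand figures below are ours, not from the chapter), *takt* time can be computed as:

```python
def takt_time(net_operating_time_min: float, units_required: int) -> float:
    """Takt time = net operating time available for production / units demanded."""
    return net_operating_time_min / units_required

# Hypothetical shift: 450 minutes of net operating time, 300 units demanded.
print(takt_time(450, 300))  # 1.5 minutes per unit
```

If every workstation completes its job within this takt time, the line produces exactly at the pace of demand, with no batching required.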

### **3. The kanban technique**

A *kanban* system is a multistage production scheduling and inventory control system [10]. Kanban cards are used to control production flow and inventories, keeping a reduced production lead time and work-in-process. Clearly, a kanban is not necessarily a physical paper/plastic card, as it can be either electronic or represented by the container itself.

Since it was conceived as an easy and cheap way to control inventory levels, many different implementations of kanban systems have been experimented with in manufacturing companies all over the world. In the following paragraphs, the most commonly used "one-card" and "two-card" kanban systems are described.

### **3.1. One-card kanban system**

The "one-card" system is the simplest implementation of kanban systems. This approach is used when the upstream and downstream workstations (respectively, the preceding and succeeding processes) are physically close to each other, so they can share the same stock buffer. The card is called "Production Order Kanban" (POK) [11, 12]. The stock buffer acts both as the outbound buffer for the first (A) workstation and as the inbound buffer for the second (B) workstation. A schematic diagram of a one-card system is shown in Figure 1.

**Figure 1.** A one-card kanban system

Here, each container (the JIT unit load) has a POK attached, indicating the quantity of a certain material contained, along with any complementary information. The POK also represents a production order for workstation A, indicating to replenish the container with the same quantity. When a B operator withdraws a container from the buffer, he removes the POK from the container and posts it on a board. Hence, the A operator knows that one container with a specific part-number must be replenished in the stock buffer.

### **3.2. Two-card kanban system**

In the two-card system, each workstation has separate inbound and outbound buffers [13, 14]. Two different types of cards are used: Production Order Kanbans (POK) and Withdrawal Kanbans (WK). A WK contains information on how much material (raw materials / semi-finished materials) the succeeding process should withdraw. A schematic diagram of a two-card system is shown in Figure 2.

**Figure 2.** A two-card kanban system

Each work-in-progress (WIP) container in the inbound buffer has a WK attached, just as each WIP container in the outbound buffer has a POK. WK and POK are paired, i.e. each given part number is always reported both on *n* POKs and on *n* WKs. When a container is withdrawn from the inbound buffer, the B operator posts the WK on the WK board. Then, a warehouse-keeper uses the WK board as a picking list to replenish the inbound buffer: he takes the WK off the board and looks for the paired POK in the outbound buffer. Then, he moves the corresponding quantity of the indicated material from the A outbound to the B inbound buffer, exchanging the related POK with the WK on the container and thus restoring the initial situation. Finally, he posts the freed POK on the POK board. Hence, like in the previous scenario, the A workstation operator knows that one container of that kind must be replenished in the outbound stock buffer. The effectiveness of this simple technique – which was described in detail by several authors [3, 14, 15, 16] – is significantly influenced by the policy followed to determine the kanban processing order on the boards.
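The withdrawal/replenishment loop just described can be sketched as follows; the buffer contents, part number and function names are illustrative assumptions, not from the chapter:

```python
from collections import deque

# Illustrative model of the two-card loop: boards are FIFO queues of cards,
# buffers map part numbers to container counts.
wk_board, pok_board = deque(), deque()
a_outbound = {"P1": 2}   # containers in A's outbound buffer (each carries a POK)
b_inbound = {"P1": 2}    # containers in B's inbound buffer (each carries a WK)

def b_withdraws(part):
    """B consumes a container from its inbound buffer and posts the WK."""
    b_inbound[part] -= 1
    wk_board.append(part)

def warehouse_keeper_cycle():
    """Move one container from A's outbound to B's inbound, swapping cards."""
    part = wk_board.popleft()   # take the WK off the board (picking list)
    a_outbound[part] -= 1       # withdraw the paired container from A's outbound
    b_inbound[part] += 1        # the container, now carrying the WK, refills B's inbound
    pok_board.append(part)      # the freed POK becomes a production order for A

def a_produces():
    """A processes the topmost POK, replenishing its outbound buffer."""
    part = pok_board.popleft()
    a_outbound[part] += 1

b_withdraws("P1")
warehouse_keeper_cycle()
a_produces()
print(a_outbound["P1"], b_inbound["P1"])  # back to the initial situation: 2 2
```

Note how each card type closes its own loop: the WK circulates between B's inbound buffer and the WK board, while the POK circulates between A's outbound buffer and the POK board.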

### **3.3. Standard approaches to manage the kanban board**

From the previously described procedure, it is clear that each workstation bases its production sequence on the kanban cards posted on its POK board. In the literature, few traditional ways to manage the board are reported: each of them is quite easy to implement and does not require significant investments in technology or other expensive assets.

The most commonly used policy [3] requires having a board for each station, managed as a single First-In-First-Out (FIFO) queue. The board is usually structured as one vector (one column, multiple rows): POKs are posted on the board in the last row. Rows are grouped in three zones (red/yellow/green) which indicate three levels of urgency (respectively, high/medium/low). Kanbans are progressively moved from the green to the red zone, and the workstation operator always processes the topmost kanban. If a kanban reaches the red rows, it means that the corresponding material is likely to be requested soon by the succeeding process; thus, it should be urgently replenished in the outbound buffer in order to avoid stock-outs.
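A minimal sketch of this board policy, where the zone thresholds and part numbers are hypothetical placeholders (the chapter does not prescribe specific values):

```python
from collections import deque

# The board is a single FIFO queue; a card's zone depends on its position
# relative to the front of the queue. Row thresholds below are illustrative.
RED_ROWS, YELLOW_ROWS = 2, 4

def zone(board: deque, card: str) -> str:
    """Urgency zone of a card: the closer to the front, the more urgent."""
    pos = list(board).index(card)
    if pos < RED_ROWS:
        return "red"
    if pos < YELLOW_ROWS:
        return "yellow"
    return "green"

board = deque()
for part in ["P1", "P2", "P3", "P4", "P5"]:
    board.append(part)  # new POKs enter at the last row (green zone)

print(zone(board, "P1"), zone(board, "P3"), zone(board, "P5"))  # red yellow green
print(board.popleft())  # the operator processes the topmost kanban: P1
```

As the front of the queue is consumed, every remaining card effectively moves one row toward the red zone, which is exactly the visual progression the physical board provides.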

Although this policy does not rely on any optimized procedure, it may ensure a leveled production rate in each workstation, provided that the other TPS pillars are implemented, e.g. setup time reduction and mixed-model scheduling. Indeed, if the final downstream demand is leveled, the production plan of the workstations will be leveled as well. Clearly, this policy is vulnerable to high setup times and to differences among workstation cycle times: in this latter case, indeed, the ideal job sequence for a workstation may be far from optimal for the preceding one. It is noticeable that the colored zones on the board only provide a visual support for the operators and do not influence the job processing order.

A *heijunka box* is a sort of enhanced kanban board: it still acts as a visual scheduling tool to obtain production leveling at the workstations. However, differently from the traditional board, it manages to keep evidence of material distinctions. Usually, it is represented as a grid-shaped wall schedule. Analogously to the simpler board, each row represents a time interval (usually 30-60 minutes), but multiple columns are present, each one associated with a different material. POKs are placed in the so-called "pigeon-holes" within the box, based on the number of items to be processed in the job and on the material type. Workstation operators will process all the kanbans placed in the current period row, removing them from the box. Hence, the heijunka box not only provides a representation of each job queued for production, but of its scheduled time as well, and allows operators to pursue production leveling when inserting new POKs in the boxes.

### **3.4. Criticism on JIT**

During the last decades, Just-In-Time has been criticized by different authors [17]. Indeed, certain specific conditions – which, though, are not uncommon in manufacturing companies – can put in evidence some well-known weak points of the Japanese approach. Specifically, unsteady demand in multi-product environments where differences in processing lead times are not negligible represents a scenario where JIT would miserably fail, despite the commitment of the operations managers.

First, we have to keep in mind that one pillar of Lean Production is the "one-piece-flow" diktat. A one-piece batch would comply with the Economic Production Quantity theory [18] only when the order cost (i.e. setup time) is zero. Having non-negligible setup times hampers JIT implementation and makes the production leveling problem even more complicated. It is peculiar that, originally, operations researchers concentrated on finding the best job sequence considering negligible setup times; this bound was introduced into the mixed-model kanban scheduling problem only since 2000. Setups are inevitable in the Lean Production philosophy, but are considered already optimized as well. Given that setup times are *muda*, the TPS approach focuses on quickening the setup itself, e.g. through technical interventions on workstations or on the setup process with SMED techniques, not on reducing setup frequency: the increased performance gained through setup frequency reduction is not worth the flexibility loss that the system may suffer as a consequence. Indeed, the standard kanban management system, ignoring job sequencing, does not aim at reducing setup wastes at all. Analogously, the heijunka box was developed for leveling production and can only assure that the product mix in the very short term reproduces that in the long term; in its original application, the decision on the job sequence is left to the operator. Only in some enhanced versions is the sequence predefined by applying some scheduling algorithm.

Given the fact that JIT is based on stock replenishment, constant production and withdrawal rates should be ensured in order to avoid either stock-outs or stock proliferation. Mixed-model production requires a leveled Master Production Schedule (MPS) [19], but this is not sufficient to smooth the production rate in a short time period. While it is easy to obtain a leveled production in a medium or even medium-short period, it is difficult to do it in each hour, for each workstation and each material.

Indeed, demand is typically unstable from two points of view: random frequency, which is the chance that production orders are irregularly received, and random quantities, which is related to product mix changes. Since TPS assumes minimal stock levels, the only chance to cope with a demand peak is to resort to extra production capacity. However, available production capacity should be higher than required on average (as TPS requires), but for sure cannot be limitless. Thus, the JIT management system should anyway be able to consider the opportunity of varying the maintenance plan as well as the setup scheduling, in case of need. On the other hand, if the production site faces a leveled production, changes in product mix should not represent a problem; however, they increase the complexity of the sequencing problem. Most of the operational research solutions for JIT scheduling are designed for a fixed product mix, thus mix changes can greatly affect the optimality of the solutions, up to making them useless.

On the contrary, the kanban board mechanism is not influenced by demand randomness: as long as demand variations are contained within a certain (small) interval, kanban-managed workstations will handle their production almost without any problem. Therefore, in case of unstable demand, in order to prevent stock-outs, inventory managers can only increase the kanban number for each product: the greater the variations, the greater the need for kanban cards and, thus, the higher the stock level. In order to prevent the stock level from rising, some authors [20, 21] proposed adopting a frozen schedule to implement JIT production in real companies, where demand may clearly be unstable. Anyway, this solution goes in the opposite direction compared to JIT foundations.

Moreover, one-piece-flow conflicts with demand variability: the batch size should be chosen as its processing time exceeds the inter-arrival time of material requests. Thus, the leveling algorithm must find the proper sequencing policy that, at the same time, reduces the batch size and minimizes the inter-arrival time of each material request. This sequence clearly depends on the total demand of each material in the planning horizon. However, JIT does not use forecasting, except during system design; thus, scheduling may be refreshed daily. From a computational point of view, this is a non-linear integer optimization problem (known as the *mixed-model just-in-time scheduling problem*, MMJIT), which has non-polynomial complexity and currently cannot be solved in an acceptable time. Thus, reliable suppliers and a clockwork supply chain are absolutely required to implement JIT. Toyota faced this issue using various approaches [22]:

**•** moving suppliers in the areas around the production sites, in order to minimize the supply lead time;

Although this policy does not rely on any optimized procedure, it may ensure a leveled production rate in each workstation, given the fact that other TPS pillars are implemented, e.g. setup time reduction and mixed model scheduling. Indeed, if the final downstream demand is leveled, the production plan of the workstations will be leveled as well. Clearly, this policy is vulnerable to high setup times and differences among workstations cycle times: in this latter case, indeed, the ideal jobs sequence for a workstation may be far from optimal for the preceding. It is noticeable that the colored zones on the board only provide a visual support

A *heijunka box* is a sort of enhanced kanban board: it still acts as a visual scheduling tool to obtain production leveling at the workstations. However, differently from the traditional board, it manages to keep evidence of materials distinctions. Usually, it is represented as a grid-shaped wall schedule. Analogously to the simpler board, each row represents a time interval (usually, 30-60 minutes), but multiple columns are present, each one associated to a different material. POKs are placed in the so-called "pigeon-holes" within the box, based on number of items to be processed in the job and on the material type. Workstation operators will process all the kanban placed in the current period row, removing them from the box. Hence, heijunka box not only provides a representation for each job queued for production, but for its scheduled time as well, and allows operators to pursue production leveling when

During the last decades, Just-In-Time has been criticized from different authors [17]. Indeed, certain specific conditions – which, though, are not uncommon in manufacturing companies – can put in evidence some well-known weak points of the Japanese approach. Specifically, un-steady demand in multi-product environments where differences in processing lead times are not negligible represent a scenario where JIT would miserably fail, despite the commitment

First, we have to keep in mind that one pillar of Lean Production is the "one-piece-flow" diktat. A one-piece batch would comply with the Economic Production Quantity theory [18] only when order cost (i.e. setup time) is zero. Having non-negligible setup times hampers JIT implementation and makes the production leveling problem even more complicated. It is peculiar that, originally, operations researchers concentrated on finding the best jobs sequence considering negligible setups time. This bound was introduced into the mixed model kanban scheduling problem only since 2000. Setups are inevitable in the Lean Production philosophy, but are considered already optimized as well. Given that setup times are *muda*, TPS approach focuses on quickening the setup time, e.g. through technical interventions on workstations or on the setup process with SMED techniques, not on reducing their frequency: the increased performance gained through setups frequency reduction is not worth the flexibility loss that the system may suffer as a consequence. Indeed, the standard kanban management system, ignoring the job sequencing, does not aim at reducing setup wastes at all. Analogously, the Heijunka box was developed for leveling production and can only assure that the product mix in the very short term reproduces that in the long term; in its original application, the decision

for the operators and do not influence the jobs processing order.

inserting new POKs in the boxes.

**3.4. Criticism on JIT**

146 Operations Management

of the operations managers.

Given the fact that JIT is based on stock replenishment, constant production and withdrawal rates should be ensured in order to avoid either stock outs or stock proliferation. Mixed-model production requires a leveled Master Production Schedule (MPS) [19], but this is not sufficient to smooth the production rate in a short time period. While it is easy to obtain a leveled production in a medium or even medium-short period, it is difficult to do it in each hour, for each workstation and each material.

Indeed, demand is typically unstable under two points of view: random frequency, which is the chance that production orders are irregularly received, and random quantities, which is related to product mix changes. Indeed, since TPS assume minimal stock levels, the only chance to cope with demand peak is to recur to extra production capacity. However, available production capacity should be higher than required as the average (as TPS requires), but for sure cannot be limitless. Thus, the JIT management system should anyway be able to consider the opportunity of varying the maintenance plan as well as the setup scheduling, in case of need. On the other hand, if the production site faces a leveled production, changes in product mix should not represent a problem; however, they increase sequencing problem complexity. Most of the operational research solutions for JIT scheduling are designed for a fixed product mix, thus its changes can greatly affect the optimality of solutions, up to make them useless.

On the contrary, kanban board mechanism is not influenced by demand randomness: as long as demand variations are contained into a certain (small) interval, kanban-managed worksta‐ tions will handle their production almost without any problem. Therefore, in case of unstable demand, in order to prevent stock-outs, inventory managers can only increase the kanban number for each product: the greater are the variations, the greater is the need of kanban cards and, thus, the higher is the stock level. In order to prevent stock level raise, some authors [20, 21] proposed to adopt a frozen schedule to implement JIT production in real companies, where demand may clearly be unstable. Anyway, this solution goes in the opposite direction compared to JIT foundations.

Moreover, one-piece-flow conflicts with demand variability: the batch size should be chosen as its processing time exceeds the inter-arrival time of materials requests. Thus, the leveling algorithm must find the proper sequencing policy that, at the same time, reduces the batch size and minimize the inter-arrival time of each material request. This sequence clearly depends on the total demand of each material in the planning horizon. However, JIT does not use forecasting, except during system design; thus, scheduling may be refreshed daily. From a computational point of view, this is a non-linear integer optimization problem (defined *mixed-model just-in-time scheduling problem*, MMJIT), which has non-polynomial complexity and it currently cannot be solved in an acceptable time. Thus, reliable suppliers and a clock‐ work supply chain are absolutely required to implement JIT. Toyota faced this issue using various approaches [22]:

**•** moving suppliers in the areas around the production sites, in order to minimize the supply lead time;
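As a rough sketch of the stock-versus-variability trade-off discussed above, the classic kanban sizing rule (number of circulating cards ≈ demand rate × lead time × (1 + safety factor) ÷ container size) can be coded as follows; the function name and the numbers in the example are illustrative, not taken from the chapter:

```python
import math

def kanban_count(demand_rate: float, lead_time: float,
                 safety_factor: float, container_size: int) -> int:
    """Classic kanban sizing rule (one card per container).

    demand_rate     units per hour
    lead_time       replenishment lead time, in hours
    safety_factor   fractional buffer against demand variability
    container_size  units held by one container
    """
    return math.ceil(demand_rate * lead_time * (1 + safety_factor)
                     / container_size)

# A larger safety factor (i.e. more unstable demand) inflates the card
# count and, with it, the stock level - the weakness discussed above.
print(kanban_count(100, 2.0, 0.10, 25))  # -> 9
print(kanban_count(100, 2.0, 0.50, 25))  # -> 12
```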



On Just-In-Time Production Leveling http://dx.doi.org/10.5772/54994 149







In the end, it should be noted that, considering that at each stage of the production process at least one unit of each material must be in stock, in case of a great product variety the total stock amount in JIT could be huge. This problem was also known to Toyota [1], who addressed it by limiting the product customization opportunities and bundling optional combinations.

### **4. Alternative approaches**

### **4.1. CONWIP**

Many alternatives to JIT have been proposed since TPS appeared in Western countries. One of the most famous JIT-derivative approaches is CONWIP (CONstant Work-In-Process). This methodology, first proposed in the 90's [23], tries to mix push and pull approaches: it schedules tasks for each station – with a push approach – while production is triggered by inventory events, which is a pull rule. Thus, CONWIP is card-based, like kanban systems, but cards do not trigger the production of a single component in the closest upstream workstation; rather, cards are used to start the whole production line, from the beginning downwards. Then, from the first workstation up to the last one, the process is push-driven: materials are processed as they get to an inbound buffer, notwithstanding the stock levels. Only the last workstation has a predetermined stock level, similar to the JIT outbound buffer. All queues are managed through a FIFO policy. In order to have a leveled production rate and to avoid production spikes or idle times, the system is calibrated on the slowest workstation, the *bottleneck*. Results from simulations showed [24] that CONWIP could grant shorter lead times and a more stable production rate compared to kanban; however, it usually needs a higher WIP level. A CONWIP system is also easier to implement and adjust, since it has only one card set.
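The CONWIP card loop described above can be sketched with a toy discrete-time simulation; the function and its parameters are illustrative assumptions, not an established API:

```python
from collections import deque

def conwip_run(jobs: int, cards: int, station_times: list) -> int:
    """Toy discrete-time CONWIP line: total WIP is capped by the card count.

    A job may enter the line only when a card is free; the card is released
    when the job leaves the last station. Inside the line the flow is
    push/FIFO. Returns the makespan in time units.
    """
    n = len(station_times)
    queues = [deque() for _ in range(n)]   # jobs waiting at each station
    remaining = [0] * n                    # time left on the job in service
    free_cards, released, done, t = cards, 0, 0, 0
    while done < jobs:
        # line entry: release a new job for every free card
        while free_cards > 0 and released < jobs:
            queues[0].append(released)
            released += 1
            free_cards -= 1
        # idle stations pull the next queued job (FIFO)
        for s in range(n):
            if remaining[s] == 0 and queues[s]:
                queues[s].popleft()
                remaining[s] = station_times[s]
        t += 1
        # advance work; finished jobs are pushed downstream
        for s in range(n):
            if remaining[s] > 0:
                remaining[s] -= 1
                if remaining[s] == 0:
                    if s + 1 < n:
                        queues[s + 1].append(0)   # job token, identity ignored
                    else:
                        done += 1
                        free_cards += 1           # card returns to line entry
    return t

print(conwip_run(1, 1, [2, 3]))   # -> 5: one job flows through both stations
print(conwip_run(2, 2, [2, 3]))   # -> 8: two cards let jobs overlap
```

With a single card the line holds one job at a time; adding cards raises WIP until the bottleneck paces the line, which is the lead-time/WIP trade-off the simulations in [24] explore.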

#### **4.2. POLCA**

Another alternative technique mixing push and pull systems is POLCA (Paired-cell Overlapping Loops of Cards with Authorization), which stands at the base of the Quick Response Manufacturing (QRM) approach, proposed in 1998 [25]. QRM aims to minimize lead times rather than addressing waste reduction, as TPS does. A series of tools, such as manufacturing critical-path time, cellular organization, batch optimization and high-level MRP, are used to minimize stock levels: the shorter the lead time, the lower the on-hand inventory. Like CONWIP, POLCA handles the WIP proliferation originating from multiple products, since it does not require each station to have a base stock of each component. At first, an MRP-like algorithm (called HL/MRP) creates some "Release Authorization Times". That means that the HL/MRP system defines when each cell may start each job, as MRP defines the "Start Dates". However, differently from a standard push system – where a workstation should process the job as soon as possible – POLCA simply authorizes the possibility to start the job. Analogously to CONWIP and kanban, POLCA uses production control cards in order to control material flows. These cards are only used between, and not within, work cells. Inside each work cell, material flows resemble the CONWIP approach. On top of this, POLCA cards, instead of being specifically assigned to a product as in a kanban system, are assigned to pairs of cells. Moreover, whereas a POK card is an inventory replenishment signal, a POLCA card is a capacity signal. If a card returns from a downstream cell, it signals that there is enough capacity to process a job. Thus, the preceding cell will proceed only if the succeeding cell has available production capacity.

According to some authors [20], a POLCA system may overcome the drawbacks of both standard MRP and kanban systems, helping in managing short-term fluctuations in capacity (slowdowns, failures, setups, quality issues) and in reducing the unnecessary stocks which are always present in any unlevelled replenishment system – i.e. where the heijunka condition is not met.
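The POLCA release logic described above (an HL/MRP authorization time plus a free card for the cell pair) can be sketched as follows; the data layout and names are hypothetical:

```python
def polca_can_start(job, now, polca_cards):
    """POLCA release check (illustrative): a cell may start a job only when
    the HL/MRP release-authorization time has passed AND a card for the
    (current cell, next cell) pair is free - i.e. downstream has capacity.

    job:          dict with 'release_time' and 'routing' (ordered cell list)
    polca_cards:  dict mapping (cell, next_cell) pairs to free card counts
    """
    if now < job["release_time"]:
        return False                       # not yet authorized by HL/MRP
    routing = job["routing"]
    if len(routing) < 2:
        return True                        # last cell: no pair loop ahead
    pair = (routing[0], routing[1])
    return polca_cards.get(pair, 0) > 0    # capacity signal from downstream

cards = {("A", "B"): 1, ("B", "C"): 0}
job = {"release_time": 10, "routing": ["A", "B", "C"]}
print(polca_can_start(job, 12, cards))   # True: authorized, A/B card free
print(polca_can_start(job, 5, cards))    # False: release time not reached
```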

### **4.3. Just in sequence**





The Just in Sequence approach is an evolution of JIT which embeds the CONWIP idea of mixing push/requirement and pull/replenishment production management systems. The overall goal of JIS is to synchronize the material flow within the supply chain and to reduce both safety stocks and material handling. Once the optimal production sequence is decided, it is adopted all along the process line and up through the supply chain. Thus, the suppliers are asked to comply not only with quantity requirements but also with the product sequence and mix, for a certain period of time. In this case the demand must be stable, or a frozen period should be defined (i.e. a time interval, prior to production, in which the demand cannot be changed) [26]. Clearly, when the demand mix significantly changes, the sequence must be re-computed, similarly to what happens in MRP. This makes the JIS system less flexible compared to JIT. Research results [27] proved that, by applying some techniques to reduce unsteadiness – such as flexible order assignment or mixed bank buffers – the sequence can be preserved with a low stock level. Thanks to *ad-hoc* rescheduling points, the sequence can be propagated downstream, reducing the impact of variability.
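A minimal sketch of the frozen-period idea, assuming a simple list representation of sequences (all names illustrative): only the frozen window must match the production sequence exactly, while later positions may still be re-sequenced.

```python
def sequence_preserved(production_seq, delivery_seq, frozen_window):
    """Just-in-Sequence check (illustrative): within the frozen window the
    supplier must deliver exactly in the production sequence; outside it,
    re-sequencing is still allowed, so only the window is compared."""
    return delivery_seq[:frozen_window] == production_seq[:frozen_window]

plan = ["m1", "m2", "m1", "m3", "m2"]
print(sequence_preserved(plan, ["m1", "m2", "m1", "m2", "m3"], 3))  # True
print(sequence_preserved(plan, ["m2", "m1", "m1", "m3", "m2"], 3))  # False
```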

### **4.4. The "Theory of Constraints" approach**

Leveraging on the common idea that "a chain is no stronger than its weakest link", the Israeli physicist E.M. Goldratt first introduced the Theory of Constraints (TOC) in his most famous business novel, "The Goal" [28]. Looking at a production flow-shop as a chain, the weakest link is represented by the line bottleneck. Compared to the TPS approach of reducing wastes, this approach focuses on improving bottleneck operations, trying to maximize the *throughput* (production rate) while minimizing inventory and operational expenses at the same time.

Its implementation is based on a loop of five steps:

**1.** constraint identification;

**2.** constraint optimization;

**3.** alignment of the other operations to the constraint optimization;

**4.** elevation of the constraint (improving throughput);

**5.** if the constraint after the previous 4 steps has moved, restart the process.



Again, Deming's concept of the "improvement cycle" is recalled. However, improvements are only focused on the bottleneck, the Critical Constraint Resource (CCR), whereas in Lean Production's bottom-up Kaizen approach an improvement may arise wherever wastes are identified; moreover, improvements only aim to increase throughput. It is though noticeable that the author includes, as possible throughput constraints, not only machinery problems but also people (lack of proper skills) and policies (bad working practices). To this extent, Goldratt coined the "Drum-Buffer-Rope" (DBR) expression: the bottleneck workstation defines the production takt-time, giving the beat as with a drum. The remaining upstream and downstream workstations will follow this beat. This requires the drum to have an optimized schedule, which is imposed on the whole production line. Thus, the takt-time is not defined from the final demand anymore, but is set equal to the CCR minimal cycle time, given that the bottleneck capacity cannot be exceeded. A "buffer" stock is placed only before the CCR, assuring that no upstream issue can affect the process pace and reduce line throughput. This helps in reducing the inventory level in comparison to replenishment approaches, where buffers are placed among all the workstations. Eventually, other stock buffers may be placed in a few synchronization points in the processes, besides the final product warehouse, which prevents stock-outs due to oscillating demand. The "rope" represents the job release authorization mechanism: a CONWIP approach is used between the CCR and the first phase of the process. Thus, the advance with which a job enters the system is proportional to the buffer size, measured in time. Failing to comply with this rule is likely to generate too high a work-in-process, slowing down the entire system, or to generate a starvation condition on the CCR, with the risk of reducing the throughput.

Several authors [29, 30, 31] analyzed the DBR rule in comparison to planning with mathematical linear programming techniques. Results on the most effective approach are controversial.
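Under the DBR logic just described, a toy planner might look as follows; all names and figures are illustrative assumptions, not Goldratt's formulation:

```python
def drum_buffer_rope(cycle_times, demand_jobs, buffer_time):
    """Toy DBR planner: the slowest workstation (the CCR) is the drum and
    sets the takt; each job is released (the rope) one time buffer ahead
    of its scheduled start on the CCR. All figures are abstract time units."""
    takt = max(cycle_times)                      # drum beat = CCR cycle time
    ccr_schedule = [j * takt for j in range(demand_jobs)]
    releases = [max(0, start - buffer_time) for start in ccr_schedule]
    return takt, releases

takt, releases = drum_buffer_rope([2, 5, 3], demand_jobs=4, buffer_time=4)
print(takt)       # 5: the CCR, not the market demand, sets the pace
print(releases)   # [0, 1, 6, 11]: jobs enter one buffer ahead of the drum
```

Releasing earlier than these times would only inflate WIP; releasing later would risk starving the CCR — the two failure modes named above.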

### **5. The mixed-model JIT scheduling problem**

The leveling problem in JIT operations research literature was formalized in 1983 as the "mixed-model just-in-time scheduling (or sequencing) problem" (MMJIT) [32], along with its first solution approach, the "Goal Chasing Method" (GCM I) heuristic.

Some assumptions are usually made to approach this problem [33]. The most common are:

**•** no details on the process phases: the process is considered as a black box, which transforms raw materials in finished products;

**•** no variability; the problem is defined in a deterministic scenario;

**•** demand is constant and known;

**•** zero setup times (or setup times are negligible);

**•** production lead time is the same for each product.


Unfortunately, a problem satisfying these assumptions virtually never occurs in industry. However, the problem is of mathematical interest because of its high complexity (in a theoretical mathematical sense). Because researchers drew their inspiration from the literature and not from industry, far more was published than practiced on MMJIT.

The objective of MMJIT is to obtain a leveled production. This aim is formalized in the Output Rate Variation (ORV) objective function (OF) [34, 35]. Consider a set *M* of product models *m*, each one with a demand *dm* to be produced during a specific period (e.g., one day or shift) divided into *T* production cycles, with

$$\sum\_{m \in M} d\_m = T$$








Each product type *m* consists of different components *p* belonging to the set *P*. The production coefficients *apm* specify the number of units of part *p* needed in the assembly of one unit of product *m*. The matrix of coefficients A = (*apm*) represents the Bill Of Material (BOM). Given the total demand for part *p* required for the production of all *m* models in the planning horizon, the target demand rate *rp* per production cycle is calculated as follows:

$$r\_p = \frac{\sum\_{m \in M} d\_m \cdot a\_{pm}}{T}, \quad \forall \ p \in P$$

Given a set of binary variables *xmt* which represent whether a product *m* will be produced in the *t* cycle, the problem is modeled as follows [33]:

$$\text{min } Z = \sum\_{p \in P} \sum\_{t=1}^{T} \left( \sum\_{m \in M} \sum\_{\tau=1}^{t} x\_{m\tau} \cdot a\_{pm} - t \cdot r\_p \right)^2$$

subject to

$$\begin{aligned} &\sum\_{m\in M} x\_{mt} = 1, \ \forall \ t = 1, \ \dots, \ T\\ &\sum\_{t=1}^{T} x\_{mt} = d\_m, \ \forall \ m \in M\\ &x\_{mt} \in \{0, \ 1\}, \ \forall \ m \in M; \ t = 1, \ \dots, \ T\end{aligned}$$

The first and second groups of constraints state that exactly one model is produced in each cycle *t* and that the total demand *dm* for each model is fulfilled by time *T*. More constraints can be added if required, for instance in case of limited storage space.
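The model above can be checked by enumeration on a toy instance. The sketch below (standard library only; the two-model, two-part data are illustrative assumptions) builds the target rates *rp* from the BOM and demands, then evaluates the objective over every feasible sequence:

```python
from itertools import permutations

# Toy data (illustrative): two models, two parts, planning horizon T = 3 cycles.
demand = {"M1": 2, "M2": 1}                    # d_m, with sum(d_m) = T
bom = {("p1", "M1"): 1, ("p1", "M2"): 2,       # a_pm coefficients
       ("p2", "M1"): 1, ("p2", "M2"): 0}
parts = ["p1", "p2"]
T = sum(demand.values())

# Target demand rate r_p per production cycle (the formula above).
r = {p: sum(demand[m] * bom[(p, m)] for m in demand) / T for p in parts}

def objective(seq):
    """Z: sum of squared deviations between cumulated part usage and t * r_p."""
    z, used = 0.0, {p: 0 for p in parts}
    for t, m in enumerate(seq, start=1):
        for p in parts:
            used[p] += bom[(p, m)]
            z += (used[p] - t * r[p]) ** 2
    return z

# Enumerate all distinct sequences satisfying the demand constraints.
base = [m for m in demand for _ in range(demand[m])]   # ['M1', 'M1', 'M2']
best = min(set(permutations(base)), key=objective)
```

On this instance the optimum is the interleaved sequence M1-M2-M1, i.e. the most "leveled" one, which is what the objective is designed to reward. Enumeration is only viable for tiny instances; the heuristics discussed below exist precisely because this does not scale.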

A simplified version of this problem, labeled the "Product Rate Variation Problem" (PRV), was studied by several authors [36, 37, 38], although it was found insufficient to cope with the variety of production models of modern assembly lines [33]. Other adaptations of this problem were proposed over the years; after 2000, when some effective solving algorithms were proposed [39], the interest of the literature moved to the MMJIT scheduling problem *with setups* [40]. In this case, a dual OF is used [41]: the first part is the ORV/PRV standard function, while the second is simply:

$$\text{min } S = 1 + \sum\_{t=2}^{T} s\_t$$

In this equation, *st* = 1 if a setup is required in position *t*, while *st* = 0 if no setup is required. The assumptions of this model are:

**•** an initial setup is required regardless of the sequence; this is the reason for the initial "1" and for the *t* index starting from "2";

**•** the setup time is standard and does not depend on the product type;

**•** the setup number and the total setup time are directly proportional to each other.

| **Parameter / major alternatives** | **Alternatives** |
|---|---|
| ***Model structure*** | |
| *Period number* | Multi-period / Single-period |
| *Item number* | Multi-item / Single-item |
| *Stage number* | Multi-stage / Single-stage |
| *Machine number* | Multiple machines / Single machine |
| *Resources capacity* | Capacitated / Non-capacitated |
| *Layout* | Flow-shop / Job-shop / Assembly tree |
| *Production cycles* | Manufacturing system / Continuous production |
| *Material handling* | Zero withdrawal times / Non-zero withdrawal times |
| *Shortages* | Ignored / Computed as lost sales [43] |
| *Container size* | Defined / Ignored (container size equals one item) |
| *Kanban type* | One-card / Two-card |
| ***Objective function*** | Minimize cost (setup cost, inventory holding cost, operating cost, stock-out cost) / Minimize inventory / Maximize throughput |
| ***Setting*** | |
| *Decision variables* | Kanban number / Order interval / Safety stock level / Other |
| *Performance measures* | Kanban number / Utilization ratio / Leveling effectiveness |
| *Modeling approach* | Mathematical programming / Simulation / Markov Chains / Other |
| ***Assumptions*** | |
| *Stochasticity vs. determinism* | Random set-up times / Random lead times / Random processing times / Random demand / Dynamic demand / Imbalance between stages / Reworks / Scraps |
| *System reliability* | Breakdowns possibility |

**Table 1.** Alternative configurations of most common MMJIT models

On Just-In-Time Production Leveling http://dx.doi.org/10.5772/54994 153


The following sets of bounds must be added in order to shift *st* from "0" to "1" if the production switches from one product to another:

$$\begin{aligned} &x\_{m(t-1)} - x\_{mt} \le s\_t, \ \forall \ t = 2, \ \dots, \ T, \ \forall \ m \in M\\ &s\_t \in \{0, \ 1\}, \ \forall \ t = 2, \ \dots, \ T\end{aligned}$$
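A quick way to see how these bounds work: for a fixed 0-1 assignment of the *xmt*, the smallest feasible *st* equals 1 exactly when the product changes between consecutive cycles, so minimizing S counts changeovers plus the initial setup. A small check (illustrative names, standard library only):

```python
def forced_setups(sequence):
    """Smallest s_t values satisfying x_m(t-1) - x_mt <= s_t for all m:
    s_t is forced to 1 exactly when the product changes from cycle t-1 to t."""
    return [1 if prev != cur else 0
            for prev, cur in zip(sequence, sequence[1:])]

seq = ["A", "B", "A", "A", "C"]
s = forced_setups(seq)          # s_t for t = 2..T
total_setups = 1 + sum(s)       # the 'min S' objective, initial setup included
```

For the sequence A-B-A-A-C this yields s = (1, 1, 0, 1) and a total of 4 setups, matching a manual count of the changeovers plus the mandatory initial setup.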

Being a multi-objective problem, the MMJIT with setups has been approached in different ways, but it seems that no one has succeeded in solving it using a standard mathematical approach. A simulation approach was used in [42]. Most of the existing studies in the literature use mathematical representations, Markov chains or simulation approaches. Some authors [10, 40] reported that the following parameters may vary within the research carried out in recent years, as shown in Table 1 above.

### **5.1. A review on solving approaches**

The MMJIT problem, having a nonlinear OF and binary variables, has no polynomial solution as far as we know. However, a heuristic solution approach can be effective. To get to a good solution, one among dynamic programming, integer programming, linear programming, mixed integer programming or nonlinear integer programming (NLP) techniques can be used. However, those methodologies usually require a long time to find a solution, so they are infrequently used in real production systems [44]. Just a few studies used other methods such as statistical analysis or the Toyota formula [45]. The most renowned heuristics are Miltenburg's [36] and the cited Goal Chasing Method (GCM I), developed at Toyota by Y. Monden. Given the product quantities to be processed and the associated processing times, GCM I computes an "average consumption rate" for the workstation. Then, the processing sequence is defined by choosing each successive product according to its processing time, so that the cumulated consumption rate "chases" its average value. A detailed description of the algorithm can be found in [32]. GCM I was subsequently refined by its author, resulting in the GCM II and the Goal Coordination Method heuristics [46].
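The goal-chasing idea can be sketched as a greedy rule: at each position, pick the product whose inclusion keeps cumulated part usage closest to the ideal trajectory *t*·*rp*. This is a simplified variant based on part consumption rates, not Monden's exact formulation; all data and names are illustrative:

```python
# Greedy goal-chasing sketch, reusing the ORV notation (d_m, a_pm, r_p).
demand = {"M1": 2, "M2": 2}
bom = {("p1", "M1"): 1, ("p1", "M2"): 0,
       ("p2", "M1"): 0, ("p2", "M2"): 1}
parts = ["p1", "p2"]
T = sum(demand.values())
r = {p: sum(demand[m] * bom[(p, m)] for m in demand) / T for p in parts}

def goal_chasing(demand):
    remaining = dict(demand)
    used = {p: 0 for p in parts}
    seq = []
    for t in range(1, T + 1):
        # Choose the product minimizing the squared distance from t * r_p.
        m_best = min((m for m in remaining if remaining[m] > 0),
                     key=lambda m: sum((used[p] + bom[(p, m)] - t * r[p]) ** 2
                                       for p in parts))
        seq.append(m_best)
        remaining[m_best] -= 1
        for p in parts:
            used[p] += bom[(p, m_best)]
    return seq

sequence = goal_chasing(demand)
```

With two models in equal demand, the greedy rule alternates them (M1-M2-M1-M2), i.e. it produces the leveled mix the heuristic is meant to chase. Its appeal in practice is that each step is a trivial local computation, in contrast with the exact approaches above.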

The best-known meta-heuristics to solve the MMJIT [44, 47, 48] are:

**•** Scalar methods;

**•** Interactive methods;

**•** Fuzzy methods;

**•** Decision aids methods;

**•** Simulated Annealing;

**•** Tabu Search;

**•** Genetic Algorithms;

**•** Dynamic Programming.



152 Operations Management


In some experiments [44], Tabu Search and Simulated Annealing proved to be more effective than GCM; however, the computational complexity of these meta-heuristics – and the consequent slowness of execution – makes them of little use in practical cases, as the same authors admitted.

Another meta-heuristic, based on an optimization approach with a Pareto-efficiency frontier – the "multi-objective particle swarm" (MOPS) – was proposed to solve the MMJIT with setups and validated through a test case of 20 different products produced over 40 time buckets [47].

In [48], the authors compared a Bounded Dynamic Programming (BDP) procedure with GCM and with an Ant Colony (AC) approach, using the minimization of the total inventory cost as OF. They found that BDP is effective (1.03% average relative deviation from the optimum) but not efficient, requiring roughly three times the time needed by the AC approach. Meanwhile, GCM (13% average relative deviation from the optimum) was able to find the optimum in less than one third of the scenarios in which the AC was successful.



A broad literature survey on MMJIT with setups can be found in [49], while a comprehensive review of the different approaches to determine both the kanban number and the optimal sequence to smooth production rates is presented in [10].

### **6. Criticism on MMJIT problem solution**

Given that time wastes are a clear example of MUDA in Lean Production [3], complex mathematical approaches which require several minutes to compute one optimal sequence for MMJIT [44] should be discarded, since the time spent calculating new scheduling solutions does not add any value to products. On the other hand, it is notable that MRP computation also requires a lot of time, especially when it is run for a low-capacity process (in which CRP-MRP or capacitated MRPs are required). However, since MRP is a look-ahead system which considers demand forecasts, its planning is updated only at the end of a predefined "refresh period", not as frequently as may be required in a non-leveled JIT context. MRP was conceived with the idea that, by merging the Bill-Of-Materials information with inventory levels and requirements, the production manager could define a short-term work plan. In most cases, MRP is updated no more than every week; thus, an MRP run may even take one day to be computed and evaluated, without any consequences for the production plan. On the contrary, the situation in a JIT environment evolves every time a product is required from downstream. While MRP assumes the Master Production Schedule forecasts as an input, in JIT nobody may know what is behind the curtain, minute by minute.

Indeed, while a perfect JIT system does not need any planning update – simply because in a steady environment (e.g. under heijunka) the optimal sequence should remain almost the same, at least in the medium term – real-world short-term variations can deeply affect the optimality of a fixed production schedule. For instance, a one-day strike of transport operators in a certain geographical area can entirely stop the production of a subset of models, and the lack of a raw material for one hour can turn the best scheduling solution into the worst. On top of this, while MRP relies on its "frozen period", JIT is exposed to variability because it is supposed to react effectively to small changes in the production sequence. However, some authors noticed that JIT sequences [10, 48, 50] are not resistant to demand changes, so a single variation in the initial plan can completely alter the best solution. This is particularly true when the required production capacity approaches the available one. Thus, developing algorithms for solving the MMJIT problem under the hypothesis of constant demand or constant product mix seems useless.

JIT was developed for manual or semi-automated assembly line systems, not for completely automated manufacturing systems. The flexibility of the JIT approach requires a flexible production environment (i.e. the process bottleneck should not be saturated), and this is not an easy condition to reach in real industries. Consequently, despite the competence of its operations managers, even a big multinational manufacturer may encounter several problems in implementing JIT if a significant share of its suppliers consists of small or medium-size enterprises (SMEs), which are naturally more exposed to variability issues. On top of this, differently from MRP – where the algorithm lies within a software package and is transparent to users – in JIT the product sequencing is performed by the workforce and managed through simple techniques, such as the heijunka box, the kanban board or other visual management tools, e.g. *andons*. Thus, any approach to organize JIT production should be easily comprehensible to the workers and should require neither expert knowledge nor a supercomputer to be applied.

### **7. Using simulations to validate JIT heuristics**


As has been said, finding good solutions for the MMJIT problem with setups using an algorithmic approach may take too long and, on top of this, the solution can be vulnerable to product-mix changes. Indeed, the kanban technique and the GCM I method are the most used approaches to manage JIT production, thanks to their simplicity [44]. Some companies, where SMED techniques [51] failed to reduce setup times, use a modified version of the kanban FIFO board in order to prevent setup proliferation. Thus, a simple batching process is introduced: when more than one kanban is posted on the board, the workstation operator does not start the job in the first row but instead chooses the job which allows the workstation to skip the setup phase. As an example, given the original job sequence A-B-A-C-A-B for a workstation, if the operator is allowed to look two positions ahead, he would process A-A-B-C-A-B, saving one setup time. In such situations, where setup times cannot be reduced below a certain value, rather than giving up the idea of adopting the Lean Production approach, heuristics can be developed and tested in order to obtain a leveled production even when coping with long setup times or demand variability.
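The two-positions-ahead rule from the example can be sketched as follows (a minimal illustration; the function name and the window mechanics are assumptions about one plausible implementation of the described board rule):

```python
def lookahead_sequence(board, window=2):
    """Re-sequence a kanban FIFO board: if the job type just processed
    reappears within the look-ahead window, pull it forward so the
    workstation skips one setup; otherwise process the board in FIFO order."""
    queue, result = list(board), []
    while queue:
        if result:
            last = result[-1]
            # Look up to `window` positions ahead for a setup-free job.
            for i in range(min(window, len(queue))):
                if queue[i] == last:
                    result.append(queue.pop(i))
                    break
            else:
                result.append(queue.pop(0))   # no match: plain FIFO
        else:
            result.append(queue.pop(0))       # first job: always FIFO
    return result

new_order = lookahead_sequence(["A", "B", "A", "C", "A", "B"])
```

On the chapter's own example, A-B-A-C-A-B becomes A-A-B-C-A-B, i.e. five setups instead of six (counting the initial one), exactly the saving described in the text.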

The most common method to analyze and validate heuristics is simulation. Several authors agree that simulation is one of the best ways to analyze the dynamic and stochastic behavior of a manufacturing system, predicting its operational performance [52, 53, 54]. Through simulation, a user can dynamically reproduce how a system works and how the subsystems interact with each other; on top of this, a simulation tool can be used as a decision support system since it natively embeds the *what-if* logic [55]. Indeed, simulation can be used to test the solutions provided by Genetic Algorithms, Simulated Annealing, Ant Colony, etc., since these algorithms handle stochasticity and do not assume determinism. Simulation can be used for:

**•** productivity analysis [56],

**•** production performances increase [1, 57, 58],

**•** confrontation of different production policies [59],

**•** solving scheduling problems [50, 60].


In spite of these potentialities, there seem to be few manufacturing simulation packages really intended for industrial use, which go beyond a simple representation of the plant layout and modeling of the manufacturing flow. Apart from some customized simulators – developed and built in a high-level programming language by some academic or research group in order to solve specific cases under drastic simplifying hypotheses – the major part of commercial software implements a graphical model-building approach, where experienced users can model almost any type of process using basic function blocks and evaluate the whole system behavior through some user-defined statistical functions [61]. The latter, being multi-purpose simulation software, require great effort in translating real industrial process logic into the modeling scheme, and it is thus difficult to "put down the simulation in the manufacturing process" [55]. Indeed, the lack of manufacturing archetypes for model building seems one of the most remarkable weaknesses of most simulation tools, since their presence could simplify the model development process for those who speak the "language of business" [62]. Moreover, commercial simulators show several limitations if used to test custom heuristics, for example to level a JIT production or to solve a line-balancing problem: some authors report typical weaknesses in presenting the simulation output [63] or limited functionalities in terms of statistical analysis [64], on top of the lack of *user-friendliness*. For instance, most common commercial simulation software does not embed the most useful random distributions for manufacturing system analysis, such as the Weibull, Beta and Poisson distributions. When dealing with these cases, it is often easier to build custom software, even though this requires strong competences in operations research or statistics that have never represented the traditional background of industrial companies' analysts [64].
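In that spirit, a toy Monte-Carlo comparison of two sequences under random setup times takes only a few lines of standard-library code. This is purely illustrative (the processing and setup parameters, the exponential setup model and the two example sequences are all assumptions), but it shows the what-if use of simulation mentioned above:

```python
import random

def simulate_makespan(sequence, rng, runs=1000,
                      process_time=1.0, setup_mean=0.5):
    """Monte-Carlo estimate of the average makespan of a job sequence,
    with a random (exponential) setup time on every product changeover."""
    total = 0.0
    for _ in range(runs):
        clock, last = 0.0, None
        for job in sequence:
            if job != last:
                clock += rng.expovariate(1.0 / setup_mean)  # random setup
            clock += process_time
            last = job
        total += clock
    return total / runs

rng = random.Random(42)                     # fixed seed for reproducibility
leveled = ["A", "B", "A", "C", "A", "B"]    # heijunka-style mix
batched = ["A", "A", "A", "B", "B", "C"]    # setup-minimizing batches
m_leveled = simulate_makespan(leveled, rng)
m_batched = simulate_makespan(batched, rng)
```

With these parameters the batched sequence saves three expected setups per run, so its estimated makespan is reliably lower; in a real study the same harness would also track inventory and stock-out measures, which is where the leveled mix pays back.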


In order to spread simulation software usage across the manufacturing industry, some authors underline the need for a standard architecture to model production and logistics processes [65, 66, 67]. The literature suggests focusing on a new reference framework for manufacturing simulation systems, implementing both a structure and a logic closer to real production systems, which may support industrial process optimization [68, 69].

Moreover, given the increased performance of hardware, the computational workload of a simulation tool is not a problem anymore [70], and it seems possible to develop simulators able to run even complex instances in less than one minute. The complexity of a manufacturing model is linked both to its size and to system stochasticity. A careful analysis of time series can provide useful information to be included in the simulator, in order to model stochastic variables linked to machine failures or scrap production. This allows a more truthful assessment of key performance indicators (KPI) for a range of solutions under test.

### **8. Conclusions and a roadmap for research**

The effective application of JIT cannot be independent from the other key components of a lean manufacturing system, or it can "end up with the opposite of the desired result" [71]. Specifically, leveled production (heijunka) is a critical factor. The leveling problem in JIT, a mixed-model scheduling problem, was formalized in 1983 and named MMJIT. Numerous solving approaches for MMJIT have been developed during the last decades. Most of them assume constant demand and product mix. The zero setup-times hypothesis has been removed only since 2000, and few approaches cope with stochasticity. On top of this, these algorithms, although heuristic-based, usually spend too much time in finding a good solution. Simplifying hypotheses, operations research competence requirements and slow execution have prevented these approaches from spreading in industry. Indeed, the heijunka box or the standard FIFO kanban approach with the simple Goal-Chasing-Method heuristic are still the most used tools to manage production in a JIT environment. This is acknowledged also by the proponents of alternatives, and GCM is always used as a benchmark for every new MMJIT solution. However, these traditional approaches are not so effective in case of long setups and demand variations, given the fact that they were conceived for pure JIT environments. In highly stochastic scenarios, in order to prevent stock-outs, the kanban number is raised along with the inventory levels. There are several cases of companies, operating in unstable contexts where setup times cannot be reduced below a certain extent, that are interested in applying JIT techniques to reduce inventory carrying costs and manage the production flow in an effective and simple way. The development of kanban-board / heijunka-box variations, in order to cope with the specific requirements of these companies, seems to offer better potential than the development of difficult operations research algorithmic approaches.
In order to solve industrial problems, researchers may concentrate on finding new policies that could really be helpful for production systems wishing to benefit from a JIT implementation but lacking some lean production requirements, rather than studying new algorithms for the MMJIT problem.

For instance, kanban-board / heijunka-box variations can effectively focus on job preemption opportunities in order to reduce the abundance of setups, or on new rules to manage priorities in case of breakdowns or variable quality rates. The fine-tuning of parameters can be performed through simulation. In this sense, given the limitations of most commercial software, the development of a simulation conceptual model – along with its requisites – of a model representation (objects and structures) and of communication rules between the subsystems (communication protocols) are the main issues that need to be addressed by academics and developers.

### **Author details**

In spite of these potentialities, there seem to be few manufacturing simulation software really intended for industrial use, which go beyond a simple representation of the plant layout and modeling of the manufacturing flow. On top of some customized simulators – developed and built in a high-level programming language from some academic or research group in order to solve specific cases with drastic simplifying hypotheses – the major part of commercial software implements a graphical model-building approach, where experienced users can model almost any type of process using basic function blocks and evaluate the whole system behavior through some user-defined statistical functions [61]. The latters, being multi-purpose simulation software, require great efforts in translating real industrial processes logic into the modeling scheme, and it is thus difficult to "put down the simulation in the manufacturing process" [55]. Indeed, the lack of manufacturing archetypes to model building seems one of the most remarkable weakness for most simulator tools, since their presence could simplify the model development process for who speak the "language of business" [62]. Moreover, commercial simulators show several limitations if used to test custom heuristics, for example to level a JIT production or to solve a line-balancing problem: some authors report typical weaknesses in presenting the simulation output [63] or limited functionalities in terms of statistical analysis [64], on top of the lack of *user-friendliness*. For instance, most common commercial simulation software do not embed the most useful random distributions for manufacturing system analysis, such as the Weibull, Beta and Poisson distribution. When dealing with these cases, it is often easier to build custom software, despite it requires strong competences in operations research or statistics that have never represented the traditional

In order to widespread simulation software usage among the manufacturing industry, some authors underline the need of a standard architecture to model production and logistics processes [65, 66, 67]. Literature suggested to focus on a new reference framework for manu‐ facturing simulation systems, that implement both a structure and a logic closer to real

Moreover, given hardware increased performances, computational workload of a simulation tool is not a problem anymore [70] and it seems possible to develop simulators able to run in less than one minute even complex instances. The complexity of a manufacturing model is linked both to size and system stochasticity. A careful analysis of time series can provide useful information to be included in the simulator, in order to model stochastic variables linked to machine failures or scrap production. This allows a more truthful assessment of key perform‐

The effective application of JIT cannot be independent from other key components of a lean manufacturing system or it can "end up with the opposite of the desired result" [71]. Specifi‐ cally, leveled production (heijunka) is a critical factor. The leveling problem in JIT, a mixedmodel scheduling problem, was formalized in 1983 and named MMJIT. Several numbers of

production systems and that may support industrial processes optimization [68, 69].

background of industrial companies analysts [64].

156 Operations Management

ance indicators (KPI) for a range of solutions under test.

**8. Conclusions and a roadmap for research**
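As a rough illustration of the stochastic modeling described above, the sketch below samples Weibull-distributed times to failure and Beta-distributed scrap rates to estimate the good-parts output of a single machine over one shift. All names and parameter values (`ttf_scale`, `scrap_a`, the fixed repair time, and so on) are illustrative assumptions, not data from this chapter:

```python
import random

def simulate_shift(minutes=480, ttf_scale=120.0, ttf_shape=1.5,
                   repair_time=15.0, cycle_time=1.0,
                   scrap_a=1.0, scrap_b=49.0, seed=42):
    """Estimate good-parts output of one machine over a shift.

    Time to failure is Weibull(scale=ttf_scale, shape=ttf_shape), the
    scrap rate of each run is Beta(scrap_a, scrap_b) (mean ~2%), and the
    repair time is fixed. Every value here is an illustrative assumption.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible runs
    t, good = 0.0, 0
    while t < minutes:
        ttf = rng.weibullvariate(ttf_scale, ttf_shape)  # time to next failure
        run = min(ttf, minutes - t)                     # run until failure or shift end
        produced = int(run / cycle_time)                # parts started in this run
        scrap_rate = rng.betavariate(scrap_a, scrap_b)  # share of defective parts
        good += int(produced * (1.0 - scrap_rate))
        t += run + repair_time                          # production time + repair downtime
    return good
```

Averaging `simulate_shift` over many seeds for each candidate configuration (for example, different priority rules) would yield the kind of KPI comparison discussed above.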

### **Author details**

Francesco Giordano and Massimiliano M. Schiraldi

Department of Enterprise Engineering, "Tor Vergata" University of Rome, Italy
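As a complement, the mixed-model leveling problem (MMJIT) recalled in the conclusions can be made concrete with the goal-chasing heuristic referenced in [44]. This is a minimal sketch, assuming a simple squared distance between the target and actual cumulative mix; the function name and demand format are illustrative:

```python
def goal_chasing(demand):
    """Goal-chasing heuristic for mixed-model (MMJIT) leveling.

    demand: dict mapping model -> units to schedule. Returns a sequence of
    models whose cumulative mix tracks the overall demand ratios as closely
    as possible at every position in the schedule.
    """
    total = sum(demand.values())
    ratios = {m: q / total for m, q in demand.items()}  # target product mix
    produced = {m: 0 for m in demand}                   # units scheduled so far
    sequence = []
    for k in range(1, total + 1):
        # pick the model (with remaining demand) that minimizes the squared
        # distance between the ideal and actual cumulative output at slot k
        best = min(
            (m for m in demand if produced[m] < demand[m]),
            key=lambda m: sum(
                (k * ratios[j] - (produced[j] + (1 if j == m else 0))) ** 2
                for j in demand
            ),
        )
        produced[best] += 1
        sequence.append(best)
    return sequence
```

For example, `goal_chasing({"A": 2, "B": 1, "C": 1})` spreads the two units of A to the first and last slots instead of batching them, which is exactly the leveling behavior heijunka aims at.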

### **References**

On Just-In-Time Production Leveling http://dx.doi.org/10.5772/54994

[1] J. P. Womack, D. T. Jones & D. Roos, The machine that changed the world: The Story of Lean Production, New York (NY): HarperPerennial, 1991.

[2] M. Rother & J. Shook, Learning to See: Value Stream Mapping to Add Value and Eliminate Muda, Brookline (MA): The Lean Enterprise Institute, 1999.

[3] T. Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

[4] R. J. Schonberg, Japanese manufacturing techniques: Nine hidden lessons in simplicity, New York (NY): Free Press, 1982.

[5] J. Orlicky, Material Requirement Planning, New York (NY): McGraw-Hill, 1975.

[6] L. Baciarello, M. D'Avino, R. Onori & M. Schiraldi, «Lot-Sizing Heuristics Performance,» working paper, 2013.

[7] A. Bregni, M. D'Avino & M. Schiraldi, «A revised and improved version of the MRP algorithm: Rev MRP,» *Advanced Materials Research (forthcoming),* 2013.

[8] M. D'Avino, V. De Simone & M. Schiraldi, «Revised MRP for reducing inventory level and smoothing order releases: a case in manufacturing industry,» *Production Planning & Control,* 2013 (forthcoming).

[9] M. D'Avino, M. Correale & M. Schiraldi, «No news, good news: positive impacts of delayed information in MRP,» working paper, 2013.

[10] C. Sendil Kumar & R. Panneerselvam, «Literature review of JIT-KANBAN system,» *International Journal of Advanced Manufacturing Technologies,* pp. 393-408, 2007.

[11] B. J. Berkley, «A review of the kanban production control research literature,» *Production and Operations Management,* vol. 1, n. 4, pp. 393-411, 1992.

[12] B. Sharadapriyadarshini & R. Chandrasekharan, «Heuristics for scheduling in a Kanban system with dual blocking mechanisms,» *European Journal of Operational Research,* vol. 103, n. 3, pp. 439-452, 1997.

[13] O. Kimura & H. Terada, «Design and analysis of pull system, a method of multistage production control,» *International Journal of Production Research,* n. 19, pp. 241-253, 1981.

[14] B. Hemamalini & C. Rajendran, «Determination of the number of containers, production kanbans and withdrawal kanbans; and scheduling in kanban flowshops,» *International Journal of Production Research,* vol. 38, n. 11, pp. 2549-2572, 2000.

[15] R. Panneerselvam, Production and Operations Management, New Delhi: Prentice Hall of India, 1999.

[16] H. Wang & H.-P. B. Wang, «Optimum number of kanbans between two adjacent workstations in a JIT system,» *International Journal of Production Economics,* vol. 22, n. 3, pp. 179-188, 1991.

[17] D. Y. Golhar & C. L. Stamm, «The just in time philosophy: a literature review,» *International Journal of Production Research,* vol. 28, n. 5, pp. 879-894, 1990.

[18] F. W. Harris, «How many parts to make at once,» *Factory, The Magazine of Management,* vol. 10, n. 2, pp. 135-136, 1913.

[19] R. B. Chase, N. J. Aquilano & R. F. Jacobs, Operations management for competitive advantage, McGraw-Hill/Irwin, 2006.

[20] R. Suri, «QRM and Polca: A winning combination for manufacturing enterprises in the 21st century,» Center for Quick Response Manufacturing, Madison, 2003.

[21] P. Ericksen, R. Suri, B. El-Jawhari & A. Armstrong, «Filling the Gap,» *APICS Magazine,* vol. 15, n. 2, pp. 27-31, 2005.

[22] J. Liker, The Toyota Way: 14 Management principles from the world's greatest manufacturer, McGraw-Hill, 2003.

[23] M. L. Spearman, D. L. Woodruff & W. J. Hopp, «CONWIP: a pull alternative to kanban,» *International Journal of Production Research,* vol. 29, n. 4, pp. 657-676, 1991.

[24] R. P. Marek, D. A. Elkins & D. R. Smith, «Understanding the fundamentals of kanban and conwip pull systems using simulation,» in *Proceedings of the 2001 Winter Simulation Conference*, Arlington (VA), 2001.

[25] R. Suri, Quick Response Manufacturing: A companywide approach to reducing lead times, Portland (OR): Productivity Press, 1998.

[26] M. M. Schiraldi, La gestione delle scorte, Napoli: Sistemi editoriali, 2007.

[27] S. Meissner, «Controlling just-in-sequence flow-production,» *Logistics Research,* vol. 2, pp. 45-53, 2010.

[28] E. M. Goldratt, The Goal, Great Barrington (MA): North River Press, 1984.

[29] M. Qui, L. Fredendall & Z. Zhu, «TOC or LP?,» *Manufacturing Engineer,* vol. 81, n. 4, pp. 190-195, 2002.

[30] D. Trietsch, «From management by constraints (MBC) to management by criticalities (MBC II),» *Human Systems Management,* vol. 24, pp. 105-115, 2005.

[31] A. Linhares, «Theory of constraints and the combinatorial complexity of the product-mix decision,» *International Journal of Production Economics,* vol. 121, n. 1, pp. 121-129, 2009.

[32] Y. Monden, Toyota Production System, Norcross: The Institute of Industrial Engineers, 1983.

[33] N. Boysen, M. Fliedner & A. Scholl, «Level Scheduling for batched JIT supply,» *Flexible Service Manufacturing Journal,* vol. 21, pp. 31-50, 2009.

[34] W. Kubiak, «Minimizing variation of production rates in just-in-time systems: A survey,» *European Journal of Operational Research,* vol. 66, pp. 259-271, 1993.

[35] Y. Monden, Toyota Production System, An Integrated Approach to Just-In-Time, Norcross (GA): Engineering & Management Press, 1998.

[36] J. Miltenburg, «A Theoretical Basis for Scheduling Mixed-Model Production Lines,» *Management Science,* vol. 35, pp. 192-207, 1989.

[37] W. Kubiak & S. Sethi, «A note on "level schedules for mixed-model assembly lines in just-in-time production systems",» *Management Science,* vol. 37, n. 1, pp. 121-122, 1991.

[38] G. Steiner & J. S. Yeomans, «Optimal level schedules in mixed-model, multilevel JIT assembly systems with pegging,» *European Journal of Operational Research,* pp. 38-52, 1996.

[39] T. N. Dhamala & S. R. Khadka, «A review on sequencing approaches for mixed-model just-in-time production systems,» *Iranian Journal of Optimization,* vol. 1, pp. 266-290, 2009.

[40] M. S. Akturk & F. Erhun, «An overview of design and operational issues of kanban systems,» *International Journal of Production Research,* vol. 37, n. 17, pp. 3859-3881, 1999.

[41] P. R. McMullen & P. Tarasewich, «A beam search heuristic method for mixed-model scheduling with setups,» *International Journal of Production Economics,* vol. 96, n. 2, pp. 273-283, 2005.

[42] F. Mooeni, S. M. Sanchez & A. J. Vakharia, «A robust design methodology for Kanban system design,» *International Journal of Production Research,* vol. 35, pp. 2821-2838, 1997.

[43] G. N. Krieg & H. Kuhn, «A decomposition method for multi-product kanban systems with setup times and lost sales,» *IIE Transactions,* vol. 34, pp. 613-625, 2002.

[44] T. Tamura, S. Nishikawa, T. S. Dhakar & K. Ohno, «Computational Efficiencies of Goal Chasing, SA, TS and GA Algorithms to Optimize Production Sequence in a Free Flow Assembly Line,» in *Proceedings of the 9th Asia Pacific Industrial Engineering & Management Systems Conference*, Bali, 2008.

[45] K. Ohno, K. Nakashima & M. Kojima, «Optimal numbers of two kinds of kanbans in a JIT production system,» *International Journal of Production Research,* vol. 33, pp. 1387-1401, 1995.

[46] H. Aigbedo, «On bills of materials structure and optimum product-level smoothing of parts usage in JIT assembly systems,» *International Journal of Systems Science,* vol. 40, n. 8, pp. 787-798, 2009.

[47] A. Rahimi-Vahed, S. M. Mirghorbani & M. Rabbani, «A new particle swarm algorithm for a multi-objective mixed-assembly line sequencing problem,» *Soft Computing,* vol. 11, pp. 997-1012, 2007.

[48] N. Boysen, M. Fliedner & A. Scholl, «Sequencing mixed-model assembly lines to minimize part inventory cost,» *Operational Research Spectrum,* pp. 611-633, 2008.

[49] A. Allahverdi, J. N. D. Gupta & T. Aldowaisan, «A review of scheduling research involving setup considerations,» *International Journal of Management Sciences,* vol. 27, pp. 219-239, 1999.

[50] P. Rogers & M. T. Flanagan, «Online simulation for real-time scheduling of manufacturing systems,» *Industrial Engineering,* pp. 37-40, 2000.

[51] S. Shingo, A revolution in manufacturing: The SMED system, Productivity Press, 1985.

[52] V. A. Hlupic, «Guidelines for selection of manufacturing simulation software,» *IIE Transactions,* vol. 31, n. 1, pp. 21-29, 1999.

[53] A. M. Law, Simulation modeling and analysis, Singapore: McGraw-Hill, 1991.

[54] J. Smith, «Survey of the use of simulation for manufacturing system design and operation,» *Journal of Manufacturing Systems,* vol. 22, n. 2, pp. 157-171, 2003.

[55] H. Berchet, «A model for manufacturing systems simulation with a control dimension,» *Simulation Modelling Practice and Theory,* pp. 55-57, 2003.

[56] A. Polajnar, B. Buchmeister & M. Leber, «Analysis of different transport solutions in the flexible manufacturing cell by using computer simulation,» *International Journal of Operations and Production Management,* pp. 51-58, 1995.

[57] P. Rogers & R. J. Gordon, «Simulation for the real time decision making in manufacturing systems,» in *Proceedings of the 25th conference on winter simulation*, Los Angeles (CA), 1993.

[58] P. Rogers, «Simulation of manufacturing operations: optimum-seeking simulation in the design and control of manufacturing systems: experience with optquest for arena,» in *Proceedings of the 34th conference on winter simulation: exploring new frontiers*, San Diego (CA), 2002.

[59] S. S. Chakravorty & J. B. Atwater, «Do JIT lines perform better than traditionally balanced lines,» *International Journal of Operations and Production Management,* pp. 77-88, 1995.

[60] R. Iannone & S. Riemma, «Proposta di integrazione tra simulazione e tecniche reticolari a supporto della progettazione operativa,» Università di Salerno, Salerno, 2004.

[61] D. A. Van Beek, A. T. Hofkamp, M. A. Reniers, J. E. Rooda & R. R. H. Schiffelers, «Syntax and formal semantics of Chi 2.0,» Eindhoven University of Technology, Eindhoven, 2008.


[62] J. Banks, E. Aviles, J. R. McLaughlin & R. C. Yuan, «The simulator: new member of the simulation family,» *Interfaces,* pp. 21-34, 1991.

[63] A. M. Law & S. W. Haider, «Selecting simulation software for manufacturing applications: practical guidelines and software survey,» *Industrial Engineering,* 1989.

[64] L. Davis & G. Williams, «Evaluating and Selecting Simulation Software Using the Analytic Hierarchy Process,» *Integrated Manufacturing Systems,* n. 5, pp. 23-32, 1994.

[65] D. A. Bodner & L. F. McGinnis, «A structured approach to simulation modeling of manufacturing systems,» in *Proceedings of the 2002 Industrial Engineering Research Conference*, Georgia, 2002.

[66] S. Narayanan, D. A. Bodner, U. Sreekanth, T. Govindaraj, L. F. McGinnis & C. M. Mitchell, «Research in object-oriented manufacturing simulations: an assessment of the state of the art,» *IIE Transactions,* vol. 30, n. 9, 1998.

[67] M. S. Mujtabi, «Simulation modeling of manufacturing enterprise with complex material, information and control flows,» *International Journal of Computer Integrated Manufacturing,* vol. 7, n. 1, pp. 29-46, 1994.

[68] S. Robinson, «Conceptual modeling for simulation: issues and research requirements,» in *Proceedings of the 2006 Winter Simulation Conference*, Piscataway (NJ), 2006.

[69] C. Battista, G. Dello Stritto, F. Giordano, R. Iannone & M. M. Schiraldi, «Manufacturing Systems Modelling and Simulation Software Design: A Reference Model,» in *XXII DAAAM International World Symposium*, Vienna, 2011.

[70] A. M. Law & W. D. Kelton, Simulation Modelling and Analysis, New York: McGraw-Hill, 1991, pp. 60-80.

[71] S. Shingo, A study of the Toyota Production System, Productivity Press, 1989.

**Chapter 7**

### **Enterprise Risk Management to Drive Operations Performances**

Giulio Di Gravio, Francesco Costantino and Massimo Tronci

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/54442

> © 2013 Di Gravio et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **1. Introduction**

Global competition characterizes the market of the new millennium, where uncertainty and volatility are the main elements affecting the decision-making process of managers, who need to determine scenarios, define strategies, plan interventions and investments, develop projects and execute operations (figure 1).

**Figure 1.** Decision hierarchy

Risks have always been part of entrepreneurship, but a growing attention to the issues related to Risk Management is nowadays spreading. Along with the financial scandals in the affairs of some major corporations, the high degree of dynamism and the evolution of markets require organizations to rapidly adapt their business models to changes, whether economic, political, regulatory, technological or social [1].

In particular, managerial trends of business disintegration, decentralization and outsourcing have pushed organizations towards practices of information sharing, coordination and partnership. The difficulties that generally arise during the implementation of these practices underline the impact that critical risk factors can have on corporate governance. Operations, at any level, are highly affected in their performance by uncertainty, which reduces their efficiency and effectiveness while they lose control over the evolution of the value chain.

Studies on risk management have to be extended, involving not only the internal processes of companies but also considering the relationship and the level of integration of supply chain partners. This can be viewed as a strategic issue of operations management to enable interventions of research, development and innovation.

In a vulnerable economy, where the attention to quality and efficiency through cost reduction is a source of frequent perturbations, a possible error in understanding the sensitivity of the operations to continuous changes can seriously and irreparably compromise the capability of fitting customers' requirements.

Managers need to have personal skills and operational tools to ensure that risk management strategies can be suitably implemented and integrated in the production and logistics business environment. In order to face internal and external uncertainty, take advantage of it and exploit opportunities, it is necessary to identify, analyze and evaluate operational risks through standard methodologies that help to:

**•** identify risks in scope;

**•** classify the different types of risks;

**•** assess risks;

**•** identify possible interventions and relative priorities;

**•** select, plan and implement interventions, managing actions and collecting feedbacks.

While studies and standards on risk management for health and safety, environment or security of information have defined a well-known and universally recognized state of the art, corporate and operational risk management still needs a systematic approach and a common view. The main contributions in these fields are the reference models issued by international bodies [2-5].

Starting from the most advanced international experiences, in this chapter some principles are defined and developed in a framework that, depending on the maturity level of organizations, may help to adequately support their achievements and drive operations performance.

### **2. Corporate governance and risk management**

Over the years, the attention to the basic tenets of corporate governance has radically increased. In response to the requirements of supporting business leaders in managing organizations and of protecting the various stakeholders in the face of the evolution of the political, economic and social environment, guidelines and reference models in the field of corporate governance have been issued.

Within this body of rules, risk management plays a main role. It relates directly to the recognition of the strategic connotations of corporate governance as the means to achieve business targets, according to the rights and expectations of stakeholders.

Since the mid-nineties, the themes of risk management and corporate governance have been strictly intertwined and almost coincident: the systematic management of risks has become synonymous with a "healthy" management of the business. At the same time, the techniques of risk analysis, historically associated with assessing financial risks, have been revised or replaced by methods that pervade the organization in depth. Along with the use of specific and complex control models (e.g. the experience of the Code of Conduct of the Italian Stock Exchange), responsibility for risk management is placed at the level of senior management. In some countries, such as Germany, Australia and New Zealand, these indications reached the level of compulsory requirements, as national legislation asks all companies to have an operational risk management system.

From the above, the close link between corporate governance and risk management is absolutely clear. Risk management has to be considered not only as an operational practice but rather as an essential component of decision making, based on the continuous development of definition systems and, therefore, a responsibility of top management.

The management of the company risk profile requires the knowledge of:

**•** the risk system affecting the enterprise;

**•** the nature and intensity of the different types of risks;

**•** the probability of occurrence of each risk and its expected impact;

**•** the mitigation strategies of the different types of risks.

To ensure that the approved, deliberated and planned risk management strategies are executed in an effective and efficient way, the company's top management shall periodically review and, if necessary, implement corrective and/or preventive action with regard to:

**•** the reliability of existing systems for the identification and assessment of risks;

**•** the effectiveness of internal control systems to monitor risks and their possible evolution.

Corporate governance is thus to be seen as the strategic platform on which the tactical and operational system of risk & control acts, i.e. the set of processes, tools and resources at all levels of the organization that ensure the achievement of corporate objectives. On these arguments, it is appropriate to consider that the application of a system based on the principles of risk & control governance allows the creation of a virtuous circle of performances that has a positive impact on the environment inside and outside the company, beyond regulatory requirements.
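The risk-profile knowledge discussed above (risk system, probability of occurrence, expected impact, controllability, mitigation priorities) can be sketched as a minimal risk register. The class layout and the ranking rule below are illustrative assumptions, not a model prescribed by the chapter:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float    # likelihood of occurrence in the planning period (0-1)
    impact: float         # loss if the event occurs
    controllability: str  # "controllable" | "partially controllable" | "uncontrollable"

    @property
    def expected_impact(self) -> float:
        # probability x impact: the knowledge the risk profile requires
        return self.probability * self.impact

def prioritize(register):
    """Rank risks by expected impact; on ties, put more controllable risks
    first, since they admit direct mitigation strategies."""
    order = {"controllable": 0, "partially controllable": 1, "uncontrollable": 2}
    return sorted(register, key=lambda r: (-r.expected_impact, order[r.controllability]))
```

For example, ranking `[Risk("supplier default", 0.5, 100_000.0, "partially controllable"), Risk("machine breakdown", 0.25, 80_000.0, "controllable"), Risk("earthquake", 0.5, 40_000.0, "uncontrollable")]` puts supplier default first and, between the two equal expected impacts, the controllable breakdown before the uncontrollable earthquake.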

**Controllability**

Enterprise Risk Management to Drive Operations Performances

http://dx.doi.org/10.5772/54442

167

*Controllable Partially controllable Uncontrollable*



Technological, Legal/regulatory, Environmental) - Operational risks (delivery, capacity and capability,



Decisional level - External risks (PESTLE - Political, Economic, Socio-cultural,

performance)



Developing the classification to an extended level and considering all the sources of uncer‐ tainty that affects business targets, vulnerability of organizations can be assessed on five dif‐

*Internal* Quality and cost of products Environmental impacts Incidents and accidents

*External* Technological progress Demand variation Natural disasters

Further classifications can also be taken from the already mentioned risk management models, where the descriptive categories are represented as a function of different objectives and de‐

**Organization**

**Table 1.** Example of risk classification by perimeter

**Model Dimension Classes**

and external)

Level of interaction (internal

FIRM Risk Scorecard [8] Area of impact - Infrastructural risks

Area of impact - Strategic risks

cision-making levels (Table 2).

Risk Management Standard [3]

Strategy Survival Guide

Enterprise Risk Management [4]

**Table 2.** Example of risk classification by target

ferent areas (Table 3).

[7]

Management has the responsibility to plan, organize and direct initiatives to ensure the ach‐ ievement of company goals, in terms of:


The management acts through a regular review of its objectives, changes in processes accord‐ ing to changes in the internal and external environment, promoting and maintaining a busi‐ ness-oriented culture and a climate.

### **3. Risk classification**

Uncertain events can have both a positive and a negative effect: on the one hand, in fact, they are a threat to the achievement of business objectives, on the other hand can become a signif‐ icant source of opportunities for companies able to understand, anticipate and manage them. According to [6], *risks* are *"events with negative impacts that can harm the creation of business value or erode the existing one*" while *opportunities* are *"events with positive impact that may offset negative impacts"*. The opportunities are chances that an event will occur and positively affect the ach‐ ievement of objectives, contributing thus to the creation of value or preserving the existing one. Management needs to assess the opportunities, reconsidering its strategies and processes of setting goals and developing new plans to catch benefits derived from them.

An inherent risk can so be defined as "the possibility that an event occurs and have a negative impact on the achievement of objectives" while the control can be defined as "any means used by management to increase the likelihood that the business objectives set are achieved", mit‐ igating the risks in an appropriate manner. In this context, a hazard is a "potential source of risk" while a residual risk is the "risk that still remains after mitigations".

Along with these definitions, it is possible to organize the different types of risks in different classes and their possible combinations. In Table 1 a first example of classification is shown, based on two characteristics that relate the origin and generation of the risk (organizational perimeter) with the possibilities of intervention (controllability of risk).


**Table 1.** Example of risk classification by perimeter

ments, it is appropriate to consider that the application of a system based on the principles of risk & control governance allows the creation of a virtuous circle of performances that has a positive impact on the environment inside and outside the company, beyond regulatory requirements.

Management has the responsibility to plan, organize and direct initiatives to ensure the achievement of company goals, in terms of:

**•** definition of business and government targets;

**•** formulation of strategies to reach business and government targets;

**•** effective and efficient use of the resources of the organization;

**•** relevance and reliability of financial and operational reporting;

**•** compliance with laws, regulations, contracts and corporate ethical standards;

**•** protection of company assets;

**•** protection of ethical and social values.

The management acts through a regular review of its objectives and changes in its processes according to changes in the internal and external environment, promoting and maintaining a business-oriented culture and climate.

### **3. Risk classification**

Further classifications can also be taken from the already mentioned risk management models, where the descriptive categories are represented as a function of different objectives and decision-making levels (Table 2).


**Table 2.** Example of risk classification by target

Developing the classification to an extended level and considering all the sources of uncertainty that affect business targets, the vulnerability of organizations can be assessed in five different areas (Table 3).


| Risk Category | Risk factors |
|---|---|
| Environment (Externalities) | Regulations |
| Demand (Customers) | Number and size of customers |
| Offer (Suppliers) | Number and size of suppliers |
|  | Capacity, Handling |
| Network and collaboration (Relations) |  |

**Table 3.** Risk classification by organization

Enterprise Risk Management to Drive Operations Performances

http://dx.doi.org/10.5772/54442







### **4. Enterprise risk management for strategic planning**

The competitiveness of an organization depends on its ability to create value for its stakeholders. The management maximizes the value when objectives and strategies are formulated in order to achieve an optimal balance between growth, profitability and the associated risks, using resources in an efficient and effective way. These statements are the basic philosophy of risk management in business. As seen, all businesses face uncertain events and the challenge of management is to determine the amount of uncertainty acceptable to create value. The uncertainty is both a risk and an opportunity and can potentially reduce or increase the value of the company.

Enterprise Risk Management (ERM) is the set of processes that deals with the risks and opportunities that have an impact on the creation or preservation of value. ERM is put in place by the Board of Administration, the management and other professionals in an organization to formulate strategies designed to identify potential events that may affect the business, to manage risk within the limits of acceptable risk and to provide reasonable assurance regarding the achievement of business targets. It is an ongoing and pervasive process that involves the whole organization, enacted by people in different roles at all levels and throughout the corporate structure, both on its specific assets and on the company as a whole.

This definition is intentionally broad and includes key concepts, critical to understand how companies must manage risk, and provides the basic criteria to apply in all organizations, whatever their nature. ERM enables organizations to deal effectively with uncertainty, enhancing the company's ability to generate value through the following actions:

**•** *the alignment of strategy at acceptable risk:* management establishes the level of acceptable risks in evaluating strategies, setting objectives and developing mechanisms to manage the associated risks;

**•** *the improvement of the response to identified risks:* ERM needs a rigorous methodology to identify and select the most appropriate among several alternatives of responses to risks (avoid, reduce, share, accept the risk);

**•** *the reduction of contingencies and resulting losses*: by increasing their ability to identify potential events, assess the risks and formulate responses, companies reduce the frequency of unexpected events as well as the subsequent costs and losses;

**•** *the identification and management of multiple and correlated risks*: every business needs to face a high number of risks affecting different areas, and ERM facilitates the formulation of a unique response to clusters of risks and associated impacts;

**•** *the identification of opportunities:* through the analysis of all possible events, management is able to proactively identify and seize the opportunities that emerge;

**•** *the improvement of capital expenditure:* the acquisition of reliable information on risks allows management to effectively assess the overall financial needs, improving the allocation of resources.

These characteristics help management to achieve performance targets without wasting resources. Furthermore, ERM ensures the effectiveness of reporting in compliance with laws and regulations, so as to prevent damages to corporate reputation and the relative consequences. Summarizing, ERM supports organizations in accomplishing their goals while avoiding pitfalls and unexpected paths.

### **5. The risk management process**

The risk management process consists of a series of logical steps for analyzing, in a systematic way, the hazards, the dangers and the associated risks that may arise in the management of an organization. The goal is to give maximum sustainable value to any activity, through a continuous and gradual process that moves from the definition of a strategy along its implementation. By understanding all the potential positive and negative factors that affect the system, it is possible to increase the probability of success and reduce the level of uncertainty.

In particular, risk management protects and supports the requirements of the organization in its relationship with stakeholders through:

**•** a methodological framework that allows a consistent and controlled development of activities;

**•** the improvement of the decision-making process, creating priorities by really understanding the natural and exceptional variability of activities and their positive or negative effects;

**•** the contribution to a more efficient use and allocation of resources;

**•** the protection and enhancement of corporate assets and image;

**•** the development and support to the people and to their knowledge base.

Figure 2 represents a process of risk management in its different stages of development, which are detailed in the following sections.

### **5.1. Risk assessment**

Risk assessment is a sequence of various activities aimed at identifying and evaluating the set of risks that the organization has to face. The international literature offers several techniques of modeling and decision-making [9-10] that can become part of the analysis.

The results of risk assessment can be summed up in two outputs that address the following stages of treatment and control:

**•** the risk profile;

**•** the risk appetite.

The risk profile represents the level of overall exposure of the organization, defining in a complete way the complexity of the risks to be managed and their ranking, according to their entity and significance. A segmentation by entities (areas, functions, people, sites) or decisional levels and the actual measures of treatment and control complete the profile. This leads to the expression of the:

**•** *gross profile:* the level of exposure to the events without any measure of treatment;

**•** *net profile:* the level of exposure, according to the measures of treatment in place (whether effective or not);

**•** *net future profile:* the level of exposure surveyed after all the measures of treatment are implemented.

The definition of the risk appetite is a key outcome of the assessment process: on the one hand it is appropriate to draft it before the risk identification (where the level of accuracy of the analysis can also depend on the risk appetite itself), on the other it is absolutely necessary to fix it before taking any decision about the treatment.

In any case, the risk appetite presents two different dimensions according to the scope of analysis:

**•** *threat*: the threshold level of exposure considered acceptable by the organization and justifiable in terms of costs or other performance;

**•** *opportunity*: what the organization is willing to risk to achieve the benefits in analysis, compared with all the losses eventually arising from a failure.

The so defined risk appetite can be adjusted through the delegation of responsibilities, strengthening the capability of taking decisions at different levels according to cost dynamics.
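The gross, net and net future profiles, and their comparison with the risk appetite, can be sketched numerically. A minimal sketch under the simplifying assumption that exposure is probability times impact and that mitigations remove a fraction of it; all numbers and names are illustrative, not the chapter's model:

```python
def exposure(probability: float, impact: float, mitigation: float = 0.0) -> float:
    """Exposure to an event, reduced by the fraction removed by mitigations."""
    return probability * impact * (1.0 - mitigation)

# Illustrative risk: 40% yearly probability, impact of 50 (monetary units)
probability, impact = 0.4, 50.0
in_place, planned = 0.3, 0.6  # mitigation fractions: current vs. after all measures

gross = exposure(probability, impact)                # no measure of treatment
net = exposure(probability, impact, in_place)        # measures currently in place
net_future = exposure(probability, impact, planned)  # all measures implemented

risk_appetite = 10.0               # threshold level considered acceptable
acceptable = net <= risk_appetite  # drives the decision at the evaluation stage
```

Here the net profile (14.0) still exceeds the risk appetite (10.0), so the risk would move on to the treatment stage; the net future profile (8.0) shows the planned measures would bring it back inside the threshold.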



**Figure 2.** Risk Management process

### *5.1.1. Scope definition*

The target of this stage is the identification of the assets and people exposed to the risks and the identification of the factors that determine the risks themselves. The definition of the scope has a critical importance in order to evaluate internal and external influences on the organization.

As this analysis requires a thorough knowledge of the environmental components (business, market, political, social and cultural issues), it has to be developed for all the decision-making levels (strategic, tactical and operational) and for all the stakeholders. Furthermore, the relationships with the output of the strategic planning have to be determined, as the relevance of a risk and the priorities of interventions can be identified only with reference to the targets exposed to uncertainty, while the eventual impact can vary widely according to a proper assignment and commitment of resources.

Although this stage is of fundamental importance for the effectiveness of the others, in particular for the identification of risks, it is too often executed with an inappropriate level of attention or it is not developed at all.

### *5.1.2. Risk identification*

The identification of risks allows the acquisition of knowledge on possible future events, trying to measure the level of exposure of the organization. The target is to identify all the significant sources of uncertainty in order to describe and proactively manage different scenarios. The identification is the first step to define the *risk profile* and the *risk appetite* of the organization.

This activity has to be repeated continuously and can be divided into two distinct stages:

**•** initial identification of risks: to be developed for organizations without a systematic approach to risk management. It is required to gather information on hazards and their possible evolutions;

**•** ongoing identification of risks: to update the risk profile of an organization and its relations, taking into account the generation of new risks or modifications to the already identified ones.

All the process mapping techniques are extremely useful to associate and connect risks with activities (Figure 3). The level of detail is determined by the necessity of identifying the specific impact associated with risks, of assigning the responsibility of management and of defining the subsequent actions to ensure control.

This can be developed with the support of external consultants or through a self-assessment which, if conducted with adequate methodological tools, provides a better awareness of the profile and an upgrade of the management system.

Among the others, the most common and widely used techniques (successfully tested in other fields such as marketing and quality management) are:

**•** SWOT analysis and Force Field analysis;

**•** techniques of data collection and statistical analysis;

**•** techniques of problem finding and problem solving;

**•** benchmarking with competitors or best in class.

**Figure 3.** Example of risk identification for collaboration risks
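The association of risks with mapped process activities described above can be sketched as a simple lookup structure. A minimal sketch in which the activity and risk names of a collaboration process are invented for illustration:

```python
# Each mapped activity of the process carries its own risks, so that impacts
# and control responsibilities are assigned at the activity level of detail.
process_map: dict[str, list[str]] = {
    "select partner": ["unreliable partner"],
    "negotiate contract": ["unbalanced terms"],
    "share information": ["leak of know-how"],
}

def risks_for(activity: str) -> list[str]:
    """Risks connected to one activity of the process map (empty if none)."""
    return process_map.get(activity, [])

# Ongoing identification: the map is updated when new risks emerge
process_map["share information"].append("data inconsistency")
```

The initial identification builds the map; the ongoing identification keeps it current as new risks emerge or already identified ones change.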

### *5.1.3. Risk description*

The results of identification should be developed in an appropriate stage of description by means of specific information support systems (e.g. a Risk Register, Table 4). Depending on the scope, the documentary support can assume different forms to improve the sharing of information and increase the efficiency of management. Whatever the solution adopted for the description, it has to be dynamically completed with data coming from the different stages of the risk management process and updated according to changes of the internal and external context. Inheriting the best practices already in use for environmental and safety management systems, when the risks are in any way related to regulations (e.g. the Sarbanes-Oxley Act), a Compliance Register has to be associated to the Risk Register to ensure conformity to requirements.

| Field | Description |
|---|---|
| Identification code | ID to associate and create links among information |
| Organizational level | Corporate, business unit, site, process or activities involved |
| Related target | Relation to the strategic planning and decisional level |
| Category | According to the classification adopted |
| Description | Extended description of the event and its possible evolutions (hazard) |
| Causes | First, second and third level causes (direct or indirect) |
| Consequences | Description of impacts (direct or indirect) |
| Stakeholders | Involvement of the different stakeholders |
| Regulation | Relation to compulsory (laws or directives) or voluntary (procedures) requirements |
| Inherent risk | Combination of the probability (or frequency) of the event and the impact or relevance of the effects |
| Risk appetite | Threshold level of tolerance of the specific risk |
| Treatment | Extended description of the mitigations |
| Residual risk | Estimation of the risk after the mitigation |
| Control | Extended description of the control |
| Emergency | Potential emergency related to the risk and associated plans of recovery |
| Risk owner | Responsibility of the risk and related activities |
| Control owner | Responsibility of the control and related activities |

**Table 4.** Risk register

### *5.1.4. Risk estimation*

The risk assessment has to end up with the association of a qualitative or quantitative measure to any risk, in terms of technical, economic or financial intensity. The choice of the methodology is related to the level of detail required by the comparison between the risk profile and the risk appetite, and to the availability of data and information. The metrics can refer to:

**•** probability of occurrence and magnitude of the effects and impacts;

**•** value at risk or vulnerability, which is the possible value of benefits or threats in relation to the characteristics of the organization.

The estimation of risk can be performed using different qualitative, quantitative or mixed criteria, each with a different level of detail and reliability of the results. While the former are characterized by a strong subjectivity that only a high level of experience can compensate, the latter need harmonization and conversion of the scales and of the values found. The choice is also related to the desired output of the stage, typically a hierarchical ordering of the risks identified (e.g. some types of exposures and tolerability are defined by regulations, especially for safety and environment). Examples of simple evaluation criteria, according to the already mentioned reference model, are shown in Tables 4, 5 and 6.

| Value | Description |
|---|---|
| High | Financial impact on the organization probably higher than xxx € |
| Medium | Financial impact on the organization probably between yyy € and xxx € |

**Table 5.** Impacts of threats and opportunities [3]

| Value | Indicator | Description |
|---|---|---|
| High | Probable | Probable every year or in more than 25% of cases; it happened recently in the analysis, with many repetitions |
| Medium | Possible | Probable in 10 years or in less than 25% of cases; in the analysis, with some repetitions |
| Low | Remote | Improbable in 10 years or in less than 2% of cases |

**Table 6.** Probability of the event: threats [3]
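The Risk Register of Table 4 and the qualitative scales of Tables 5 and 6 can be combined into a ranking of risks. A minimal sketch in which the field subset, the ordinal mapping of High/Medium/Low, and the product form of the "combination" of probability and impact are assumptions for illustration, not the chapter's normative model:

```python
from dataclasses import dataclass

# Ordinal mapping of the qualitative scales of Tables 5 and 6 (assumed)
SCALE = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class RiskRegisterEntry:
    identification_code: str
    description: str
    impact: str        # Table 5: High / Medium / Low
    probability: str   # Table 6: High (Probable) / Medium (Possible) / Low (Remote)
    risk_owner: str = ""

    def inherent_risk(self) -> int:
        """Combination of the probability of the event and the impact of its effects."""
        return SCALE[self.probability] * SCALE[self.impact]

# Invented example entries
register = [
    RiskRegisterEntry("R-001", "supplier default", impact="High", probability="Medium"),
    RiskRegisterEntry("R-002", "forecast error", impact="Medium", probability="High"),
    RiskRegisterEntry("R-003", "site flooding", impact="High", probability="Low"),
]

# Hierarchical ordering of the identified risks, the typical output of estimation
ranking = sorted(register, key=lambda e: e.inherent_risk(), reverse=True)
```

The sort produces the hierarchical ordering of risks mentioned as the typical output of the estimation stage, with ties left in register order.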


tools give an effective support mainly where the consequences of an event can be both

Enterprise Risk Management to Drive Operations Performances

http://dx.doi.org/10.5772/54442

177

Treatment of risks must be determined after a first evaluation and comparison of the risk profile and the risk appetite of the organization. The actions arising from this decision-making

**•** *terminate*: remove, dispose or outsource, where possible, the factors that can cause the risk. It can take the organization to refuse opportunities if the value at risk is higher than the risk

**•** *treat:* develop measures of mitigation in order to intervene on the values of significance of the risk, reducing the probability of occurrence (prevention), the potential impacts of the effects (protection) or determining actions of restoring (recovery) after damages are occur‐ red. Passing from prevention to protection and recovery, the capability of controlling risks

**•** *tolerate*: accept the risk profile as compatible with the risk appetite, in relation to the resource

**•** *transfer*: transfer the impacts to third parties through, for example, insurances or risk sharing

**•** *neutralize:* balance two or more risk, for example increasing the number of unit exposed, so

**•** *take the opportunity:* when developing actions of treatment, opportunities of positive impacts

The key target of the review stage is to monitor the changes in the risk profile and in the risk appetite of the organization and to provide assurance to all stakeholders that the risk man‐ agement process is appropriate to the context, effectively and efficiently implemented.

The frequency of the review should be determined depending on the characteristics of the risk

**•** *a review of the risks*, to verify the evolution of already existing risks and the arise of new risks,

**•** *a review of the risk management process*, to ensure that all activities are under control and to

tends to decrease, while increasing the exposure of the organization;

actions. Possible uncertain effects are converted in certain payments;

positive and negative, applying cost-benefit analysis.

stage can be classified according to the following scheme:

**5.3. Risk treatment**

appetite;

involved;

**5.4. Risk review**

that they can cancel each other;

can be identified and explored.

management system, to execute:

detect changes in the structure of the process.

assessing their entity;

**Table 7.** Probability of the event: opportunities [3]

### **5.2. Risk evaluation**

| Value | Indicator | Description |
|---|---|---|
| High | Probable | Probable advantages in the year or in more than 75% of cases |
| Medium | Possible | Reasonable advantages in the year or between 75% and 25% of cases |
| Low | Remote | Possible advantages in the midterm or in less than 25% of cases |

**Table 7.** Probability of the event: opportunities [3]

### **5.2. Risk evaluation**

The evaluation of risks provides a judgment concerning the acceptability or the need of mitigations, according to the comparison between the risk profile and the risk appetite. The stage is a decision-making process in which, if the risk is acceptable, the assessment can be terminated; otherwise it goes on to the next stage of treatment and management. To verify the acceptability after the interventions, the results of the mitigations have to be iteratively compared to the expected targets. At this stage it is possible to use, with adaptation when necessary, methods and techniques widely tested in safety management:

**•** *Event Tree Analysis (ETA) and Fault Tree Analysis (FTA)*: analysis of the cause-effect tree of the risk profile. The top event (the event at the top of the tree) is usually a cause of loss of value in the organization, related to exclusionary or concurrent events of a lower-level type;

**•** *Failure Modes Effects Analysis (FMEA) and Failure Modes Effects and Criticality Analysis (FMECA)*: FMEA is a technique that allows a qualitative analysis of a system, decomposing the problem in a hierarchy of functions up to a determined level of detail. For each of the constituents, possible "failure modes" (adverse events) are identified, and actions to eliminate or reduce their effects can be considered. FMECA adds a quantitative assessment of the criticalities: for each mode, an index is calculated as the combination of the occurrence of the event, the severity of its effects and the detectability of the symptoms;

**•** *Hazard and Operability (HAZOP) analysis*: a qualitative methodology that has both deductive (search for causes) and inductive (consequence analysis) aspects. The method seeks the risks and operational problems that degrade system performances and then finds solutions to the problems identified;

**•** *Multi-criteria decision tools (i.e. Analytic Hierarchy Process and Analytic Network Process)*: decision support techniques for solving complex problems in which both qualitative and quantitative aspects have to be considered. Through hierarchical or network modeling, the definition of a ranking of the critical aspects of the problem is enabled. Multi-criteria decision tools give an effective support mainly where the consequences of an event can be both positive and negative, applying cost-benefit analysis.

### **5.3. Risk treatment**

Treatment of risks must be determined after a first evaluation and comparison of the risk profile and the risk appetite of the organization. The actions arising from this decision-making stage can be classified according to the following scheme:
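The ETA/FTA technique listed in Section 5.2 combines lower-level events into a top event through logical gates. The sketch below shows how a top-event probability can be computed under the simplifying assumption of independent basic events; the tree layout and all probability values are hypothetical illustrations, not taken from the chapter:

```python
# Minimal fault-tree evaluation sketch (illustrative only).
# AND gate: concurrent causes must all occur; OR gate: any one of the
# exclusionary/alternative causes is sufficient. Basic events are
# assumed independent, which is a strong simplification in practice.

def p_and(*ps):
    """Probability that all (independent) input events occur."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability that at least one (independent) input event occurs."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical annual probabilities of basic events
p_supplier_fail = 0.05
p_no_backup_supplier = 0.40
p_machine_breakdown = 0.10

# Top event (a loss of value, e.g. a production stop): caused either by
# a machine breakdown OR by a supplier failure concurrent with the
# absence of a backup supplier.
p_top = p_or(p_machine_breakdown, p_and(p_supplier_fail, p_no_backup_supplier))
print(f"P(top event) = {p_top:.3f}")
```

The OR combination uses the complement form (1 minus the product of the complements) rather than a plain sum, so the result stays a valid probability even when the inputs are not small.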


### **5.4. Risk review**

The key target of the review stage is to monitor the changes in the risk profile and in the risk appetite of the organization, and to provide assurance to all stakeholders that the risk management process is appropriate to the context and effectively and efficiently implemented.

The frequency of the review should be determined depending on the characteristics of the risk management system.
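As a concrete illustration of the FMECA criticality index described in Section 5.2, the sketch below ranks failure modes by an index combining occurrence, severity and detectability. The 1-10 rating scales, the multiplicative form (the classic Risk Priority Number) and the failure modes themselves are assumptions for illustration; the chapter only states that the index combines the three factors:

```python
# Illustrative FMECA-style criticality ranking (hypothetical data).
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int     # 1 (rare) .. 10 (frequent)
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    detectability: int  # 1 (easily detected) .. 10 (almost undetectable)

    @property
    def rpn(self) -> int:
        # Criticality index: combination of occurrence, severity and
        # detectability (here the common multiplicative form).
        return self.occurrence * self.severity * self.detectability

modes = [
    FailureMode("Supplier delivery stop", occurrence=3, severity=8, detectability=4),
    FailureMode("Warehouse fire", occurrence=1, severity=10, detectability=2),
    FailureMode("Picking error", occurrence=7, severity=3, detectability=5),
]

# Rank failure modes by criticality to prioritize mitigation actions.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Modes with the highest index are treated first; note that a frequent, hard-to-detect failure can outrank a rarer but more severe one, which is exactly the behaviour the detectability factor is meant to capture.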


### **6. The control system**

The conceptual path that characterizes this approach to risk management is strictly related to the existence of an indissoluble connection between risks and controls. Most current control systems recognize the risk as part of the corporate governance that has to be:

**•** continuous, integrating control in the decision-making processes;

**•** pervasive, spreading the risk management at all decisional levels;

**•** formalized, through the use of clear and shared methodologies;

**•** structured, through the adoption of suitable organizational solutions.

The control system traditionally represents a reactive approach in response to adverse events, fragmented in different areas and with occasional frequencies. From a standard dimension, generally limited to financial risks or internal audit, it has to evolve towards a proactive continuous process, results-oriented and with widespread responsibility. The challenge for management is to determine a sustainable amount of uncertainty to create value in relation to the resources assigned, facing a cost-benefit trade-off where the marginal cost of control is not greater than the benefit obtained.

The main components of the control system can be summarized as follows:

**•** *control environment*: it is the base of the whole system of controls, as it determines the sensitivity level of management and staff on the execution of processes. The adoption and dissemination of codes of ethics and values, policies and management style, the definition of a clear organizational structure and responsibilities (including specific bodies of internal control) and the development of professional skills of human resources are the elements that constitute this environment;

**•** *control activities*: it is the operational component of the control system, configured as a set of initiatives and procedures to be executed, both on processes and interfaces, to reduce business risks to a reasonable level, ensuring the achievement of the targets;

**•** *information and communication*: a structured information system at all levels enables the control on processes, recomposing flows managed by different subsystems and applications that need to be integrated. Adequate information, synthetic and timely, must be provided to allow the execution of activities, taking responsibilities and ensuring monitoring;

**•** *monitoring*: it is the continuous supervision and periodic evaluation of the performances of the control system. The scope and techniques of monitoring depend on the results of the risk assessment and on the effectiveness of the procedures, in order to ensure that the controls are in place to efficiently reduce the risks.

### **7. The business continuity**

Business Continuity Management is an approach generally used in case of possible serious consequences related to crisis or emergency [11-13]: an organization that evaluates the effects of damage to a warehouse caused by a sudden storm, or defines actions following the failure of a partner, is performing risk management; when it arranges structured actions related to the unavailability of a warehouse or of a provider, it moves up to the level of Business Continuity Management, whose main features are:

**•** analysis and identification of the elements of the organization that may be subject to interruption, unavailability and related effects;

**•** definition of action plans and programs to be implemented when an element is missing, to ensure the continuity of material and information flows or recover as quickly as possible;

**•** monitoring of processes to anticipate possible crises or to start emergency plans;

**•** establishment of systematic tests of recovery plans;

**•** once recovered, structured analysis of events to evaluate the success of the initiatives, the efficiency of the plans and their revision.

Business Continuity Management accompanies organizations during disaster recovery of unexpected risks, particularly rare and with high magnitudes of the effects, where the operations must be carried out with the utmost speed and effectiveness. Events such as the earthquake in Kobe (Japan) in 1995, which caused more than 6,400 deaths and 100,000 demolished buildings, closed the major ports of the country for two months and had a general impact on industries of more than 100 billion dollars, can easily be presented as examples of disasters. At the same time, much smaller events can also be recognized as disasters for small and medium-sized enterprises, such as the loss of a key customer, a huge credit not collected, a wrong industrial initiative, a failure of the production system or the breakdown of a relationship with a partner. In the same way, any loss related to a failure of an infrastructure can generate adverse effects, as can an incorrect definition of strategic processes or the indiscriminate and uncoordinated introduction of new methods such as just-in-time: the majority of negative events come from managerial mistakes that could be avoided, rather than from the effects of real and unexpected emergencies.

But how can organizations deal with those types of risks that are generally unknown and not predictable? The answer comes from a different kind of strategic vision that is not only based on the analysis of identified risks but looks at the possible modes of disruption of processes regardless of the cause. For example, once logistics distribution has been defined as a key factor for the success of the business, it is possible to evaluate how to recover any link regardless of the specific reasons of interruption.

A recovery plan must therefore meet the following requirements:

**•** ensure the physical integrity of employees, customers, visitors and in any case all subjects interacting with current activities;

**•** protect as much as possible facilities and resources to ensure a rapid recovery;

**•** implement procedures to restore a minimum level of service, while reducing the impact on the organization;

**•** work with partners to redefine the appropriate services: once the internal activities have been reorganized, it is necessary to look outside to assess the effects and consequences of the actions taken;

**•** return all processes to performance standard in time and at reasonable cost: the speed with which the repairs must be carried out is balanced with the associated costs.

### **8. Conclusions**

The main advantages that companies could obtain from Enterprise Risk Management were deeply investigated in the sections above. Nevertheless, this novel approach could present some difficulties, common to many businesses, related to the absence of a culture of strategic planning aimed at prevention rather than response, and to a general lack of professionals and of appropriate tools capable of really integrating processes. But while complexity is becoming a part of the corporate governance system, absorbing a great amount of time and resources, the need for competitiveness requires a specific attention to performances and results. A new attitude of organizations towards risk-sensitive areas, able to ensure the coordination among all its components, helps to transform the management of risk from a cost factor into an added value. This business view makes it possible, with a little effort, to reduce the overall risk of the company and helps the dialogue among business functions and with the stakeholders.

### **Author details**

Giulio Di Gravio\*, Francesco Costantino and Massimo Tronci

\*Address all correspondence to: giulio.digravio@uniroma1.it

Department of Mechanical and Aerospace Engineering, University of Rome "La Sapienza", Italy

### **References**

[1] Minahan T.A. The Supply Risk Benchmark Report. Aberdeen Group; 2005.

[2] UK HM Treasury. Orange book management of risk – principles and concepts. 2004.

[3] Association of Insurance and Risk Managers (AIRMIC), GB Institute of Risk Management (IRM), ALARM National Forum for Risk Management. A Risk Management Standard. 2002.

[4] Committee of Sponsoring Organizations of the Treadway Commission. Enterprise Risk Management – Integrated Framework. 2004.

[5] European Foundation for Quality Management. EFQM framework for risk management. EFQM; 2005.

[6] ISO Guide 73:2009. Risk Management – Vocabulary. ISO; 2009.

[7] UK Prime Minister's Strategy Unit. Strategy survival guide. 2004.

[8] Information Security Forum. Fundamental Information Risk Management (FIRM). ISF; 2000.

[9] ISO 31000:2009. Risk management – Principles and guidelines. ISO; 2009.

[10] ISO/IEC 31010:2009. Risk Management – Risk Assessment Techniques. ISO; 2009.

[11] British Standard Institute. PAS56: Guide to Business Continuity Management. BSI; 2006.

[12] Chartered Management Institute. Business Continuity Management. CMI; 2005.

[13] Department of Trade and Industry. Information Security: Understanding Business Continuity Management. Stationery Office; 2006.

Enterprise Risk Management to Drive Operations Performances
http://dx.doi.org/10.5772/54442


**Chapter 8**

> © 2013 Regattieri and Santarelli; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **The Important Role of Packaging in Operations Management**

Alberto Regattieri and Giulia Santarelli

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/54073

### **1. Introduction**

The chapter focuses on the analysis of the impact of packaging in Operations Management (OM) along the whole supply chain. The product packaging system (i.e. primary, secondary and tertiary packages and accessories) is highly relevant in the supply chain, and its importance is growing because of the necessity to minimize costs, reduce the environmental impact and also due to the development of web operations (i.e. electronic commerce).

A typical supply chain is an end-to-end process with the main purpose of production, transportation, and distribution of products. It concerns the movement of products, normally from the supplier to the manufacturer, distributor, retailer and finally the end consumer. All products moved are contained in packages, and for this reason the analysis of the physical logistics flows and the role of packaging is a very important issue for the definition and design of manufacturing processes, the improvement of layout and the increase in companies' efficiency.

In recent years, companies have started to consider packaging as a critical issue. It is necessary to analyse the packages' characteristics (e.g. shape, materials, transport, etc.) in order to improve the performance of companies and minimize their costs. Packaging concerns all activities of a company: from the purchasing of raw materials to the production and sale of finished products, and during transport and distribution.

In order to manage the activities directly linked with the manufacturing of products (and consequently with the packaging system), the OM discipline is defined. It is responsible for collecting various inputs and converting them into desired outputs through operations [1]. Recently, more and more companies have started to use web operations. Electronic commerce (e-commerce) is the most promising application of information technology witnessed in recent years. It is revolutionising supply chain management and has enormous potential for manufacturing, retail and service operations. The role of packaging changes with the increase in the use of e-commerce: from the traditional "shop window" it has become a means of information and containment of products.


The purpose of the chapter is to briefly describe a model of the OM discipline usable to highlight the role of packaging along the supply chain, describing different implications of an efficient product packaging system for successful management of operations. Particular attention is paid to the role of product packaging in modern web operations.

The chapter is organised as follows: Section 2 presents a brief description of OM in order to engage the topic of packaging. The packaging logistics system is described in Section 3, before presenting experimental results of studies dealing with packaging perception by both companies and customers [2; 3]. Moreover, Section 3 introduces the packaging logistics system, also including the analysis of the role of packaging in OM, and a description of a complete mathematical model for the evaluation of total packaging cost is presented. Section 4 presents background about modern e-commerce and its relationship with OM. Packaging and e-commerce connected with OM are described in Section 5, and a case study on packaging e-commerce in operations is analysed in Section 6. Finally, the conclusion and further research are presented.

### **2. Operations management in brief**

This brief introduction to OM aims to highlight the important role of packaging in all activities of a company. This section describes a model of the OM discipline that the authors have taken as a reference for dealing with the packaging topic.

According to Drejer et al. [4], the "Scientific Management" approach to industrial engineering developed by Frederick Taylor in the 1910s is widely regarded as the basis on which OM, as a discipline, is founded. This approach involved reducing a system to its simplest elements, analysing them, and calculating how to improve each element.

OM is the management function applied to manufacturing, service industries and non-profit organizations [5] and is responsible for all activities directly concerned with making a product, collecting various inputs and converting them into desired outputs through operations [1]. Thus, OM includes inputs, outputs, and operations. Examples of inputs might be raw materials, money, people, machines, and time. Outputs are goods, services, staff wages, and waste materials. Operations include activities such as manufacturing, assembly, packing, serving, and training [1]. The operations can be of two categories: those that add value and those with no added value. The first category includes the product processing steps (e.g. operations that transform raw materials into good products). The second category is actually a kind of waste. Waste consists of all unnecessary movements for completing an operation, which should therefore be eliminated. Examples of this are waiting time, piling products, reloading, and movements. Moreover, it is important to underline, right from the start, that the packaging system can represent a source of waste but, at the same time, a possible source of opportunities. Before waste-to-energy solutions, for example, it is possible to consider the use of recycled packages for shipping products. The same package may be used more than once; for example, if a product is sent back by the consumer, the product package could be used for the next shipment.


OM can be viewed as the tool behind the technical improvements that make production effi‐ cient [6]. It may include three performance aims: efficiency, effectiveness, and customer sat‐ isfaction. Whether the organization is in the private or the public sector, a manufacturing or non-manufacturing organization, a profit or a non-profit organization, the optimal utiliza‐ tion of resources is always a desired objective. According to Waters [1], OM can improve ef‐ ficiency of an operation system to do things right and as a broader concept. Effectiveness involves optimality in the fulfilment of multiple objectives with possible prioritization with‐ in them; it refers to doing the seven right things well: the right operation, right quantity, right quality, right supplier, right time, right place and right price. The OM system has to be not only profitable and/or efficient, but must necessarily satisfy customers.

According to Kleindorfer et al. [7], the tools and elements of the management system need to be integrated with company strategy. The locus of control and the methodology of these tools and management systems are directly associated with operations. With the growing realization of the impact of these innovations on customers and profit, operations began their transformation from a "neglected stepsister needed to support marketing and finance to a cherished handmaiden of value creation" [8].

According to Hammer [9], a wave of change began in the 1980s called Business Process Re-engineering<sup>1</sup> (BPR). BPR brought to non-manufacturing processes the benefits that Total Quality Management<sup>2</sup> (TQM) and Just In Time<sup>3</sup> (JIT) had brought to manufacturing. Gradually, this whole evolution came to be known as Process Management, a name that emphasized the crucial importance of processes in value creation and management. Process management was given further impetus by the core competency movement [13], which stressed the need for companies to develop technology-based and organizational competencies that their competitors could not easily imitate. The confluence of the core competency and process management movements led to many of the past decade's changes, including the unbundling of value chains, outsourcing, and innovations in contracting and supply chains. People now recognize the importance of aligning strategy and operations, a notion championed by Skinner [14].

As companies developed their core competencies and included them in their business processes, the tools and concepts of TQM and JIT have been applied to the development of new products and to supply chain management [7]. Generally, companies first incorporated JIT between suppliers and production units.

<sup>1</sup> BPR is the fundamental re-thinking and radical re-design of business processes to achieve improvements in critical contemporary measures of performance, such as cost, quality, service, and speed [10].

<sup>2</sup> TQM is an integrative philosophy of management for continuously improving the quality of products and processes [11].

<sup>3</sup> JIT is a manufacturing program with the primary goal of continuously reducing, and ultimately eliminating, all forms of waste [12].

The 1980s' introduction of TQM and JIT in manufacturing gave rise to the recognition that the principles of excellence applied to manufacturing operations could also improve business processes, and that organizations structured according to process management principles would improve. According to Kleindorfer et al. [7], the combination of these process management fundamentals, information and communication technologies, and globalization has provided the foundations and tools for managing today's outsourcing, contract manufacturing, and global supply chains. In the 1990s companies moved on to optimized logistics (including Efficient Consumer Response<sup>4</sup> (ECR)) between producers and distributors, then to Customer Relationship Management<sup>5</sup> (CRM), and finally to global fulfilment architecture and risk management. These supply chain-focused trends inspired similar trends at the corporate level as companies moved from lean operations to lean enterprises and now to lean consumption [17]. Figure 1 shows these trends and drivers, based on Kleindorfer et al. [7].

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073


**Figure 1.** Trends and drivers of Operations Management (1980-2000) [7]

In order to manage the supply chain, organizations have to make different decisions about OM that can be classified as strategic, tactical, and operational. A graphical representation of the three decision levels of OM is shown in Figure 2 [18].


<sup>4</sup> ECR is an attempt to increase the velocity of inventory in the packaged goods industry throughout the supply chain of wholesalers, distributors and ultimately end consumers [15].

<sup>5</sup> CRM is a widely implemented model for managing a company's interactions with customers. It involves using technology to organize, automate, and synchronize business processes [16].

**Figure 2.** Graphical representation of OM's decision levels [18]


The three decision levels of OM interact and depend on each other: the strategic level is a prerequisite for the tactical level, and this in turn is a prerequisite for the operational level.

Strategic, tactical, and operational levels of OM are closely connected with the packaging system. Packaging is cross-functional to all company operations, since it is handled in several parts of the supply chain (e.g. marketing, production, logistics, purchasing, etc.). A product packaging system plays a fundamental role in the successful design and management of the operations in the supply chain. An integrated management of the packaging system from the strategic (e.g. the decision to define a new packaging solution), tactical (e.g. the definition of the main packaging requirements) and operational (e.g. the development of the physical packaging system and respect of the requirements) points of view allows companies to find the optimal management of the packaging system and to reduce packaging cost.
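As a purely illustrative sketch (the class name and the mapping are ours, not the chapter's), the three decision levels and the packaging examples just given can be encoded as:

```python
# Illustrative only: OM decision levels mapped to the packaging-related
# examples mentioned in the text. All names here are hypothetical.
from enum import Enum

class DecisionLevel(Enum):
    STRATEGIC = "define a new packaging solution"
    TACTICAL = "define the main packaging requirements"
    OPERATIONAL = "develop the physical packaging system"

# The levels depend on one another top-down: strategic decisions frame
# tactical ones, which in turn frame operational ones.
for level in DecisionLevel:
    print(f"{level.name.lower()}: {level.value}")
```

Iterating the enum preserves definition order, mirroring the prerequisite chain from strategic down to operational.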

A general framework of product packaging and the packaging logistics system will be presented in Section 3.

### **2.1. Sustainable operations management**

OM is increasingly connected with the environment and sustainable development (i.e. the development that meets the needs of the present without compromising the ability of future generations to meet their own needs), and it now concerns both the operational drivers of profitability and their relationship with people and the planet.

Following the definition of sustainability by the World Commission on Environment and Development (WCED), Sustainable Operations Management (SOM) is defined as *the set of skills and concepts that allow a company to structure and manage its business processes in order to obtain competitive returns on its capital assets without sacrificing the needs of stakeholders and with regard for the impact of its operations on people and environment*.

In order to perform sustainable operations, it is necessary to enlarge the perspective of OM, including people and the planet. According to Kleindorfer et al. [7], SOM integrates the profit and efficiency orientation of traditional OM with broader considerations of the company's internal and external stakeholders and its environmental impact. SOM helps companies to become agile, adaptive and aligned, balancing people and the planet with profits [7].

Figure 1 has shown the evolution of OM since the 1980s. Figure 3 shows the impact of the SOM in the supply chain [7]. SOM has emerged over recent years and it influences the entire life cycle of the product (e.g. the management of product, recovery and reverse flows).


**Figure 3.** The impact of SOM in the supply chain (1980-2010) [7]

Considering sustainability, environmental responsibility and recycling regulations, the packaging system plays an increasingly important role. Several environmental aspects are affected by packaging issues:

**•** Waste prevention: packages should be used only where needed. Usually, the energy content and material usage of the product being packaged are much greater than those of the package;

**•** Material minimization: the mass and volume of packages are among the criteria to minimize during the package design process. The use of "reduced" packaging helps to reduce environmental impacts;

**•** Re-use: the re-use of a package or its components for other purposes is encouraged. Returnable packages have long been used in closed-loop logistics systems. Some manufacturers re-use the packages of the incoming parts of a product, either as packages for the outgoing product or as part of the product itself;

**•** Recycling: the emphasis is on recycling the largest primary components of a package: steel, aluminium, paper, plastic, etc.;

**•** Energy recovery: waste-to-energy and refuse-derived fuel facilities are able to make use of the heat available from the packaging components;

**•** Disposal: incineration and placement in a sanitary landfill are needed for some materials.

According to the studies conducted by Regattieri et al. [2; 3], users and companies have shown an interest in the environment and its link with the packaging system. Indeed, they believe that careful use of packaging can lead to an important reduction in environmental impact. Companies have begun to use recyclable materials (e.g. cardboard, paper, and plastic) and to re-use packages for other activities (for example, online retailers are beginning to re-use the secondary packages of returned products for future shipments). The next section describes the packaging system and its crucial role for the activities along the supply chain, and then in OM.

### **3. A theoretical framework of the packaging system**

During recent decades, the importance of the packaging system and its different functions has been increasing. Traditionally, packaging is intended as a means of protecting and preserving goods, and of handling, transporting, and storing products [19]. Other packaging functions like sales promotion, customer attention and brand communication have consistently grown in importance [20]. This means that when a packaging developer creates a package, it needs to be designed to meet demands from a sales and marketing perspective, and not only from a manufacturing process and transportation network perspective [21].

The European Federation defines packaging *as all products made of any materials of any nature to be used for the containment, protection, delivery and presentation of goods, from raw materials to processed goods*.

Packaging is built up as a system usually consisting of a primary, secondary, and tertiary level [22]. The primary package concerns the structural nature of the package; it is usually the smallest unit of distribution or use and is the package in direct contact with the contents. The secondary package relates to the issues of visual communication and is used to group primary packages together. Finally, the tertiary package is used for warehouse storage and transport shipping [23].

A graphical representation of the packaging system is shown in Figure 4.

The packaging system is cross-functional, since it interacts with different industrial departments, each with its specific requests of how packages should be designed, and these are often contradictory. Thus, packages have to satisfy several purposes, such as:

**•** Physical protection: the objects enclosed in the package may require protection from mechanical shock, vibration, electrostatic discharge, compression, temperature, etc.;

**•** Hygiene: a barrier from e.g. oxygen, water vapour, dust, etc. is often required. Keeping the contents clean, fresh, sterile and safe for the intended shelf life is a primary function;

**•** Containment or agglomeration: small objects have to be grouped together in one package for efficiency reasons;

**•** Information transmission: packages can communicate how to use, store, recycle, or dispose of the package or product;

**•** Marketing: packages can be used by marketers to encourage potential buyers to purchase the product;

**•** Security: packages can play an important role in reducing the risks associated with shipment. Organizations may install electronic devices like RFID tags on packages to identify the products in real time, reducing the risk of theft and increasing security.
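The primary/secondary/tertiary structure described above lends itself to a simple aggregation model. The following sketch is purely illustrative: the class and the example grouping ratios are ours, not from the chapter.

```python
# Illustrative sketch of the primary/secondary/tertiary packaging hierarchy.
# The grouping ratios below are hypothetical example values.
from dataclasses import dataclass

@dataclass
class PackagingLevel:
    name: str
    groups: int  # how many units of the level below this level bundles

def primary_units_per_tertiary(levels):
    """Multiply the grouping ratios to get primary packages per tertiary unit."""
    total = 1
    for level in levels:
        total *= level.groups
    return total

# Example: 12 primary packs per secondary box, 40 boxes per tertiary pallet
hierarchy = [
    PackagingLevel("secondary (box of primary packs)", 12),
    PackagingLevel("tertiary (pallet of boxes)", 40),
]
print(primary_units_per_tertiary(hierarchy))  # → 480
```

Such a model makes explicit how decisions at one packaging level (e.g. box size) propagate to handling and storage quantities at the levels above it.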


clable and to use the least material possible. Figure 5 shows the main interactions of the

Logistics Handle, transport, store, distribution

The Important Role of Packaging in Operations Management

Packaging Containment, protection, convenience, communication, apportionment, unitization

> Marketing Sell, differentiate, promote, value, inform

Logistics Handle, transport, store, distribution

Packaging Containment, protection, convenience, communication, apportionment, unitization

> Marketing Sell, differentiate, promote, value, inform

Scholars dealing with packaging disagree about its main function: some researchers empha‐ size that packaging is a highly versatile marketing tool [20], while others consider it mainly as an integral element of the logistics function [19; 25]. It is necessary to balance the techno‐ logical and marketing aspects of packaging, indeed it has a significant impact on the effi‐ ciency of both logistics (e.g. manufacturing and distribution costs, time required for completing manufacturing and packing operations, which affect product lead time and due date performance to the customer) and the marketing function (e.g. products' selling, shelf

Scholars dealing with packaging disagree about its main function: some researchers emphasize that packaging is a highly versatile marketing tool [20], while others consider it mainly as an integral element of the logistics function [19; 25]. It is necessary to balance the technological and marketing aspects of packaging, indeed it has a significant impact on the efficiency of both logistics (e.g. manufacturing and distribution costs, time required for completing manufacturing and packing operations, which affect product lead time and due date performance to the customer) and the marketing function (e.g. products' selling, shelf presentation, etc.).

Production Produce, make, assemble, fit

http://dx.doi.org/10.5772/54073

Production Produce, make, assemble, fit

191

Scholars dealing with packaging disagree about its main function: some researchers emphasize that packaging is a highly versatile marketing tool [20], while others consider it mainly as an integral element of the logistics function [19; 25]. It is necessary to balance the technological and marketing aspects of packaging, indeed it has a significant impact on the efficiency of both logistics (e.g. manufacturing and distribution costs, time required for completing manufacturing and packing operations, which affect product lead time and due date performance to the customer) and the marketing function (e.g. products' selling, shelf presentation, etc.).

During the recent decades, the environmental aspect is considered by companies that deal with the packaging system. According to Johansson [26] the packaging system can be divided in three main functions, that interact each other: flow, market and

The flow function consists of packaging features that contribute to more efficient handling in distribution. Packaging logistics,

Packaging logistics is a relatively new discipline that in recent years has been developed and has gained increasing attention in terms of the strategic role of logistics in delivering competitive advantage by the industrial and scientific community [22; 25]. Industry and science attribute different maturity levels to the subject depending on country and culture. According to Saghir [22], the concept of packaging logistics focuses on *the synergies achieved by integrating packaging and logistics systems with the potential of increased supply chain efficiency and effectiveness, through the improvement of both packaging and logistics related activities*. A more recent definition of packaging logistics is attributed to Chan et al. [27], who describe packaging logistics as the *interaction and relationship between logistics and packaging systems that improve add-on values on the whole supply chain, from raw material producers to end users, and the disposal of the empty package, by re-use, material recycling, incineration or landfill.* Both the definitions ([22; 27]) focus on the

In the market function, things like design, layout, communication, ergonomic aspects that create value for the product and the brand are important features for the packaging system [18]. The purpose of the market function is to satisfy customers and to

During recent decades the link between packaging and marketing is analysed in depth by several authors, and packaging has been studied as a marketing instrument that can influence some specific aspects, such as product positioning, consumer attention,

During the recent decades, the environmental aspect is considered by companies that deal with the packaging system. According to Johansson [26] the packaging system can be divided in three main functions, that interact each other: flow, market and

The flow function consists of packaging features that contribute to more efficient handling in distribution. Packaging logistics,

Packaging logistics is a relatively new discipline that in recent years has been developed and has gained increasing attention in terms of the strategic role of logistics in delivering competitive advantage by the industrial and scientific community [22; 25]. Industry and science attribute different maturity levels to the subject depending on country and culture. According to Saghir [22], the concept of packaging logistics focuses on *the synergies achieved by integrating packaging and logistics systems with the potential of increased supply chain efficiency and effectiveness, through the improvement of both packaging and logistics related activities*. A more recent definition of packaging logistics is attributed to Chan et al. [27], who describe packaging logistics as the *interaction and relationship between logistics and packaging systems that improve add-on values on the whole supply chain, from raw material producers to end users, and the disposal of the empty package, by re-use, material recycling, incineration or landfill.* Both the definitions ([22; 27]) focus on the

internal material flows, distribution, unpacking, disposal and return handling are included in this function.

In the market function, things like design, layout, communication, ergonomic aspects that create value for the product and the brand are important features for the packaging system [18]. The purpose of the market function is to satisfy customers and to

During recent decades the link between packaging and marketing is analysed in depth by several authors, and packaging has been studied as a marketing instrument that can influence some specific aspects, such as product positioning, consumer attention,

importance of the packaging logistics system, mainly in order to improve the efficiency of the whole supply chain.

internal material flows, distribution, unpacking, disposal and return handling are included in this function.

Environment

Environment

Flow

Flow

Market

importance of the packaging logistics system, mainly in order to improve the efficiency of the whole supply chain.

packaging system.

Figure 5. The main interactions of the packaging system

Figure 5. The main interactions of the packaging system

**Figure 5.** The main interactions of the packaging system

Figure 6. The main functions of the packaging system [26]

**Figure 6.** The main functions of the packaging system [26]

Figure 6. The main functions of the packaging system [26]

Market

environment (Figure 6).

environment (Figure 6).

presentation, etc.).

Environment Reduce, re-use, recover, dispose

Environment Reduce, re-use, recover, dispose

increase product sales.

increase product sales.

**Figure 4.** Graphical representation of the packaging system

**•** Security: packages can play an important role in reducing the risks associated with ship‐ ment. Organizations may install electronic devices like RFID tags on packages, to identify the products in real time, reducing the risk of thefts and increasing security.

### **3.1. Packaging system and operations management**

In recent years, packaging design has developed into a complete and mature communica‐ tion discipline [24]. Clients now realize that packages can be a central and critical element in the development of an effective brand identity. The packaging system fulfils a complex ser‐ ies of functions, of which communication is only one. Ease of processing and handling, as well as transport, storage, protection, convenience, and re-use are all affected by packaging.

The packaging system has significant implications in OM. In order to obtain successful man‐ agement of operations, packaging assumes a fundamental role along the whole supply chain and has to be connected with logistics, marketing, production, and environment aspects. For example, logistics requires the packages to be as easy as possible to handle through all proc‐ esses and for customers. Marketing demands a package that looks nice and is the right size. Packages do not only present the product on the shelf but they also arouse consumers' ex‐ pectations and generate a desire to try out the product. Once the product is purchased, packages reassure the consumer of a product's quality and reinforce confidence [24]. Pro‐ duction requires only one size of packaging for all kinds of products in order to minimize time and labour cost. The environmental aspect demands the packaging system to be recy‐ Scholars dealing with packaging disagree about its main function: some researchers emphasize that packaging is a highly versatile

clable and to use the least material possible. Figure 5 shows the main interactions of the packaging system. Logistics

> Handle, transport, store, distribution

Figure 5. The main interactions of the packaging system **Figure 5.** The main interactions of the packaging system

Figure 5. The main interactions of the packaging system

**•** Security: packages can play an important role in reducing the risks associated with ship‐ ment. Organizations may install electronic devices like RFID tags on packages, to identify

Tertiary packaging

Secondary packaging

Primary packaging

In recent years, packaging design has developed into a complete and mature communica‐ tion discipline [24]. Clients now realize that packages can be a central and critical element in the development of an effective brand identity. The packaging system fulfils a complex ser‐ ies of functions, of which communication is only one. Ease of processing and handling, as well as transport, storage, protection, convenience, and re-use are all affected by packaging.

The packaging system has significant implications in OM. In order to obtain successful man‐ agement of operations, packaging assumes a fundamental role along the whole supply chain and has to be connected with logistics, marketing, production, and environment aspects. For example, logistics requires the packages to be as easy as possible to handle through all proc‐ esses and for customers. Marketing demands a package that looks nice and is the right size. Packages do not only present the product on the shelf but they also arouse consumers' ex‐ pectations and generate a desire to try out the product. Once the product is purchased, packages reassure the consumer of a product's quality and reinforce confidence [24]. Pro‐ duction requires only one size of packaging for all kinds of products in order to minimize time and labour cost. The environmental aspect demands the packaging system to be recy‐

the products in real time, reducing the risk of thefts and increasing security.

**3.1. Packaging system and operations management**

**Figure 4.** Graphical representation of the packaging system

190 Operations Management

Scholars dealing with packaging disagree about its main function: some researchers emphasize that packaging is a highly versatile marketing tool [20], while others consider it mainly an integral element of the logistics function [19; 25]. It is necessary to balance the technological and marketing aspects of packaging; indeed, packaging has a significant impact on the efficiency of both logistics (e.g. manufacturing and distribution costs, and the time required to complete manufacturing and packing operations, which affects product lead time and due-date performance to the customer) and the marketing function (e.g. product sales, shelf presentation, etc.). In recent decades, the environmental aspect has also come to be considered by companies dealing with the packaging system. According to Johansson [26], the packaging system can be divided into three main functions that interact with each other: flow, market and environment (Figure 6).

The flow function consists of packaging features that contribute to more efficient handling in distribution. Packaging logistics, internal material flows, distribution, unpacking, disposal and return handling are included in this function.

**Figure 6.** The main functions of the packaging system [26]

Packaging logistics is a relatively new discipline that has developed in recent years and has gained increasing attention from the industrial and scientific community in terms of the strategic role of logistics in delivering competitive advantage [22; 25]. Industry and science attribute different maturity levels to the subject depending on country and culture. According to Saghir [22], the concept of packaging logistics focuses on *the synergies achieved by integrating packaging and logistics systems with the potential of increased supply chain efficiency and effectiveness, through the improvement of both packaging and logistics related activities*. A more recent definition of packaging logistics is attributed to Chan et al. [27], who describe it as the *interaction and relationship between logistics and packaging systems that improve add-on values on the whole supply chain, from raw material producers to end users, and the disposal of the empty package, by re-use, material recycling, incineration or landfill*. Both definitions ([22; 27]) focus on the importance of the packaging logistics system, mainly in order to improve the efficiency of the whole supply chain.

In the market function, features such as design, layout, communication and ergonomics that create value for the product and the brand are important for the packaging system [18]. The purpose of the market function is to satisfy customers and to increase product sales.

During recent decades the link between packaging and marketing has been analysed in depth by several authors, and packaging has been studied as a marketing instrument that can influence specific aspects such as product positioning, consumer attention, categorization and evaluation, usage behaviour, intention to purchase and brand communication [28]. This aspect is significant since the package plays the role of an important interface between the brand owner and the consumer: consumers' initial impression of product quality is often based on the impression given by the package [29].

In the current operational environment, planning innovations must take into account not only the marketing and logistics functions, but also a factor that is emerging as increasingly important: the environmental aspect. It aims to reduce the negative effects of the packaging system on the environment. Measures such as using fewer inputs for the same outputs and re-using materials facilitate the recycling of packaging [18]. Verruccio et al. [28] suggest that an increasing number of companies are choosing approaches that take care of environmental aspects. It is further established that the design of the packaging system heavily influences the environmental aspect of activities in the supply chain [29; 30-32].

With regard to packaging logistics, the use of an appropriate packaging system (in terms of functions, materials, size and shape) can improve the management of operations [18]:

**1.** *Facilitate goods handling*. This function considers the following aspects:

a. Volume efficiency: this is a function of packaging design and product shape. In order to optimize the volume efficiency of a package, this function can be split into two parts, the internal and the external filling degree. The first regards how well the space within a package is utilized; when using standardized packages with fixed sizes, the internal filling degree might not always be optimal. The external filling degree concerns the fitting of primary packages with secondary ones, and of secondary with tertiary [7]. Packages that fill each other perfectly can eliminate unnecessary handling and the risk of damage, but it is important not to be too ambitious: too much packaging may be too expensive, and there is a point where it is less costly to allow some damage than to pack for zero damage;

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073

193

b. Consumption adaptation: the quantity of packages must be adapted to consumption in order to keep costs low and not tie up unnecessary capital. Moreover, it is desirable to have flexible packages and a high turnover of the packaging stock [7];

c. Weight efficiency: the package must have the lowest possible weight, because volume and weight limit the amount that can be transported. Weight is even more important when packages are handled manually [7];

d. Handleability: the packaging must be easy to handle for the people and automatic systems working in the supply chain, as well as for final customers [7]. According to Regattieri et al. [2; 3], handleability is considered the most critical packaging quality attribute by Italian companies and users;

**2.** *Identify the product*. The need to trace the position of goods during transport to the final destination can be met in different ways, for example by installing RFID tags in packages. Thanks to this technology, it is possible to identify the position of both packages and products in real time, which leads to a reduction in thefts, an increase in security, mapping of the path of products and control of the work in progress;

**3.** *Protect the product*. The protection of the product is one of the basic functions of packaging for both companies and users [2; 3]. An unprotected product could cause product waste, which is negative from both the environmental and the economic point of view. Packages must protect products during manufacturing and assembly (within the factory), storage and picking (within the warehouse) and transport (within the vehicle) from surrounding conditions, and against loss, theft and manipulation of goods.
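The internal and external filling degrees discussed above are simple volume ratios (used volume divided by available volume). As a minimal sketch, with all numeric volumes assumed purely for illustration (the chapter gives no numeric example):

```python
# Filling degree = used volume / available volume, expressed as a fraction.
# All numeric values below are assumptions for illustration only.

def filling_degree(content_volume: float, container_volume: float) -> float:
    """Return the fraction of the container volume that is actually used."""
    if container_volume <= 0:
        raise ValueError("container volume must be positive")
    return content_volume / container_volume

# Internal filling degree: how well the product fills its primary package.
product_volume = 0.8          # litres (assumed)
primary_volume = 1.0          # litres (assumed standardized package size)
internal = filling_degree(product_volume, primary_volume)

# External filling degree: how well primary packages fill the secondary one.
packages_per_case = 12        # assumed
case_volume = 15.0            # litres (assumed)
external = filling_degree(packages_per_case * primary_volume, case_volume)

print(f"internal filling degree: {internal:.0%}")  # 80%
print(f"external filling degree: {external:.0%}")  # 80%
```

A low internal filling degree with standardized package sizes, or a low external filling degree between packaging levels, signals wasted transport and storage volume; pushing both towards 100% must still be weighed against the cost and damage-risk trade-off noted above.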



### **3.2. The role of packaging along the supply chain**

Due to the many implications of the packaging system for all the activities of an organization, as underlined in the previous paragraphs, packaging has to be considered an important competitive factor for companies seeking an efficient supply chain.

The packaging function assumes a crucial role in all activities along the supply chain (e.g. purchase, production, sales, transport, etc.). It cuts across other industrial functions such as logistics, production, marketing and environmental management. The packaging function has to satisfy different needs and requirements, seeking a trade-off between them. Considering the simplified supply chain of a manufacturing company (Figure 7), it is possible to analyse the role of the packaging function for all the parties of the supply chain.

**Table 1.** The role of packaging for the parties along the supply chain:

- ***n* Suppliers:** Suppliers are more interested in the logistics aspect of packaging than in marketing. They have to send products to the manufacturer, and their purpose is the minimization of logistics costs (transport, distribution, warehousing), so they prefer a package that is easy to handle and transport.
- **Manufacturer:** The manufacturer produces finished products to sell to the distribution centre and, indirectly, to end consumers. It is important for the manufacturer to take into account all aspects: product protection and safety, logistics, marketing and the environment.
  - *Product protection and safety:* the packages have to protect and contain the product, withstanding mechanical shocks and vibrations;
  - *Logistics:* the manufacturer has to handle, store, pick and transport the product to the distribution centre. It has to make primary, secondary and tertiary packaging that is easy to transport, minimizes logistics costs and improves the efficiency of the company;
  - *Marketing:* the manufacturer has to sell its products to the distribution centre, which in turn sells to the retailer and then to end consumers. The manufacturer is thus indirectly in contact with end consumers and has to design primary packaging (the package that users see on the shelf) that can persuade the consumer to buy that product instead of another. As Pilditch [33] said, the package is a "silent salesman", the first thing the consumer sees when buying a product;
  - *Environment:* people are more and more careful about protecting the environment. The manufacturer has to design a package that minimizes the materials used and can be re-used or recycled.

  The manufacturer has to balance the aspects described above in order to obtain an efficient supply chain.
- **Wholesaler:** The wholesaler purchases products from the manufacturer and transports them to the distribution centre. It is mainly interested in the logistics aspect of packages, since the most important functions are warehousing, picking and shipping the products. The wholesaler needs a package that is easy to handle and transport rather than one with an attractive shape and design.
- **Retailer:** The retailer has to sell products to end consumers and, for this reason, needs to consider what interests the end consumers. Marketing and environmental aspects are important: marketing because the package is a "shop window" for the product, and the environment because people are careful about minimizing pollution, preferring to buy products contained in recyclable or re-usable packages.
- ***m* End consumers:** End consumers are interested in marketing (indeed, primary and secondary packages are effective tools for marketing in real shops [33]) and in environmental aspects.

**Figure 7.** Typical supply chain of a manufacturing company (suppliers 1…*n* → manufacturer: production and warehousing → carriers → wholesaler/distribution centre: receiving, warehousing, picking and shipping → carriers → retailer/retail outlet: receiving, replenishing and shipping → end consumers 1…*m*, with reuse/recycling/disposal flows)

*N* suppliers provide raw materials to the manufacturer, which produces the finished products, sold to the distribution centre, then to the retailer and finally to *m* end consumers. In between, there are carriers that transport and distribute finished products along the supply chain. Each party has different interests and requirements regarding the function of packaging. Table 1 shows the different roles of packaging for the parties to the supply chain.

In conclusion, the packaging system plays a fundamental role along the entire supply chain, where the parties often have opposite requirements and needs. Its design can be considered an element of the OM discipline and must be integrated into the product design process, taking into account logistics, production, marketing and environmental needs.

### **3.3. The perception of packaging by Italian companies and consumers [2; 3]**

Regattieri et al. [2; 3] conducted two studies on the perception of packaging by Italian companies and users. The first deals with how Italian companies perceive and manage the packaging system, while the second discusses how Italian users perceive packaging quality attributes. The next two sections briefly present the analyses conducted.

### *3.3.1. Packaging perception by Italian companies [2]*

The study conducted by Regattieri et al. [2] is based on an explorative study of packaging development and packaging logistics, conducted in several Italian companies from different industrial sectors. After the analysis of the Italian situation, the findings were compared with the corresponding situation in Sweden. The comparison is mainly based on previous research conducted at the packaging logistics division of Lund University [34; 35].

In order to discuss the Italian industrial situation in terms of the packaging system, the authors developed a questionnaire on packaging and its relationship with logistics, the product and the environment. The quantitative content analysis of the questionnaires allowed the authors to look in more depth at the Italian situation concerning packaging.

The first interesting finding is that more than half of the companies (52.1%) think that packaging and its functions are critical, and that their sales even depend on packaging (52.2%).

**Figure 8.** Classification of packaging functions (most frequently cited: preserve/protect the product 35.9%, contain the product 25.5% and move the product 10.3%; communicate/inform, support the assembly, guarantee the product, trace the product and insure the product were each cited by less than 8% of respondents)

Another interesting analysis relates to packaging functions: protection and containment of the product are considered the most relevant functions of packaging, since they affect all activities throughout the supply chain, followed by product handling and communication (Figure 8). As with Italian companies, the packaging function most frequently mentioned by Swedish industries is the protection of products [34].

In order to obtain significant results on the product handling function, it is necessary to co-design product and packaging development. Companies are aware of the importance of integrating the development of the product with that of the package: although a large percentage of Italian companies think that the integration of packaging and product is important and could reduce costs during the product life cycle, only 34.8% of them develop the packaging and the product at the same time. Italian companies, unlike Swedish ones, usually develop packaging after designing the product.

In the same way as Swedish industries [34], Italian companies also consider logistics and transport an important packaging function. Indeed, 86.3% of companies report evaluating packaging costs from the transport point of view, mainly focusing on compatibility with vehicles and protection of goods (Figure 9). This underlines the importance of the link between packaging and logistics systems: companies know that packaging (in terms of material, shape and size) influences the storage, transport and distribution of goods. However, although most respondents compute packaging costs from the logistics point of view, only 39.1% of them report evaluating the total cost of packaging.

**Figure 9.** Classification of evaluating packaging logistics costs (86.3% of companies evaluate packaging logistics costs, 13.7% do not; the transport-related evaluation focuses on compatibility with the vehicle 40.5%, protection of the product 32.4%, traceability 13.5%, other 8.2% and anti-theft 5.4%)


The questionnaire also pointed out the importance of the relationship between packaging and the environment: 77.3% of Italian companies report using methods and applications in order to evaluate environmental aspects and 56.5% report recycling packaging materials. It is still a low percentage compared with Swedish data: in Sweden, consumer packages are largely recy‐ cled (e.g. 90% of glass, 73% of metal and 74% of paper and cardboard packages [36]).

The comparison between Italian and Swedish industries' perception of packaging has high‐ lighted both Sweden's long-standing tradition in packaging development and in packaging logistics research and practice and the increasing attention of Italian industries on the im‐ portance of packaging functions (e.g. logistics and environmental aspects). Italian compa‐ nies are following the Swedish ones in the development of a packaging logistics system and in the integration of packaging and product development, while maintaining their own characteristics. For more details, see Regattieri et al. [2].

### *3.3.2. Packaging perception by Italian customers [3]*

The second analysis conducted by Regattieri et al. [3] is based on an explorative study conducted through a questionnaire distributed to Italian users. In order to understand how customer satisfaction may be increased, the authors analysed Italian consumers' perception of packaging quality attributes using the Theory of Attractive Quality, developed by Kano et al. in 1984 [37]. The findings are then compared with those of Swedish customers [38].

Kano et al. [37] defined a quality perspective in which quality attributes are divided into different categories, based on the relationship between the physical fulfilment of a quality attribute and the perceived satisfaction with that attribute. The five categories are attractive, one-dimensional, must-be, indifferent and reverse quality. Each quality attribute can be satisfied or dissatisfied independently, and it can move from one status to another as customers' perspectives change. The packaging quality attributes are classified into three entities: *technical* (e.g. protection of the product, use of recyclable materials), *ergonomic* (everything relating to adaptation to human behaviour when using the product, e.g. ease of grip, ease of opening, user-friendliness) and *communicative* (the packaging's ability to communicate with customers, e.g. use of symbols, instructions for using the packaging, brand communication).

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073

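The Kano-style classification and the better/worse averages described below can be sketched programmatically. This is a minimal illustration, not code from the original study: the answer scale and the evaluation-table mapping are a commonly used simplified variant of the Berger et al. table, and the response data are hypothetical.

```python
# Sketch of a Kano-style classification of packaging quality attributes.
# Each respondent answers a functional question ("How do you feel if the
# package HAS this attribute?") and a dysfunctional one ("...if it does NOT?");
# the answer pair maps to a category: A (attractive), O (one-dimensional),
# M (must-be), I (indifferent), R (reverse) or Q (questionable).
from collections import Counter

# Simplified evaluation table; answers are "like", "must-be", "neutral",
# "live-with" or "dislike". The published table (Figure 10) is more nuanced.
def classify(functional: str, dysfunctional: str) -> str:
    if functional == "like":
        return {"like": "Q", "dislike": "O"}.get(dysfunctional, "A")
    if functional == "dislike":
        return {"like": "R", "dislike": "Q"}.get(dysfunctional, "R")
    # "must-be" / "neutral" / "live-with" on the functional side:
    if dysfunctional == "dislike":
        return "M"
    if dysfunctional == "like":
        return "R"
    return "I"

def better_worse(answers):
    """Better = (A+O)/(A+O+M+I); Worse = (M+O)/(A+O+M+I) for one attribute j."""
    c = Counter(classify(f, d) for f, d in answers)
    total = c["A"] + c["O"] + c["M"] + c["I"]
    return (c["A"] + c["O"]) / total, (c["M"] + c["O"]) / total

# Hypothetical responses for one attribute, e.g. "ease of opening".
answers = [("like", "dislike"), ("like", "neutral"), ("must-be", "dislike")]
better, worse = better_worse(answers)   # each point (better, worse) is one
                                        # attribute in the Worse-Better Diagram
```

Repeating `better_worse` over all m attributes yields the coordinates plotted in the Worse-Better Diagram of Figure 11.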

The questionnaire is made up of three parts:

**•** General information about the customers;

**•** Functional and dysfunctional questions about the packaging quality attributes. The classification into *attractive* (A), *one-dimensional* (O), *must-be* (M), *indifferent* (I), *reverse* (R) and *questionable* (Q) (Q responses include sceptical answers [37]) is made using an evaluation table (Figure 10), adapted by Löfgren and Witell [38] from Berger et al. [39];

**•** Level of importance of the packaging characteristics: customers had to assign a value between 1 (not important) and 10 (very important) to each packaging quality attribute.

**Figure 10.** Evaluation table to classify packaging quality attributes (table adapted by [38] from [39])

The analysis of the questionnaires shows that Italian users are mainly interested in the ergonomic entity, made up of packaging characteristics that permit easy handling of the product. Italians believe that the most important packaging function is protection of the product, in line with the traditional role that has always been attributed to packaging. For each packaging quality attribute, better and worse average values are calculated, indicating whether customer satisfaction can be increased by satisfying a certain requirement (better) or whether fulfilling this requirement may merely prevent customer dissatisfaction (worse) [39]:

$$\text{Better average} = \frac{\sum_{i=1}^{n} (A+O)}{\sum_{i=1}^{n} (A+O+M+I)} \quad \forall j \qquad \text{Worse average} = \frac{\sum_{i=1}^{n} (M+O)}{\sum_{i=1}^{n} (A+O+M+I)} \quad \forall j$$

where i=1,…,n is the number of responses for each packaging quality attribute and j=1,…,m represents the packaging quality attributes.

Figure 11 shows the Worse-Better Diagram for Italian users.

**Figure 11.** Worse-Better diagram for Italian perception of packaging quality attributes (better average on the horizontal axis, worse average on the vertical axis; attributes are grouped into technical, ergonomic and communicative entities across the attractive, one-dimensional, indifferent and must-be regions)

The Worse-Better Diagram focuses on the technical, ergonomic and communicative entities. Contrary to the ergonomic and communicative entities, it is not possible to identify a definite cluster for the technical group, since its packaging quality attributes are scattered in the diagram, moving from one-dimensional (e.g. recyclable materials) to indifferent (e.g. additional functions) to must-be (e.g. protection of the product). The ergonomic and communicative entities form definite clusters in the Worse-Better Diagram: the packaging quality attributes belonging to the ergonomic entity are mainly classified as one-dimensional. They are distinctive attributes that customers consider during the purchase of a product when comparing different brands. Italian customers locate the communicative quality attributes in the middle of the diagram: they delineate a specific cluster, but the dimension to which they belong is not clear.

Another important analysis is the level of importance attributed by Italian users to each packaging quality attribute. The highest values of importance are assigned to the protection of the product (9.59), hygiene (9.52) and open-dating (9.47). Italian customers seem to be interested neither in the aesthetics of packaging (attractive and nice-looking print and aesthetic appeal have low levels of importance: 4.52 and 5.00 respectively) nor in additional functions (5.80).

The comparison with the Swedish results [38] shows that Italians and Swedes behave similarly in terms of perception of packaging quality attributes. Both consider the ergonomic quality characteristics the most significant packaging attributes, and the protection of the product the most important packaging function. Italians also perceive the use of recyclable material as another important packaging attribute, in line with the growing importance of environmental considerations. Neither Italians nor Swedes place importance on aesthetics. For more details, see Regattieri et al. [3].

### **3.4. A mathematical model for packaging cost evaluation**

As the previous paragraphs have underlined, the packaging system has numerous implications along the supply chain (e.g. marketing, production, logistics, purchasing). In order to define optimal management of the packaging system, it is necessary to evaluate the total packaging cost, made up of purchasing, manufacturing, transport, labour and management costs, among others. The study conducted by Regattieri et al. [2] underlines that most companies do not estimate the total packaging cost and, confirming this, the literature lacks a complete function for calculating the total cost of packaging in a company. For this reason, the authors have developed a complete mathematical model, considering all the cost parameters regarding the packaging system (primary, secondary and tertiary packages and accessories) along the whole supply chain of a manufacturing company.

The model represents added value for companies seeking to estimate the total cost of their packaging system and consequently its impact on total company costs. Moreover, it makes it possible to find overlooked and oversized packaging cost factors: the former should be introduced into the calculation of the total packaging cost, while the latter could be reduced or eliminated.

Figure 12 shows the simplified supply chain of a manufacturing company.

**Figure 12.** Simplified supply chain of a manufacturing company (suppliers 1,…,s deliver raw materials and packages to the manufacturing company, which in turn delivers packed finished products to retailers 1,…,q and to the end consumers)

The manufacturing company can rent or purchase packages (primary, secondary and tertiary, and accessories) and raw materials (if the manufacturer produces packages internally) from the supplier *n*. When goods arrive, they are received in the manufacturer's receiving area, sorted and stored in the warehouse. If the company has to produce the packaging, the raw materials are picked and brought to the manufacturing area, where packages are made and subsequently stored in the warehouse. The raw materials not used during the manufacturing stage are brought back to the warehouse, creating a reverse flow of materials. When the finished products are produced, the packages are picked from the warehouse and brought to the manufacturing area. The packages not used during the manufacturing stage are brought back to the warehouse, creating a second reverse flow. The finished products are packed, put onto pallets, and delivered to the retailer *r*. The model considers the possibility of re-using packages after the delivery of the finished products to the final customers and the possible disposal of packages if they are damaged. In addition, the model considers the possibility for the manufacturer to make a profit from sub-products derived from the disposal of packages and/or from the sale of tertiary packages to the final customers.
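Before the model is formalized, its aggregation logic can be illustrated with a toy computation. The following is a minimal Python sketch with hypothetical unit costs and quantities; only a few of the model's cost and revenue terms are included, with names mirroring the parameters defined in Tables 3 and 4.

```python
# Minimal sketch of the total-packaging-cost aggregation (hypothetical data).
# Keys are (package level i, package type t); only a subset of the model's
# terms (purchase, rent, stocking, sub-product revenue) is shown for brevity.
purchase_cost = {(1, "box"): 0.30, (3, "pallet"): 6.00}   # CPUR   [€/piece]
bought_qty    = {(1, "box"): 10000, (3, "pallet"): 200}   # y      [pieces/year]
rent_cost     = {(3, "pallet"): 1.50}                     # CRENT  [€/piece]
rented_qty    = {(3, "pallet"): 500}                      # w      [pieces/year]
stock_cost    = {(1, "box"): 0.02, (3, "pallet"): 0.40}   # CSTOCK [€/piece]
sub_revenue   = {(3, "pallet"): 2.00}                     # RSUB   [€/piece]
disposed_qty  = {(3, "pallet"): 50}                       # r      [pieces/year]

def total_cost():
    """Sum cost terms over all (i, t) pairs and subtract revenues."""
    keys = set(purchase_cost) | set(rent_cost)
    c = sum(purchase_cost.get(k, 0) * bought_qty.get(k, 0) for k in keys)
    c += sum(rent_cost.get(k, 0) * rented_qty.get(k, 0) for k in keys)
    c += sum(stock_cost.get(k, 0) *
             (bought_qty.get(k, 0) + rented_qty.get(k, 0)) for k in keys)
    c -= sum(sub_revenue.get(k, 0) * disposed_qty.get(k, 0) for k in keys)
    return c

ctot = total_cost()  # 3000 + 1200 + 750 + 200 + 280 - 100 = 5330.0
```

The full model of equation (2) extends this pattern with the supplier index n, the retailer index r and the remaining transport, handling and reverse-logistics terms.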

Tables 2, 3 and 4 describe the indices, variables and cost parameters used in the model.

| Index | Domain | Description |
|---|---|---|
| i | 1,…,4 | Level of package: i=1 (primary package), i=2 (secondary package), i=3 (tertiary package), i=4 (accessories) |
| t | 1,…,m | Different packages for each level i |
| n | 1,…,s | Suppliers |
| r | 1,…,q | Retailers |

**Table 2.** Indices of the model

| Variable | Units | Description | Domain |
|---|---|---|---|
| xnit | [pieces/year] | Quantity of raw materials bought by the company from the supplier n to produce package i of type t. | i=1,…,4; t=1,…,m; n=1,…,s |
| x'it | [pieces/year] | Quantity of package i of type t produced by the manufacturing company from raw materials. | i=1,…,4; t=1,…,m |
| ynit | [pieces/year] | Quantity of package i of type t bought by the company from the supplier n. | i=1,…,4; t=1,…,m; n=1,…,s |
| wnit | [pieces/year] | Quantity of package i of type t rented by the company from the supplier n. | i=1,…,4; t=1,…,m; n=1,…,s |
| rit | [pieces/year] | Quantity of disposed package i of type t from which the company has a profit from sub-products. | i=1,…,4; t=1,…,m |
| urit | [pieces/year] | Quantity of package i of type t sold by the company to the retailer r. | i=1,…,4; t=1,…,m; r=1,…,q |
| NORD | [orders/year] | Number of orders for buying raw materials and/or packages i of type t. | i=1,…,4; t=1,…,m |
| NEXT TRAN nit | [trips/year] | Number of trips of raw materials and/or packages i of type t from the supplier n to the manufacturer. | i=1,…,4; t=1,…,m; n=1,…,s |
| NINT TRAN it | [trips/year] | Number of trips of raw materials and/or packages i of type t from the manufacturer's receiving area to the warehouse. | i=1,…,4; t=1,…,m |
| NINT TRAN<sup>1</sup> it | [trips/year] | Number of trips of raw materials i of type t from the warehouse to the manufacturing area to produce packages from xit. | i=1,…,4; t=1,…,m |
| NREV INT TRAN<sup>1</sup> it | [trips/year] | Number of trips of raw materials i of type t not used during the production of packages and transported from the manufacturing area to the warehouse. | i=1,…,4; t=1,…,m |
| NINT TRAN<sup>2</sup> it | [trips/year] | Number of trips of packages i of type t produced by the manufacturer and transported from the production area to the warehouse. | i=1,…,4; t=1,…,m |
| NINT TRAN<sup>3</sup> it | [trips/year] | Number of trips of packages (produced/bought/rented) i of type t from the warehouse to the production area in order to support finished products. | i=1,…,4; t=1,…,m |
| NREV INT TRAN<sup>2</sup> it | [trips/year] | Number of trips of packages i of type t not used during the production of finished products and transported from the manufacturing area to the warehouse. | i=1,…,4; t=1,…,m |
| NREV EXT TRAN rit | [trips/year] | Number of trips of packages i of type t from the retailer r to the manufacturer. | i=1,…,4; t=1,…,m; r=1,…,q |

**Table 3.** Variables of the model

| Parameter | Nomenclature | Units | Description |
|---|---|---|---|
| CENG | Cost of Engineering | [€/year] | Cost for studying each type of packaging and for making prototypes. It includes the labour costs of engineering the product. |
| CORD | Cost of Order | [€/order] | Cost for managing the internal purchase orders if the manufacturer produces the packaging internally; otherwise it represents the purchase orders for buying and/or renting packaging from suppliers. It includes the labour costs for making the order. |
| CPUR | Cost of Purchase | [€/piece] | Purchase cost of raw materials (to produce packaging) and/or packages. |
| CRENT | Cost of Rent | [€/piece] | Cost to rent packages. |
| CEXT TRAN | Cost of External Transport | [€/travel] | Cost for transporting raw materials and/or packages from the supplier to the manufacturer. It comprises the labour costs, depreciation of vehicles (e.g. truck) and cost of the distance travelled. |
| CREC | Cost of Receiving | [€/year] | Cost for receiving raw materials and/or packages. It includes the labour costs and depreciation of vehicles (e.g. truck, forklift) used to unload products. |
| CCOND | Cost of Conditioning | [€/year] | Cost for sorting raw materials and/or packages before storing them in the warehouse. It includes the labour costs and depreciation of mechanical devices (if used), for example for unpacking and re-packing products. |
| CINT TRAN | Cost of Internal Transport | [€/travel] | Cost for transporting raw materials and/or packages from the manufacturer's receiving area to the warehouse. It includes the labour costs, depreciation of vehicles (e.g. forklift) and cost of the distance travelled. |
| CSTOCK | Cost of Stocking | [€/piece] | Cost for storing raw materials and/or packages in the warehouse. It includes the labour costs and the cost of the space for storing the packages. |
| CPICK | Cost of Picking | [€/piece] | Cost for picking raw materials from the warehouse for producing the packages. It includes the labour costs and depreciation of vehicles (e.g. forklift) for picking the products. |
| CINT TRAN<sup>1</sup> | Cost of Internal Transport1 | [€/travel] | Cost for transporting raw materials from the warehouse to the manufacturing area to produce the packages. It includes the labour costs, depreciation of vehicles (e.g. forklift) and cost of the distance travelled. |
| CMAN | Cost of Manufacturing | [€/piece] | Cost for producing packages internally; it includes the labour costs, depreciation of production plants and utilities (e.g. electricity, water, gas, etc.). |
| CREV<sup>1</sup> | Cost of Internal Reverse Logistics1 | [€/travel] | Cost of bringing the raw materials not used during manufacturing back to the warehouse. It includes CREV INT TRAN 1, the cost of transport for coming back to the warehouse (labour costs, depreciation of vehicles used (e.g. forklift), cost of the distance travelled), and CREV INT COND 1, the cost of conditioning packages to make them re-usable (labour costs and depreciation of mechanical devices (if used), for example for unpacking and re-packing products). |
| CINT TRAN<sup>2</sup> | Cost of Internal Transport2 | [€/travel] | Cost for transporting the packages produced by the company from the production area to the warehouse. It includes the labour costs, depreciation of vehicles (e.g. forklift) and cost of the distance travelled. |
| CSTOCK<sup>1</sup> | Cost of Stocking1 | [€/piece] | Cost for stocking packages produced internally by the company. It includes the labour costs and the cost of the space for storing the packages. |
| CPICK<sup>1</sup> | Cost of Picking1 | [€/piece] | Cost for picking packages (produced/bought/rented) from the warehouse. It includes the labour costs and depreciation of vehicles (e.g. forklift) for picking the packages. |
| CINT TRAN<sup>3</sup> | Cost of Internal Transport3 | [€/travel] | Cost for transporting packages from the warehouse to the manufacturing area. It includes the labour costs, depreciation of vehicles (e.g. forklift) and cost of the distance travelled. |
| CREV<sup>2</sup> | Cost of Internal Reverse Logistics2 | [€/travel] | Cost of bringing packages not used during the manufacturing of finished products back to the warehouse. It includes CREV INT TRAN 2, the cost of transport for coming back to the warehouse (labour costs, depreciation of vehicles used, cost of the distance travelled), and CREV INT COND 2, the cost of conditioning packages to make them re-usable (labour costs and depreciation of mechanical devices (if used), for example for unpacking and re-packing products). |
| CRE-USE | Cost of Re-Use | [€/year] | Cost of re-using packaging after the delivery of finished products to the customer. It includes CREV EXT TRAN, the cost of transport for coming back to the company (labour costs, depreciation of vehicles used (e.g. truck), cost of the distance travelled), and CREV EXT COND, the cost of conditioning packages to make them re-usable (labour costs and depreciation of mechanical devices (if used), for example for unpacking and re-packing products). |
| CDISP | Cost of Disposal | [€/piece] | Cost of disposing of damaged packages during the manufacturing stage. It comprises the cost of disposal and the cost of transporting damaged packages from the company to the landfill (labour costs, depreciation of vehicles used (e.g. truck), cost of the distance travelled). |
| RSUB | Revenue of Sub-Product | [€/piece] | The parameter identifies the possible gain obtained from the disposal of damaged products. |
| RUDC | | [€/piece] | The parameter identifies the possible gain obtained from the sale of (tertiary) packages to the retailer r. |
**4. E-commerce**

distance and time between individuals [40].

will represent a large share of retail markets in the future [43].

collaborating electronically on research and development projects [42].

suppliers, data collection, and data analysis processes [50].

gest how e-commerce might support functional activities.

Gunasekaran et al. [48], Internet-based e-commerce enables companies to:

Among all operations, web operations are taking on an important role in the global trend of the purchasing process. During recent years, more and more people have begun to use the Internet and to buy a wide range of goods online. The World Wide Web (WWW) allows people to communicate simultaneously or asynchronously easily and effectively, shortening

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073

207

E-commerce is a new sales tool, in which consumers are able to participate in all the stages of a purchasing decision, while going through processes electronically rather than in a real shop. E-commerce is the process of trading goods, information, or services via computer networks including the Internet [41; 42]. There is an increasing consensus that e-commerce

E-commerce channels in traditional companies have changed their operations and business strategy. That impact has been described by three main issues: integration, customization, and internationalization. First, e-commerce networks improve value chain integration by re‐ ducing transaction costs, facilitating JIT delivery, and improving information collection and processing [41; 42]. Secondly, e-commerce databases and direct links between producers and customers support high levels of product and service customization [44]. Finally, the Inter‐ net's international scope allows small companies to reach customers worldwide [45; 46].

As the Internet becomes more popular, e-commerce promises to become a mainstay of mod‐ ern business [47]. There are dozens of e-commerce applications such as home banking, shop‐ ping in online stores and malls, buying stocks, finding a job, conducting an auction and

According to Gunasekaran et al. [48], e-commerce supports functional activities in organiza‐ tion: marketing, purchasing, design production, sales and distribution, human resource management, warehousing and supplier development. For example, the advent of e-com‐ merce has changed marketing practice [48]. E-commerce systems should provide sure access to use, overcoming differences in time to business, location, and language between suppliers and customers and at the same time support the entire trading process in Business to Busi‐ ness (B2B) e-commerce [49]. Communication and data collection constraints are reduced with web-based production of goods and services. Using database management, data ware‐ house, and data mining technologies, the web can facilitate interaction with customers and

Table 5 [48] summarises e-commerce applications and e-commerce tools and systems to sug‐

The open standard of the Internet ensures that large organizations can easily extend their trading communities, by increasing the efficiency of their business operations. According to

**•** Shorten procurement cycles through the use of online catalogues, ordering, and payment;

**Table 4.** Cost parameters of the model

Equation (1) introduces the general formula of the model.

$$\begin{aligned} \mathsf{C}\_{\text{TOT}} &= \mathsf{C}\_{\text{ENG}} + \mathsf{C}\_{\text{ORD}} + \mathsf{C}\_{\text{PLR}} + \mathsf{C}\_{\text{RENT}} + \mathsf{C}\_{\text{EXT}\text{ }\text{TRAN}} + \mathsf{C}\_{\text{REC}} + \\ &+ \mathsf{C}\_{\text{COND}} + \mathsf{C}\_{\text{INT }\text{TRAN}} + \mathsf{C}\_{\text{STOCK}} + \mathsf{C}\_{\text{PICK}} + \mathsf{C}\_{\text{INT }\text{TRAN}^{1}} + \mathsf{C}\_{\text{MAN}} + \mathsf{C}\_{\text{REV}^{1}} + \\ &+ \mathsf{C}\_{\text{INT }\text{TRAN}^{2}} + \mathsf{C}\_{\text{STOCK}^{1}} + \mathsf{C}\_{\text{PICK}^{1}} + \mathsf{C}\_{\text{INT }\text{TRAN}^{3}} + \mathsf{C}\_{\text{REV}^{2}} + \mathsf{C}\_{\text{RE-MSE}} + \mathsf{C}\_{\text{DISP}} \cdot \mathsf{R}\_{\text{DDC}} \end{aligned} \tag{1}$$

Equation (2) presents the mathematical model, explaining each cost parameter in detail.

*CTOT* =(∑ *i*=1 4 ∑ *t*=1 *m CENG it*) + (*NORD* ⋅ ∑ *i*=1 4 ∑ *t*=1 *m CORD it*) + ( ∑ *n*=1 *s* ∑ *i*=1 4 ∑ *t*=1 *m CPUR nit* ⋅ (*xnit* + *ynit*)) + +( ∑ *n*=1 *s* ∑ *i*=1 4 ∑ *t*=1 *m CRENT nit* ⋅*wnit*) + ( ∑ *n*=1 *s* ∑ *i*=1 4 ∑ *t*=1 *m CEXT TRAN nit* ⋅ *NEXT TRAN nit*) + +(∑ *i*=1 4 ∑ *t*=1 *m CREC it*) + (∑ *i*=1 4 ∑ *t*=1 *m CCOND it*) + (∑ *i*=1 4 ∑ *t*=1 *m C INT TRAN it* ⋅ *NINT TRAN it*) + +(∑ *i*=1 4 ∑ *t*=1 *m CSTOCK it* ⋅ (*xit* + *yit* + *wit*)) + (∑ *i*=1 4 ∑ *t*=1 *m C PICK it* ⋅ *xit*) + +(∑ *i*=1 4 ∑ *t*=1 *m CINT TRAN* <sup>1</sup> *it* ⋅ *NINT TRAN* <sup>1</sup> *it*) + (∑ *i*=1 4 ∑ *t*=1 *m CMAN it* ⋅ *xit* ' ) + +((∑ *i*=1 4 ∑ *t*=1 *m CREV INT TRAN* <sup>1</sup> *it* ⋅ *NREV INT TRAN* <sup>1</sup> *it*) + (∑ *i*=1 4 ∑ *t*=1 *m CREV INT COND*<sup>1</sup> *it*)) + +(∑ *i*=1 4 ∑ *t*=1 *m CINT TRAN* <sup>2</sup> *it* ⋅ *NINT TRAN* <sup>2</sup> *it*) + (∑ *i*=1 4 ∑ *t*=1 *m CSTOCK* <sup>1</sup> *it* ⋅ *xit* ' ) + +(∑ *i*=1 4 ∑ *t*=1 *m CPICK* <sup>1</sup> *it* <sup>⋅</sup> (*xit* ' + *yit* + *wit*)) + (∑ *i*=1 4 ∑ *t*=1 *m CINT TRAN* <sup>3</sup> *it* ⋅ *NINT TRAN* <sup>3</sup> *it*) + +((∑ *i*=1 4 ∑ *t*=1 *m CREV INT TRAN* <sup>2</sup> *it* ⋅ *NREV INT TRAN* <sup>2</sup> *it*) + (∑ *i*=1 4 ∑ *t*=1 *m CREV INT COND*<sup>2</sup> *it*)) + +((∑ *r*=1 *q* ∑ *i*=1 4 ∑ *t*=1 *m CREV EXT TRAN rit* ⋅ *NREV EXT TRAN riit*) + (∑ *i*=1 4 ∑ *t*=1 *m CREV EXT COND it*)) + +(∑ *i*=1 4 ∑ *t*=1 *m CDISP it*) - (∑ *i*=1 4 ∑ *t*=1 *m RSUB it* ⋅ *rit*) - (∑ *r*=1 *q* ∑ *i*=1 4 ∑ *t*=1 *m RUDC rit* ⋅*urit*) (2)

The mathematical model gives companies a complete tool for analysing total packaging costs, identifying opportunities for packaging cost reduction and, consequently, minimizing the impact of the total packaging cost on total company costs.
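To make the structure of the model concrete, the sketch below evaluates a drastically simplified instance of the total-cost summation in Python: fixed cost terms are summed directly, volume-driven terms are multiplied by their drivers, and the revenue terms are subtracted, mirroring the sign structure of Equations (1) and (2). The per-index summations over packaging levels and periods are collapsed into single aggregates here, and every numerical value is hypothetical.

```python
# Minimal sketch of the total-cost structure of the packaging model:
# C_TOT = (fixed cost terms) + (unit cost x volume terms) - (revenue terms).
# All figures are illustrative; the model's full i, t, n, r indices are
# collapsed into single aggregate values for readability.

def total_packaging_cost(fixed_costs, unit_costs, volumes, revenues):
    """Sum fixed terms, add volume-driven terms, subtract revenue terms."""
    c_fixed = sum(fixed_costs.values())
    c_variable = sum(unit_costs[k] * volumes[k] for k in unit_costs)
    r_total = sum(revenues.values())
    return c_fixed + c_variable - r_total

fixed = {"C_ENG": 12_000.0, "C_REC": 3_500.0, "C_COND": 1_800.0}
unit = {"C_PUR": 0.45, "C_STOCK": 0.08, "C_PICK": 0.05, "C_EXT_TRAN": 0.30}
vols = {"C_PUR": 50_000, "C_STOCK": 50_000, "C_PICK": 48_000, "C_EXT_TRAN": 1_200}
gains = {"R_SUB": 900.0, "R_UDC": 1_400.0}

c_tot = total_packaging_cost(fixed, unit, vols, gains)
print(f"C_TOT = {c_tot:,.2f} EUR")  # C_TOT = 44,260.00 EUR
```

Extending the sketch to the full model is a matter of nesting the sums over packaging levels *i*, periods *t*, suppliers *n* and return channels *r* instead of using scalar aggregates.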

| **Parameter** | **Nomenclature** | **Units** | **Description** |
|---|---|---|---|
| RSUB | Gain from Sub-Product | [€/piece] | The parameter identifies the possible gain obtained from the disposal of damaged products. |
| RUDC | Gain from Direct Sale of Pallet | [€/piece] | This parameter identifies the possible gain obtained from the sale of tertiary packaging to the final customer. |

**Table 4.** Cost parameters of the model

### **4. E-commerce**

Among all operations, web operations are taking on an increasingly important role in the purchasing process worldwide. During recent years, more and more people have begun to use the Internet and to buy a wide range of goods online. The World Wide Web (WWW) allows people to communicate simultaneously or asynchronously, easily and effectively, shortening distance and time between individuals [40].

E-commerce is a new sales tool, in which consumers are able to participate in all the stages of a purchasing decision, while going through processes electronically rather than in a real shop. E-commerce is the process of trading goods, information, or services via computer networks including the Internet [41; 42]. There is an increasing consensus that e-commerce will represent a large share of retail markets in the future [43].

E-commerce channels in traditional companies have changed their operations and business strategy. That impact can be described by three main issues: integration, customization, and internationalization. First, e-commerce networks improve value chain integration by reducing transaction costs, facilitating JIT delivery, and improving information collection and processing [41; 42]. Secondly, e-commerce databases and direct links between producers and customers support high levels of product and service customization [44]. Finally, the Internet's international scope allows small companies to reach customers worldwide [45; 46].

As the Internet becomes more popular, e-commerce promises to become a mainstay of modern business [47]. There are dozens of e-commerce applications, such as home banking, shopping in online stores and malls, buying stocks, finding a job, conducting an auction and collaborating electronically on research and development projects [42].

According to Gunasekaran et al. [48], e-commerce supports functional activities in organizations: marketing, purchasing, design, production, sales and distribution, human resource management, warehousing and supplier development. For example, the advent of e-commerce has changed marketing practice [48]. E-commerce systems should provide secure and easy access, overcoming differences in time, location, and language between suppliers and customers, and at the same time support the entire trading process in Business to Business (B2B) e-commerce [49]. Communication and data collection constraints are reduced with web-based production of goods and services. Using database management, data warehouse, and data mining technologies, the web can facilitate interaction with customers and suppliers, data collection, and data analysis processes [50].

Table 5 [48] summarises e-commerce applications, tools and systems to suggest how e-commerce might support functional activities.

The open standard of the Internet ensures that large organizations can easily extend their trading communities, by increasing the efficiency of their business operations. According to Gunasekaran et al. [48], Internet-based e-commerce enables companies to:

**•** Shorten procurement cycles through the use of online catalogues, ordering, and payment;


**•** Reduce development cycles and accelerate time-to-market through collaborative engineering, product, and process design;

**•** Gain access to worldwide markets at a fraction of traditional costs;

**•** Drastically reduce purchasing and production cycles;

**•** Significantly increase the speed of communication, especially international communication;

**•** Reduce the cost of communication, which in turn can reduce inventory and purchasing costs;

**•** Provide a quick and easy way of exchanging information about a company and its products, both internally and outside the organization;

**•** Promote a closer relationship with customers and suppliers.

Block and Segev [51] suggest the following e-commerce impacts on marketing:

**•** Product promotion: e-commerce enhances the promotion of products and services through direct information and interactive contact with customers;

**•** New sales channels: e-commerce creates a new distribution channel for existing products, owing to its direct support of research on customers and the bidirectional nature of communication;

**•** Direct savings: the cost of delivering information to customers by Internet results in substantial savings. Greater savings are also made in the direct delivery of digitized products compared to the costs of traditional delivery;

**•** Reduced cycle time: the delivery time for digitized products and services can be reduced. Also, the administrative work related to physical delivery, especially across international borders, can be reduced significantly;

**•** Customer service: it can be greatly enhanced by making it possible for customers to find detailed information online. In addition, intelligent agents can answer standard e-mail questions in a few seconds.

| **Functional areas** | **E-commerce applications** | **E-commerce tools and systems** |
|---|---|---|
| Marketing | Product promotion, new sales channels, direct savings, reduced cycle time, customer services. | B2B e-commerce, Internet ordering, website for the company. |
| Purchasing | Ordering, fund transfer, supplier selection. | EDI, Internet-purchasing. |
| Design | Customer feedback, research on customer requirements, product design, quality function deployment, data mining and warehousing. | WWW integrated CAD, Hyperlinks, 3D navigation, Internet for data and information exchange. |
| Production | Production planning and control, scheduling, inventory management, quality control. | B2B e-commerce, MRP, ERP, SAP. |
| Sales and distribution | Internet sales, selection of distribution channels, transportation, scheduling, third party logistics. | Electronic funds transfer, bar-coding system, ERP, WWW integrated inventory management, Internet delivery of products and services. |
| Human resource management | E-recruitment, benefit selection and management, training and education using WWW. | E-mails, interactive web sites, WWW based multimedia applications. |
| Warehousing | Inventory management, forecasting, scheduling of work force. | EDI, WWW integrated inventory management. |
| Supplier development | Partnership, supplier development. | WWW assisted supplier selection, e-mails, research on suppliers and products with WWW and intelligent agents. |

**Table 5.** E-commerce applications areas, tools and systems [48]

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073

### **5. Packaging and e-commerce in operations management**

Every year Internet-based companies ship millions of packages throughout the world [24]. Online shopping influences packaging and its interactions with industrial functions, mainly with marketing. The more people shop online, the more the role and function of packaging change, since the shelf presentation of the product becomes less important [24]. Visser [24] stated that it is difficult to translate the existing packaging design and marketing tactics used for the traditional way of buying in a real shop into online retailing. E-commerce requires a new paradigm for the entire product packaging system. For example, in a real shop the traditional primary package is a good agent for any product, not only because of the text descriptions, but also for its visual communication. It can effectively deliver product information and brand identity, and is a good cognitive agent for recognition. In an online shop, users cannot directly see the package nor touch the product, but other characteristics such as protection and re-usability for efficient take-back of products take on great importance [40]. The direct feeling with customers is less important since the contact is mediated by the computer.

The Internet does not determine the design of packages. However, if online shopping is becoming more common, packaging design must be reconsidered [24]. The changing role of packaging in the purchase of a product makes it desirable and possible to give more attention to the consumer's perception of a brand while the user is using it, and less attention to its shelf presentation. Retailers that sell online have to consider packages as a means of marketing and disseminating information instead of a mere covering for a product [24].
The advent of e-commerce has also had several implications for logistics and the environment. From the logistics point of view, packaging has to increase its function of protection and covering of products, since products have to be transported to reach the customer. The theme of reverse logistics takes on great importance, since customers can return wrong and/or unsuitable products. The advent of Internet distribution produces significant savings in shipping and can facilitate delivery. Even those who use transportation can use Internet-based tools to increase customer service. Web-based order tracking has become commonplace. It allows customers to trace the shipment of their orders without having to contact the shipper directly [48]. Several electronic tools, like Electronic Data Interchange (EDI, i.e. the structured transmission of data between organizations by electronic means), can have a significant impact on the management of online packaging. EDI enables minimal stocks to be held, with consequent savings in storage, insurance, warehousing and labour costs (the reduction in manual processing reduces the need for people) [48]. The packaging system must ensure secure shipping, reduce the possibility of theft, increase security and identify where the products are in real time.
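The kind of web-based order tracking described above can be sketched as a simple lookup service: a customer submits an order identifier and receives the last recorded shipment status without contacting the shipper. All shipment records, status codes and order identifiers below are hypothetical.

```python
# Minimal sketch of a web-style order-tracking lookup: customers query the
# status of an order ID instead of contacting the shipper directly.
# Records, status codes and IDs are hypothetical.

from dataclasses import dataclass

@dataclass
class Shipment:
    order_id: str
    status: str    # e.g. "packed", "in transit", "delivered"
    location: str  # last scan point (e.g. from an RFID or barcode read)

_shipments = {
    "ORD-1001": Shipment("ORD-1001", "in transit", "Bologna hub"),
    "ORD-1002": Shipment("ORD-1002", "delivered", "customer address"),
}

def track(order_id: str) -> str:
    """Return a human-readable tracking line, or a not-found message."""
    s = _shipments.get(order_id)
    if s is None:
        return f"{order_id}: no shipment found"
    return f"{order_id}: {s.status} (last seen: {s.location})"

print(track("ORD-1001"))  # ORD-1001: in transit (last seen: Bologna hub)
```

In a real deployment the dictionary would be replaced by the shipper's database, with each scan of an RFID tag or barcode appending a new status event.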

From the environmental point of view, packaging in e-commerce has very similar requirements to traditional shopping, such as the use of recyclable materials, reduction of the amount of materials used, the possibility to re-use packages in the case of products returned by customers, and disposal of damaged packages with the minimum production of pollution.

Table 6 shows the main interactions between packaging and other industrial issues in both real and online shopping.

| | **Real shop** | **Online shop** |
|---|---|---|
| Marketing | Sell, differentiate, promote, value, inform, shelf presentation, visual communication | Brand identity, means of disseminating information, product promotion |
| Logistics | Handle, transport, store, distribution | Protection and covering of the products, transport, reverse logistics, security |
| Environment | Reduction of materials used, re-use, recover, disposal | Reduction of materials, recyclable materials, re-use, disposal |

**Table 6.** Packaging and industrial issues in real and online shops

### **6. A case study: Packaging e-commerce logistics in operations management**

This section presents a case study on an Italian wholesaler; its main activities consist of purchasing goods from suppliers and selling and distributing them to retailers that in turn sell to end consumers through a "real shop".

The wholesaler is interested in starting a new business: the e-commerce activity. The wholesaler wants to sell directly to end consumers, bypassing the retailers and, at the same time, continue the B2B transactions.

Traditionally, the wholesaler receives goods from suppliers in the receiving area; the goods are unpacked, sorted and stored in the warehouse. When a retailer asks for products, they are picked from the shelves and packed according to the retailer's order. After that, the products packed in secondary packages are loaded onto the truck and dispatched to the retailer. Finally, he sells the products to end consumers in real shops. The packages are not labelled with identification technology (e.g. barcodes, RFID, etc.). Figure 13 shows in detail the activities of the wholesaler.

**Figure 13.** The wholesaler's activities (receiving: the products are unpacked and sorted; warehousing: the products are stocked on the shelves; picking: when an order arrives from a retailer, products are picked and packed in cardboard boxes; preparing: orders are prepared for shipping, with quality control; dispatching: the packages are shipped; disposal: packaging is used for a waste-to-energy solution)

The project concerns the study of a new packaging system (in terms of material, shape, and accessories used for protecting the product) to be used for online shopping. The new package has to take into account mainly the logistics aspects required by the e-commerce business.

The wholesaler has defined several requirements for the new packaging solution:

**•** Protection of the product: products contained in secondary packages have to be protected from mechanical shocks, vibrations, electrostatic discharge, compression, etc.;

**•** Handleability: the ergonomic aspect, that is everything relating to adaptation to the human physique and behaviour when using the product, has to be considered; the package has to be easy to open, easy to grip and user-friendly;

**•** Security: packages must ensure secure shipping. It is necessary to install identification technologies, like RFID tags or barcodes, in secondary packages in order to reduce thefts, increase security, and reduce the costs and time spent on the traceability of products;

**•** Respect for the environment: the package has to be recyclable, in line with the requirements of end consumers, and has to have minimum environmental impact;

**•** Re-use of the packages from the supplier when the products come back to the wholesaler.

The research activity starts from the study of several typical orders defined by the wholesaler, in order to determine the best packaging configurations that optimize the combination of logistics, protection of the product and re-use of packages. The wholesaler decided to re-use the cardboard boxes in which the products are sent by suppliers. This solution minimizes the packaging system costs and reduces the environmental impact. According to these considerations, Figure 14 shows an example of the secondary package chosen.

**Figure 14.** The typical cardboard box used as secondary package

After that, the accessories are chosen in order to protect products from mechanical shocks, vibrations and compression during transport. Pluriball, polystyrene and interior cushioning are chosen as flexible protective accessories (an example of interior cushioning is shown in Figure 15).

**Figure 15.** Accessories used for protecting products (courtesy of Soropack Group)

The authors have analysed the possibility of installing RFID tags on secondary packages in order to find out the position of the products in real time and to increase security during transport, minimizing the possibility of theft and loss of products.

The new packaging solution presents several advantages in terms of:

**•** Handleability: the package is user-friendly, easy to handle and to open;

**•** Protection of the product: the products inside the packages are protected thanks to the accessories used, which increase the protection of products, damping the shocks during transport;

**•** Security: the installation of RFID tags in the secondary packages allows the wholesaler to increase security during transport, reduce the number of thefts, and find out the position of the package at all times. This aspect may also be important for the end consumer, since he can verify the position of the product he has ordered;

**•** Respect for the environment: the packages and accessories used for the e-commerce business can be recycled (the cardboard box is paper and the interior cushioning plastic) and secondary packages are re-used: the wholesaler uses the cardboard boxes in which the products arrive from the suppliers for dispatching products to end consumers.

In order to define a new packaging solution for the e-commerce business, and according to the OM discipline, the strategic, tactical and operational levels have to be analysed. The definition of a new packaging solution for the e-commerce business, allowing transaction costs to be minimized and leading to an increase in business, is a strategic decision. The tactical management defines the main packaging requirements, and the operational level has to implement the solution. The activities of the operational level are to test the products and packages in order to verify their resistance to shocks, build the website for selling via the WWW, study the shape, materials and accessories for packages, define a package that is as easy as possible to handle and transport, and analyse the installation of RFID tags in secondary packages. Figure 16 shows in detail the decisions and operations at all levels in the pyramid of the OM decision levels.
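The search for a packaging configuration that balances logistics, protection and re-use, as described in the case study, can be caricatured as a box-selection problem: for each order, pick the smallest re-used supplier box whose volume fits the order plus its protective filler. The box sizes and the filler allowance below are hypothetical assumptions, not data from the case study.

```python
# Minimal sketch of choosing a re-used cardboard box for an order: pick the
# smallest supplier box whose volume fits the order plus protective filler.
# Box dimensions and the 20% filler allowance are hypothetical assumptions.

from typing import Optional

SUPPLIER_BOXES = [  # (name, volume in litres), sorted by volume
    ("small", 5.0),
    ("medium", 15.0),
    ("large", 40.0),
]

FILLER_FACTOR = 1.20  # reserve 20% extra volume for cushioning material

def choose_box(order_volume_l: float) -> Optional[str]:
    """Return the smallest box that fits the order with filler, else None."""
    needed = order_volume_l * FILLER_FACTOR
    for name, volume in SUPPLIER_BOXES:
        if volume >= needed:
            return name
    return None  # order must be split across several boxes

print(choose_box(3.0))   # small  (3.0 * 1.2 = 3.6  <= 5.0)
print(choose_box(12.6))  # large  (12.6 * 1.2 = 15.12 > 15.0)
```

A fuller treatment would weigh the candidate boxes against the cost parameters of the model (stocking, picking, transport) rather than volume alone.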

tect the products inside) could be divided into three main functions that interact with each other. They are flow, market and environment. The flow function consists of packaging fea‐ tures that contribute to more efficient handling during transport. The market function con‐ siders the aesthetics aspect in order to create value for the product and finally, the environment function has the purpose of reducing the negative effects of packaging on the environment. Packaging has an important role along the whole supply chain: all the parties (e.g. suppliers, manufacturers, retailers, end consumers) are interested in the packaging fea‐ tures (e.g. protection of the product, aesthetics aspects, reduction of the environmental im‐

The Important Role of Packaging in Operations Management

http://dx.doi.org/10.5772/54073

215

In order to find the optimal packaging system management, the authors have developed a complete mathematical model that represents added value for companies seeking to esti‐ mate the total costs of their packaging system and consequently its impact on total company costs. The model considers all the cost parameters regarding the packaging system, e.g. en‐

The packaging system takes on a fundamental role in online shopping. In recent years, web operations have evolved and organizations who want to start online business have to reconsider the role of packaging: from merely "shop window" in real shops, packaging has to transform into a means of information and transport. The changing role of packag‐ ing in the purchase of a product makes it desirable and possible to give more attention to the consumer's perception of a brand while he is using it, and less attention to its shelf

The correlation between packaging and e-commerce is a relatively new aspect. The case study described in Section 5 has shown the will of organizations to enter into the new ecommerce business, but also the changes that they have to make to the packaging system, since the packaging requirements of online shopping are different from those of a real shop. Organizations gain important benefits from e-commerce, such as the increase in labour cost

Several modifications have to be considered for future thinking concerning online packag‐ ing. Communicative and information functions must be built in to help consumers to identi‐ fy the products easily and to assist them in making precise decisions and reinforcing brand identity for consumers online. In addition, the ability to attract consumers' attention and in‐ cite their curiosity about the products are important points to analyse in the future in order

to increase the potential development of packages for online shopping.

1 DIN – Department of Industrial Engineering, University of Bologna, Bologna, Italy

2 DTG – Department of Management and Engineering, University of Padova, Padova, Italy

and Giulia Santarelli2

gineering cost, warehousing cost, labour cost, transport cost, etc.

pact, etc.).

presentation [24].

**Author details**

Alberto Regattieri1

savings.

**Figure 16.** The pyramid of OM's decision levels for the case study

The new solution is implemented by the wholesaler and implies several benefits: an increase in sales with minimum effort, a reduction in transaction costs and an increase in customer satisfaction thanks to the environmentally friendly packaging. Moreover, the products are now traced every time and in real time, thanks to the installation of RFID tags in secondary packages, reducing thefts, loss and increasing security.

### **7. Conclusion**

Operations Management is defined as the management function responsible for all activities directly concerned with making a product, collecting various inputs and converting them into desired outputs through operations [5]; OM discipline can be applied to manufacturing, service industries and non-profit organizations.

Over the years, new tools and elements such as TQM, JIT, and ECR have become part of the OM discipline that recognizes the need to integrate these tools and elements of the manage‐ ment system with the company's strategy. In order to manage all operations, organizations have to define a strategy, whose decisions are based on three levels: strategic, tactical and operational. Each level is integrated with the others and has to be interrelated in order to follow a common purpose. Strategic, tactical and operational decision levels are strictly con‐ nected with packaging features.

Packaging is a multidimensional function that takes on a fundamental role in organizations seeking successful management of operations. Johansson [26] stated that the packaging system (made up of primary, secondary and tertiary packaging and the accessories used to protect the products inside) can be divided into three main functions that interact with each other: flow, market and environment. The flow function consists of the packaging features that contribute to more efficient handling during transport. The market function considers the aesthetic aspect in order to create value for the product and, finally, the environment function has the purpose of reducing the negative effects of packaging on the environment. Packaging plays an important role along the whole supply chain: all the parties (e.g. suppliers, manufacturers, retailers, end consumers) are interested in the packaging features (e.g. protection of the product, aesthetic aspects, reduction of the environmental impact, etc.).

In order to find the optimal packaging system management, the authors have developed a complete mathematical model that represents added value for companies seeking to estimate the total costs of their packaging system and, consequently, its impact on total company costs. The model considers all the cost parameters regarding the packaging system, e.g. engineering cost, warehousing cost, labour cost, transport cost, etc.
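The authors' complete model is not reproduced in this chapter; as a minimal sketch of the idea, the total packaging-system cost is the sum of its cost components. The categories follow the text, but the breakdown and figures below are assumptions, not the published model:

```python
# Illustrative sketch of a total packaging-system cost estimate.
# Cost categories follow the chapter (engineering, warehousing,
# labour, transport); the values are assumed for demonstration.

def total_packaging_cost(costs: dict[str, float]) -> float:
    """Sum all packaging cost components (same period, same currency)."""
    return sum(costs.values())

costs = {
    "engineering": 12_000.0,   # design and testing of packages
    "warehousing": 30_000.0,   # storage of empty and filled packages
    "labour": 45_000.0,        # packing and handling operations
    "transport": 28_000.0,     # primary/secondary distribution
}

print(total_packaging_cost(costs))  # 115000.0
```

Comparing this total across candidate packaging solutions is what lets a company weigh the packaging system's impact on total company costs.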

The packaging system takes on a fundamental role in online shopping. In recent years, web operations have evolved and organizations that want to start an online business have to reconsider the role of packaging: from being merely a "shop window" in physical shops, packaging has to transform into a means of information and transport. The changing role of packaging in the purchase of a product makes it desirable and possible to give more attention to the consumer's perception of a brand while using the product, and less attention to its shelf presentation [24].

The correlation between packaging and e-commerce is a relatively new aspect. The case study described in Section 5 has shown the willingness of organizations to enter the new e-commerce business, but also the changes they have to make to the packaging system, since the packaging requirements of online shopping differ from those of a physical shop. Organizations gain important benefits from e-commerce, such as labour cost savings.

Several modifications have to be considered in future thinking concerning online packaging. Communicative and information functions must be built in to help consumers identify the products easily, assist them in making precise decisions and reinforce brand identity online. In addition, the ability to attract consumers' attention and incite their curiosity about the products is an important point to analyse in the future in order to increase the potential development of packages for online shopping.

### **Author details**

Alberto Regattieri1 and Giulia Santarelli2

1 DIN – Department of Industrial Engineering, University of Bologna, Bologna, Italy

2 DTG – Department of Management and Engineering, University of Padova, Padova, Italy


*The Important Role of Packaging in Operations Management* (http://dx.doi.org/10.5772/54073)

### **References**

[1] Waters D. Operations management – Producing goods and services. Addison-Wesley (eds.). Great Britain; 1996.

[2] Regattieri A., Olsson A., Santarelli G., Manzini R. An empirical survey on packaging perception for Italian companies and a comparison with Swedish situation. Proceedings of the 24th NOFOMA Conference, June 2012, Turku, Finland.

[3] Regattieri A., Santarelli G., Olsson A. The Customers' Perception of Primary Packaging: a Comparison between Italian and Swedish Situations. Proceedings of the 18th IAPRI World Packaging Conference, June 2012, San Luis Obispo, California.

[4] Drejer A., Blackmon K., Voss C. Worlds apart? – A look at the operations management area in the US, UK and Scandinavia. Scandinavian Journal of Management 2000; 16 45-66.

[5] Waller D.L. Operations management: a supply chain approach. 2nd edition. Thompson (ed.). London; 2003.

[6] Schmenner R.W., Swink M.L. On theory in operations management. Journal of Operations Management 1998; 17 97-113.

[7] Kleindorfer P.R., Van Wassenhove L.N. Strategies for building successful global businesses. In: Gatignon and Kimberley (eds.) Managing risk in global supply chains. 2004. p288-305.

[8] Hayes R.H., Wheelwright S.C. Restoring our competitive edge: competing through manufacturing. In: Wiley, New York; 1984.

[9] Hammer M. Re-engineering work: don't automate, obliterate. Harvard Business Review 1990; 68(4) 104-112.

[10] Hammer M., Champy J. Reengineering the corporation: a manifesto for business revolution. National Bestseller, 1993.

[11] Ahire S.L. Total Quality Management interfaces: an integrative framework. Management Science 1997; 27(6) 91-105.

[12] Sugimori Y., Kusunoki F., Cho F., Uchikawa S. Toyota production system and kanban system: materialization of just-in-time and respect for human systems. International Journal of Production Research 1977; 15(6) 553-564.

[13] Hamel G., Prahalad C.K. Competing for the future: break-through strategies for seizing control of your industry and creating the markets of tomorrow. Harvard Business School Press (ed.). Boston, Massachusetts; 1994.

[14] Skinner W.S. Manufacturing strategy on the "S" curve. Production and Operations Management 1996; 5(1) 3-14.

[15] Coyle J.J., Bardi E.J., Langley C.J. Jr. The management of business logistics. West Publishing Company, St Paul, MN; 1996.

[16] Shaw R. Computer aided marketing & selling. In: Butterworth Heinemann; 1991.

[17] Womack J.P., Jones D.T. Lean consumption. Harvard Business Review 2005; 83(3) 58-68.

[18] Hansson E., Olsson M. Ellos: a case study in operations management and packaging logistics. School of Economics and Commercial Law, Göteborg University, Sweden; 2000.

[19] Hellström D., Saghir M. Packaging and logistics interactions in retail supply chains. Packaging Technology and Science 2006; 20(3) 197-216.

[20] Underwood R.L. The communicative power of product packaging: creating brand identity via lived and mediated experience. Journal of Marketing Theory and Practice 2003; 11(1) 61-65.

[21] Silversson J., Jonson G. Handling time and the influence of packaging design. Licentiate Thesis, Lund University, Sweden; 1998.

[22] Saghir M. Packaging logistics evaluation in the Swedish retail supply chain. PhD Thesis, Lund University, Sweden; 2002.

[23] Long D.Y. Commercial packaging design. In: Yellow Lemon. 1982, Taipei, Taiwan.

[24] Visser E. Packaging on the web: an underused resource. Design Management Journal 2002; 62-67.

[25] Twede D. The process of packaging logistical innovation. Journal of Business Logistics 1992; 13(1) 69-94.

[26] Johansson K., Lorenszon-Karlsson A., Olsmats C., Tiliander L. Packaging logistics. Packforsk, Kista; 1997.

[27] Chan F.T.S., Chan H.K., Choy K.L. A systematic approach to manufacturing packaging logistics. The International Journal of Advanced Manufacturing Technology 2006; 29(9;10) 1088-1101.

[28] Verruccio M., Cozzolino A., Michelini L. An exploratory study of marketing, logistics, and ethics in packaging innovation. European Journal of Innovation Management 2010; 13(3) 333-354.

[29] Olsson A., Larsson A.C. Value creation in PSS design through product and packaging innovation processes. In: Sakao and Lindahl (eds.) Introduction to product/service-system design; 2009. p93-108.

[30] Nilsson F., Olsson A., Wikström F. Toward sustainable goods flows – a framework from a packaging perspective. Proceedings of the 23rd NOFOMA Conference, June 2011, Norway.

[31] Sonneveld K., James K., Fitzpatrick L., Lewis H. Sustainable packaging, how we define and measure it? 22nd IAPRI Symposium of Packaging; 2005.

[32] Svanes E., Vold M., Møller H., Kvalvåg Pettersen M., Larsen H., Hanssen O.J. Sustainable packaging design: a holistic methodology for packaging design. Packaging Technology and Science 2010; 23(2) 161-175.

[33] Pilditch J. The silent salesman. 2nd ed. In: Doble & Brendon (eds.). 1973, Plymouth.

[34] Bramklev C. A survey on the integration of product and package development. International Journal of Manufacturing Technology and Management 2010; 19(3;4) 258-278.

[35] Bjärnemo R., Jönson G., Johnsson M. Packaging logistics in product development. In: Singh J., Lew S.C., Gay R. (eds.) Proceedings of the 5th International Conference: computer integrated manufacturing technologies for new millennium manufacturing, 2000, Singapore.

[36] Helander F. Svensk Förpackningsindustri. Var är vi idag och vad påverkar utvecklingen framåt? [Swedish packaging industry: where are we today and what influences future development?] Packbridge publication, Malmö, Sweden; 2010.

[37] Kano N., Seraku N., Takahashi F., Tsjui F. Attractive quality and must-be quality. Hinshitsu 1984; 2 147-156.

[38] Löfgren M., Witell L. Kano's theory of attractive quality and packaging. The Quality Management Journal 2005; 12(3) 7-20.

[39] Berger C., Blauth R., Boger D., Bolster C., Burchill G., DuMouchel W., Poulist F., Richter R., Rubinoff A., Shen D., Timko M., Walden D. Kano's methods for understanding customer-defined quality. The Center of Quality Management Journal 1993; 2(4).

[40] Huang K.L., Rust C., Press M. Packaging design for e-commerce: identifying new challenges and opportunities for online packaging. College of Digital Design, Visual Communication Design Graduate School of Digital Content and Animation; 2009.

[41] Fraser J., Fraser N., McDonald F. The strategic challenge of electronic commerce. Supply Chain Management: An International Journal 2000; 5(1) 7-14.

[42] Turban E., Lee J., King D., Chung H.M. Electronic commerce: a managerial perspective. Prentice-Hall International (UK) Limited, London; 2000.

[43] Giovani J.C. Towards a framework for operations management in e-commerce. International Journal of Operations & Production Management 2003; 23(2) 200-212.

[44] Skjoett-Larsen T. European logistics beyond 2000. International Journal of Physical Distribution & Logistics Management 2000; 30(5) 377-387.

[45] Soliman F., Youssef M. The impact of some recent developments in e-business in the management of next generation manufacturing. International Journal of Operations & Production Management 2001; 21(5;6) 538-564.

[46] Zugelder M.T., Flaherty T.B., Johnson J.P. Legal issues associated with international internet marketing. International Marketing Review 2000; 17(3) 253-271.

[47] Altmiller J.C., Nudge B.S. The future of electronic commerce law: proposed changes to the uniform commercial code. IEEE Communications Magazine 1998; 36(2) 20-22.

[48] Gunasekaran A., Marri H.B., McGaughey R.E., Nebhwani M.D. E-commerce and its impact on operations management. International Journal of Production Economics 2002; 75 185-197.

[49] Boll S., Gruner A., Haaf A., Klas W. EMP – A database-driven electronic market place for business-to-business commerce on the internet. Distributed and Parallel Databases 1999; 7(2) 149-177.

[50] Wang F., Head M., Archer N. A relationship-building model for the web retail marketplace. Internet Research 2000; 10(5) 374-384.

[51] Block M., Segev A. Leveraging electronic commerce for competitive advantage: a business value framework. Proceedings of the Ninth International Conference on EDI-ISO. 1996, Bled, Slovenia.


**Chapter 9**


### **An Overview of Human Reliability Analysis Techniques in Manufacturing Operations**

Valentina Di Pasquale, Raffaele Iannone, Salvatore Miranda and Stefano Riemma

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55065

### **1. Introduction**

In recent years, accidents due to technical failures have decreased thanks to technological developments in redundancy and protection, which have made systems more reliable. However, it is not possible to discuss system reliability without addressing the failure rate of all of its components, including the human one, because the human error rate changes the failure rate of the components with which people interact. It is clear that the contribution of the human factor to the dynamics of accidents – both statistically and in terms of severity of consequences – is high [2].

Although valid values are difficult to obtain, estimates agree that errors committed by man are responsible for 60–90% of accidents, with the remainder attributable to technical deficiencies [2,3,4]. Accidents are, of course, the most obvious manifestations of human error in industrial systems, but minor faults can also seriously reduce operations performance in terms of productivity and efficiency. In fact, human error has a direct impact on productivity because errors affect product rejection rates, thereby increasing the cost of production and possibly reducing subsequent sales. Therefore, there is a need to assess human reliability in order to reduce the likely causes of errors [1].

The starting point of this work was to study the framework of today's methods of human reliability analysis (HRA): the quantitative methods of the first generation (such as THERP and HCR), the qualitative methods of the second (such as CREAM and SPAR-H), new dynamic HRA methods, and recent improvements to individual phases of HRA approaches. These methods have, in fact, the purpose of assessing the likelihood of human error – in industrial systems, for a given operation, in a certain interval of time and in a particular context – on the basis of models that describe, in a more or less simplistic way, the complex mechanism that lies behind the single human action that is potentially subject to error [1].

© 2013 Di Pasquale et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

numerous studies, remains difficult to fully represent in describing all the nuances that distinguish it [1]. It is abundantly clear how complex an effort has been made in the literature to propose models of human behaviour, favoring numerical values of probability of error to predict and prevent unsafe behaviours. For this reason, the study of human reliability can be seen as a specialised scientific subfield – a hybrid between psychology, ergonomics, engineer‐

An Overview of Human Reliability Analysis Techniques in Manufacturing Operations

http://dx.doi.org/10.5772/55065

223

The birth of HRA methods dates from the year 1960, but most techniques for assessment of the human factor, in terms of propensity to fail, have been developed since the mid-'80s. HRA techniques or approaches can be divided essentially into two categories: first and second generation. Currently, we come to HRA dynamic and methods of the third generation,

The first generation HRA methods have been strongly influenced by the viewpoint of proba‐ bilistic safety assessment (PSA) and have identified man as a mechanical component, thus losing all aspects of dynamic interaction with the working environment, both as a physical environment and as a social environment [33]. In many of these methods – such as Technique for Human Error Rate Prediction (THERP) [2, 3, 13–15], Accident Sequence Evaluation Program (ASEP) [16], and Human Cognition Reliability (HCR) [2] – the basic assumption is that because humans have natural deficiencies, humans logically fail to perform tasks, just as do mechanical or electrical components. Thus, HEP can be assigned based on the characteristics of the operator's task and then modified by performance shaping factors (PSF). In the first HRA generation, the characteristics of a task, represented by HEPs, are regarded as major factors; the context, which is represented by PSFs, is considered a minor factor in estimating the probability of human failure [8]. This generation concentrated towards quantification, in terms of success/failure of the action, with less attention to the depth of the causes and reasons

THERP and approaches developed in parallel – as HCR, developed by Hannaman, Spurgin, and Lukic in 1985 – describe the cognitive aspects of operator's performance with cognitive modelling of human behaviour, known as model skill-rule-knowledge (SKR) by Rasmussen (1984) [2]. This model is based on classification of human behaviour divided into skill-based,

The attention and conscious thought that an individual gives to activities taking place decreases moving from the third to first level. This behaviour model fits very well with the theory of the human error in Reason (1990), according to which there are several types of errors, depending on which result from actions implemented according to the intentions or less [2]. Reason distinguishes between: slips, intended as execution errors that occur at the level of skill; lapses, that is, errors in execution caused by a failure of memory; and mistakes, errors committed during the practical implementation of the action. In THERP, instead, wrong actions are divided into errors of *omission* and errors of *commission*, which represent, respec‐ tively, the lack of realisation of operations required to achieve the result and the execution of an operation, not related to that request, which prevents the obtainment of the result [1, 4].

rule-based, and knowledge-based, compared to the cognitive level used (see Fig. 1).

ing, reliability analysis, and system analysis [4].

understood as an evolution of previous generations.

of human behaviour, borrowed from the behavioural sciences [1].

**2.1. First generation HRA methods**

The concern in safety and reliability analyses is whether an operator is likely to make an incorrect action and which type of action is most likely [5]. The goals defined by Swain and Guttmann (1983) in discussing the THERP approach, one of the first HRA methods developed, are still valid: The objective of a human reliability analysis is 'to evaluate the operator's contribution to system reliability' and, more precisely, 'to predict human error rates and to evaluate the degradation to human–machine systems likely to be caused by human errors in association with equipment functioning, operational procedures and practices, and other system and human characteristics which influence the system behavior' [7].

The different HRA methods analysed allowed us to identify guidelines for determining the likelihood of human error and the assessment of contextual factors. The first step is to identify a probability of human error for the operation to be performed, while the second consists of the evaluation through appropriate multipliers, the impact of environmental, and the behav‐ ioural factors of this probability [1]. The most important objective of the work will be to provide a simulation module for the evaluation of human reliability that must be able to be used in a dual manner [1]:


The tool will also provide for the possibility of determining the optimal configuration of breaks through use of a methodology that, with assessments of an economic nature, allow identifi‐ cation of conditions that, in turn, is required for the suspension of work for psychophysical recovery of the operator and then for the restoration of acceptable values of reliability [1].

### **2. Literature review of HRA methods**

Evidence in the literature shows that human actions are a source of vulnerability for industrial systems, giving rise to HRA that aims to deepen the examination of the human factor in the workplace [1]. HRA is concerned with identifying, modelling, and quantifying the probability of human errors [3]. Nominal human error probability (HEP) is calculated on the basis of operator's activities and, to obtain a quantitative estimate of HEP, many HRA methods utilise performance shaping factors (PSF), which characterise significant facets of human error and provide a numerical basis for modifying nominal HEP levels [24]. The PSF are environmental factors, personal, or directed to activities that have the potential to affect performance posi‐ tively or negatively; therefore, identifying and quantifying the effects of a PSF are key steps in the process of HRA [3]. Another key step concerns interpretation and simulation of human behaviour, which is a dynamic process driven by cognitive and behavioural rules, and influenced by physical and psychological factors. Human behaviour, although analysed in numerous studies, remains difficult to fully represent in describing all the nuances that distinguish it [1]. It is abundantly clear how complex an effort has been made in the literature to propose models of human behaviour, favoring numerical values of probability of error to predict and prevent unsafe behaviours. For this reason, the study of human reliability can be seen as a specialised scientific subfield – a hybrid between psychology, ergonomics, engineer‐ ing, reliability analysis, and system analysis [4].

The birth of HRA methods dates from the year 1960, but most techniques for assessment of the human factor, in terms of propensity to fail, have been developed since the mid-'80s. HRA techniques or approaches can be divided essentially into two categories: first and second generation. Currently, we come to HRA dynamic and methods of the third generation, understood as an evolution of previous generations.

### **2.1. First generation HRA methods**

describe, in a more or less simplistic way, the complex mechanism that lies behind the single

The concern in safety and reliability analyses is whether an operator is likely to make an incorrect action and which type of action is most likely [5]. The goals defined by Swain and Guttmann (1983) in discussing the THERP approach, one of the first HRA methods developed, are still valid: The objective of a human reliability analysis is 'to evaluate the operator's contribution to system reliability' and, more precisely, 'to predict human error rates and to evaluate the degradation to human–machine systems likely to be caused by human errors in association with equipment functioning, operational procedures and practices, and other

The different HRA methods analysed allowed us to identify guidelines for determining the likelihood of human error and the assessment of contextual factors. The first step is to identify a probability of human error for the operation to be performed, while the second consists of the evaluation through appropriate multipliers, the impact of environmental, and the behav‐ ioural factors of this probability [1]. The most important objective of the work will be to provide a simulation module for the evaluation of human reliability that must be able to be used in a

**•** In the preventive phase, as an analysis of the possible situation that may occur and as

**•** In post-production, to understand what are the factors that influence human performance

The tool will also provide for the possibility of determining the optimal configuration of breaks through use of a methodology that, with assessments of an economic nature, allow identifi‐ cation of conditions that, in turn, is required for the suspension of work for psychophysical recovery of the operator and then for the restoration of acceptable values of reliability [1].

Evidence in the literature shows that human actions are a source of vulnerability for industrial systems, giving rise to HRA that aims to deepen the examination of the human factor in the workplace [1]. HRA is concerned with identifying, modelling, and quantifying the probability of human errors [3]. Nominal human error probability (HEP) is calculated on the basis of operator's activities and, to obtain a quantitative estimate of HEP, many HRA methods utilise performance shaping factors (PSF), which characterise significant facets of human error and provide a numerical basis for modifying nominal HEP levels [24]. The PSF are environmental factors, personal, or directed to activities that have the potential to affect performance posi‐ tively or negatively; therefore, identifying and quantifying the effects of a PSF are key steps in the process of HRA [3]. Another key step concerns interpretation and simulation of human behaviour, which is a dynamic process driven by cognitive and behavioural rules, and influenced by physical and psychological factors. Human behaviour, although analysed in

evaluation of the percentage of pieces discarded by the effect of human error;

system and human characteristics which influence the system behavior' [7].

human action that is potentially subject to error [1].

dual manner [1]:

222 Operations Management

so they can reduce errors.

**2. Literature review of HRA methods**

The first generation HRA methods have been strongly influenced by the viewpoint of proba‐ bilistic safety assessment (PSA) and have identified man as a mechanical component, thus losing all aspects of dynamic interaction with the working environment, both as a physical environment and as a social environment [33]. In many of these methods – such as Technique for Human Error Rate Prediction (THERP) [2, 3, 13–15], Accident Sequence Evaluation Program (ASEP) [16], and Human Cognition Reliability (HCR) [2] – the basic assumption is that because humans have natural deficiencies, humans logically fail to perform tasks, just as do mechanical or electrical components. Thus, HEP can be assigned based on the characteristics of the operator's task and then modified by performance shaping factors (PSF). In the first HRA generation, the characteristics of a task, represented by HEPs, are regarded as major factors; the context, which is represented by PSFs, is considered a minor factor in estimating the probability of human failure [8]. This generation concentrated towards quantification, in terms of success/failure of the action, with less attention to the depth of the causes and reasons of human behaviour, borrowed from the behavioural sciences [1].

THERP and approaches developed in parallel – such as HCR, developed by Hannaman, Spurgin, and Lukic in 1985 – describe the cognitive aspects of operator performance with a cognitive model of human behaviour, known as the skill-rule-knowledge (SKR) model by Rasmussen (1984) [2]. This model classifies human behaviour as skill-based, rule-based, or knowledge-based, according to the cognitive level engaged (see Fig. 1).

The attention and conscious thought that an individual gives to the activities taking place decreases moving from the knowledge-based level to the skill-based level. This behaviour model fits very well with the theory of human error of Reason (1990), according to which there are several types of errors, depending on whether they result from actions carried out according to intentions or not [2]. Reason distinguishes between: slips, intended as execution errors that occur at the skill level; lapses, that is, execution errors caused by a failure of memory; and mistakes, errors made in planning or forming the intention, even when the practical implementation of the action proceeds as intended. In THERP, instead, wrong actions are divided into errors of *omission* and errors of *commission*, which represent, respectively, the failure to carry out operations required to achieve the result, and the execution of an operation, unrelated to the one requested, which prevents the obtainment of the result [1, 4].

**Figure 1.** Rasmussen's SKR model [2].
Among the first generation techniques are: absolute probability judgement (APJ), human error assessment and reduction technique (HEART), justified human error data information (JHEDI), probabilistic human reliability analysis (PHRA), operator action tree system (OATS), and success likelihood index method (SLIM) [31,32]. Among these, the most popular and most widely used is THERP, characterised, like other first generation approaches, by an accurate mathematical treatment of probability and error rates, as well as by well-structured computer programs interfacing with fault trees and event trees for the evaluation of human error [11]. The base of THERP is event tree modelling, where each limb represents a combination of human activities, influences upon these activities, and results of these activities [3]. The basic analytical tool for the analysis of human reliability is represented with the graphics and symbols in Figure 2.
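As a toy illustration of the event tree logic (not an example from the chapter): for a sequence of independent actions without recovery paths, the overall failure probability is one minus the product of the per-action success probabilities. The HEP values below are invented.

```python
def total_failure_probability(heps):
    """Overall failure probability for independent sequential actions:
    1 minus the product of the per-action success probabilities."""
    p_success = 1.0
    for hep in heps:
        p_success *= (1.0 - hep)
    return 1.0 - p_success

# Nominal HEPs for three sequential actions (hypothetical values)
heps = [0.003, 0.01, 0.05]
print(round(total_failure_probability(heps), 5))
```

In a full THERP tree, recovery branches and dependencies between actions would modify the per-path probabilities; this sketch covers only the simplest no-recovery case.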

**Figure 2.** Scheme for the construction of a HRA-THERP event tree [2]: Each node in the tree is related to an action, the sequence of which is shown from the top downwards. Originating from each node are two branches: The branch to the left, marked with a lowercase letter, indicates success; the other, to the right and marked with the capital letter, indicates failure.

The main characteristics of these methods can be summarised as follows [9]:

**•** Low concentration on human cognitive actions (lack of a cognitive model);

**•** Emphasis on quantifying the likelihood of incorrect performance of human actions;

**•** Binary representation of human actions (success/failure);

**•** Dichotomy between errors of omission and commission;

**•** Attention on the phenomenology of human action;

**•** Indirect treatment of context.

First generation HRA methods have been shown, with experience and use, to be unable to provide sufficient prevention and to perform their duties adequately [10]. The basic criticism of the adequacy of the traditional methods is that these approaches tend to be descriptive of events: only the formal aspects of external behaviour are observed and studied in terms of errors, without considering the reasons and mechanisms that produced them at the level of cognition. These methods ignore the cognitive processes that underlie human performance and, in fact, possess a cognitive model without adequate human and psychological realism. They are often criticised for not having considered the impact of factors such as environment, organisational factors, and other relevant PSFs; for neglecting errors of commission; and for not using proper methods of expert judgement [4,10,25]. Swain remarked that "all of the above HRA inadequacies often lead to HRA analysts assessing deliberately higher estimates of HEPs and greater uncertainty bounds, to compensate, at least in part, for these problems" [4]. This is clearly not a desirable solution.

Despite the criticisms and inefficiencies of some first-generation methods, such as THERP and HCR, they are regularly used in many industrial fields, thanks to their ease of use and highly quantitative aspects.

### **2.2. Second generation HRA methods**

In the early 1990s, the need to improve HRA approaches prompted a number of important research and development activities around the world. These efforts led to much progress in first generation methods and to the birth of new techniques, identified as the second generation. These HRA methods were immediately unclear and uncertain, essentially because they were defined in terms of what they should not be – that is, they should not be like the first generation HRA methods [5]. While the first generation HRA methods are mostly behavioural approaches, the second generation HRA methods aspire to be conceptual [26]. The separation between generations is evident in the abandonment of the quantitative approach of PRA/PSA in favour of greater attention to the qualitative assessment of human error. The focus shifted to the cognitive aspects of humans, the causes of errors rather than their frequency, the study of the interaction of the factors that increase the probability of error, and the interdependencies of the PSFs [1].

Second generation HRA methods are based on a cognitive model more appropriate for explaining human behaviour. It is evident that any attempt at understanding human performance needs to include the role of human cognition, defined as "the act or process of knowing, including both awareness and judgement" by an operator [1]. From the HRA practitioner's perspective, the immediate solution for taking human cognition into consideration in HRA methods was to introduce a new category of error: "cognitive error", defined both as a failure of an activity that is predominantly of a cognitive nature and as the inferred cause of an activity that fails [4]. For example, CREAM, developed by Erik Hollnagel in 1993, maintains the division between logical causes and consequences of human error [5]. The causes of misbehaviour (genotypes) are the reasons that determine the occurrence of certain behaviours, and the effects (phenotypes) are represented by the incorrect forms of cognitive process and inappropriate actions [2,17,25].

Moreover, the second generation HRA methods have aimed at the qualitative assessment of the operator's behaviour and the search for models that describe the interaction with the production process. Cognitive models have been developed that represent the logical–rational processes of the operator and summarise their dependence on personal factors (such as stress, incompetence, etc.) and on the current situation (normal system operation, abnormal conditions, or even emergency conditions), together with models of the man–machine interface, which reflect the control system of the production process [33]. In this perspective, man must be seen in an integrated men–technology–organisation (MTO) system, or as a team of operators (men) who collaborate to achieve the same objective, intervening in the mechanical process (technology) within a system of organisation and management of the company (organisation) and, together, represent the resources available [1,6].

The CREAM operator model is more significant and less simplistic than that of first generation approaches. The cognitive model used is the contextual control model (COCOM), based on the assumption that human behaviour is governed by two basic principles: the cyclical nature of human cognition and the dependence of cognitive processes on context and working environment. The model refers to the IPS paradigm and considers separately the cognitive functions (perception, interpretation, planning, and action), their connection mechanisms, and the cognitive processes that govern their evolution [2,4,5,8]. The standardised plant analysis risk–human reliability analysis method (SPAR-H) [11,12,34] is built on an explicit information-processing model of human performance, derived from the behavioural sciences literature. An information-processing model is a representation of perception and perceptual elements, memory, sensory storage, working memory, search strategy, long-term memory, and decision-making [34]. The components of the behavioural model of SPAR-H are presented in Figure 3.

**Figure 3.** Model of human performance [12].

A further difference between generations relates to the choice and use of PSFs. None of the first generation HRA approaches tries to explain how PSFs exert their effect on performance; moreover, PSFs – such as managerial methods and attitudes, organisational factors, cultural differences, and irrational behaviour – are not adequately treated in these methods. PSFs in the first generation were mainly derived by focusing on the environmental impacts on operators, whereas PSFs in the second generation were derived by focusing on the cognitive impacts on operators [18]. The PSFs of both generations were reviewed and collected in a single taxonomy of performance influencing factors for HRA [16].

Among the methods of the second generation can be mentioned: a technique for human error analysis (ATHEANA), Cognitive Environmental Simulation (CES), Connectionism Assessment of Human Reliability (CAHR), and Méthode d'Evaluation de la Réalisation des Missions Opérateur pour la Sûreté (MERMOS) [31,32].

Many proposed second generation methods still lack sufficient theoretical or experimental bases for their key ingredients. Missing from all is a fully implemented model of the underlying causal mechanisms linking measurable PSFs or other characteristics of the context to operator response. The problem extends to the quantification side, where the majority of the proposed approaches still rely on implicit functions relating PSFs to probabilities [25]. In short, some of the key shortcomings that motivated the development of new methods remain unresolved. Furthermore, unlike first generation methods, which have been largely validated [13–15], the second generation has yet to be empirically validated [32].

There are four main sources of deficiencies in current HRA methods [3]:

**•** Lack of empirical data for model development and validation;

**•** Lack of inclusion of human cognition (i.e. need for better human behaviour modelling);

**•** Large variability in implementation (the parameters for HRA strongly depend on the methodology used);

**•** Heavy reliance on expert judgement in selecting PSFs and use of these PSFs to obtain the HEP in human reliability analysis.
### **2.3. Last generation**

In recent years, the limitations and shortcomings of the second generation HRA methods have led to further developments related to the improvement of pre-existing methods. The only method now defined as third generation is nuclear action reliability assessment (NARA) and is, in fact, an advanced version of HEART for the nuclear field. The shortcomings in the second generation, highlighted above, have been the starting point of HRA experts for new research and improvement of existing methods.


Some of the more recent studies have focused on the lack of empirical data for the development and validation of HRA models and were intended to define HRA databases, which may provide the methodological tools needed to make greater use of more types of information in future HRAs and to reduce uncertainties in the information used to conduct human reliability assessments. Currently, there are some databases for HRA analysts that contain human error data with cited sources to improve the validity and reproducibility of HRA results. Examples of such databases are the human event repository and analysis (HERA) [17] and the human factors information system (HFIS).

The PSFs are an integral part of the modelling and characterisation of errors and play an important role in the process of human reliability assessment; for this reason, in recent years, HRA experts have focused their efforts on PSFs. Despite continuing advances in research and applications, one of the main weaknesses of current HRA methods is their limited ability to model the mutual influence among PSFs, intended both as a dependency among the states (the presence) of the PSFs and as a dependency among the PSFs' influences (impacts) on human performance (Fig. 4) [20,26].

**Figure 4.** Possible types of dependency among PSFs: (A) dependency between the states (the presence) of the PSFs and (B) dependency between the state of the PSFj and the impact of PSFi over the HEP [20].

Some HRA methods – such as CREAM, SPAR-H, and IDAC – try to provide guidance on how to treat dependencies at the level of the factor assessments but do not consider that a PSF category might depend on itself and that the presence of a specific PSF might modulate the impact of another PSF on HEP; therefore, they do not adequately consider the relationships and dependencies between PSFs [20]. Instead, De Ambroggi and Trucco's (2011) study deals with the development of a framework for modelling the mutual influences existing among PSFs and a related method to assess the importance of each PSF in influencing performance of an operator, in a specific context, considering these interactions (see Fig. 5).
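The second dependency type in Figure 4 (the state of PSFj modulating the impact of PSFi on the HEP) can be sketched minimally as follows. All names and numerical values here are hypothetical, invented for illustration only.

```python
def psf_multiplier(base, modulation, other_active):
    """Impact of one PSF on the HEP, amplified when another PSF is present.

    base: multiplier of the PSF taken in isolation
    modulation: amplification applied when the modulating PSF is active
    """
    return base * modulation if other_active else base

nominal = 0.001
# Hypothetical case: time pressure (the modulating PSFj) amplifies
# the impact of stress (PSFi) on the HEP
stress = psf_multiplier(base=2.0, modulation=1.5, other_active=True)
hep = min(nominal * stress, 1.0)
print(hep)  # 0.001 * (2.0 * 1.5) = 0.003
```

Methods that ignore this modulation would apply the two multipliers independently; the point of De Ambroggi and Trucco's framework is precisely to quantify such interactions rather than assume them away.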


**Figure 5.** The procedure for modelling and evaluation of mutual influences among PSFs (De Ambroggi and Trucco, 2011).

Another limitation of current HRA methods is the strong dependence on expert opinion to assign values to the PSFs; in fact, during this assignment process, subjectivity plays an important role, causing difficulties in assuring consistency. To overcome this problem and obtain a more precise estimation, Park and Lee (2008) suggest a new and simple method: AHP–SLIM [19]. This method combines the decision-making tool AHP – a multicriteria decision method for complex problems in which both qualitative and quantitative aspects are considered, to provide objective and realistic results – with the success likelihood index method (SLIM), a simple, flexible expert-judgement method for estimating HEPs [6,19]. Therefore, through a type of HEP estimation using the analytic hierarchy process (AHP), it is possible to quantify subjective judgement and confirm the consistency of the collected data (see Fig. 6).
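The SLIM half of the combination can be sketched as follows: a success likelihood index (SLI) is computed as a weighted sum of PSF ratings, and an HEP is obtained through a log-linear calibration fitted on tasks of known HEP. The weights, ratings, and calibration constants below are invented for illustration; in AHP–SLIM, the weights would come from AHP pairwise comparisons rather than direct assignment.

```python
def sli(weights, ratings):
    """Success likelihood index: weighted sum of PSF ratings (0-1 scale)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * r for w, r in zip(weights, ratings))

def hep_from_sli(s, a, b):
    """Log-linear calibration: log10(HEP) = a * SLI + b,
    with a and b fitted from anchor tasks of known HEP."""
    return 10 ** (a * s + b)

weights = [0.5, 0.3, 0.2]   # expert-judged PSF importance (hypothetical)
ratings = [0.9, 0.4, 0.7]   # how favourable each PSF is in this task
s = sli(weights, ratings)
print(hep_from_sli(s, a=-3.0, b=-1.0))
```

A higher (more favourable) SLI yields a lower HEP, since the calibration slope `a` is negative for anchor tasks ordered that way.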

further efforts are to instill the PSF of SPAR-H in the simulation system [24]. PROCOS [21,22] is a probabilistic cognitive simulator for HRA studies, developed to support the analysis of human reliability in operational contexts complex. The simulation model comprised two cognitive flow charts, reproducing the behaviour of a process industry operator. The aim is to integrate the quantification capabilities of HRA methods with a cognitive evaluation of the

An Overview of Human Reliability Analysis Techniques in Manufacturing Operations

http://dx.doi.org/10.5772/55065

231

The model used for the configuration of the flow diagram that represents the operators is based on a combination of PIPE and SHELL. The two combined models allow for representation of the main cognitive processes that an operator can carry out to perform an action (PIPE) and

operator (see Fig. 8).

**Figure 7.** Uses of simulation and modelling in HRA [23].

**Figure 8.** Architecture of PROCOS simulator [21].

**Figure 6.** AHP–SLIM procedure scheme [19].

The real development concerns, however, are the so-called methods of reliability dynamics. Cacciabue [7] outlined the importance of simulation and modelling of human performance for the field of HRA. Specifically, simulation and modelling address the dynamic nature of human performance in a way not found in most HRA methods [23]. A cognitive simulation consists of the reproduction of a cognition model using a numerical application or computation [21,22].

As depicted in Figure 7, simulation and modelling may be used in three ways to capture and generate data that are meaningful to HRA [23]:


Concurrent to the emergence of simulation and modelling, several authors (e.g. Jae and Park 1994; Sträter 2000) have posited the need for dynamic HRA and begun developing new HRA methods or modifying existing HRA methods to account for the dynamic progression of human behaviour leading up to and following human failure events (HFEs) [23]. There is still not a tool for modelling and simulation that fully or perfectly combines all the basic elements of simulation HRA. There is, however, a significant work in progress, as for the simulator PROCOS, developed by Trucco and Leva in 2006 or for the IDAC system, which combines a realistic plant simulator with a system of cognitive simulation capable of modelling the PSF. In addition to systems such as MIDAS, in which the modelling of the error was already present, further efforts are to instill the PSF of SPAR-H in the simulation system [24]. PROCOS [21,22] is a probabilistic cognitive simulator for HRA studies, developed to support the analysis of human reliability in operational contexts complex. The simulation model comprised two cognitive flow charts, reproducing the behaviour of a process industry operator. The aim is to integrate the quantification capabilities of HRA methods with a cognitive evaluation of the operator (see Fig. 8).

**Figure 7.** Uses of simulation and modelling in HRA [23].

**Figure 6.** AHP–SLIM procedure scheme [19].

230 Operations Management

generate data that are meaningful to HRA [23]:

an estimate of the likelihood of human error;

human error probabilities (HEPs);

The real development concerns, however, are the so-called methods of reliability dynamics. Cacciabue [7] outlined the importance of simulation and modelling of human performance for the field of HRA. Specifically, simulation and modelling address the dynamic nature of human performance in a way not found in most HRA methods [23]. A cognitive simulation consists of the reproduction of a cognition model using a numerical application or computation [21,22].

As depicted in Figure 7, simulation and modelling may be used in three ways to capture and

**•** The simulation runs produce logs, which may be analysed by experts and used to inform

**•** The simulation may be used to produce estimates PSFs, which can be quantified to produce

**•** A final approach is to set specific performance criteria by which the virtual performers in the simulation are able to succeed or fail at given tasks. Through iterations of the task that systematically explore the range of human performance, it is possible to arrive at a frequency of failure (or success). This number may be used as a frequentist approximation of an HEP.

Concurrent to the emergence of simulation and modelling, several authors (e.g. Jae and Park 1994; Sträter 2000) have posited the need for dynamic HRA and begun developing new HRA methods, or modifying existing ones, to account for the dynamic progression of human behaviour leading up to and following human failure events (HFEs) [23]. No modelling and simulation tool yet fully combines all the basic elements of simulation-based HRA. Significant work is in progress, however, such as the PROCOS simulator, developed by Trucco and Leva in 2006, and the IDAC system, which couples a realistic plant simulator with a cognitive simulation system capable of modelling PSFs. In addition to systems such as MIDAS, in which error modelling was already present, further efforts aim to embed the SPAR-H PSFs in the simulation system [24]. PROCOS [21,22] is a probabilistic cognitive simulator for HRA studies, developed to support the analysis of human reliability in complex operational contexts. The simulation model comprises two cognitive flow charts reproducing the behaviour of a process-industry operator. The aim is to integrate the quantification capabilities of HRA methods with a cognitive evaluation of the operator (see Fig. 8).
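For context, SPAR-H quantifies an HEP by scaling a nominal error probability with PSF multipliers, applying an adjustment when several negative PSFs apply. The sketch below follows that general scheme, but the example task and multiplier values are placeholder assumptions, not the official SPAR-H worksheets:

```python
def spar_h_hep(nominal_hep: float, psf_multipliers: list[float]) -> float:
    """SPAR-H-style quantification: HEP = NHEP * product of PSF multipliers.

    When three or more PSFs are negative (multiplier > 1), SPAR-H applies
    an adjustment factor so the adjusted HEP stays below 1.
    """
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return (nominal_hep * composite) / (nominal_hep * (composite - 1) + 1)
    return min(nominal_hep * composite, 1.0)

# Illustrative action task: two negative PSFs (e.g. stress, ergonomics),
# multiplier values chosen for the example only.
hep = spar_h_hep(nominal_hep=0.001, psf_multipliers=[2, 5, 1, 1])
print(hep)  # 0.001 * 10 = 0.01
```

Embedding PSFs in a simulator, as described above for MIDAS, essentially means driving such multipliers from the simulated context instead of from a static analyst worksheet.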

**Figure 8.** Architecture of PROCOS simulator [21].

The model used for the configuration of the flow diagram that represents the operators is based on a combination of PIPE and SHELL. The two combined models allow for representation of the main cognitive processes that an operator can carry out to perform an action (PIPE) and describe the interaction among procedures, equipment, environment and plants present in the working environment, and the operator, as well as taking into account the possibility of interaction of the operator with other operators or supervisors (SHELL).

The IDAC model [25–30] is an operator behaviour model developed from many relevant findings of cognitive psychology, behavioural science, neuroscience, human factors, field observations, and various first- and second-generation HRA approaches. In modelling cognition, IDAC combines the effects of rational and emotional dimensions (within the limited scope of modelling the behaviour of operators in a constrained environment) through a small number of generic rules of behaviour that govern the dynamic responses of the operator. The modelled behaviour is constrained, being largely regulated through training, procedures, standardised work processes, and professional discipline; this significantly reduces the complexity of the problem compared with modelling general human response. IDAC covers the operator's various dynamic response phases, including situation assessment, diagnosis, and recovery actions in dealing with an abnormal situation. At a high level of abstraction, IDAC is composed of models of information processing (I), problem-solving and decision-making (D), and action execution (A) of a crew (C). Given incoming information, the crew model generates a probabilistic response, linking the context to the action through explicit causal chains. Due to the variety, quantity, and detail of the input information, as well as the complexity of applying its internal rules, the IDAC model can presently be implemented only through a computer simulation (see Fig. 9).
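A drastically simplified caricature of the I–D–A loop might look as follows; the mental-state variable, thresholds, and rules are invented for illustration and bear no relation to IDAC's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class MentalState:
    """Toy stand-in for IDAC's mental state: stress grows with alarms."""
    stress: float = 0.1

@dataclass
class ToyOperator:
    state: MentalState = field(default_factory=MentalState)

    def process_information(self, cue: str) -> str:   # (I) information processing
        if "alarm" in cue:
            self.state.stress = min(1.0, self.state.stress + 0.2)
        return cue

    def decide(self, info: str) -> str:               # (D) decision-making
        # Under high stress this toy operator falls back to procedures.
        return "follow_procedure" if self.state.stress > 0.5 else "monitor"

    def act(self, decision: str) -> str:              # (A) action execution
        return f"executed:{decision}"

op = ToyOperator()
for cue in ["reading_normal", "alarm_high_pressure", "alarm_low_flow", "alarm_trip"]:
    action = op.act(op.decide(op.process_information(cue)))
    print(cue, "->", action)
```

The point of the sketch is only the structure: each incoming cue passes through information processing, a decision influenced by an evolving mental state, and action execution – the causal chain from context to action that IDAC formalises probabilistically.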


An Overview of Human Reliability Analysis Techniques in Manufacturing Operations

http://dx.doi.org/10.5772/55065

233


**Figure 9.** IDAC model of operator cognitive flow (Chang and Mosleh 2007).

**Figure 10.** High-level vision of the IDAC dynamic response [25].

### **3. Literature review of rest breaks**


One of the most important factors influencing the physical and mental condition of an employee – and, thus, his or her ability to cope with work – is the degree to which employees are able to recover from fatigue and stress at work. Recovery can be defined as the period of time that an individual needs to return to the pre-stressor level of functioning following the termination of a stressor [35]. Jansen argued that fatigue should not be regarded as a discrete disorder but as a continuum ranging from the mild, frequent complaints seen in the community to the severe, disabling fatigue characteristic of burnout, overstrain, or chronic fatigue syndrome [35]. Recovery must therefore be properly positioned within this continuum, not only in the form of lunch breaks, rest days, weekends or summer holidays, but also in the simple form of breaks or micro-pauses during work shifts.

Work breaks are generally defined as "planned or spontaneous suspension from work on a task that interrupts the flow of activity and continuity" [36]. Breaks can potentially be disruptive to the flow of work and the completion of a task. The potential negative consequences for the person being interrupted include loss of available time to complete a task, temporary disengagement from the task, procrastination (i.e. excessive delays in starting or continuing work on a task), and reduced productivity. However, breaks can also serve multiple positive functions for the person being interrupted, such as stimulation for the individual performing a job that is routine or boring, opportunities to engage in activities that are essential to emotional wellbeing, job satisfaction, sustained productivity, and time for the subconscious to process complex problems that require creativity [36]. In addition, regular breaks seem to be an effective way to control the accumulation of risk during an industrial shift. The few studies on work breaks indicate that people need occasional changes during the shift, or an oscillation between work and recreation, mainly when fatigued or working continuously for an extended period [36]. A series of laboratory and workplace studies has been conducted more recently to evaluate the effects of breaks; however, there appears to be only a single recent study that examined in depth the impact of rest breaks on the risk of injury. Tucker's study [37,38] focused attention on the risk of accidents in the workplace, noting that the inclusion of work breaks can reduce this risk. Tucker examined accidents in a car assembly plant, where workers were given a 15-minute break after each 2-hour period of continuous work. The number of accidents within each of the four 30-minute periods between successive breaks was calculated, and the risk in each 30-minute period was expressed relative to that in the first 30-minute period immediately after the break. The results, shown in Figure 11, make clear that accident risk increased significantly, and more or less linearly, between successive breaks. Rest breaks thus successfully neutralised the accumulation of risk over 2 hours of continuous work: the risk immediately after a pause returned to a rate close to that recorded at the start of the previous work period. The recovery provided by breaks is, however, only short-term.
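Tucker's normalisation is straightforward to reproduce: divide the accident count in each 30-minute window by the count in the first window after the break. The counts below are hypothetical, chosen only to show the calculation:

```python
# Accident counts per 30-minute window since the last break
# (made-up numbers for illustration, not Tucker's data).
accidents = {"0-30": 20, "31-60": 27, "61-90": 34, "91-120": 42}

baseline = accidents["0-30"]  # risk is expressed relative to this window
relative_risk = {window: count / baseline for window, count in accidents.items()}

for window, rr in relative_risk.items():
    print(f"{window} min since break: relative risk = {rr:.2f}")
```

By construction the first window has a relative risk of 1.0, so any upward trend across the later windows directly shows risk accumulating between breaks.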

**Figure 11.** The trend in relative risk between breaks [38].

A 2006 study by Folkard and Lombardi showed the impact of frequent pauses in different shift systems [39]. The results of these studies confirm that breaks, even of short duration, reflect positively on the operator's work from both physical and psychological viewpoints (see Fig. 12).

**Figure 12.** Effect of breaks in different shift systems [39].


Proper design of a work–rest schedule – the frequency, duration, and timing of rest breaks – may be effective in improving workers' comfort, health, and productivity. Today, however, work breaks are not given proper consideration, even as there are ongoing efforts to create systems that better manage operations in various areas, especially in manufacturing. The analysis of the literature, in fact, reveals an almost total lack of systems for managing work breaks automatically. The only exception is software that prompts workers at video display terminals (VDTs) to take frequent breaks and recommends exercises to perform during them. The validity and effectiveness of this type of software has been demonstrated by several studies, including one by Van Den Heuvel [41], which evaluated the effects on work-related disorders of the neck and upper limbs and on the productivity of computer workers stimulated to take regular breaks and perform physical exercises using an adapted version of WorkPace (Niche Software Ltd., New Zealand), and that of McLean (2001) [40], which examined the benefits of micro-breaks in preventing the onset or progression of cumulative trauma disorders in the computerised environment, mediated using the program Ergobreak 2.2.

In future, therefore, researchers should focus their efforts on introducing break-management systems that counter the increase in accident risk during long periods of continuous work and thereby improve productivity.

### **4. Research perspectives in HRA**

The previous paragraphs described the development of HRA methods from their origin to the last generation, which offers literally dozens of HRA methods from which to choose. However, many difficulties remain: most of the techniques do not have solid empirical bases and are essentially static, unable to capture the dynamics of an accident in progress or of general human behaviour. The limitations of current methods are therefore a natural starting point for future studies and work.




As described in this paper, the path has been paved for the next generation of HRA through simulation and modelling. Human performance simulation reveals important new data sources and possibilities for exploring human reliability, but significant challenges remain to be resolved, both as regards the dynamic nature of HRA versus the mostly static nature of conventional first- and second-generation HRA methods, and as regards the weaknesses of the simulators themselves [23]. The PROCOS simulator, in particular, requires further optimisation, as noted by Trucco and Leva themselves in [21]. Additionally, sensitivity analyses have still to be performed on the main elements on which the simulator is based – blocks of the flow chart, decision block criteria, PSF importance – to test the robustness of the method [21]. Mosleh and Chang, meanwhile, are conducting studies to eliminate the weak points of IDAC outlined in [25]. The first goal is the development of a more comprehensive and realistic operator behaviour model that can be used not only for nuclear power plants but also for more general applications; this is a subject of the authors' current research effort.

Many researchers are moving towards integrating their studies with those of other researchers to optimise HRA techniques. Some future plans include, for example, extending AHP–SLIM into other HRA methods to exploit its performance [19]. The method proposed by De Ambroggi and Trucco for modelling and assessing dependent performance shaping factors through the analytic network process [20] is moving towards better identification of dependencies among PSFs using the PROCOS simulator or Bayesian networks.

Bayesian networks (BNs) represent, in particular, an important field of study for future developments. Many experts are studying these networks with the aim of exploiting their features and properties in HRA techniques [44,45]. Bayesian methods are appealing since they can combine prior assumptions about human error probability (i.e. based on expert judgement) with available human performance data. Some results already show that combining a conceptual causal model with a BN approach can not only qualitatively model the causal relationships between organisational factors and human reliability but can also quantitatively measure human operational reliability, identifying the most likely root causes of human error or prioritising them [44]. The authors of the IDAC model are currently researching this as an alternative way of calculating branch probabilities and representing PIF states; in the current method, branch probabilities depend on branch scores calculated from explicit equations reflecting the causal model, built on the influence of PIFs and other rules of behaviour.
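The appeal of combining expert judgement with performance data can be shown with the standard Beta–Binomial update, a generic illustration rather than the specific BN models of [44,45]: the expert prior on an HEP is encoded as a Beta distribution and updated in closed form with observed task outcomes.

```python
# Beta-Binomial update: the Beta distribution is the conjugate prior for a
# failure probability, so the posterior is available in closed form.
# All numbers below are illustrative assumptions.

def update_hep(a: float, b: float, failures: int, trials: int):
    """Return posterior Beta parameters and the posterior mean HEP."""
    a_post = a + failures
    b_post = b + (trials - failures)
    return a_post, b_post, a_post / (a_post + b_post)

# Expert judgement: HEP around 1e-2 -> Beta(1, 99) has mean 1/100 = 0.01.
# Observed data: 5 failures in 200 simulated or recorded task executions.
a_post, b_post, hep = update_hep(a=1, b=99, failures=5, trials=200)
print(f"Posterior mean HEP: {hep:.4f}")  # → Posterior mean HEP: 0.0200
```

The posterior mean sits between the expert prior (0.01) and the raw data frequency (5/200 = 0.025), weighted by how much evidence each side carries – exactly the qualitative behaviour the text attributes to Bayesian approaches.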

Additional research and efforts relate to the performance shaping factors (PSFs). Currently, more than a dozen HRA methods use PIFs/PSFs, but there is no standard set of PIFs among methods, and the factors are at present not defined specifically enough to ensure consistent interpretation of similar PIFs across methods. There are few rules governing the creation, definition, and usage of PIF sets. Within the HRA community, there is a widely acknowledged need for an improved HRA method with a more robust scientific basis, and there are several international efforts to collect human performance data that can be used to improve HRA [46].

Of course, many studies that are being carried out are aimed at improving the application of HRA methods in complex environments, such as nuclear power plants. The methods already developed in these areas are adapting to different situations by expanding their scope.

### **Author details**

choose. However, many difficulties remain: Most of the techniques, in fact, do not have solid empirical bases and are essentially static, unable to capture the dynamics of an accident in progress or general human behaviour. Therefore, the limitations of current methods are

As described in this paper, the path has been paved for the next generation of HRA through simulation and modelling. The human performance simulation reveals important new data sources and possibilities for exploring human reliability, but there are significant challenges to be resolved, both as regards the dynamic nature of HRA versus the mostly static nature of conventional first and second generation HRA methods both for the weakness of the simula‐ tors themselves [23]. The simulator PROCOS, in particular, requires further optimisation, as evidenced by the same Trucco and Leva in [21]. Additionally, in its development, some sensitivity analysis has still to be performed on the main elements on which the simulator is based – blocks of the flow chart, decision block criteria, PSF importance – to test the robustness of the method [21]. Mosleh and Chang, instead, are conducting their studies to eliminate the weak points of IDAC as outlined in [25]. First of all, is development of an operator behaviour model more comprehensive and realistic; it can be used not only for nuclear power plants but also for more general applications. This is a subject of current research effort by the authors. Many researchers are moving to the integration of their studies with those of other researchers to optimise HRA techniques. Some future plans include, for example, extending AHP–SLIM into other HRAs methods to exploit its performance [19]. The method proposed by De Ambroggi and Trucco for modelling and assessment of dependent performance shaping factors through analytic network process [20] is moving towards better identification of

dependencies among PSFs using the simulator PROCOS or Bayesian networks.

Bayesian networks (BNs), in particular, represent an important field of study for future developments. Many experts are studying these networks with the aim of exploiting their features and properties in HRA techniques [44, 45]. Bayesian methods are appealing since they can combine prior assumptions about human error probability (i.e. based on expert judgement) with available human performance data. Some results already show that combining a conceptual causal model with a BN approach can not only qualitatively model the causal relationships between organisational factors and human reliability but can also quantitatively measure human operational reliability, identifying and prioritising the most likely root causes of human error [44]. The authors of the IDAC model are currently investigating BNs as an alternative way of calculating branch probabilities and representing PIF states; in the current method, branch probabilities depend on branch scores calculated from explicit equations that reflect the causal model built on the influence of PIFs and other rules of behaviour. Additional research efforts concern the performance shaping factors themselves. Currently, more than a dozen HRA methods use PIFs/PSFs, but there is no standard set of PIFs shared among methods, the factors are not defined precisely enough to ensure consistent interpretation of similar PIFs across methods, and few rules govern the creation, definition, and usage of PIF sets. Within the HRA community, there is thus a widely acknowledged need for an improved HRA method, and this represents a natural starting point for future studies and work.
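The core appeal of the Bayesian methods described above – combining an expert-judgement prior with observed performance data – can be sketched with a conjugate Beta–Binomial update. This is a minimal illustration with hypothetical numbers, not the full fuzzy-BN machinery of [44]:

```python
def update_hep(prior_alpha: float, prior_beta: float,
               errors: int, opportunities: int) -> float:
    """Posterior mean of a human error probability (HEP).

    A Beta(alpha, beta) prior combined with Binomial data
    (errors out of opportunities) yields a
    Beta(alpha + errors, beta + opportunities - errors) posterior.
    """
    post_alpha = prior_alpha + errors
    post_beta = prior_beta + (opportunities - errors)
    return post_alpha / (post_alpha + post_beta)

# Expert judgement: HEP around 1e-2, encoded as a Beta(1, 99) prior.
prior_mean = 1 / (1 + 99)

# Observed human performance data: 3 errors in 100 task executions.
posterior_mean = update_hep(1, 99, errors=3, opportunities=100)

print(f"prior HEP     = {prior_mean:.4f}")      # 0.0100
print(f"posterior HEP = {posterior_mean:.4f}")  # 0.0200
```

The data pull the estimate away from the expert prior in proportion to how much evidence is available; a full BN generalises this by propagating such updates through a causal graph of organisational factors and PIFs.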

236 Operations Management

Valentina Di Pasquale, Raffaele Iannone\* , Salvatore Miranda and Stefano Riemma

Dept. of Industrial Engineering – University of Salerno, Italy

An Overview of Human Reliability Analysis Techniques in Manufacturing Operations – http://dx.doi.org/10.5772/55065

### **References**

[10] Sträter, O., Dang, V., Kaufer, B., Daniels, A.: On the way to assess errors of commission. *Reliability Engineering and System Safety*, 83 (2), 129–138 (2004).

[11] Boring, R.L., Blackman, H.S.: The origins of the SPAR-H method's performance shaping factor multipliers. In: Joint 8th IEEE HFPP/13th HPRCT (2007).

[12] Blackman, H.S., Gertman, D.I., Boring, R.L.: Human error quantification using performance shaping factors in the SPAR-H method. In: 52nd Annual Meeting of the Human Factors and Ergonomics Society (2008).

[13] Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI: Part 1 – Technique descriptions and validation issues. *Applied Ergonomics*, 27 (6), 359–373 (1996).

[14] Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI – Part 2 – Results of validation exercise. *Applied Ergonomics*, 28 (1), 17–25 (1997).

[15] Kirwan, B.: The validation of three human reliability quantification techniques – THERP, HEART and JHEDI – Part 3 – Practical aspects of the usage of the techniques. *Applied Ergonomics*, 28 (1), 27–39 (1997).

[16] Kim, J.W., Jung, W.: A taxonomy of performance influencing factors for human reliability analysis of emergency tasks. *Journal of Loss Prevention in the Process Industries*, 16, 479–495 (2003).

[17] Hallbert, B.P., Gertmann, D.I.: Using information from operating experience to inform human reliability analysis. In: International Conference on Probabilistic Safety Assessment and Management (2004).

[18] Lee, S.W., Kim, R., Ha, J.S., Seong, P.H.: Development of a qualitative evaluation framework for performance shaping factors (PSFs) in advanced MCR HRA. *Annals of Nuclear Energy*, 38 (8), 1751–1759 (2011).

[19] Park, K.S., Lee, J.: A new method for estimating human error probabilities: AHP–SLIM. *Reliability Engineering and System Safety*, 93 (4), 578–587 (2008).

[20] De Ambroggi, M., Trucco, P.: Modelling and assessment of dependent performance shaping factors through analytic network process. *Reliability Engineering & System Safety*, 96 (7), 849–860 (2011).

[21] Trucco, P., Leva, M.C.: A probabilistic cognitive simulator for HRA studies (PROCOS). *Reliability Engineering and System Safety*, 92 (8), 1117–1130 (2007).

[22] Leva, M.C., et al.: Quantitative analysis of ATM safety issues using retrospective accident data: the dynamic risk modelling project. *Safety Science*, 47, 250–264 (2009).

[23] Boring, R.L.: Dynamic human reliability analysis: benefits and challenges of simulating human performance. In: Proceedings of the European Safety and Reliability Conference (ESREL 2007) (2007).

[24] Boring, R.L.: Modelling human reliability analysis using MIDAS. In: International Workshop on Future Control Station Designs and Human Performance Issues in Nuclear Power Plants (2006).

[25] Mosleh, A., Chang, Y.H.: Model-based human reliability analysis: prospects and requirements. *Reliability Engineering and System Safety*, 83 (2), 241–253 (2004).

[26] Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 1: Overview of the IDAC model. *Reliability Engineering and System Safety*, 92, 997–1013 (2007).

[27] Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 2: IDAC performance influencing factors model. *Reliability Engineering and System Safety*, 92, 1014–1040 (2007).

[28] Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 3: IDAC operator response model. *Reliability Engineering and System Safety*, 92, 1041–1060 (2007).

[29] Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 4: IDAC causal model of operator problem-solving response. *Reliability Engineering and System Safety*, 92, 1061–1075 (2007).

[30] Mosleh, A., Chang, Y.H.: Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents – Part 5: Dynamic probabilistic simulation of the IDAC model. *Reliability Engineering and System Safety*, 92, 1076–1101 (2007).

[31] http://www.hse.gov.uk/research/rrpdf/rr679.pdf

[32] http://www.cahr.de/cahr/Human%20Reliability.PDF

[33] http://conference.ing.unipi.it/vgr2006/archivio/Archivio/pdf/063-Tucci-Giagnoni-Cappelli-MossaVerre.PDF

[34] http://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr6883/cr6883.pdf

[35] Jansen, N.W.H., Kant, I., Van den Brandt, P.A.: Need for recovery in the working population: description and associations with fatigue and psychological distress. *International Journal of Behavioral Medicine*, 9 (4), 322–340 (2002).

[36] Jett, Q.R., George, J.M.: Work interrupted: a closer look at the role of interruptions in organizational life. *Academy of Management Review*, 28 (3), 494–507 (2003).

[37] Tucker, P., Folkard, S., Macdonald, I.: Rest breaks and accident risk. *Lancet*, 361, 680 (2003).

[38] Folkard, S., Tucker, P.: Shift work, safety, and productivity. *Occupational Medicine*, 53, 95–101 (2003).

[39] Folkard, S., Lombardi, D.A.: Modelling the impact of the components of long work hours on injuries and "accidents". *American Journal of Industrial Medicine*, 49, 953–963 (2006).

[40] McLean, L., Tingley, M., Scott, R.N., Rickards, J.: Computer terminal work and the benefit of microbreaks. *Applied Ergonomics*, 32, 225–237 (2001).

[41] Van Den Heuvel, S.G., et al.: Effects of software programs stimulating regular breaks and exercises on work-related neck and upper-limb disorders. *Scandinavian Journal of Work, Environment & Health*, 29 (2), 106–116 (2003).

[42] Jaber, M.Y., Bonney, M.: Production breaks and the learning curve: the forgetting phenomenon. *Applied Mathematics Modelling*, 20, 162–169 (1996).

[43] Jaber, M.Y., Bonney, M.: A comparative study of learning curves with forgetting. *Applied Mathematics Modelling*, 21, 523–531 (1997).

[44] Li Peng-cheng, Chen Guo-hua, Dai Li-cao, Zhang Li: A fuzzy Bayesian network approach to improve the quantification of organizational influences in HRA frameworks. *Safety Science*, 50, 1569–1583 (2012).

[45] Kelly, D.L., Boring, R.L., Mosleh, A., Smidts, C.: Science-based simulation model of human performance for human reliability analysis. In: Enlarged Halden Program Group Meeting, October 2011.

[46] Groth, K.M., Mosleh, A.: A data-informed PIF hierarchy for model-based human reliability analysis. *Reliability Engineering and System Safety*, 108, 154–174 (2012).


Operations Management

*Edited by Massimiliano M. Schiraldi*

Operations Management is an area of business concerned with managing the processes that convert inputs into outputs in the form of goods and/or services. Increasingly complex environments, recent economic swings, and substantially squeezed industrial margins put extra pressure on companies, pushing decision makers to increase operations efficiency and effectiveness. This book presents the contributions of a selected group of researchers, reporting new ideas, original results and practical experiences, as well as systematizing some fundamental topics in Operations Management. Although it represents only a small sample of the research activity on Operations Management, readers from diverse backgrounds – academia, industry and research – as well as engineering students can take advantage of this volume.