**Meet the editor**

Dr. Kuodi Jian is a noted scholar in the field of computer science. He holds a BS degree in Computer Science from the University of Mary Hardin-Baylor in Texas, USA, and an MS degree in Computer Science and a PhD degree in Computer Science and Operations Research from North Dakota State University in Fargo, USA. He worked as a computer system architect at Banner Health System in Fargo, North Dakota, has served as an associate professor at Metropolitan State University (MSU) since 2003, and currently holds the role of ICS Graduate Director at MSU. Dr. Jian is active in research. He has published a book, *A Graph Planning Procedure within an Agent Architecture: Fast Planning and Distributed Agent Architecture*, the book chapters "Knowledge Management in Bio-Information Systems" and "Introductory Chapter: Real-Time Systems," and numerous journal and conference articles in the areas of algorithms, programming languages, real-time operating systems, operations research, database systems, web service-oriented architecture (SOA), artificial intelligence, computer hardware, and computer simulation.

## Contents

#### **Preface XI**



Chapter 6 **Financial Feasibility Analysis of Natura Rab Business: Case Study 87**

Karmen Pažek, Matija Kaštelan, Martina Bavec, Črtomir Rozman and Jernej Prišenk

Chapter 7 **Influence of Phosphorus Precipitation on Wastewater Treatment Processes 103**

Ján Derco, Rastislav Kuffa, Barbora Urminská, Jozef Dudáš and Jana Kušnierová

## Preface

In a nutshell, operations research (sometimes called operational research) is a decision science. It studies how to make the best decision given the constraints at hand. When applied to different fields, it takes many forms: in the military, it takes the form of strategies to win a war; in business, the form of maximizing profit; and in mathematical modeling, the form of finding an optimal solution using derivatives or the simplex algorithm of linear programming.

Operations research is an important topic. The importance of winning a war, making a maximal amount of profit, and building a correct mathematical model is obvious. Here, I want to point out another aspect of operations research that is often overlooked yet vitally important to us: efficient use of resources. Currently, the human species is consuming natural resources at an astonishing rate: resources such as crude oil, drinkable water, and safe food are in danger of exhaustion or soon will be. We must be aware and take action before it is too late. One of the solutions is to make efficient use of natural resources, and operations research offers such a tool. In this sense, the topic of the book is important and relevant to everyone.

The content of operations research discussed in this book covers a wide range of areas and has some unique features.


It is exhilarating to know that this book is the result of contributions by practitioners, researchers, scientists, and scholars from many countries, including the Slovak Republic, Slovenia, Serbia, Mexico, Portugal, Germany, and the USA. This book will be useful to a wide range of audiences: university students and professors, government policy makers, engineers, and businessmen who are interested in operations research.

> **Dr. Kuodi Jian**
> Metropolitan State University, Computer Science Faculty
> Department of Information and Computer Sciences
> Minnesota, The United States of America

#### **Introductory Chapter: Operations Research**

Kuodi Jian

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/66835

### **1. Introduction**

Operations research (sometimes referred to as management science or decision science) is a subject that deals with the art of making good decisions under specified constraints. In real life, we are often faced with complex situations for which no simple answer can be found. Thus, the topic of making good decisions (operations research) is both intriguing and relevant. Most people consider operations research a subfield of mathematics. However, as pointed out below, the criteria for good or bad decisions are often affected by culture, viewpoints, and other factors.

Other than moral issues, there are **two important keys** to making good decisions: **good information** and **the skill of making good decisions** based on the information at hand. By "good information," we mean that all essential factors are captured (tools used to obtain quality information could be verification and validation); by skilled decision making, we mean the application of appropriate solutions to different problems (e.g., to solve well-formed mathematical optimization problems, we use calculus or the simplex algorithm; to solve poorly defined problems, we use empirical trial-and-error methods or ad hoc methods). The subject of operations research covers both the acquiring of "good information" and "the skill of making good decisions." **Figure 1** shows the relationship among these entities.

Depending on the problem domain, decision-making skills can be regarded as either a science or an art. When a problem is a well-defined mathematical problem, we are able to use scientific methods such as calculus, linear programming, integer programming, dynamic programming, and the simplex algorithm; on the other hand, when a problem is poorly defined, we can only use trial-and-error, heuristic, or ad hoc methods. Since trial-and-error and ad hoc methods are non-repeatable, they are regarded as art. Making a good decision under multiple constraints (especially where different criteria are involved) is never easy; accordingly, the topic of operations research is difficult and has many variations.
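The scientific side described above can be illustrated with a small worked example. The sketch below (standard library only; the product-mix numbers are invented for illustration) uses the textbook fact that a linear program attains its optimum at a vertex of the feasible region, enumerating vertices directly instead of running a full simplex implementation:

```python
# A minimal sketch of solving a tiny 2-variable linear program by vertex
# enumeration. The product-mix data below are hypothetical.
from itertools import combinations

# maximize 3x + 5y  subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
constraints = [   # each row (a, b, c) encodes a*x + b*y <= c
    (1, 0, 4),
    (0, 2, 12),
    (3, 2, 18),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    # Solve the 2x2 system where both constraint boundaries hold with equality.
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no single intersection point
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])  # (2.0, 6.0) with profit 36.0
```

For larger problems one would of course use the simplex algorithm or a modern solver rather than enumerating vertices, but the principle shown here is the same.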

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Figure 1.** The relationship among obtaining information, decision process, and tools.

In this book, we cover a wide range of applications that employ a variety of decision-making skills (including problems that are poorly defined). Therefore, we subtitled the book "the art of making good decisions."

Since decision making is an essential part of our lives, the applications of operations research cover a wide range of areas: the military, business, mathematics, and resource allocation among competing parties, to name a few.

When focusing on decision making, operations research has a long history.

#### **2. History**

Chinese people applied good decision-making strategies to military affairs more than 2000 years ago. In the book *Master Sun's Art of War* by Master Sun (also called Sun Tzu), the important aspects of warfare are summarized in 13 chapters. The essence of the book is its strategies for motivating soldiers and leveraging tactical advantages (winning the battle of wits). The first chapter of the book discusses "Detail Assessment and Planning," and the last chapter discusses "Intelligence and Espionage." From the available evidence, we know that the book was completed sometime between 500 and 450 BC. Some of its well-known strategies are as follows [1]:

故曰:知彼知己,百戰不殆;不知彼而知己,一勝一負;不知彼,不知己,每戰必殆。

*So it is said that if you know your enemies and know yourself, you will not be put at risk even in a hundred battles.*

*If you only know yourself, but not your opponent, you may win or may lose.*

*If you know neither yourself nor your enemy, you will always endanger yourself.*

*This has been more tersely interpreted and condensed into the Chinese modern proverb:*

知己知彼,百戰不殆。 *(Zhī jǐ zhī bǐ, bǎi zhàn bù dài.)*

*If you know both yourself and your enemy, you can win numerous (literally, "a hundred") battles without jeopardy.*

Two other pieces of high-order thinking on military campaigns contained in the book are the following [2]:

*"The supreme art of war is to subdue the enemy without fighting."***―Sun Tzu, The Art of War**

*"If your enemy is secure at all points, be prepared for him. If he is in superior strength, evade him. If your opponent is temperamental, seek to irritate him. Pretend to be weak, that he may grow arrogant. If he is taking his ease, give him no rest. If his forces are united, separate them. If sovereign and subject are in accord, put division between them. Attack him where he is unprepared, appear where you are not expected."―***Sun Tzu, The Art of War**

The effectiveness of Master Sun's ideas is well tested. In his own time, Master Sun helped the ruler of his state, Wu, defeat the much stronger enemy state of Chu. Even today, many people benefit from Master Sun's strategies. The following episode, taken from the History website, illustrates the point [3]:

*Ever since The Art of War was published, military leaders have been following its advice. In the twentieth century, the Communist leader Mao Zedong said that the lessons he learned from The Art of War helped him defeat Chiang Kai-Shek's Nationalist forces during the Chinese Civil War. Other recent devotees of Sun Tzu's work include Viet Minh commanders Vo Nguyen Giap and Ho Chi Minh and American Gulf War generals Norman Schwarzkopf and Colin Powell.*

*Meanwhile, executives and lawyers use the teachings of The Art of War to get the upper hand in negotiations and to win trials. Business-school professors assign the book to their students and sports coaches use it to win games. It has even been the subject of a self-help dating guide. Plainly, this 2,500-year-old book still resonates with a 21st-century audience.*

People around the world have been using optimization for a long time. For example, the sizes of some jade items excavated from Han dynasty burial sites in China are optimal in terms of their surface areas versus their weights. Ancient Egyptians built the remarkable structures called pyramids, as shown in **Figure 2**.

These structures were built with a precise proportion and angle, namely 52.606° at the apex. When a pyramid is built that way, it will preserve certain energy. Some interesting aspects of pyramids discussed on the website are as follows [5]:

#### **Some of the effects are:**


- Food kept under the pyramid will stay fresh for two to three times longer than uncovered food. Artificial flavorings in food will lose their taste, but natural flavors are enhanced.
- The taste of foods changes; they become less bitter and acidic.
- When we take a spectrographic reading of a treated item, it will show a change in the molecular structure.
- The pyramid will dehydrate and mummify things, but it will not permit decay or mold to grow on them.
- There is also a slowing or complete stopping of the growth of microorganisms.
- Kirlian photographs of human subjects show the aura to be significantly brighter after a 15-minute exposure period.

**Figure 2.** A picture of pyramids [4].

#### **Pyramid research**:

Bill Kerell has been a pyramid researcher for about 17 years. He has performed many experiments using brine shrimp. Brine shrimp usually live for 6 to 7 weeks, but under pyramids, Bill observed that brine shrimp can survive for over a year. He also noticed that pyramid-grown shrimp grew two to three times larger than normal ones. Bill has also conducted a great deal of research with humans.

Bill and his associates also have found that hypertensive individuals become tranquilized, but lethargic people become energetic again.

All of this shows that humans have been using the knowledge of optimization for a long time, and some of their creations are still not fully understood and still hold research value for us.

However, "the first formal activities of Operations Research (OR) were initiated in England during World War II, when a team of British scientists set out to make scientifically based decisions regarding the best utilization of war materiel. After the war, the ideas advanced in military operations were adapted to improve efficiency and productivity in the civilian sector" [6].

### **3. What is a good decision?**


In this section, we answer the question "what is a good decision?" At first glance, the question appears simple, but a closer look shows that the answer is not. First, before you can answer the question, you have to understand the concept of decision criteria. Second, you need to understand the dynamics of requirements, the relationship between natural laws and decision criteria, points of view, and culture. Third, when morality is involved, there is no simple binary answer; instead, the answer becomes philosophical.

In decision science, a criterion is a reference yardstick against which the quality of a decision is measured. If the criterion is met, the decision is good; otherwise, the decision is bad. Now, you may wonder: how are decision criteria made?

#### **3.1. Relationship between natural laws and decision criteria**

Whether you admit it or not, there are many natural laws in existence in this universe, and these laws affect our decision-making processes. The effects of natural laws manifest themselves by rewarding decisions that conform to the laws and punishing decisions that violate them. Throughout the years, people have created decision criteria with natural laws in mind. In fact, natural laws determine and affect decision criteria. For example, when standing on the surface of the earth, we feel something pulling us down; we call this invisible force "gravity." The law of gravity helps to produce decision criteria that avoid falls and favor safety.

The one-directional passage of time makes death irreversible. This produces decision criteria that favor life and recoil from death.

Another example is the conservation law. In a closed system, we cannot create something from nothing, nor can we destroy something and make it disappear; we can only change its form from one to another. The criteria nurtured by this law are the human dispositions toward saving resources and refraining from waste. When applying criteria derived from the conservation law, we face problems that exhibit themselves as optimization problems. This is why most people think operations research is equivalent to finding optima. As you can see, in reality, operations research (or decision science) is more than that.

One natural phenomenon that interests us is randomness. This phenomenon expresses itself in different ways: when generating a random number, tossing an unbiased die, or flipping a fair coin. It manifests itself as if there were a designer of this law in the universe who is fair. Anyone with a clear mind, regardless of whether he or she believes there is a designer, would have faith that the number of heads obtained when flipping a fair coin is approximately half of the total tosses, given that the number of trials is large. It is this subconscious belief that affects our statistical criteria.
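The belief described above can be checked with a short simulation (a minimal sketch; the seed and flip count are arbitrary choices for illustration):

```python
# For a fair coin, the fraction of heads approaches 1/2 as the number of
# flips grows (the law of large numbers).
import random

random.seed(42)          # fixed seed so the run is repeatable
flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
print(heads / flips)     # close to 0.5
```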

#### **3.2. Decision criteria are affected by cultures and viewpoints**

Decision criteria are affected by cultures. By culture, we mean the norms of a community and its sanctioned customs. As we are brought up in a community, we carry the fingerprints of that community, and these fingerprints are reflected in our decision criteria. For example, most Chinese people will not donate their body parts after death. Mainstream Chinese culture values keeping the body whole (reflecting a belief in reincarnation and a perfect body in the next life cycle). As a result, you will hardly ever see a Chinese person who selects organ donation on the back of his or her driver's license.

In terms of viewpoints, I will use the following true story to illustrate their effect on decision criteria:

In ancient China, there was a period called "Spring and Autumn" (771-476 BC). During that time, there were two opposing states: Wu (whose king was named Fuchai) and Yue (whose king was named Goujian). The king of Yue was captured and humiliated by the king of Wu. After returning to his state, Goujian vowed revenge. One of his high-ranking officers, named Weng Zhong, conceived 10 strategies to weaken and destroy the enemy state of Wu. In fact, he used only 3 of his 10 strategies. By the end of his third strategy, the enemy state of Wu was destroyed, and the king Fuchai was captured and killed.

Now, let us take a look at one of Weng Zhong's strategies: causing famine in Wu. In one autumn season, Weng Zhong picked tons of high-quality unthreshed rice (rice with the husk on, so it could be used as seed for the next year) fresh out of the rice fields. He secretly steamed the rice, rendering it inert, and then gave it to the enemy king Fuchai as a token of obedience. Sure enough, Fuchai took the bait and asked the farmers of his state to use this high-quality rice as seed for the next year's crop. Thus, a great famine ensued in Wu the following year.

In judging Weng Zhong's famine-causing strategy, we reach different conclusions depending on how we look at it. It was an effective and good decision from a viewpoint that wants Wu destroyed; on the other hand, it was a bad one judged from the conservation law's point of view or from Fuchai's point of view.

#### **3.3. Decision criteria are affected by the requirements**

When talking about good decisions, we need to be aware of the attached requirements. In many problems, the same objective variable will take different optimal values when the requirements differ.

For example, let us assume that we have a wire of length 10 inches, and we ask the following two questions (see **Figure 3**):

**1.** What is the largest area that can be formed by using this wire (given that you can use any two-dimensional shapes)?

**2.** What is the largest area that can be formed by using this wire (given that you can use any rectangular shape, including a square)?

*Answer 1*: From the knowledge of algebra, we know that the circle will give us the maximum area. The problem boils down to solving the following two equations:

πD = 10
Area = πD²/4

Thus, we get Area = πD²/4 = [π(10/π)²]/4 = 100/(4π) ≈ 7.96 inch².

*Answer 2*: From the knowledge of algebra, we know that the square will give us the maximum area given that the shape must be a rectangle. The problem boils down to solving the following two equations:

4b = 10
Area = b²

Thus, we get Area = b² = (10/4)² = 6.25 inch².
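The two answers above can be checked numerically; this short sketch simply replays the arithmetic:

```python
# The same 10-inch wire yields different maximum areas under different
# shape requirements.
import math

length = 10.0

# Requirement 1: any shape -> the circle is optimal. πD = 10, Area = πD²/4.
diameter = length / math.pi
circle_area = math.pi * diameter ** 2 / 4      # = 100 / (4π)

# Requirement 2: rectangles only -> the square is optimal. 4b = 10, Area = b².
side = length / 4
square_area = side ** 2

print(round(circle_area, 2), square_area)      # 7.96 6.25
```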


As you can see, these two problems have the same optimization objective variable, but we get different results because of the different requirements. Thus, we conclude that the requirements in a decision problem play an important role.

**Figure 3.** The effects of requirements.

#### **3.4. Decision criteria are affected by moral beliefs**

People are often faced with situations in which there is no right or wrong answer. We usually call these moral dilemmas. The criteria used here are mainly affected by a person's moral beliefs. For example, when a doctor is treating a terminally ill patient suffering from pain, the decision of whether to prescribe pain-relieving drugs such as marijuana or morphine is affected by his or her belief system. Depending on how strongly he or she feels (pain relief vs. controlled drugs), the doctor acts according to what he or she thinks appropriate (meeting his or her decision criteria).

Allowing a terminally ill patient to die at his or her own will is also a decision affected by moral beliefs. When morals are involved, decision criteria are complicated, since we carry the whole baggage of our belief systems. Often, there are no simple answers (we see and pick only the aspects that make us comfortable; beauty is in the eye of the beholder).

#### **4. Overview of the chapters**

In this book, we have carefully selected a set of manuscripts written by authors from different backgrounds. The selected articles cover a broad spectrum of topics ranging from theory to application. At the same time, all the topics are centered on the main theme of making good decisions. Thus, readers get the benefit of wide exposure to the ins and outs of the subject. In the following, I give a brief introduction to each of the remaining chapters.

Chapter 2, "Improving Informational Bases of Performance Measurement with Grey Relation Analysis," is written by Thorben Hustedt, Wolfgang Ossadnik, and Fabian Burrey. The main contribution of the chapter is a presentation that provides a partial view of Grey Systems Theory (GST) as a conception to improve poor data situations for Performance Measurement (PM) and to operate with the few data already at hand. The chapter covers not only concepts related to GST, GST's element Grey Relation Analysis (GRA), Performance Measurement (PM), and Key Performance Indicators (KPIs), but also an example of applying GRA to a PM problem.

Chapter 3, "Application of Lean Methodologies in a Neurosurgery High Dependency Unit," is written by Ricardo Balau Esteves, Susana Garrido Azevedo, and Francisco Proenca Brojo. The main contribution of the chapter is the application of **Lean methodologies** to a Neurosurgery High Dependency Unit (NHDU). The manuscript presents the research results, performs statistical analysis on them, and points out the benefits of applying lean methodologies. The research method used is "an action research supported by a longitudinal mixed method approach with a one-group within-subjects pretest-posttest experimental type."

Chapter 4, "Iteration Algorithms in Markov Decision Processes with State-Action-Dependent Discount Factors and Unbounded Costs," is written by Fernando Luque-Vasquez and J. Adolfo Minjarez-Sosa. The main contribution of the chapter is the study of control models with state-action-dependent discount factors, focusing mainly on introducing approximation algorithms for the optimal value function (value iteration and policy iteration).

Chapter 5, "Mathematical Modeling of Isothermal Drying and Its Potential Application in the Design of the Industrial Drying Regimes of Clay Products," is written by Milos Vasic, Zagorka Radojevic, and Robert Rekecki. The main contribution of the chapter is the creation of a link between the comprehensive theory of moisture migration during drying and the setup of the non-isothermal drying process.

The above chapters cover a wide range of topics centered on the theme of operations research (decision science). We hope you enjoy reading the rest of the book.

#### **Author details**

Kuodi Jian

Address all correspondence to: kuodi.jian@metrostate.edu

Metropolitan State University, Saint Paul, MN, USA

#### **References**


#### **Improving Informational Bases of Performance Measurement with Grey Relation Analysis**

Thorben Hustedt, Wolfgang Ossadnik and Fabian Burrey

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/65286

#### **Abstract**

Performance measurement (PM) needs objective empirical data with causal relevance in order to steer and control financial performance generation. In business practice, there is often a lack of such objective data. A surrogate might be collected subjectively, based on data generated by questioning corporate experts. Such an involvement of subjects can rapidly lead to an immense volume of data that (partially) implies incomplete information. To handle this imperfection of data, Grey systems theory (GST), and especially its element Grey relation analysis (GRA), seems to be a methodology able to improve informational bases for PM purposes. In particular, GRA is able to reveal those performance indicators that considerably influence corporate financial performance: the key performance indicators. GRA can supply valid results with only four data points of a time series. Hence, GST provides an improvement of the PM framework in situations of incomplete information, as demonstrated in the following.

**Keywords:** small samples, performance measurement, performance indicator selection, causal ambition

#### **1. Introduction**

In business practice, empirical data with causal relevance for financial performance generation are required for steering and controlling demands. Often there is a shortage of such data, so a severe problem has to be solved by the management. From the development and implementation of measurement and management systems, for example, performance management and measurement, a provision of causally oriented data as a quantitative basis for steering and controlling purposes can be expected. PM, as the quantitative database of management control, operates as an information supply system for performance management. The current relevance of the topic is shown by Rigby and Bilodeau [1], who identify the balanced scorecard (BSC) as one of the most popular management tools for strategically oriented performance management. A comprehensive BSC also requires the identification of causal interdependencies between the indicators that drive the corporation's financial performance. But in business practice, there is often a shortage of objective data in particular. To derive causal hypotheses on financial performance generation, subjectively based data can be collected and used in the framework of a PM design as a surrogate. Afterwards, these data can be intersubjectivated groupwise and objectivated by statistical validation.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Generating subjectively based data by interviews or questionnaires often leads to an excess of such data, implying problems of handling. In this case, appropriate performance indicators have to be selected which contribute to the success of an organization. Identifying and ensuring an effective PM demands a focus on the cause‐and‐effect relations between these performance indicators. Their estimation and validation raise methodical questions of how to cope with imperfect information. This challenge *inter alia* demands analytical decision support. Besides the known disciplines and methods for handling imperfect information, such as stochastics, Fuzzy Mathematics or DEMATEL (as a technique of groupwise intersubjectivation), this chapter provides a partial view on Grey systems theory (GST) as a concept to improve poor data situations for PM and to operate with only a few data points.

As organizations often do not have sufficient objective databases for PM purposes, they must refer to subjective data, usually filtered out of tacit knowledge stemming from employee interviews. These finally lead to a large number of performance indicators determined by the means and relations of the fixed corporate strategy and its use of identified cause‐and‐effect relations, which is indispensable for causally ambitioned performance control. This framework demands an evaluation and reduction of the obtained variety of indicators to the main key performance indicators (KPIs).

Suppose, for instance, that 50 performance indicators are nominated as candidates by a company's employees with expert status. Hypothesizing the causal interdependencies among these would lead to an unmanageable challenge without operations research (OR) support. In addition, how could an organization obtain quick but also valid information for the selection of the KPIs, without multicriteria decision support, if statistical methods require a sample size implying longstanding data collection?

Do there exist methods to transform subjectively based data into intersubjectivated ones reaching closer to quasi‐objective data and therefore allowing more detailed conclusions for the PM context?

Such a methodology is made available by the GST, which has been developed to handle situations with incomplete information that cannot be coped with by other support disciplines. Thus, performance indicators can be selected with the aid of the Grey relation analysis (GRA) based on subjective information. GRA analyses the geometric relationships of compared discrete objectives as well as of subjective indicators and is able to operate with a sequence length of as few as four data points. In situations with databases too small for statistical analyses, processes of intersubjectivation or validation thus become possible. With GRA, PM would be enabled to prepare an order of KPI priorities resulting from the geometric similarity of the performance indicators' time series to the sequence of the top strategic financial performance ratio. In addition, GRA also makes it possible to display the interdependencies between the residual indicators in a network or in a causally ambitioned map in order to steer and control performance generation in the PM system context.
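A minimal numerical sketch may make this ranking idea concrete. The following Python fragment is not the authors' implementation; it assumes Deng's classical grey relational coefficient with min-max normalisation and the conventional distinguishing coefficient ζ = 0.5, and the indicator names and values are invented for illustration:

```python
# Sketch of Grey relation analysis (GRA): rank indicator time series by their
# geometric similarity to a reference series (here, a financial performance ratio).
# Assumptions: min-max normalisation, distinguishing coefficient zeta = 0.5.

def normalize(seq):
    lo, hi = min(seq), max(seq)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in seq]

def grey_relational_grades(reference, series_by_name, zeta=0.5):
    """Grey relational grade of each comparison series against the reference."""
    ref = normalize(reference)
    deltas = {name: [abs(r - s) for r, s in zip(ref, normalize(seq))]
              for name, seq in series_by_name.items()}
    flat = [d for ds in deltas.values() for d in ds]
    dmin, dmax = min(flat), max(flat)   # global extrema over all series
    return {name: sum((dmin + zeta * dmax) / (d + zeta * dmax) for d in ds) / len(ds)
            for name, ds in deltas.items()}

# Four quarterly data points per series, the minimum GRA requires (t >= 4).
financial = [3.0, 3.4, 3.9, 4.4]
indicators = {
    "customer_satisfaction": [2.1, 2.4, 2.8, 3.1],  # co-moves with the reference
    "employee_turnover":     [5.0, 4.9, 5.1, 5.0],  # nearly unrelated
}
grades = grey_relational_grades(financial, indicators)
ranking = sorted(grades, key=grades.get, reverse=True)
print(ranking)  # "customer_satisfaction" ranks first as the stronger KPI candidate
```

The indicator whose normalised series is geometrically closest to the financial reference receives the grade nearest 1 and would be shortlisted as a KPI.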

#### **2. Performance management and performance measurement**


To focus the whole company on long‐term financial success, it is necessary to reflect and, if required, to recombine the objectives of the corporate strategy on every single company level, in each business unit and in the cognitive systems of the employees. Thus, the integration and therefore the implementation of the corporate strategy ensure the value creation in an organization. This value creation is also known as the generation of financial performance. The term "performance" is much discussed and has no standardized definition; it only becomes clear through an individual, corporate‐specific description [2]. The special task of the PM is to provide an information supply system for the management by finding the causal relationships that are related to financial performance. The causes of financial performance are not only financially dimensioned. The challenge of a thriving business is to include nonfinancial performance measures, often anchored in the intuitive implicit knowledge of the employees. The ability to respond to altered circumstances presupposes an update of critical success factors [3, 4]. An exclusive focus of an organization on a backward‐looking financial performance indicator system, as was usual in traditional Management Accounting with its reference to the decomposed structure of financial ratios (e.g. the DuPont scheme), is inconceivable in today's dynamic business and Management Science. Instead, a new operational framework is necessary. Such a scope should include all relevant aspects of corporate performance [4, 5]. This superior framework is tailored as a management and control system and is also known as the Performance Management of the organization. To provide an adequate information basis for the Performance Management, a measurement of the KPIs is necessary [6].
Therefore, the PM addresses three central functions: measurability of financial and especially nonfinancial indicators, identification and selection of the most important indicators that drive financial performance and lead to value generation. Thus, an additional transparency for the different members of the organization is provided. For this, a sound knowledge of employees about the process of financial performance generation is necessary [7].

Hence, the interaction of Performance Management and PM shows that these two internal corporate systems cannot be separated. The PM may be viewed as twofold: first, as a feedback‐oriented system that supplies Performance Management with norms and information on current processes through data measured in the past and present. Thus, a base from which to derive counteractions exists. Furthermore, a feedforward tool is made available, which informs about failings of the conceptual framework so that a new causal model will have to be developed, validated, and implemented [4]. Without any knowledge about the interaction between those systems, the organization misses the opportunity to control and dominate the performance‐generating process [2, 8].

The performance‐generation process has multidimensional aspects, incorporated by the responsibility of multiple causes that lead to a unidimensional financial effect specified by the owners of an organization [4, 9]. Consequently, the interaction of the multidimensional PM and the Performance Management contributes to the improvement of corporate performance. For quantifying the financial and nonfinancial measures, the PM provides support for performance recording. Often a shortage of available, objective empirical data for the representation of performance indicators occurs, which has to be handled by the management control. In the case of missing objective data, it is indispensable that the PM manage this problem by collecting subjectively based data on the basis of surveys or interviews, which enable a quasi‐objectivation of these measures [10, 11]. Even if organizations have historical objective data, subjectively based data should not be ignored. Many times, historical data have been collected in varying frequencies and ranges; in this case, their usage within a PM seems inappropriate [12].

Revelations of the interdependencies between the KPIs that are essential for value creation, or rather for performance improvement, can only be determined by sufficiently articulated knowledge. Here, it is necessary to differentiate between explicit knowledge on the one hand, which is simple to communicate and can be made available to all individuals who want to use it, and the intuitive implicit knowledge on the other, which makes important performance‐related causal relationships available [13, 14]. The implicit knowledge is characterized by four conditions: difficult to imitate, hard to replace, only transmittable to a limited extent (not by the normal use of language), and scarce [13]. The tacit knowledge is, however, very difficult to create because it has been sharpened over years of extensive activities and experience of individuals. To evoke this dormant, subject‐bound, intuitive knowledge, Abernethy et al. [10] propose interviews or subjective questionnaires so that the employees give partial insights into their tacit knowledge. This results in a variety of subjectively based data which first need to be reduced to a manageable level and can then be intersubjectivated to work with. Here, the task of the PM should be based on an adequate, even optimal, complexity reduction [15]. Thus, the immense amount of subjective data has to be channelled properly. Besides, the PM has to concentrate on the essential factors with the aid of intersubjectivated data. All this takes place to avoid a PM System that is more confusing than helpful.

#### **2.1. Strategic alignment of performance indicators**

The PM should not only be understood according to the phase of validating the established hypotheses at the beginning of the PM process but rather, following Bourne et al. [16], as a tool to identify appropriate indicators covering the structure and processes of an organization in a dynamically changing environment. To focus on inadequate measures would constitute a resource‐wasting framework. Hence, the organizational, multidimensional PM System requires a selection of KPIs endogenously linked to the corporate strategy and thus able to improve performance [17]. Various studies [18, 19] found that overly complex systems have a negative influence on performance: they lead to an overload of information and consequently cause an increase in administrative costs [20]. Therefore, the number of KPIs has to be limited to a level cognitively manageable by the members of the organization [17, 21].

On account of a lack of objective data, organizations may refer to subjective estimations stemming from samples that are too small or too fragmented to apply statistical methods (**Figure 1**). Small or fragmented data sizes lead to incomplete information. This problem is to be solved by the GST. Fuzzy Mathematics, which focuses on the experience data of an individual, is characterized by a clear content (intension) but by unclear (not determined) quantitative boundaries of an expression, for example "very strong" (extension of information). GST is more suitable for concepts with multiple meanings (e.g., performance), is additionally able to handle fuzziness and disposes of a clearly defined extension [22]. Thus, the above‐mentioned problem of poor and incomplete information is almost impossible to solve with Fuzzy Mathematics or Statistics. The incomplete nature of the information needs to be managed in the PM context. A subjective query collected over a small number of periods can be considered incomplete information if the experts of the organization deliver only a few estimations of the extent of an indicator [23]. Reducing the volume of performance indicators must rely on subjectively based and thus poor information [24]. The organizational challenge is to solve this problem by providing valid results for PM even with small samples in situations of incomplete information. This is possible by reference to support models for comprehending and decoding the problems of the system [25, 26].

**Figure 1.** Imperfect information, situations and instruments.


In the PM context, it is important that a strategy be formulated as simply as possible [27]. If the extension of the strategy is then reduced to a manageable minimum, the organization possesses a list of the factors most important for performance generation [28]. But this does not deliver a sufficient condition for controlling an organization successfully. Instead, it is essential to know how the factors are interrelated in order to actuate the right "lever" for an increase in financial performance [29]. Therefore, a causally ambitioned network of the interdependencies of the KPIs seems useful [30].

#### **2.2. Causal mapping**

The performance of an organization can be interpreted as a result of the past actions of its managers. To explain this performance, a causally ambitioned model with all relevant relations between the considered indicators is indispensable. Thus, the process leading to performance can be visualized. Such an illustration (e.g., a map) delivers, especially if structured, a blueprint for implementing the corporate strategy [31].

A map generally provides the visualization of a reference framework. In the 1970s, the political scientist Axelrod [32] popularized the methodology of cognitive maps, intended to illustrate simplified social structures. A cognitive map provides an optical representation of the structures people perceive in their environment [9]. Cognitive maps serve management as a tool for evaluating alternative business situations in order to better meet an uncertain, dynamic corporate environment and to simplify complex issues [33]. Here, the organization should, however, focus on a visualization of tacit knowledge [10].

A simple list of the most important corporate strategy factors would point out the indicators the organization has to focus on. As a mere enumeration, however, such a list would not represent the interdependencies within the system. To control as well as to monitor performance generation, it is necessary to understand the causal relations between the KPIs [34, 35]. Therefore, it is fundamental to keep the cost–benefit ratio in view: a too detailed map costs a large amount of time [10], while a mere graphical apposition of ovals does not quantify the dependencies in the system [32]. GRA, as an OR management support, is simple to use and provides meaningful results already after a few periods. Additionally, it even enables a visualization of the outcome within a relational network [36].

In contrast to parametric approaches like correlation analysis, nonparametric mapping approaches are much better able to represent the multidimensionality of performance generation. By avoiding assumptions, nonparametric approaches focus on mapped causal relationships among the measures based on their perceived environment [11]. Organizations tend to skip a statistical validation of their causal model; the reasons for this are the perceived obviousness of the model, the time exposure or the high validation costs [20, 37, 38]. The changing, dynamic and competitive environment requires an adjustment of an organization's causal model to adapt the strategy continuously. In order to meet this condition sufficiently, an ongoing customization of an organization's cause‐and‐effect network is not manageable with regard to the time and costs that arise from longstanding serial questionnaires [39].

A sole focus on subjectively based data can lead to systematic judgment errors through the incorrect estimations of individuals. Thus, such data are to be considered incomplete because of small or fragmented sample sizes [39]. In addition, subjectively based data in the PM context can imply errors in the described network of relations because of the occurrence of new environmental circumstances. On account of these changes, a resulting illustration of interdependencies may not adequately reflect reality. Therefore, it is indispensable to improve these data with quantifying methods and consequently intersubjectivate them. So, there is a need for research into new mathematical applications with regard to measurement, and especially to PM, which is as yet limited to the fundamental methodologies of sociology (survival analysis), psychology (various psychometric methods) and economics (econometrics) [11]. In such socioeconomic systems with poor information, it is challenging to look for solutions in Statistics because of the system's dynamic characteristics. In this case of incomplete and fast‐changing information, the application of GRA may be advisable [40].

#### **3. Applying Grey systems in performance management**


The GST first appeared in 1981, introduced by Deng [41]. According to it, a Grey system (GS) has the structure of a black box, which contains a system of both known and unknown variables. The unknown represents "black", totally incomplete information, and the known represents "white", absolutely complete information. Hence, (Grey) incomplete information can be understood as information that is partially known as well as to some extent unknown [42]. Irrespective of whether it concerns the message format, the coordination mechanism or just the behaviour within a system: as soon as a lack of information within this system is disseminated, it is referred to as a GS [36]. In practice, as already mentioned in the previous section, it is difficult to obtain all concrete information about an examined object [40]. Systems with a lack of information can be found everywhere: for example, the biological limitations of the human senses, the constraints of important economic conditions or the unavailability of technical resources. The GS, as a system of incomplete information, is also known as an "indeterminate system" whose fundamental characteristics are small samples and/or interruptions of time series [42].

On account of the small sample sizes, problems within information systems with incomplete information cannot be solved with statistical methods [42]. With increasing sample size, the statistical power of a validation method grows [43]. Thus, sample sizes are preferable in which the standard error is as low as possible. Various studies [44–46] consider large numbers of data points necessary for the application of statistical support of time series as well as of cross‐sectional analysis in PM. For instance, according to McDonald and Ho [45], an organization needs to obtain quarterly data for almost six years for a moderate time series analysis in order to make a statement about possible causal relations by structural equation modelling. In social and economic systems, which are driven by the highest degree of dynamism and continuous change, such problem‐solving demands overextend the conditions of typical situations in business practice. Some variables in the system underlie a faster change of their environmental conditions than the measurement lasts at all, so that the analytical results are irrelevant and therefore superfluous [41]. The resulting situation of incomplete information can be supported by GST [23].

The enormous volumes of data arising from subjective questionnaires about the performance indicators (*k<sub>i</sub>*) need reduction. **Table 1** shows the result of such a reduction to those indicators which are most essentially interlinked with financial performance generation. For this, benchmarking of the most representative indicators is crucial [47]. The expression *x<sub>it</sub>* represents an opinion aggregated from the individual members of an expert group in period *t* on performance indicator *k<sub>i</sub>*. Here, GST disposes of a major advantage because of its ability to provide valid results already from a number of data points with *t* ≥ 4. Thus, the GST is able to work with incomplete information in terms of reducing the indicators to the KPIs [36].


| **Period** *t* | *Q*<sub>1</sub> | *Q*<sub>2</sub> | *Q*<sub>3</sub> | *Q*<sub>4</sub> | … | *Q*<sub>t</sub> |
|---|---|---|---|---|---|---|
| *k*<sub>1</sub> | *x*<sub>11</sub> | *x*<sub>12</sub> | … | … | … | *x*<sub>1t</sub> |
| *k*<sub>2</sub> | … | *x*<sub>22</sub> | … | … | … | … |
| *k*<sub>3</sub> | … | … | … | … | … | … |
| *k*<sub>4</sub> | … | … | … | … | … | … |
| … | … | … | … | … | … | … |
| *k*<sub>i</sub> | *x*<sub>i1</sub> | … | … | … | … | *x*<sub>it</sub> |

**Table 1.** Subjective questionnaire (rows: performance indicators *k*<sub>i</sub>).

#### **3.1. Buffer operator in Grey systems theory**

The GST could be the way out for problems of incomplete and therefore inadequate data. The challenge for PM is especially the collection of performance‐relevant data, often derived from the answers to subjective questionnaires within organizations. This requires a certain number of subjective data points, as shown in **Table 1**. Nevertheless, in practice it may occur that experts cannot answer their quarterly surveys (e.g., vacation, illness or simple absence). Therefore, the GST provides a buffer operator, which makes it possible to complete missing information in fragmented queries without this leading to informational distortion or loss. If two adjacent entries of a data sequence are described by *x*(*k* – 1) and *x*(*k*), then *x*(*k* – 1) represents the older information and *x*(*k*) operates as a part of the newer information. If there is a gap between entries within a data sequence, a lack of information because of the insufficient completion of an expert's questionnaire occurs (e.g., *X* = (*x*(1), *x*(2), *x*(3), *x*(5))). A new value *x*(4) can be created as follows:

$$x^*(k) = \alpha \cdot x(k) + (1 - \alpha) \cdot x(k - 1), \quad \alpha \in [0, 1]. \tag{1}$$

The value of *α* represents the weighting of the informational content with regard to its recency. If *α* > 0.5, the researcher attaches more importance to the newer information than to the older one, and vice versa [23]. For simplification, no preference with respect to the timeliness of information is assumed in the following, so that old and new information are weighted equally (*α* = 0.5).
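The interpolation of Eq. (1) can be sketched in a few lines of Python (an illustrative sketch, not code from the chapter; the function name and example values are invented, and *α* = 0.5 as assumed above):

```python
def fill_gap(x_prev, x_next, alpha=0.5):
    """Adjacent-neighbour generation per Eq. (1):
    x*(k) = alpha * x(k) + (1 - alpha) * x(k - 1)."""
    return alpha * x_next + (1 - alpha) * x_prev

# Sequence with a missing fourth entry: X = (x(1), x(2), x(3), _, x(5))
x = [3.0, 4.0, 5.0, None, 7.0]
x[3] = fill_gap(x[2], x[4])  # equal weighting of old and new information -> 6.0
```

With *α* = 0.5 the operator reduces to the arithmetic mean of the two adjacent entries.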

In cases of a blank first entry *x*(1) or a missing last entry *x*(*n*) of a sequence *X*—for example, measured customer contentment—the gap cannot be filled by the method of adjacent neighbour generation, but rather by the stepwise ratio generator *σ*(*k*) = *x*(*k*)/*x*(*k* − 1), *k* = 2, 3, …, *n*, or the smooth ratio generator *ρ*(*k*) = *x*(*k*)/(*x*(1) + *x*(2) + ⋯ + *x*(*k* − 1)). If the first value is missing, the method operates with the adjacent values within the sequence to the right of the missing one: *x*(1) = *x*(2)/*σ*(3) or *x*(1) = *x*²(2)/*x*(3). If only the last sequence value shows an empty entry, the two previous sequence data help to create an adequate "substitute": *x*(*n*) = *x*(*n* − 1)*σ*(*n* − 1) or *x*(*n*) = *x*²(*n* − 1)/*x*(*n* − 2) [23].
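The two endpoint rules above can be sketched as follows (an illustrative sketch, not from the chapter; function names and the example values are invented — note that *x*(2)/*σ*(3) and *x*²(2)/*x*(3) coincide here):

```python
def fill_first(x2, x3):
    """Missing first value: x(1) = x(2)/sigma(3) with sigma(3) = x(3)/x(2),
    which equals x(2)**2 / x(3)."""
    return x2 * x2 / x3

def fill_last(known):
    """Missing last value: x(n) = x(n-1) * sigma(n-1),
    with sigma(k) = x(k)/x(k-1) taken from the last two known entries."""
    sigma = known[-1] / known[-2]
    return known[-1] * sigma

# Geometric sequence 2, 4, 8 with missing endpoints
first = fill_first(4.0, 8.0)      # -> 2.0
last = fill_last([2.0, 4.0, 8.0]) # -> 16.0
```

Both rules extrapolate the local growth ratio, so for a purely geometric sequence they reproduce the exact endpoints.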

#### **3.2. Grey relation analysis**

18 Operations Research - the Art of Making Good Decisions


The challenge of GRA is to clarify which factors influence the PM system to a desirable extent, in order to strengthen and focus those factors subsequently. In the past, this has been discussed in scientific articles and essays about system theory; however, the methodology still receives only rare attention in the context of Performance Management [23, 48–50]. This model was chosen as it can serve as an ideal PM support: it considers both financially and nonfinancially dimensioned factors by analysing those system factors that display sufficient influence on the top strategic financial ratio but appear incomplete [51]. By means of Performance Management, as well as by the efficient and effective KPIs identified by the PM, the entire organization can be aligned to its strategy and vision [52]. GRA therefore attempts to discover the sequences of the KPIs by determining the geometrically most similar sequences to the top strategic financial performance ratio, thereby uncovering the system's most descriptive factors [23]. For this, an organization has to determine a reference sequence which optimally represents the strategy of the organization and thus the behaviour of the entire system [53]. The strategy, and hence the ultimate performance generation, should be illustrated by the KPIs. Paquette and Kida [27] showed in their study that it is important to reduce the extension of the strategy to a minimum; so, in order to reflect the strategy by a reference sequence, it is advisable to refer to a single factor and not to a variety of multiple sequences. Kasperskaya and Tayles [34] propose that both types of indicators (financial and nonfinancial) should be used within a well‐functioning PM system, but the financial measures dominate in practice. Kaplan and Norton [52] also consider that a financial measure should be attributed the most weight in a strategy‐focused organization, so that it can monitor and control its operational and strategic budgeting. Thus, a financial measure should also be used as the reference sequence in the selection of the strategy‐related KPIs in a PM system.

The GRA is a part of the GST mentioned earlier and is based on all of its assumptions and conditions [47]. In this context, a Grey relation proposes the valuation between two autonomous systems or two indicators within a system over a determined time series. It is precisely this point where the examination method GRA can be used. The elements are examined for homogeneous or heterogeneous temporal behaviour, which means the development of the considered indicator over time. If the elements display a very similar, homogeneous development over the time series, a high relational degree is assumed, and vice versa. First, a reference sequence *X*<sub>0</sub> = (*x*<sub>0</sub>(1), *x*<sub>0</sub>(2), …, *x*<sub>0</sub>(*n*)) is defined. Afterwards, it is possible to compare the geometrical similarity of the reference sequence with another system element and its sequence *X*<sub>i</sub> = (*x*<sub>i</sub>(1), *x*<sub>i</sub>(2), …, *x*<sub>i</sub>(*n*)). If γ(*x*<sub>0</sub>(*k*), *x*<sub>i</sub>(*k*)) denotes the relation of the two sequences at point *k*, and γ(*X*<sub>0</sub>, *X*<sub>i</sub>) reflects all data points of every sequence (*i* = 1, 2, …, *n*) with *k* = 1, 2, …, *m*, then the Grey relation coefficient follows the formula [36, 53]:

$$\gamma\left(x_0(k), x_i(k)\right) = \frac{\min_i \min_k \left| x_0(k) - x_i(k) \right| + \xi \max_i \max_k \left| x_0(k) - x_i(k) \right|}{\left| x_0(k) - x_i(k) \right| + \xi \max_i \max_k \left| x_0(k) - x_i(k) \right|}. \tag{2}$$

The value ξ ∈ [0, 1] describes the differentiation coefficient that helps to adjust the various relation coefficients. Lin et al. [54] suggest for *ξ* a value of 0.5 to attain a stable and appropriate distinction.

Then, the Grey relational degree γ(*X*<sub>0</sub>, *X*<sub>i</sub>) can be calculated as follows [36, 47, 53]:

$$\gamma\left(\mathbf{X}\_{0},\mathbf{X}\_{i}\right) = \frac{1}{n} \sum\_{k=1}^{n} \gamma\left(\mathbf{x}\_{0}\left(\mathbf{k}\right),\mathbf{x}\_{i}\left(\mathbf{k}\right)\right). \tag{3}$$

This leads to 0 < γ(*X*<sub>0</sub>, *X*<sub>i</sub>) ≤ 1, so that a value close to 0 can be interpreted as a blank relation and a value of 1 suggests a complete and perfect relation of the two compared sequences [47].
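Deng's relational coefficient and degree, Eqs. (2) and (3), can be sketched as follows (an illustrative sketch, not from the chapter; the function name and sequence values are invented, with ξ = 0.5 as suggested by Lin et al.):

```python
def deng_degree(x0, sequences, xi=0.5):
    """Deng's Grey relational degree: Eq. (2) coefficients averaged by Eq. (3).

    x0: reference sequence; sequences: comparison sequences of equal length;
    xi: differentiation coefficient.
    """
    deltas = [[abs(a - b) for a, b in zip(x0, s)] for s in sequences]
    d_min = min(min(row) for row in deltas)  # min over all i and k
    d_max = max(max(row) for row in deltas)  # max over all i and k
    degrees = []
    for row in deltas:
        coeffs = [(d_min + xi * d_max) / (d + xi * d_max) for d in row]  # Eq. (2)
        degrees.append(sum(coeffs) / len(coeffs))                        # Eq. (3)
    return degrees

# Reference sequence compared with an identical and a reversed sequence;
# the identical sequence yields a degree of exactly 1.0
degrees = deng_degree([1.0, 2.0, 3.0], [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
```

Note that the min/max terms in Eq. (2) are global over all comparison sequences, so the coefficients of one sequence depend on the whole set.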

Though the Grey relational degree by Deng functions as the historical basis of the GRA, it is not applied in the further process because of its dependence on the sequence order in the calculation, γ(*X*<sub>0</sub>, *X*<sub>i</sub>) ≠ γ(*X*<sub>i</sub>, *X*<sub>0</sub>), and therefore its rank reversal problems. As a result, the focus is on the more general Grey incidence analysis, which is based on the approach of symmetry and is thus protected against the problems of Deng's Grey relational degree. The relative and the absolute degree of incidence are attributed to this more general approach. Nevertheless, the terms Grey relational degree and Grey incidence are used synonymously [53].

First, the absolute degree of incidence is considered. Assume *X*<sub>i</sub> is an economic factor of the regarded system and *k* represents the ordinal number of this factor. Then *X*<sub>i</sub> = (*x*<sub>i</sub>(1), *x*<sub>i</sub>(2), …, *x*<sub>i</sub>(*n*)) represents the series of the index and thus the temporal behaviour of an economic factor. Equivalently, this can be transferred to the PM context, so that cash flows of several months, confidence of the employees in management or contentment of customers with a corporation's products are constituted as performance indicators [24]. These sequences may take various forms. If the sequence of an indicator is given by *X*<sub>i</sub> = (*x*<sub>i</sub>(1), *x*<sub>i</sub>(2), …, *x*<sub>i</sub>(*n*)), then *X*<sub>i</sub> − *x*<sub>i</sub>(1), or rather:

Improving Informational Bases of Performance Measurement with Grey Relation Analysis http://dx.doi.org/10.5772/65286 21

$$\left(\mathbf{x}\_{\mathrm{i}}(1) - \mathbf{x}\_{\mathrm{i}}(1), \ \mathbf{x}\_{\mathrm{i}}(2) - \mathbf{x}\_{\mathrm{i}}(1), \dots, \ \mathbf{x}\_{\mathrm{i}}(n) - \mathbf{x}\_{\mathrm{i}}(1)\right) \tag{4}$$

illustrates a fluctuating image and therefore a development of the indicator behaviour [23]. The area under the curve can therefore be quantified as follows:


$$s_i = \int_1^n \left( X_i - x_i(1) \right) \mathrm{d}t. \tag{5}$$

As a result, the sequences can exhibit decreasing (A), increasing (B) and vibrating (C) temporal behaviour. To be able to compare sequences with each other, a zero‐starting point operator is applied [23]:

$$X_i D = \left( x_i(1)d,\, x_i(2)d,\, \dots,\, x_i(n)d \right) \quad \text{and} \quad x_i(k)d = x_i(k) - x_i(1) \quad \text{with} \quad k = 1, 2, \dots, n. \tag{6}$$

Consequently, the comparison of two sequences appears possible, so that also statements about the area beyond the curves can be made (**Figure 2**) [23].

**Figure 2.** Relationship between two sequences (A: *x*<sub>i</sub><sup>0</sup> is located above *x*<sub>j</sub><sup>0</sup>; B: *x*<sub>i</sub><sup>0</sup> is located underneath *x*<sub>j</sub><sup>0</sup>; C: *x*<sub>i</sub><sup>0</sup> and *x*<sub>j</sub><sup>0</sup> alternate positions).

Now, the area *s*<sub>i</sub> between *X*<sub>i</sub><sup>0</sup> and the abscissa can be calculated by the following equation [56]:

$$\left| s_i \right| = \left| \sum_{k=2}^{n-1} x_i^0(k) + \frac{1}{2} x_i^0(n) \right|. \tag{7}$$

Here, however, rather the area between the two curves *X*<sub>i</sub><sup>0</sup> and *X*<sub>j</sub><sup>0</sup> is of interest, which can be described by the following equation [55]:


$$|\mathbf{s}\_{\mathbf{i}\mathbf{j}}| = |\mathbf{s}\_{\mathbf{i}} - \mathbf{s}\_{\mathbf{j}}| = \left| \sum\_{\mathbf{k}=2}^{n-1} \left( \mathbf{x}\_{\mathbf{i}}^{0}(k) - \mathbf{x}\_{\mathbf{j}}^{0}(k) \right) + \frac{1}{2} \left( \mathbf{x}\_{\mathbf{i}}^{0}(n) - \mathbf{x}\_{\mathbf{j}}^{0}(n) \right) \right| \tag{8}$$

Assuming the length of both sequences is the same (otherwise the sequences could be adjusted, as described in Subchapter 3.1), then the absolute degree of incidence of the sequences *X*<sup>i</sup> and *X*j can be determined by [23]:

$$\varepsilon\_{\rm ij} = \frac{1 + |\mathbf{s\_i}| + |\mathbf{s\_j}|}{1 + |\mathbf{s\_i}| + |\mathbf{s\_j}| + |\mathbf{s\_i} - \mathbf{s\_j}|}. \tag{9}$$
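The chain from the zero‐starting point operator to the absolute degree of incidence, Eqs. (6)–(9), can be sketched as follows (an illustrative sketch, not from the chapter; function names and the example sequences are invented):

```python
def zero_start(seq):
    """Zero-starting-point image, Eq. (6): x^0(k) = x(k) - x(1)."""
    return [v - seq[0] for v in seq]

def signed_area(z):
    """Signed area term of Eqs. (7)/(8): sum_{k=2}^{n-1} z(k) + z(n)/2."""
    return sum(z[1:-1]) + 0.5 * z[-1]

def absolute_incidence(x_i, x_j):
    """Absolute degree of incidence epsilon_ij, Eq. (9)."""
    s_i = signed_area(zero_start(x_i))
    s_j = signed_area(zero_start(x_j))
    return (1 + abs(s_i) + abs(s_j)) / (1 + abs(s_i) + abs(s_j) + abs(s_i - s_j))

# Identical sequences give the maximum incidence of 1.0
eps = absolute_incidence([1, 2, 3, 4], [1, 2, 3, 4])
```

The signed areas *s*<sub>i</sub> and *s*<sub>j</sub> are kept before taking absolute values so that |*s*<sub>i</sub> − *s*<sub>j</sub>| in Eq. (9) matches Eq. (8).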

In the PM, the challenge is to associate both financial and nonfinancial measures [34]. In this endeavour, however, problems of differently scaled indicators can emerge. Likert‐scale estimations by experts of employee satisfaction, for example, can be set in relation, but this would be no normalization as demanded by the concept of the absolute degree of incidence [56]. In contrast, the concept of the relative degree of incidence provides a quantitative description of the rates of change of two sequences relative to their initial values, thus enabling a sufficient normalization. The closer these rates of change of the two sequences are, the greater is the relative degree of incidence *r*<sub>ij</sub> between them. Assuming that *X*<sub>i</sub> and *X*<sub>j</sub> are two sequences of equal length with initial values different from zero, there is no connection between the absolute and the relative degree of incidence, so that the absolute degree ε<sub>ij</sub> can be relatively large whereas its relative counterpart *r*<sub>ij</sub> can be extremely small, and vice versa [23]. For the relative degree of incidence, an equal length of the sequences is assumed, that is, an identical number of data points for the two sequences *X*<sub>0</sub> and *X*<sub>i</sub>, where *X*<sub>0</sub> = (*x*<sub>0</sub>(1), *x*<sub>0</sub>(2), …, *x*<sub>0</sub>(*n*)) constitutes the reference sequence. Afterwards, to be able to compare the possibly differently scaled indicators and their sequences, the values of each sequence are divided by their initial value:

$$X_0' = \left( x_0'(1), x_0'(2), \dots, x_0'(n) \right) = \left( \frac{x_0(1)}{x_0(1)}, \frac{x_0(2)}{x_0(1)}, \dots, \frac{x_0(n)}{x_0(1)} \right), \tag{10}$$

$$X_i' = \left( x_i'(1), x_i'(2), \dots, x_i'(n) \right) = \left( \frac{x_i(1)}{x_i(1)}, \frac{x_i(2)}{x_i(1)}, \dots, \frac{x_i(n)}{x_i(1)} \right). \tag{11}$$

Subsequently, the zero‐starting point is determined analogously to Eq. (6), so it is possible to calculate the areas |*s*<sub>i</sub>′| and |*s*<sub>ij</sub>′| as well as the relative degree of incidence *r*<sub>ij</sub> [57]:


$$\left| s_i' \right| = \left| \sum_{k=2}^{n-1} x_i'^0(k) + \frac{1}{2} x_i'^0(n) \right|, \tag{12}$$

$$\left| s_{ij}' \right| = \left| s_i' - s_j' \right| = \left| \sum_{k=2}^{n-1} \left( x_i'^0(k) - x_j'^0(k) \right) + \frac{1}{2} \left( x_i'^0(n) - x_j'^0(n) \right) \right|, \tag{13}$$

$$r\_{ij} = \frac{1 + \left| \mathbf{s}\_i' \right| + \left| \mathbf{s}\_j' \right|}{1 + \left| \mathbf{s}\_i' \right| + \left| \mathbf{s}\_j' \right| + \left| \mathbf{s}\_i' - \mathbf{s}\_j' \right|}. \tag{14}$$

Using these formulas, it is possible to calculate the respective relative degree of incidence between the variety of performance indicators and the reference sequence and thus to disclose, for example, the ten most "important" sequences/indicators for the reference sequence: the KPIs. To get an overview of the dependencies within those ten KPIs, the relative degrees of incidence between the KPIs can also be calculated, so that an interdependency network emerges [55]. Since GRA only allows building a network of interdependencies between the KPIs, the cause‐and‐effect relationships lack a detailed explanation. This network, however, can be understood as a construct of the holistic organizational strategy determined by "highly correlated" KPIs. If the strategy then changes or is adjusted to altered circumstances, the indicators react to the same extent, so that their cause‐and‐effect relationships are inconsiderable [58]. Nevertheless, the KPIs in their combination must be selected such that they sufficiently represent the strategy and therefore its means and relations.
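The whole pipeline of Eqs. (10)–(14) can be condensed into a single function (an illustrative sketch, not from the chapter; the function name and example sequences are invented):

```python
def relative_incidence(x_i, x_j):
    """Relative degree of incidence r_ij, Eqs. (10)-(14)."""
    def area(seq):
        norm = [v / seq[0] for v in seq]    # divide by the initial value, Eqs. (10)/(11)
        z = [v - norm[0] for v in norm]     # zero-starting-point image
        return sum(z[1:-1]) + 0.5 * z[-1]   # area term of Eqs. (12)/(13)
    s_i, s_j = area(x_i), area(x_j)
    return (1 + abs(s_i) + abs(s_j)) / (1 + abs(s_i) + abs(s_j) + abs(s_i - s_j))  # Eq. (14)

# Proportional sequences have identical rates of change, hence r_ij = 1.0
r = relative_incidence([1, 2, 3], [10, 20, 30])
```

Because each sequence is first normalized by its initial value, differently scaled indicators (e.g., cash flows versus Likert estimations) become directly comparable.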

#### **4. Example of application**


The following example of a PM‐relevant application illustrates how GRA can simplify the indicator selection in PM in poor data situations. The estimations of 50 performance indicators, the possible KPIs, by five organizational experts over four quarters serve as initial data for the example. For the reference sequence, to reflect the corporate strategy as simply as possible, the cash flows over the four quarters are used. The 50 performance indicators form a pre‐selected pool of indicators elicited, for example, by interviews [10]. They can range from employee satisfaction and customer contentment to process quality, for instance. Then, an expert is encouraged to estimate the respective extent of the indicator *k*<sub>it</sub> in the considered period with regard to the Saaty scale (from 1 = very weak extent to 9 = very strong extent) [59]. After the other four experts have analogously estimated the respective indicators in each period, an aggregated group matrix is created from the mean values of the experts' estimations (**Table 2**). The corresponding cash flows of the considered periods fictitiously serve as a compliant financial target indicator of the corporation and thus as the reference sequence of the application example.

Given the equal length of all sequences, the values of **Table 2** can be normalized by Eq. (10) in order to make the differently scaled sequences comparable (**Table 3**).

The indicators 1–50 need not consist of subjective data. For example, customer satisfaction, as a performance indicator, could be represented by an objective measure such as the number of product returns, if available. Subsequently, the sequences of **Table 3** need to be moved to an initial value of zero with the zero‐starting point operator of Eq. (6) (**Table 4**).


| **Period** *t* | *Q*<sub>1</sub> | *Q*<sub>2</sub> | *Q*<sub>3</sub> | *Q*<sub>4</sub> |
|---|---|---|---|---|
| Reference sequence *j*: cash flow | 1,000,000 | 1,500,000 | 1,750,000 | 1,250,000 |
| *k*<sub>1</sub> | 4.0000 | 4.4000 | 2.6000 | 4.6000 |
| *k*<sub>2</sub> | 5.4000 | 6.6000 | 4.8000 | 2.8000 |
| *k*<sub>3</sub> | 5.0000 | 4.0000 | 3.4000 | 5.2000 |
| *k*<sub>4</sub> | 7.0000 | 5.4000 | 4.8000 | 3.0000 |
| … | … | … | … | … |
| *k*<sub>50</sub> | 5.2000 | 5.6000 | 5.4000 | 4.4000 |

**Table 2.** Aggregated experts' estimations.


| **Period** *t* | *Q*<sub>1</sub> | *Q*<sub>2</sub> | *Q*<sub>3</sub> | *Q*<sub>4</sub> |
|---|---|---|---|---|
| Reference sequence *j*: cash flow | 1.0000 | 1.5000 | 1.7500 | 1.2500 |
| *k*<sub>1</sub> | 1.0000 | 1.1000 | 0.6500 | 1.1500 |
| *k*<sub>2</sub> | 1.0000 | 1.2222 | 0.8889 | 0.5185 |
| *k*<sub>3</sub> | 1.0000 | 0.8000 | 0.6800 | 1.0400 |
| *k*<sub>4</sub> | 1.0000 | 0.7714 | 0.6857 | 0.4286 |
| … | … | … | … | … |
| *k*<sub>50</sub> | 1.0000 | 1.0769 | 1.0385 | 0.8462 |

**Table 3.** Normalized aggregated estimations.

Then, it is possible to calculate the area between the abscissa and the respective sequence, |*s*<sub>i</sub>′|, by Eq. (12). The geometrical nearness between a considered sequence and the cash flow reference sequence, |*s*<sub>ij</sub>′|, can be determined by Eq. (13), and consequently also the relative degree of incidence *r*<sub>ij</sub> with the help of Eq. (14).

Thus, it is possible to provide a ranking of the geometrically most similar sequences with regard to the cash flow reference sequence (**Table 5**). In this example, the number of KPIs is limited to a count of ten as proposed by Markóczy and Goldberg as the optimal number to work with in PM [60].
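The ranking step can be sketched with the Table 2 values (an illustrative sketch, not from the chapter; only the five indicators printed in Table 2 are included, and the helper mirrors Eqs. (10)–(14)):

```python
# Reference sequence and the indicator estimations shown in Table 2
cash_flow = [1_000_000, 1_500_000, 1_750_000, 1_250_000]
indicators = {
    "k1": [4.0, 4.4, 2.6, 4.6],
    "k2": [5.4, 6.6, 4.8, 2.8],
    "k3": [5.0, 4.0, 3.4, 5.2],
    "k4": [7.0, 5.4, 4.8, 3.0],
    "k50": [5.2, 5.6, 5.4, 4.4],
}

def r_ij(x_i, x_j):
    """Relative degree of incidence, Eqs. (10)-(14)."""
    def area(seq):
        norm = [v / seq[0] for v in seq]   # divide by the initial value
        z = [v - norm[0] for v in norm]    # zero-starting-point image
        return sum(z[1:-1]) + 0.5 * z[-1]
    s_i, s_j = area(x_i), area(x_j)
    return (1 + abs(s_i) + abs(s_j)) / (1 + abs(s_i) + abs(s_j) + abs(s_i - s_j))

# Rank indicators by geometric similarity to the cash flow reference sequence
ranking = sorted(
    ((name, r_ij(seq, cash_flow)) for name, seq in indicators.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
```

In the full example, the same computation would run over all 50 indicators and the top ten would be kept as KPIs.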

GRA not only provides a ranking of the most important indicators of complex systems, it also offers the possibility to reveal the dependencies between the considered indicators in a network map. For this purpose, the relative degrees of incidence between the ten KPIs are determined by Eq. (14) (**Table 6**).


**Table 4.** Images with zero‐starting point.

**Table 5.** Relative degrees of incidence of the performance indicators and their ranking.


| **KPI** | *k*<sub>15</sub> | *k*<sub>10</sub> | *k*<sub>14</sub> | *k*<sub>13</sub> | *k*<sub>23</sub> | *k*<sub>21</sub> | *k*<sub>31</sub> | *k*<sub>43</sub> | *k*<sub>24</sub> | *k*<sub>12</sub> |
|---|---|---|---|---|---|---|---|---|---|---|
| *k*<sub>15</sub> | 1.0000 | 0.4995 | 0.5629 | 0.9219 | 0.8400 | 0.9422 | 0.7285 | 0.9904 | 0.5635 | 0.5948 |
| *k*<sub>10</sub> | | 1.0000 | 0.8161 | 0.4793 | 0.5520 | 0.5153 | 0.6138 | 0.5020 | 0.8148 | 0.7572 |
| *k*<sub>14</sub> | | | 1.0000 | 0.5373 | 0.6305 | 0.5830 | 0.7123 | 0.5660 | 0.9980 | 0.9129 |
| *k*<sub>13</sub> | | | | 1.0000 | 0.7842 | 0.8725 | 0.6862 | 0.9137 | 0.5378 | 0.5663 |
| *k*<sub>23</sub> | | | | | 1.0000 | 0.8857 | 0.8458 | 0.8469 | 0.6312 | 0.6708 |
| *k*<sub>21</sub> | | | | | | 1.0000 | 0.7626 | 0.9508 | 0.5837 | 0.6174 |
| *k*<sub>31</sub> | | | | | | | 1.0000 | 0.7337 | 0.7133 | 0.7642 |
| *k*<sub>43</sub> | | | | | | | | 1.0000 | 0.5666 | 0.5983 |
| *k*<sub>24</sub> | | | | | | | | | 1.0000 | 0.9145 |
| *k*<sub>12</sub> | | | | | | | | | | 1.0000 |

**Table 6.** Network of KPI dependencies.

**Table 6** shows the relative degrees of incidence between the KPIs, which can be interpreted as reciprocal, as these degrees can be understood as a kind of "Grey correlation" [42]. However, similar to the DEMATEL approach, it is important to limit the dependencies to the truly "essential" and "significant" ones. Therefore, only those dependencies which exceed the threshold, the average of the matrix (mean value = 0.74161862), remain for the further analytical procedure.
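The thresholding step can be sketched as follows (an illustrative sketch, not from the chapter; the dictionary lists only a few pairs from Table 6, and the threshold is the matrix mean stated above):

```python
# A few upper-triangle entries from Table 6 (illustrative subset)
pairs = {
    ("k15", "k13"): 0.9219,
    ("k15", "k10"): 0.4995,
    ("k15", "k43"): 0.9904,
    ("k10", "k14"): 0.8161,
}

# Threshold: the mean of the incidence matrix, as stated in the text
threshold = 0.74161862

# Keep only the "essential" dependencies as edges of the KPI network
edges = {pair: r for pair, r in pairs.items() if r > threshold}
```

The surviving pairs form the edges of the interdependency network drawn from the KPI matrix.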

#### **5. Results and conclusion**

The GST shows considerable advantages, particularly in a complex system such as the PM. At the present time, it is indispensable to involve the dynamic environment in management control. For this purpose, it is necessary to continuously focus on the corporate strategy and objectives in order to create long‐term financial success. The problems that occur as a consequence of incomplete information and small sample sizes can be a huge hurdle. The PM requires a permanent update, which cannot be enabled by mere application of the existing statistical methods. The PM represents a highly dynamical system with ever‐changing environmental conditions. This prohibits appropriate data measurement and analysis by common statistical methods: data alter before statistical samples can provide any analytic results. Therefore, it is important to seek methods with minimal data size demands. Accordingly, the GST with its applications can be useful due to its low requirements on sample sizes. Specifically, GRA offers important advantages for the selection of KPIs in poor data situations, with the additional possibility of a visual representation of the revealed KPIs within a network of interdependencies.

In conclusion, GRA makes it feasible to support the performance generation process and to assist PM as a tool for selecting performance indicators in case of incomplete information with small sample sizes. Besides, GRA is able to visualize the performance generation in a map that facilitates steering and control of the organization in the framework of Performance Management [35]. The ability to include financial and non‐financial measures provides further advantages for GRA. So, it definitely appears suitable as an OR tool for management control, in particular in PM.

GRA, as one of the submethods of the GST, will help to improve the informational bases of PM through its flexible usage. Therefore, GRA should serve as a feedback‐ as well as a feedforward‐oriented PM support. Initially, it provides intersubjectively validated data for performance management, which then disposes of improved informational bases for counteraction measures. After structural breaks of the system in which PM is implemented, GRA is supposed to inform about such defects and should operate as a feedforward‐oriented support for deriving, validating and implementing a new causal model.

The rising number of OR publications on GST issues demonstrates the growing importance of this theory for the analysis of complex systems. However, there are only a small number of articles in the PM literature referring to GST [49]. GST with its wide range of applications is nevertheless an appropriate OR method to support PM. Because of its relevance specifically in poor data situations with incomplete information, the PM literature should increasingly focus on GST as an important support instrument.

#### **Author details**


**Table 6** shows the relative degrees of incidence between the KPIs, which can be interpreted as reciprocal as these degrees can be understood as a kind of "Grey Correlation" [42]. However, similar to the DEMATEL approach, it is important to limit the dependencies to the really "essential" and "significant" ones. Therefore, the shaded fields are not considered subse‐ quently so that only those dependencies which exceed the threshold, the average of the matrix

The GST shows considerable advantages, particularly in a complex system as the PM. At the present time, it is indispensable to involve the dynamic environment in management control. For this purpose, it is necessary to continuously focus the corporate strategy and objectives in order to create a long‐term financial success. The problems that especially occur as a conse‐ quence of incomplete information and small sample sizes can be a huge hurdle. The PM requires a permanent update which cannot be enabled by mere application of the existing statistical methods. The PM represents a highly dynamical system with ever‐changing environmental conditions. This prohibits an appropriate data measurement with analysis by common statistical methods. Data alter before statistic samples can provide any analytic results. Therefore, it is important to seek methods with minimum data size demands. Accord‐ ing to that, the GST with its applications can be useful with its low requirements in sample sizes. Specifically, GRA offers important advantages for the selection of KPIs in poor data situations with the additional possibility of a visual representation of the revealed KPIs within

In conclusion, GRA provides the feasibility to support the performance generation process and to assist PM as a tool‐selecting performance indicators in case of incomplete information with small sample sizes. Besides, GRA is able to visualize the performance generation in a map that facilitates steering and control of the organization in the framework of Performance Manage‐

(mean value = 0.74161862), should remain for further analytical procedure.

**Table 6.** Network of KPI dependencies.

26 Operations Research - the Art of Making Good Decisions
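The thresholding step described for Table 6 can be sketched in a few lines. This is a minimal illustration, assuming (as the reported mean value 0.74161862 suggests, up to the rounding of the displayed four‐decimal entries) that the threshold is the average over the full symmetric matrix including its unit diagonal:

```python
# Upper-triangular relative degrees of incidence from Table 6 (diagonal = 1).
# KPI order: k15, k10, k14, k13, k23, k21, k31, k43, k24, k12.
upper = [
    [0.4995, 0.5629, 0.9219, 0.8400, 0.9422, 0.7285, 0.9904, 0.5635, 0.5948],
    [0.8161, 0.4793, 0.5520, 0.5153, 0.6138, 0.5020, 0.8148, 0.7572],
    [0.5373, 0.6305, 0.5830, 0.7123, 0.5660, 0.9980, 0.9129],
    [0.7842, 0.8725, 0.6862, 0.9137, 0.5378, 0.5663],
    [0.8857, 0.8458, 0.8469, 0.6312, 0.6708],
    [0.7626, 0.9508, 0.5837, 0.6174],
    [0.7337, 0.7133, 0.7642],
    [0.5666, 0.5983],
    [0.9145],
]

n = 10
off_diag = [v for row in upper for v in row]

# Mean of the full symmetric matrix, diagonal included: each off-diagonal
# value counts twice, plus n ones on the diagonal.
threshold = (2 * sum(off_diag) + n) / (n * n)

# Keep only the "essential" dependencies that exceed the threshold.
essential = [v for v in off_diag if v > threshold]
print(f"threshold = {threshold:.8f}; {len(essential)} of {len(off_diag)} "
      f"pairwise dependencies retained")
```

Running this reproduces the reported threshold to within the rounding of the tabulated values, which supports the reading that the mean is taken over the whole matrix rather than over the off‐diagonal entries alone.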

#### **Author details**

Thorben Hustedt, Wolfgang Ossadnik\* and Fabian Burrey

\*Address all correspondence to: wolfgang.ossadnik@uni-osnabrueck.de

Department of Management Science/Management Accounting and Control, University of Osnabrück, Osnabrück, Germany

#### **References**


ung für die Controllinglehre und Stimulanz für deren Weiterentwicklung] (to be published). In: Funk W, Rossmanith J, editors. International Accounting and International Controlling [Internationale Rechnungslegung und Internationales Controlling]. 3rd ed. Wiesbaden: Gabler; 2017. p. 1–31.

[5] Ghalayini A M, Noble J S. The changing basis of performance measurement. International Journal of Operations and Production Management. 1996;16(8):63–80. DOI: 10.1108/01443579610125787.

[6] Marr B, Schiuma G, Neely A. Intellectual capital—defining key performance indicators for organizational knowledge assets. Business Process Management Journal. 2004;10(5):551–569. DOI: 10.1108/14637150410559225.

[7] Neely A, Gregory M, Platts K. Performance measurement system design: a literature review and research agenda. International Journal of Operations and Production Management. 1995;15(4):80–116. DOI: 10.1108/01443579510083622.

[8] Nørreklit H. The balance on the balanced scorecard—a critical analysis of some of its assumptions. Management Accounting Research. 2000;11(1):65–88. DOI: 10.1006/mare.1999.0121.

[9] Buytendijk F, Hatch T, Micheli P. Scenario‐based strategy maps. Business Horizons. 2010;53(4):335–347. DOI: 10.1016/j.bushor.2010.02.002.

[10] Abernethy M A, Horne M, Lillis A M, Malina M A, Selto F H. A multi‐method approach to building causal performance maps from expert knowledge. Management Accounting Research. 2005;16(2):135–155. DOI: 10.1016/j.mar.2005.03.003.

[11] Richard P J, Devinney T M, Yip G S, Johnson G. Measuring organizational performance: towards methodological best practice. Journal of Management. 2009;35(3):718–804. DOI: 10.1177/0149206308330560.

[12] Chytas P, Glykas M, Valiris G. A proactive balanced scorecard. International Journal of Information Management. 2011;31(5):460–468. DOI: 10.1016/j.ijinfomgt.2010.12.007.

[13] Ambrosini V, Bowman C. Tacit knowledge: some suggestions for operationalization. Journal of Management Studies. 2001;38(6):811–829. DOI: 10.1111/1467‐6486.00260.

[14] Nonaka I. A dynamic theory of organizational knowledge creation. Organization Science. 1994;5(1):14–37. DOI: 10.1287/orsc.5.1.14.

[15] Bretzke W R. The Problem Reference of Decision Models [Der Problembezug von Entscheidungsmodellen]. 1st ed. Tübingen: Mohr Siebeck; 1980. p. 280.

[16] Bourne M, Mills J, Wilcox M, Neely A, Platts K. Designing, implementing and updating performance measurement systems. International Journal of Operations and Production Management. 2000;20(7):754–771. DOI: 10.1108/01443570010330739.

[17] Simons R, Dávila A. How high is your return on management? Harvard Business Review. 1998;76(1):70–80.

[18] Heneman R L, Ledford Jr. G E, Gresham M T. The changing nature of work and its effects on compensation design and delivery. In: Heneman R L, editor. Strategic Reward Management: Design, Implementation, and Evaluation. 1st ed. Greenwich, CT: Information Age Publishing; 2002. p. 35–74.

[32] Axelrod R. Structure of Decision: The Cognitive Maps of Political Elites. 1st ed. Princeton, NJ: Princeton University Press; 1976. p. 422.

[33] Fiol C M, Huff A S. Maps for Managers: Where are we? Where do we go from here? Journal of Management Studies. 1992;29(3):267–285. DOI: 10.1111/j.1467‐6486.1992.tb00665.x.

[34] Kasperskaya Y, Tayles M. The role of causal links in performance measurement models. Managerial Auditing Journal. 2013;28(5):426–443. DOI: 10.1108/02686901311327209.

[35] Bititci U, Cocca P, Ates A. Impact of visual performance management systems on the performance management practices of organisations. International Journal of Production Research. 2016;54(6):1571–1593. DOI: 10.1080/00207543.2015.1005770.

[36] Deng J. Introduction to grey system theory. The Journal of Grey System. 1989;1(1):1–24.

[37] Nørreklit H, Mitchell F. The balanced scorecard. In: Hopper T, Northcott D, Scapens R, editors. Issues in Management Accounting. 3rd ed. London: Prentice Hall; 2007. p. 175–198.

[38] Speckbacher G, Bischof J, Pfeiffer T. A descriptive analysis on the implementation of balanced scorecards in German‐speaking countries. Management Accounting Research. 2003;14(4):361–388. DOI: 10.1016/j.mar.2003.10.001.

[39] Kelly K. Accuracy of relative weights on multiple leading performance measures: effects on managerial performance and knowledge. Contemporary Accounting Research. 2010;27(2):577–608. DOI: 10.1111/j.1911‐3846.2010.01017.x.

[40] Chen M‐Y, Li Z, Zhou L, Xiong H, An X. SCGM‐model and grey control of "poor" information systems. Kybernetes. 2004;33(2):231–237. DOI: 10.1108/03684920410514157.

[41] Deng J. Control problems of grey systems. Systems and Control Letters. 1982;1(5):288–294.

[42] Lin Y, Chen M‐Y, Liu S. Theory of grey systems: capturing uncertainties of grey information. Kybernetes. 2004;33(2):196–218. DOI: 10.1108/03684920410514139.

[43] Gujarati D N, Porter D C. Basic Econometrics. 5th ed. Singapore: McGraw‐Hill Education Ltd.; 2009. p. 922.

[44] Kelley K, Maxwell S E. Sample size for multiple regression: obtaining regression coefficients that are accurate, not simply significant. Psychological Methods. 2003;8(3):305–321. DOI: 10.1037/1082‐989X.8.3.305.

[45] McDonald R P, Ho M‐H R. Principles and practice in reporting structural equation analyses. Psychological Methods. 2002;7(1):64–82. DOI: 10.1037//1082‐989X.7.1.64.

[46] Anderson J C, Gerbing D W. Structural equation modeling in practice: a review and recommended two‐step approach. Psychological Bulletin. 1988;103(3):411–423. DOI: 10.1037/0033‐2909.103.3.411.

gleichungs‐Modells]. Schmalenbachs Zeitschrift für betriebswirtschaftliche Forschung. 2006;58(1):2–33. DOI: 10.1007/BF03371642.

[59] Saaty T L. Decision Making—The analytical hierarchy and network processes (AHP/ANP). Journal of Systems Science and Systems Engineering. 2004;13(1):1–35. DOI: 10.1007/s11518‐006‐0151‐5.

[60] Markóczy L, Goldberg J. A method for eliciting and comparing causal maps. Journal of Management. 1995;21(2):305–333. DOI: 10.1177/014920639502100207.


#### **Application of Lean Methodologies in a Neurosurgery High Dependency Unit**

Ricardo Balau Esteves, Susana Garrido Azevedo and Francisco Proença Brójo

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64715

#### **Abstract**


This study aims to apply Lean methodologies at a neurosurgery high dependency unit (NHDU) to increase the safety and quality of the care delivered to acute neuropatients and to reduce the time, steps, and distance travelled by nurses accessing life support equipment (LSE). The methodology used is action research, supported by a longitudinal mixed‐method approach with a one‐group within‐subjects pretest‐posttest experimental‐type design. The diagnostic assessment showed that LSE were difficult to locate and access, resulting in a high waste of time, steps, and distance travelled to reach them. After the application of Lean methodologies, the distance, steps, and time travelled by nurses improved considerably. Lean methodologies applied in the NHDU contributed to improving the organization, availability, and accessibility of LSE by placing them at the point‐of‐use. Quality and safety of patient care were also improved by allowing almost immediate life support interventions. Resistance to change was the major limitation. The Lean philosophy empowers health facility managers with tools and methodologies that help them create health gains, implement a culture of continuous improvement of care and of the working environment, and identify and eliminate the barriers and waste that limit the work of staff in providing quality services and saving lives. This chapter highlights the responsibility of health facility managers to properly organize health units to cope with emergency situations, by allowing immediate, efficient, and effective intervention of staff.

**Keywords:** lean methodologies, critical care nursing, management, work simplification, action research

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

Imagine that in a ward or in an acute care unit, a patient develops sudden, severe laryngeal edema and stops breathing due to obstruction of the respiratory tract. The nursing and medical staff start advanced life support (ALS) maneuvers. The primary, emergent intervention is to secure the airway by accessing the trachea with an endotracheal tube. When this is not possible by the usual routes, the only solution is to perform a tracheotomy or cricothyrotomy, using a tracheotomy surgical tray (TST) or an emergency cricothyrotomy kit (ECK). However, the health team often does not know of their existence or location, or has difficulty accessing them in due time, which could result in the loss of a life.

The Portuguese Directorate‐General of Health (DGS) points out in Circular Normative no. 15/DQS/DQCO of 22/06/2010 [1] that "patients who are admitted in hospitals believe that they are being admitted to a safe environment. They feel confident that if their clinical condition gets worse, they are in the best place for a prompt and effective intervention. However, there is some evidence that this does not always happen" (p. 1). This Circular also states, "ALL inpatient areas should have easy and immediate access to equipment, supplies and emergency drugs. They should be organized and stored in a standardized way… throughout the health unit" (p. 6). However, compliance with these recommendations depends, above all, on the political and management decisions of legislators, regulators, managers, and industry providers, and it falls within the health institutions' jurisdiction to "adequate resources and create the structures that leads to quality professional practice" [2]. For the Portuguese Republic Government (Governo da República Portuguesa—GRP) it is "fundamental, that the available resources are better used, avoiding waste, that is, improving management, transparency, and accountability for the use of money from the citizens" [3]. Corvi [4] draws attention to the "waste epidemic in health care," as acknowledged by the GRP, and thus to a great opportunity to improve, a spirit of continuous learning being fundamental as part of "implementing a lean management system" [4].

The Intensive Care Society [5] recommends that "all critical care areas should have their own, appropriately stocked and checked difficult airway trolley to deal with airway and tracheostomy emergencies" (p. 11). The absence or inaccessibility of this kind of equipment can lead to adverse events with a huge impact on the safety and lives of patients, mainly critical ones. On the impact of layout configurations in the hospital environment, Soriano‐Meier et al. [6] point out that "inadequate facility layout negatively affects the performance of the service staff, the quality of care provision and the service temporally over time" (p. 255).

At a particular neurosurgery high dependency unit (NHDU) of a central hospital in Lisbon, problems related to design, layout, architectural barriers, accessibility to life support equipment (LSE), and wastes of time, handling, and transport were identified. Those with the greatest potential impact on patients and on the care provided by nurses are as follows: (a) accessibility to LSE, (b) difficulty/inexperience in the use of the resuscitation trolley (RT), and (c) lack of knowledge of the existence and location of the ECK and TST. This chapter summarizes the action research study undertaken to test the application of Lean methodologies at the NHDU, with the purpose of supporting quality and safety in the provision of care to acute neuropatients and of reducing, at least by half, the time, steps, and distance travelled by nurses accessing LSE. Gemba walk, value stream mapping, spaghetti diagrams, 5S, and JIT (just‐in‐time) were the main Lean methodologies used.

Gemba is the Japanese term for the shop floor, where products are produced or services are provided [7, 8]. To start an improvement project, it is critical to analyze the current, real situation of an organization or its workplace. Therefore, a gemba walk should examine processes, setup times, physical layout, and surroundings with an open mind, to detect where, how, and why clients and staff experience problems [7–9].

Value stream mapping (VSM) is one of the main Lean methodologies targeting waste elimination in any organization [10]. VSM helps to identify and analyze, for example, problems experienced by stakeholders, medication errors, the flow of processes and work, and financial performance, among others. VSM allows checking (visually and graphically) the current state of a particular procedure, its productive time (value‐added), and its non‐productive time (non‐value‐added) [11].
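The arithmetic behind a current‐state map can be reduced to the share of value‐added time in the total lead time. The sketch below is purely illustrative; the step names and durations are hypothetical, not measurements from the study:

```python
# Minimal value-stream summary: each step carries a duration in seconds and
# a flag saying whether it adds value for the patient. Illustrative data only.
steps = [
    ("walk to storage room", 45, False),   # transport waste
    ("search for equipment", 120, False),  # motion/search waste
    ("prepare equipment",    60,  True),   # value-added
    ("return to bedside",    45, False),   # transport waste
    ("apply intervention",   90,  True),   # value-added
]

lead_time = sum(d for _, d, _ in steps)           # total elapsed time
value_added = sum(d for _, d, va in steps if va)  # productive time
efficiency = value_added / lead_time              # process cycle efficiency

print(f"lead time: {lead_time}s, value-added: {value_added}s, "
      f"efficiency: {efficiency:.1%}")
```

Mapping steps this way makes the non‐productive share explicit, which is what the current‐state/future‐state comparison in VSM turns into improvement targets.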

Jackson [12] argues that 5S is the foundation of the Toyota production system (TPS). What this methodology tries to ensure is an orderly, organized workspace for an efficient and safe work environment [10], increased productivity, fewer errors, and less waste [8]. 5S represents the five levels of the methodology, whose Japanese names each start with the letter "S": SEIRI (sort), SEITON (set in order), SEISO (shine), SEIKETSU (standardize), and SHITSUKE (sustain). Smart [13] summarizes the methodology with the expression "a place for everything and everything in its place" (p. 62).

JIT is a production process that targets the optimization of the process as a whole, in a continuous flow of improvement, and tries to answer the organization's or service's needs. Briefly, it means producing no sooner, no later, neither more nor less, only and just what is necessary [8, 10].

Another Lean methodology is the spaghetti diagram. This diagram consists of a graphic reproduction of the architectural floor plan of a structure, on which lines are drawn from one space to another, representing the path taken by employees, customers, and objects during a particular process (round trip) [12, 14]. It allows documenting and visualizing the physical flow, in order to identify wasted motion or transportation, architectural barriers, and improvement opportunities to expedite process flow [15].
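Quantitatively, a spaghetti path reduces to an ordered list of waypoints on the floor plan, and the travelled distance is the sum of the straight‐line segment lengths. The coordinates below are hypothetical, not taken from the NHDU floor plan:

```python
import math

# Waypoints (x, y) in metres on a floor plan; the path is the ordered visit
# sequence, e.g. nurses' station -> storage -> bedside -> nurses' station.
# Coordinates are illustrative only.
path = [(0.0, 0.0), (12.0, 5.0), (12.0, 14.0), (0.0, 0.0)]

def path_length(points):
    """Total straight-line distance along consecutive waypoints."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

print(f"distance travelled: {path_length(path):.1f} m")
```

Comparing this total before and after a layout change gives the distance‐reduction figure that the diagram makes visible graphically.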

This chapter is organized as follows: following this introduction, a literature review on the Lean philosophy is presented, then the methodology used in the research, and then the results of the action research are described. Finally, some discussion and conclusions are drawn.

#### **2. Lean philosophy**


It was through Krafcik [16] that the term Lean was popularized, referring to the TPS as a lean production system: a system that uses fewer resources compared to mass production systems; less effort, less capital investment, less space, and less time [17]. The Lean philosophy is essentially focused on waste reduction as a means to increase actual value added, in order to fulfill customer needs and maintain profitability [17]. The fundamental focuses of Lean are respect for people, teamwork, waste elimination, continuous improvement, value, quality, and safety [8, 16–18]. Several authors have highlighted these and other key principles of the Lean philosophy, as follows: (i) customer relationship [19]; (ii) total quality management (TQM) [20]; (iii) JIT [21, 22]; (iv) pull production/flow [19, 20]; (v) supplier relationships/long‐term business relationships [21]; (vi) mistake‐proofing [23]; (vii) total productive maintenance (TPM) [22]; and (viii) physical layout [6]. At the operational level, the Lean paradigm is implemented using a number of techniques such as kanban, 5S, visual control, takt time, poka‐yoke, and single minute exchange of die (SMED) [24].

To Imai [8], the importance of applying a Lean philosophy in an organization has at least three components: (1) any activity or process that does not add value is waste, independently of whether it is performed by people or machines; (2) the reduction or elimination of waste may be the most cost‐effective way to improve productivity and reduce operating costs, instead of increasing investment in the hope of adding value; moreover, investing in new equipment is expensive, while eliminating waste, in most cases, has no cost; and (3) standardization of processes ensures quality and error prevention. Womack et al. [17] documented the benefits of the Lean philosophy compared to the mass production model, arguing that this philosophy would succeed not only in the automotive industry or aviation, but also in all activities, from distribution and retail to healthcare. While not the solution to all the problems that health services face today, the Lean philosophy can bring significant benefits to this sector and to a range of hospital areas [25], helping to embed continuous improvement in the organizational culture and improving quality of care, efficiency, and effectiveness, while reducing costs, errors, and waste.

In the Portuguese healthcare sector, the implementation of the Lean philosophy has focused on some specific areas such as quality [26, 27], logistics, supply and storage [28–30], agility and continuous process improvement [31–33], workplace reorganization [34], and reducing waiting times [35, 36], particularly in services such as community health centers [37], the operating room, imaging, ophthalmology, outpatient clinics, wards, pharmacies, and warehouses. Other studies focused on conducting systematic reviews [38, 39]. There is thus a research gap in applying the Lean philosophy to in‐hospital medical emergency, especially in inpatient critical care services.

#### **3. Methodology**

The methodology used in this study was action research, supported by a longitudinal mixed‐method approach with a one‐group within‐subjects pretest‐posttest experimental‐type design.

Lewin [40] suggests the existence of a cycle in action research. It begins with the diagnosis and identification of the problem(s) with all participants in a democratic way, followed by the proposal and planning of interventions and actions of change. Subsequently, the impact of the changes is monitored; the data are collected, analyzed, and interpreted; and finally the results are reported. This is a flexible research methodology that integrates an exploratory action in order to investigate and support the implementation of changes according to the diagnosis raised [41]. Action research requires that the researcher participate in the change process, since the suggested changes are implemented by the researcher, who must "take action to improve the practice and study … the effects of the action taken" [42]. Yin [43] considers this methodology a variant of qualitative research that emphasizes the researcher's active role and collaboration with the research participants.

#### **3.1. Research design**


The research was performed at a four‐bedded, level‐2 patient care NHDU. This unit shares human resources, equipment, and materials with the 44‐bedded standard care neurosurgery and neurotraumatology wards. The NHDU is a healthcare facility specialized in the care of neuropatients undergoing neurological, hemodynamic, and respiratory instability, with an eventual need for non‐invasive or invasive ventilatory support via tracheotomy. These patients require critical care nursing and permanent vigilance: although they do not require intensive care, their condition may quickly evolve to a severe status requiring immediate intervention. The nurse:patient ratio is 1:4. The unit is located in one hospital of an 802‐bedded, three‐hospital centre in the metropolitan area of Lisbon (Portugal), which serves a population of about one million people. Data from the 2013 institutional performance reports show a surgical volume of 1423 neurosurgeries and bed occupancy rates of 87.7% and 91.4% at the neurotrauma and neurosurgery wards, respectively.

The research was authorized by the NHDU Medical Director, the NHDU Chief Nurse, and the Ethics for Health Committee of the hospital centre. The unit of analysis is the NHDU with its corresponding nurse team. Convenience sampling was used, attending to nurses' availability during the researcher's visits. The two nurses of the management team (chief and coordinator) were excluded from this sample, since the purpose was to simulate the performance of the direct care nurses. Thus, from a population of 20 nurses, a sample of 12 nurses (60%) was selected. This is a longitudinal study in which data were collected at two points in time, which allowed studying the changes that occurred during the period in which it was conducted (November 2014 to January 2015).

The research design followed several phases. The three main phases were (1) pre‐intervention, (2) intervention, and (3) post‐intervention, in which a simultaneous mixed‐method approach (qualitative and quantitative) was applied. The pre‐intervention phase was further divided into three sub‐phases: (i) diagnostic assessment (qualitative approach), (ii) simulation (quantitative approach), and (iii) proposal of changes (qualitative approach). The intervention phase consisted of the application of the 5S and JIT Lean methodologies. The post‐intervention phase was divided into two approaches: (i) simulation and (ii) unstructured interview.

The pre‐intervention diagnostic assessment sub‐phase involved the following activities: (a) direct observation of the physical space performed by the participant researcher (PR), focused mainly on the layout of the NHDU and the location of existing materials and equipment. To support the gemba walk, pictures and paper records with a graphical representation of the service plan were used to complement the visual management and spaghetti diagram. The transition to digital records was made using Microsoft® Office® 2013 software. (b) Personal unstructured interviews performed by the PR with the nurse team, and questionnaires to identify the difficulties and constraints of nurses in their professional daily routines, especially in emergency situations. The questionnaires were anonymous and blind in order to guarantee confidentiality. The participants returned them in a sealed envelope deposited in a container left in the nursing room. The analysis of questionnaires and interviews was performed using qualitative content analysis, organized according to the research variables, the types of waste considered by the Lean philosophy, and the suggestions of change made by the participants.
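A spaghetti diagram can be quantified by summing the straight‐line segments of the traced route. The sketch below is a minimal illustration of that computation; the waypoint coordinates are hypothetical and are not taken from the NHDU floor plan:

```python
import math

def path_length(points):
    """Total distance walked along a route traced on a spaghetti diagram."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical waypoints (in metres): nurses' station -> corridor ->
# treatment room, and the same way back.
route = [(0, 0), (0, 12), (9, 12), (9, 21), (9, 12), (0, 12), (0, 0)]
print(f"Round trip: {path_length(route):.1f} m")  # prints "Round trip: 60.0 m"
```

Tracing the same route before and after a layout change and comparing the two lengths gives the distance reduction the diagram is meant to expose.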

The pre‐intervention simulation sub‐phase was accomplished by measuring the time, distance, and number of steps (dependent variables) taken by nurses to access the LSE (RT, ECK, TST, and automated infusion systems (AIS)). A simulation context was used because, during the research, it was not possible to monitor the tasks performed by nurses in a real context. As measuring instruments, a Nokia® 6230 mobile phone chronometer was used to monitor timing performance in seconds, rounded to the unit, and a sixty‐meter Stanley PowerWinder® tape was used to measure the distance travelled by nurses, with data rounded to the first decimal place. The PR counted the number of steps, and the data were triangulated with the participants themselves. The monitoring covered the route from the point of departure (nurses' station) to the LSE and back to the starting point with the respective LSE.

The third pre‐intervention sub‐phase was completed by presenting the suggested changes, as a proposal in the manner determined by Lewin [40], to the Medical Director and Chief Nurse of the NHDU.

The intervention phase consisted of the application of the lean 5S and JIT methodologies for the reorganization of the physical space, equipment location, and NHDU inventory. The tasks performed by the researcher in this phase consisted of organizing the contents of the NHDU large cabinet and relocating and making available the TST and AIS. The reorganization of the RT, ECK, and NHDU small cabinets was performed with the help of the nurses' management and direct care teams. Other human resources, such as nurses' aides and the hospital carpentry services, were involved to perform small changes and to construct small furniture. Stock boxes abandoned in the hospital storage were recycled and used for better storage and visual management of cabinet contents.

The post‐intervention phase was divided into two sub‐phases: (i) simulations, using the same methodology and equipment applied in the pre‐intervention, and (ii) unstructured interviews, using the same methodology as in the pre‐intervention, to collect the opinion of nurses regarding the interventions made to the unit and how these influenced their daily routines and professional practice.

The quantitative results are presented comparing the pre‐intervention with the post‐intervention phases, allowing a more direct comparison of the data. IBM SPSS Statistics version 21 and Microsoft® Office® 2013 Excel version 15 were used for the statistical analysis of the data. For the statistical hypothesis tests, the parametric Student's t‐test with a significance level of 0.025 (one‐sided) was used, as well as the nonparametric Wilcoxon W‐test with an exact significance of 0.025 (one‐sided) for the non‐normally distributed data [44]. The standardized response mean, calculated with MedCalc Statistical Software version 15.2.2, was used to analyze the effect size (Cohen's d) of the intervention made by the application of Lean methodologies, representing the independent variable. The qualitative results are summarized in tables with transcriptions of the nurses' opinions collected in the interviews and a summary of the answers given in the questionnaires. Spaghetti diagrams and photographs are also used for better contextualization.
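As a sketch of the paired pre/post analysis described above, the following computes the paired Student's t statistic, the standardized response mean (the paired‐data effect size), and the percentage change. The twelve access times below are invented for illustration only; they are not the study's data, and the actual analysis was run in SPSS and MedCalc:

```python
import math
from statistics import mean, stdev

def paired_analysis(pre, post):
    """Paired pre/post comparison: t statistic, SRM effect size, % change."""
    diffs = [b - a for a, b in zip(pre, post)]
    sd = stdev(diffs)
    t = mean(diffs) / (sd / math.sqrt(len(diffs)))  # paired Student's t
    srm = mean(diffs) / sd                          # standardized response mean
    pct = 100 * (mean(post) - mean(pre)) / mean(pre)
    return t, srm, pct

# Hypothetical access times in seconds for 12 nurses (pre vs post).
pre  = [26, 33, 38, 40, 41, 44, 46, 49, 52, 55, 60, 76]
post = [ 3,  4,  4,  5,  5,  5,  5,  5,  5,  6,  6,  6]
t, srm, pct = paired_analysis(pre, post)
# One-sided alpha = 0.025 with df = 11: reject H0 (no change) if t < -2.201.
print(f"t = {t:.2f}, SRM = {srm:.2f}, change = {pct:.1f}%")
```

A negative SRM whose magnitude exceeds 0.8 is conventionally read as a large effect, which is the criterion applied to Cohen's d in the results below.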

Based on the literature review and the pre‐intervention phase, the following hypotheses were formulated:

H01: The difference of TST time of access between pre‐ and post‐intervention equals zero.

H02: The difference of TST distance of access travelled between pre‐ and post‐intervention equals zero.

H03: The difference of TST number of steps of access between pre‐ and post‐intervention equals zero.

H04: The difference of AIS time of access between pre‐ and post‐intervention equals zero.

H05: The difference of AIS distance of access travelled between pre‐ and post‐intervention equals zero.

H06: The difference of AIS number of steps of access between pre‐ and post‐intervention equals zero.

#### **4. Results of the action research**


During the Gemba walk, twelve unstructured interviews were carried out with nurses in order to identify the difficulties in their professional daily routines and the kind of improvements they would like to implement in the NHDU (**Table 1**). The collected data focused mainly on the inadequate layout and location of equipment, poor organization of clinical material in the NHDU cabinets and patients' units, obstacles, restricted circulation and workspaces, frequent journeys out of the NHDU to supply missing materials and equipment, and difficulty in implementing improvements because of a great resistance to change. According to these interviews, only 50% of nurses knew the existence and location of the TST, and only 33.3% of nurses knew the existence or location of the ECK. All participants were aware of the AIS and RT. After the Lean methodologies' intervention and education, 100% of the participants were aware of all life support equipment.

In addition to the interviews, questionnaires were delivered to 12 nurses and eight were returned, representing a 67% response rate. The purpose of the questionnaire was to identify the set of difficulties felt by nurses in their daily professional life in the NHDU, mainly in emergencies and in the monitoring and surveillance of the acute neurosurgical patient. The questionnaire also made it possible to study the kinds of waste (according to the Lean philosophy) the nurses identify. The data collected from the questionnaires are summarized in **Table 2**, which includes suggestions provided by the respondents.


A "My greatest difficulty in NHDU is to always have to go out of the unit to look for supplies… either because we do not have a specific location for them either it was not replaced… Medication and serums, forget it…"

B "We should have an adequate level of stocks according to our needs and not have to always go 'out there' seek for supplies."

C "The NHDU should be independent from all resources of Neurosurgery… Nurses and nurses' aides should be dedicated to NHDU… Stock, equipment and supplies should be replenished regularly and directly by the supply and pharmacy services."

D "The vital signs monitors should be fixed to the wall for not taking up space in patients' desk… and because sometimes they drop of the desk, usually when pulled by confused patients."

E "It's hard to work when there is not enough space to move around the patient bed without going against curtains, literally upon us, against wheelchairs and other patient's beds."

F "There is neither space nor conditions to lift patients to an armchair or wheelchair."

G "Patients from one bed can touch and reach things of next patients because everything is so tight and so close to each other… Patients are potentially contaminating each other… and we ourselves have a hard time for this cross‐contamination doesn't happen, I am sure it does eventually happen."

H "We have no space to put a RT next to the patient's units… it is impossible to make secure ALS with the available space that we have."

I "We usually are trained in basic life support every year, but we should also be trained in the use of RT and ALS… I have some difficulties in perceiving the location of clinical materials in the RT because there is a bad visual perception of it."

J "I have little practice in the use of the RT, mainly the defibrillator… We should have training…"

K "Practices adopted in NHDU goes against scientific evidences… but it is difficult here to make whatever change we need… some people do not understand what good practices are."

L "The NHDU has a lack of identity and autonomy."

M "There is a lack of standards for admittance and clearance of patients… Even the doctors and some nurses do not understand that we only have capacity for 4 patients."

N "We cannot take any initiative to improve anything, because they fear us to take their place."

O "They never listen to us. They do not realize, or understand, the staff who are working with them. We could make a great contribution to the better functioning of the unit."

P "There is a huge resistance to change… There is a fear of loss or prestige transfer."

**Table 1.** Excerpts from interviews.


**Table 2.** Data collected from questionnaires.

According to the previous results and the analysis of the interviews, questionnaires, spaghetti diagrams, value stream mapping (data not shown), and simulations, a set of suggestions was proposed by the PR to the Medical Director and Chief Nurse of the NHDU (third pre‐intervention sub‐phase). This proposal was drawn up from the collected data attending to the Lean philosophy, the recommendations of best practices, and the standards of Portuguese regulatory institutions. The proposal considers several suggestions for procedural amendments, layout updates of the physical space, RT and NHDU cabinet contents, and different locations for the clinical material and equipment. Briefly, these suggestions were the following:


**1.** Place suction probe supports on the wall at each bed side (accepted);

**2.** Place water bottle supports to ensure suction tube washing after manipulation (accepted);

**3.** Place mobile IV pole with AIS mounted at each bed side (accepted);

**4.** Remove vital signs monitors from patients' desks and fix them on the wall (accepted);

**5.** ALS and RT handling workshops for nurse training and education (accepted);

**6.** RT standardization (accepted);

**7.** Place TST at NHDU next to nurse station (accepted);

**8.** Reorganization of NHDU cabinets to improve content access, variety, and identification (accepted);

**9.** Place drug vault at NHDU (rejected);

**10.** Place double air and oxygen pressure regulators at each patient unit (rejected);

**11.** Place manual ventilator at each unit in the presence of a tracheotomized patient (rejected);

**12.** Organize trolley with clinical material for isolation room (rejected);

**13.** Eliminate one of the beds to increase circulation space (rejected).

After the approval or disapproval of each suggestion, the Lean methodologies (5S, JIT) were applied to ensure a better and safer work environment for patients and staff. The cabinets were reorganized into categories covering the various patients' needs, such as breathing; elimination; circulation and administration; dressings and skin integrity; feeding; and individual protection equipment. Sliding frosted glasses were removed from the cabinets, and it was possible to reduce and optimize the occupied space without decreasing the amount of material, but rather increasing its variety and availability, as seen in **Figure 1**, which also illustrates the post‐intervention TST location. The patients' units were likewise reorganized with the inclusion of supports for suction probes, water bottles, and AIS (on mobile IV poles). Vital signs monitors were placed on a new shelf at each patient's unit, and ALS and RT workshops have been scheduled for nurse training and education.

**Figure 1.** Supplies in NHDU large cabinet before and after Lean intervention.

**Figure 2.** Spaghetti diagram for TST access before and after Lean intervention.

The presence of tracheotomized patients, or patients at risk of being tracheotomized, in the NHDU is constant; therefore, the availability and accessibility of LSE, particularly the ECK and TST, are of extreme importance. In the pre‐intervention phase, the TST was in the treatment room of neurosurgery, about 63 m (round trip) from the NHDU nurse station. Changing its location to the large cabinet inside the NHDU decreased the distance to 6 m (round trip) from the nurse station. **Figure 2** represents the spaghetti diagram made before and after the Lean intervention for the TST.

**Table 3** shows the quantitative results obtained from the simulations of TST accessibility before and after the Lean intervention. The data show the reduction of waste in time (−87.35%), distance (−90.47%), and steps (−87.12%) achieved with the application of Lean methodologies. According to Cohen's d, the effect size is large. The Shapiro‐Wilk normality test (data not shown) rejected the normality of the distance distribution (p < 0.001). So, for a one‐sided significance level of 0.025, the alternative hypotheses that time (p = 0.0017), number of steps (p = 0.000015), and distance (p = 0.016) were significantly lower after the application of Lean methodologies were accepted.
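The percentage decrease and the percentage improvement reported for each variable are two views of the same change: the former is relative to the pre‐intervention value, the latter relative to the post‐intervention value. A worked check using the TST round‐trip distance stated above (63 m pre, 6 m post):

```python
pre, post = 63.0, 6.0  # TST round-trip distance in metres, pre vs post

decrease = 100 * (post - pre) / pre      # change relative to the old value
improvement = 100 * (pre - post) / post  # change relative to the new value

print(f"decrease = {decrease:.2f}%")        # -90.48% (reported as -90.47%)
print(f"improvement = {improvement:.1f}%")  # 950.0%
```

The small discrepancy between −90.48% and the −90.47% in the text is a rounding difference only.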


| | Time A | Time B | Time ∆ | Time ∆% | Distance A | Distance B | Distance ∆ | Distance ∆% | Steps A | Steps B | Steps ∆ | Steps ∆% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| M | 45.5 | 4.58 | −40.5 | −87.35 | 64.03 | 6 | −58.03 | −90.62 | 60.17 | 7.5 | −52.5 | −87.12 |
| Mdn | 39.5 | 5 | −34 | −86.1 | 63 | 6 | −57 | −90.47 | 60.05 | 8 | −52.5 | −87.32 |
| SD | 18.87 | 0.79 | 19.07 | 5.08 | 2.43 | 0 | 2.43 | 0.34 | 9.75 | 0.91 | 9.05 | 1.71 |
| Max | 76 | 6 | −71 | −93.42 | 69 | 6 | −63 | −91.3 | 70 | 9 | −62 | −89.09 |
| Min | 26 | 3 | −21 | −80.77 | 63 | 6 | −57 | −90.48 | 45 | 6 | −38 | −84.44 |
| Range | 50 | 3 | −50 | −12.65 | 3 | 0 | 6 | −0.82 | 25 | 3 | −24 | −4.65 |

Time: *t* (df = 5) = −5.2, 95% CI [−60.52; −20.48], *p*a = 0.0017, Cohen's d = −2.12. Steps: *t* (df = 5) = −14.21, 95% CI [−61.99; −43], *p*a = 0.000015, Cohen's d = −5.8. Distance: *W*b *Z* = −2.264, exact *p*a = 0.016, Cohen's d = −23.84.

A: Pre‐intervention (n = 6). B: Post‐intervention (n = 12). Collective and paired data.

a One‐sided 0.025 significance. b W‐test with exact significance.

**Table 3.** Results from TST accessibility.

Although the ECK is correctly located in the RT, 66.7% (n = 8) of the nurses were unaware of its existence or location. For ethical reasons, there was an imperative and urgent need to educate them, which was done by the PR for the whole nurse team. In order to identify the difficulties of nurses in using the RT, simulations were performed. These simulations consisted of locating and accessing all RT contents, especially the ECK. Through direct observation, it was found that all 12 nurses had difficulties such as the following: safety seal breakage; retraction of the safety latch; removal of the back board; opening drawers owing to poor perception of the handle; finding and identifying critical medications and supplies; swing arm handling; and use of equipment, including the heart defibrillator. After the simulations, nurses justified their difficulties as a result of little practice and/or experience. An ALS and RT handling workshop intervention was added to the nurses' continuous education plan.

In the pre‐intervention phase, the AIS were in a storeroom, forcing nurses into constant movement and transportation of about 84 m (round trip). The Lean 5S and JIT methodologies determined changing the AIS location to a mobile IV pole next to the patients' unit, permanently connected to electricity in order to ensure its permanent availability (**Figures 3** and **4**).

Application of Lean Methodologies in a Neurosurgery High Dependency Unit http://dx.doi.org/10.5772/64715

**Figure 3.** AIS location before and after Lean intervention.

**Figure 4.** Spaghetti diagram for AIS access before and after Lean intervention.


A: Pre‐intervention (*n*=6). B: Post‐intervention (*n*=12).

a One‐sided 0.025 significance.

b *W*‐test with exact significance.

**Table 4.** Results from AIS accessibility.

After the intervention and application of the Lean methodologies, the AIS mean access was 96.27% lower in time, 95.83% lower in steps, and 96.41% lower in distance than in the pre‐intervention. The effect size is large (or very large), with d = −10.28 for time, d = −11.58 for number of steps, and d = −5 × 10¹⁵ for distance. For the hypothesis tests, the Shapiro‐Wilk normality test rejected the normality of the steps (p = 0.039) and distance (the latter constant) distributions. So, for a one‐sided significance level of 0.025, the alternative hypotheses that time (p = 5.64 × 10⁻¹³), number of steps (p = 0.00024), and distance (p = 0.00024) were significantly lower after the Lean methodologies' application were accepted, as shown in **Table 4**.

The quantitative results associated with the hypothesis tests, the effect sizes, and the improvements in accessibility to TST and AIS are summarized in **Table 5**.


| Hypothesis | Statistical test | *p*‐valuea | Effect size | Percentage variation (decrease) | Percentage variation (improvement) |
|---|---|---|---|---|---|
| H01: TST time of access | Paired samples *t*‐test | 0.0017 | −2.12 | −87.35% | 837.22% |
| H02: TST distance of access travelled | Paired samples *W*‐test | 0.016 | −23.84 | −90.47% | 950% |
| H03: TST number of steps of access | Paired samples *t*‐test | 0.0000151 | −5.8 | −87.12% | 687.46% |
| H04: AIS time of access | Paired samples *t*‐test | 5.64 × 10⁻¹³ | −10.28 | −96.27% | 2733.8% |
| H05: AIS distance of access travelled | Paired samples *W*‐test | 0.00024 | −5 × 10¹⁵ | −96.41% | 2686.7% |
| H06: AIS number of steps of access | Paired samples *W*‐test | 0.00024 | −11.58 | −95.84% | 2310% |

a α = 0.025 one‐sided.

**Table 5.** Summary results from quantitative data.

#### **5. Discussion**

After all the action research phases were performed, it was demonstrated that the application of Lean methodologies contributes to improving the accessibility of equipment and material that are essential to nurses' safe practice. With the application of the Lean methodologies, it is possible to provide optimized care to acute neurosurgical patients in emergency and life support situations. Lean methodologies such as the Gemba walk and the spaghetti diagram made it possible to identify wastes and difficulties in LSE accessibility, in the organization and provision of other clinical equipment and supplies, and security issues such as potential cross‐contamination provoked by exiguous work areas and architectural barriers. The 5S and JIT philosophies, together with the interviews and questionnaires, led to the development of a grounded interventional proposal for a functional and organizational harmonization of the NHDU. Each suggestion in the proposal was then analyzed by the medical and nurse unit managers, granting approval or refusal to certain interventions. The implementation of the 5S and JIT methodologies led to the reorganization of the NHDU and the allocation of equipment closer to patients and nurses, as well as to the decrease of waste and non‐value‐added activities and to significant improvements. The same results are argued by Carvalho et al. [45], who defend that the layout must "reflect the need to reduce the time spent traveling" (p. 291), since "time 'lost' in travel between the various services… represents a cost to the organization in question, and that, in most cases, is not noticed or accounted for" (p. 291). For example, a nurse who searches for drugs, supplies, and equipment is doing it to serve the needs of patients, but may not notice that it can result in a waste of time, transport, handling, and human potential. According to the Institute for Healthcare Improvement [46], if these materials were readily available when, how, and where they are needed (JIT), the time that nurses wasted looking for them could instantly be devoted to other, more appropriate and critical tasks.


Through action research and the application of Lean methodologies, nurses of the NHDU now take only 10% of the time, 9.37% of the distance travelled, and 12.46% of the steps previously spent accessing the TST compared to the pre‐intervention. The results of the intervention in the AIS showed an even more significant improvement, since the post‐intervention access time is just 3.77% of the pre‐intervention time, the distance just 3.59%, and the number of steps only 4.21% compared to the pre‐intervention. To achieve this, nurses were educated about the location of the LSE, and the need to train these nurses in ALS and RT handling was identified. Wastes and barriers that conditioned rapid access and action for acute patients were identified, reduced, or removed. The time, steps, and distance travelled accessing the LSE were reduced by far more than half (−87.12% to −96.41%).

Similar results have been reached in other studies. Virginia Mason Medical Center (VMMC) in Seattle (USA) is credited as one of the pioneers in the healthcare industry to implement Lean by applying its own Virginia Mason Production System (based on TPS) [47]. Since 2001, VMMC has made efforts in the reorganization of spaces and workflows, minimizing transportation and handling wastes, with all clinical equipment and supplies essential to care placed at the point of use. In the UK's Hereford Hospital, Lean methodologies also led to reductions in the delay of nurses' response time of between 40 and 93% [48]. In Scotland, in a sample of 19 critical care units, nurses' available time increased from 35 to 64%, with 32% of these units reaching changes greater than 100%, supported by the program Releasing Time to Care: The Productive Ward, based on Lean and six sigma methodology [49].

In this study, a significant and serious lack of nurses' knowledge of the existence and location of LSE was found. Intervention through education, awareness, and change of location resulted in an improvement of 100% for the TST and 200% for the ECK, leading to health benefits for patient safety and quality of care. Still regarding the ECK and the RT, the simulation demonstrated the difficulties experienced by nurses in the use of the RT, particularly in opening it, using the drawers, and locating and rapidly visualizing its contents. From this analysis emerged the imperative and urgent need for nurses' professional training and for clearly defined intervention criteria in emergency situations. This is in line with Silich et al. [50], who also highlight that informed and trained professionals provide better care, with a potential reduction of adverse events and bad practices, and less waste of resources.
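The 100% and 200% knowledge-improvement figures follow from the relative increase in the share of nurses aware of each piece of equipment, given the pre‐intervention awareness rates reported earlier (50% for the TST, 33.3% for the ECK) and the 100% post‐intervention awareness. A quick arithmetic check:

```python
def relative_improvement(pre_rate, post_rate=1.0):
    """Relative increase in the share of nurses aware of the equipment."""
    return 100 * (post_rate - pre_rate) / pre_rate

print(f"TST: {relative_improvement(0.50):.0f}%")   # 50% -> 100% aware: +100%
print(f"ECK: {relative_improvement(1 / 3):.0f}%")  # 33.3% -> 100% aware: +200%
```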

Catchpole [51] argues that the undesirable effects of an inadequate working environment can result in fatigue, frustration, reduced performance and human capacity, increased risk, and adverse events. Hence the importance of health facility managers and of the impact of their decisions on patients and staff: "usually, it is the intermediate and elementary level manager, involved in everyday decisions, that affect the care that is actually provided to patients" [52].

#### **6. Conclusion**

This research was intended to intervene in the reality studied by solving the identified problems in an effective and participatory manner (through action), not merely to explain them or propose a solution. The impact on practice and health services (quality indicators, safety, and satisfaction) of the Lean interventions carried out by the PR is well grounded in the results. This research verified that 66.7% of nurses were unaware of the existence or location of the ECK and 50% of the TST. The education intervention resulted in an improvement of knowledge of 100% for the TST and 200% for the ECK, leading to potentially high health gains for the patient, because trained professionals provide better care with fewer mistakes. Furthermore, this research identified needs for periodic training and education on ALS and RT practice. Through Lean methodologies such as 5S, JIT, and spaghetti diagrams, it was possible to decrease the time, steps, and distance travelled by nurses accessing the TST and AIS by between 87.12% and 96.41% and to improve this accessibility by between 687.46% and 2733.8%.
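The reported reduction and improvement percentages are two views of the same change: if the time to access equipment falls by a fraction $r$, then accessibility, taken as the reciprocal of time, rises by $r/(1-r)$. The sketch below illustrates the arithmetic with hypothetical before/after access times, not the study's measured data:

```python
def reduction_pct(before: float, after: float) -> float:
    """Percentage decrease from `before` to `after` (time, steps, or distance)."""
    return (before - after) / before * 100


def improvement_pct(before: float, after: float) -> float:
    """Percentage gain in accessibility, treating accessibility as 1/time."""
    return (before / after - 1) * 100


# Hypothetical illustration: access time falls from 280 s to 10 s.
before, after = 280.0, 10.0
print(f"time reduced by {reduction_pct(before, after):.2f}%")               # 96.43%
print(f"accessibility improved by {improvement_pct(before, after):.2f}%")   # 2700.00%
```

A roughly 96% time reduction thus corresponds to a roughly 2700% accessibility gain, the same order of magnitude as the 87.12–96.41% and 687.46–2733.8% figures reported above.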

These results confirm the contribution of this research to addressing this healthcare unit's need to improve the care of neurosurgical acute/critically ill patients. The implementation of Lean 5S and just‐in‐time methodologies led to the reorganization of the NHDU environment by allocating LSE closer to patients and the nurses' station, thereby improving the security and responsiveness of the nursing team through greater knowledge of, and quicker access to, LSE. In addition, it helps the team respond to emergencies, life‐support situations, and the day‐to‐day needs of patients, freeing up nurses' time and availability for direct care in a work environment with less waste of time, distance, steps, handling, and setup procedures.

Although not the focus of this research, unit and hospital management may attain economic and financial benefits from the application of Lean methodologies through the following factors: labor and human capital gains from reducing the time required to perform certain tasks (setup time); reduction of the "snowball" effect that leads to the accumulation of everyday work; and reprocessing gains from a potential reduction of costs in hospital internment time and patient morbidity.

Despite the advantages achieved with the application of Lean methodologies, the research findings are tempered by several limitations, such as the unavailability of participants to collaborate with the research and resistance to change. The financial impact of the intervention was not recorded. Moreover, the results cannot be generalized; nevertheless, other settings facing similar situations may benefit from applying Lean methodologies in an attempt to overcome their problems.

It is expected that health professionals, especially their leaders and managers, can take some lessons from the different approaches adopted in this research, which may act as a catalyst for future positive changes in all health services.

As a suggestion for future research, it would be interesting to study the financial impact (time saved vs. value/hour) of applying these Lean methodologies, their impact on the quality of nurses' daily professional life (satisfaction, fatigue, stress, burnout), and their impact in emergency scenarios (LSE accessibility/availability vs. morbidity and mortality).

#### **Author details**

48 Operations Research - the Art of Making Good Decisions


Ricardo Balau Esteves1, Susana Garrido Azevedo2\* and Francisco Proença Brójo3

\*Address all correspondence to: sazevedo@ubi.pt

1 Health Units Management, Management and Economics Department, University of Beira Interior, Covilhã, Portugal

2 CEFAGE‐UBI, Management and Economics Department, University of Beira Interior, Covilhã, Portugal

3 C‐MAST – Aerospace Sciences Department, University of Beira Interior, Covilhã, Portugal

#### **References**


2012/04/18/there‐is‐a‐waste‐epidemic‐in‐health‐care‐how‐do‐you‐deal‐with‐it‐in‐your‐organization [Accessed: 01 March 2014]

[5] Intensive Care Society. Standards for the care of adult patients with a temporary tracheostomy [Internet]. 2014. Available from: http://www.ics.ac.uk/EasySiteWeb/GatewayLink.aspx?alId=2212 [Accessed: 20 October 2014]

[6] Soriano‐Meier, H., Forrester, P. L., Markose, S., Garza‐Reyes, J. A. The role of the physical layout in the implementation of lean management initiatives. International Journal of Lean Six Sigma. 2011;2(3):254‐269

[7] Womack, J. Gemba Walks. Cambridge, MA: Lean Enterprise Institute, Inc.; 2011

[8] Imai, M. Gemba Kaizen: A Commonsense Approach to a Continuous Improvement Strategy. 2nd ed. New York, NY: McGraw‐Hill; 2012

[9] Graban, M., Swartz, J. E. The Executive Guide to Healthcare Kaizen: Leadership for a Continuously Learning and Improving Organization. Boca Raton, FL: CRC Press, Taylor & Francis Group; 2014

[10] Meisel, R. M., Babb, S. J., Marsh, S. F., Schlichting, J. P. The Executive Guide to Understanding and Implementing Lean Six Sigma: The Financial Impact. Milwaukee, WI: American Society for Quality, Quality Press; 2007

[11] Locher, D. A. Value Stream Mapping for Lean Development: A How‐To Guide for Streamlining Time to Market. New York, NY: Productivity Press, Taylor & Francis Group; 2008

[12] Jackson, T. L., editor. 5S for Healthcare. New York, NY: Rona Consulting Group & Productivity Press; 2009

[13] Smart, N. J. Lean Biomanufacturing: Creating Value through Innovative Bioprocessing Approaches. Cambridge: Woodhead Publishing Limited; 2013

[14] Womack, J. P., Jones, D. T. Lean Thinking: Banish Waste and Create Wealth in Your Corporation. New York, NY: Free Press; 2003

[15] Bialek, R., Duffy, G. L., Moran, J. W., editors. The Public Health Quality Improvement Handbook. Milwaukee, WI: ASQ Quality Press; 2009

[16] Krafcik, J. Triumph of the lean production system. Sloan Management Review. 1988;30(1):41‐52

[17] Womack, J. P., Jones, D. T., Roos, D. The Machine That Changed the World. New York, NY: Rawson Associates; Collier Macmillan Canada; 1990

[18] Treville, S., Antonakis, J. Could Lean production job design be intrinsically motivating? Contextual, configurational, and levels‐of‐analysis issues. Journal of Operations Management. 2006;24(2):99‐123

[19] Anand, G., Kodali, R. Development of a framework for implementation of Lean manufacturing systems. International Journal of Management Practice. 2008;4(1):95‐116

of Minho; 2012. Available from: http://repositorium.sdum.uminho.pt/bitstream/1822/23481/1/Lu%C3%ADsa%20Emanuela%20Martins%20Libano.pdf

[33] Matos, I. A. Aplicação de técnicas Lean Services no bloco operatório de um hospital [dissertation]. Guimarães: University of Minho; 2011. Available from: http://hdl.handle.net/1822/16321

[34] Paula, P. S. A contribuição da implementação dos 5S para a melhoria contínua da qualidade num serviço de imagiologia ‐ o estudo de caso no HFF [dissertation]. Porto: University Fernando Pessoa; 2008. Available from: http://hdl.handle.net/10284/1431

[35] Dias, S. M. Implementação da metodologia Lean Seis‐Sigma – O caso do Serviço de Oftalmologia dos Hospitais da Universidade de Coimbra [dissertation]. Coimbra: University of Coimbra; 2011. Available from: http://hdl.handle.net/10316/17667

[36] Resende, M. O. Melhoria de Processos Hospitalares através de ferramentas Lean: Aplicação ao serviço de Imagiologia no Centro Hospitalar Entre Douro e Vouga [dissertation]. Porto: University of Porto; 2010. Available from: http://repositorio‐aberto.up.pt/bitstream/10216/59520/1/000145447.pdf

[37] Ribeiro, A. C. A implementação da filosofia Lean na gestão dos serviços de saúde: o caso dos centros de saúde da região norte [dissertation]. Porto: University of Porto; 2013. Available from: http://hdl.handle.net/10216/69710

[38] Guimarães, M. C. M. Lean thinking in Healthcare services ‐ learning from case studies [thesis]. Lisbon: Lisbon University Institute; 2013. Available from: http://hdl.handle.net/10071/6183

[39] Luzes, C. S. A. Implementação da Filosofia Lean na Gestão dos Serviços de Saúde: O Caso Português [dissertation]. Porto: Polytechnic Institute of Porto; 2013. Available from: http://www.fep.up.pt/docentes/fontes/FCTEGE2008/Publicacoes/D17.pdf

[40] Lewin, K. Action research and minority problems. Journal of Social Issues. 1946;2(4):34‐46

[41] Given, L. M. The Sage Encyclopedia of Qualitative Research Methods. Thousand Oaks, CA: SAGE Publications, Inc.; 2008

[42] Streubert, H. J., Carpenter, D. R. Investigação Qualitativa em Enfermagem: Avançando o imperativo humanista. Loures: Lusociência; 2002

[43] Yin, R. K. Qualitative Research from Start to Finish. New York, NY: The Guilford Press; 2011

[44] Mehta, C. R., Patel, N. R. IBM SPSS Exact Tests. Cambridge, MA: IBM Corporation; 2012

[45] Carvalho, J. C., Ramos, T. Logística na Saúde. Lisbon: Edições Sílabo, Lda; 2009

[46] Institute for Healthcare Improvement. Going Lean in Health Care. Cambridge, MA: Institute for Healthcare Improvement; 2005

[47] Virginia Mason Medical Center. Virginia Mason Production System ‐ Fast Facts [Internet]. 2015. Available from: https://www.virginiamason.org/workfiles/pdfdocs/press/vmps\_fastfacts.pdf [Accessed: 20 January 2015]


#### **Iteration Algorithms in Markov Decision Processes with State-Action-Dependent Discount Factors and Unbounded Costs**

Fernando Luque-Vásquez and J. Adolfo Minjárez-Sosa

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/65044

#### **Abstract**

This chapter concerns discrete time Markov decision processes under a discounted optimality criterion with state-action-dependent discount factors, possibly unbounded costs, and noncompact admissible action sets. Under mild conditions, we show the existence of stationary optimal policies and we introduce the value iteration and the policy iteration algorithms to approximate the value function.

**Keywords:** discounted optimality, non-constant discount factor, value iteration, policy iteration, Markov decision processes

AMS 2010 subject classifications: 93E10, 90C40

#### **1. Introduction**

In this chapter we study Markov decision processes (MDPs) with Borel state and action spaces under a discounted criterion with state-action–dependent discount factors, possibly unbounded costs and noncompact admissible action sets. That is, we consider discount factors of the form

$$\alpha(x\_n, a\_n),\tag{1}$$

where $x\_n$ and $a\_n$ are the state and the action at time $n$, respectively, playing the following role during the evolution of the system. At the initial state $x\_0$, the controller chooses an action $a\_0$ and a cost $c(x\_0, a\_0)$ is incurred. Then the system moves to a new state $x\_1$ according to a transition law. Once the system is in state $x\_1$, the controller selects an action $a\_1$ and incurs a discounted cost $\alpha(x\_0, a\_0)\, c(x\_1, a\_1)$. Next the system moves to a state $x\_2$ and the process is repeated. In general, at stage $n \ge 1$, the controller incurs the discounted cost

$$\prod\_{k=0}^{n-1} \alpha(x\_k, a\_k)\, c(x\_n, a\_n),\tag{2}$$

and our objective is to show the existence of stationary optimal control policies under the corresponding performance index, as well as to introduce approximation algorithms, namely, value iteration and policy iteration.
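The multiplicative accumulation in Eq. (2) can be traced numerically. In this sketch, the discount factor and cost are placeholder functions invented for illustration; any measurable discount factor taking values in (0, 1) and any nonnegative cost would do:

```python
def alpha(x, a):
    """State-action-dependent discount factor (placeholder)."""
    return 0.9 if a == 0 else 0.8


def cost(x, a):
    """One-stage cost (placeholder)."""
    return 1.0 + x


def total_discounted_cost(trajectory):
    """trajectory: list of (x_n, a_n) pairs; returns sum_n Gamma_n * c(x_n, a_n)."""
    total, gamma = 0.0, 1.0          # Gamma_0 = 1
    for x, a in trajectory:
        total += gamma * cost(x, a)  # stage n contributes Gamma_n * c(x_n, a_n)
        gamma *= alpha(x, a)         # Gamma_{n+1} = Gamma_n * alpha(x_n, a_n)
    return total


print(total_discounted_cost([(0, 0), (1, 1), (2, 0)]))
```

Note that, unlike the constant-discount case, the effective discount applied at stage $n$ depends on the whole state-action history through the product of the per-stage factors.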

In the scenario of a constant discount factor, the discounted optimality criterion in stochastic decision problems is the best understood of all performance indices, and it is widely accepted in several application problems (see, e.g., [1–3] and references therein). However, such an assumption might be strong or unrealistic in some economic and financial models. Indeed, in these problems the discount factors are typically functions of the interest rates, which in turn depend on the amount of currency and the decision-makers' actions. Hence, we have state-action-dependent discount factors, and it is precisely these kinds of situations we are dealing with.

MDPs with nonconstant discount factors have been studied under different approaches (see, e.g., [4–8]). In particular, our work is a sequel to [8], where the control problem with a state-dependent discount factor is studied. In addition, randomized discounted criteria have been analyzed in [9–12], where the discount factor is modeled as a stochastic process independent of the state-action pairs.

Specifically, in this chapter we study control models with state-action-dependent discount factors, focusing mainly on introducing approximation algorithms for the optimal value function (value iteration and policy iteration). Furthermore, an important feature of this work is that there is no compactness assumption on the sets of admissible actions, nor continuity conditions on the cost, which, in most papers on MDPs, are needed to show the existence of measurable selectors and continuity or semicontinuity of the minimum function. Indeed, in contrast to the previously cited references, in this work we assume that the cost and discount factor functions satisfy the $\mathbb{K}$-inf-compactness condition introduced in [13]. Then, we use a generalization of Berge's theorem, given in [13], to prove the existence of measurable selectors. To the best of our knowledge, there are no works dealing with MDPs in the context presented in this chapter.

The remainder of the chapter is organized as follows. Section 2 contains the description of the Markov decision model and the optimality criterion. In Section 3 we introduce the assumptions on the model and we prove the convergence of the value iteration algorithm (Theorem 3.5). In Section 4 we define the policy iteration algorithm and the convergence is stated in Theorem 4.1.

**Notation**. Throughout the chapter we use the following notation. Given a *Borel space* $X$ — that is, a Borel subset of a complete separable metric space — $\mathcal{B}(X)$ denotes its Borel $\sigma$-algebra, and "measurability" always means measurability with respect to $\mathcal{B}(X)$. Given two Borel spaces $X$ and $X'$, a *stochastic kernel* $Q(\cdot \mid \cdot)$ on $X$ given $X'$ is a function such that $Q(\cdot \mid x')$ is a probability measure on $X$ for each $x' \in X'$, and $Q(B \mid \cdot)$ is a measurable function on $X'$ for each $B \in \mathcal{B}(X)$. Moreover, $\mathbb{N}$ ($\mathbb{N}\_0$) denotes the positive (nonnegative) integers. Finally, $\mathbb{L}(X)$ stands for the class of lower semicontinuous functions on $X$ that are bounded below, and $\mathbb{L}^+(X)$ denotes the subclass of nonnegative functions in $\mathbb{L}(X)$.

#### **2. Markov decision processes**

**Markov control model**. Let


$$\mathcal{M} := \left( X, A, \left\{ A(x) \subset A : x \in X \right\}, Q, \alpha, c \right) \tag{3}$$

be a discrete-time Markov control model with state-action-dependent discount factors satisfying the following conditions. The state space $X$ and the action (or control) space $A$ are Borel spaces. For each state $x \in X$, $A(x)$ is a nonempty Borel subset of $A$ denoting the set of admissible controls when the system is in state $x$. We denote by $\mathbb{K}$ the graph of the multifunction $x \mapsto A(x)$, that is,

$$\mathbb{K} = \{(x, a) : x \in X,\ a \in A(x)\}\tag{4}$$

which is assumed to be a Borel subset of the Cartesian product of $X$ and $A$. The transition law $Q(\cdot \mid \cdot)$ is a stochastic kernel on $X$ given $\mathbb{K}$. Finally, $\alpha : \mathbb{K} \to (0,1)$ and $c : \mathbb{K} \to (0,\infty)$ are measurable functions representing the discount factor and the cost-per-stage, respectively, when the system is in state $x \in X$ and the action $a \in A(x)$ is selected.

The model $\mathcal{M}$ represents a controlled stochastic system and has the following interpretation. Suppose that at time $n \in \mathbb{N}\_0$ the system is in the state $x\_n = x \in X$. Then, possibly taking into account the history of the system, the controller selects an action $a\_n = a \in A(x)$, and a discount factor $\alpha(x, a)$ is imposed. As a consequence of this the following happens:

**1.** A cost $c(x, a)$ is incurred;

**2.** The system moves to a new state $x\_{n+1} = x' \in X$ according to the transition law

$$Q(B \mid x, a) := \Pr\left[x\_{n+1} \in B \mid x\_n = x,\ a\_n = a\right], \quad B \in \mathcal{B}(X). \tag{5}$$

Once the transition to state $x'$ occurs, the process is repeated.

Typically, in many applications, the evolution of the system is determined by stochastic difference equations of the form

$$x\_{n+1} = F(x\_n, a\_n, \xi\_n), \quad n \in \mathbb{N}\_0,\tag{6}$$

where $\{\xi\_n\}$ is a sequence of independent and identically distributed random variables with values in some Borel space $S$, independent of the initial state $x\_0$, and $F : X \times A \times S \to X$ is a given measurable function. In this case, if $\theta$ denotes the common distribution of $\xi\_n$, that is,

$$\theta(D) := P\left[\xi\_n \in D\right], \quad D \in \mathcal{B}(S),\ n \in \mathbb{N}\_0,\tag{7}$$

then the transition kernel can be written as

$$\begin{aligned} Q(B \mid x, a) &= \Pr\left[F(x\_n, a\_n, \xi\_n) \in B \mid x\_n = x,\ a\_n = a\right] \\ &= \theta\left\{s \in S : F(x, a, s) \in B\right\} \\ &= \int\_S \mathbf{1}\_B \left[F(x, a, s)\right] \theta(ds), \quad B \in \mathcal{B}(X),\ (x, a) \in \mathbb{K}, \end{aligned} \tag{8}$$

where $\mathbf{1}\_B(\cdot)$ represents the indicator function of the set $B$.
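Under the difference-equation representation (6), the kernel of Eq. (8) can be estimated by sampling the noise distribution. The dynamics $F$ and the Gaussian noise law below are illustrative placeholders, not assumptions of the chapter:

```python
import random


def F(x, a, xi):
    """A made-up measurable dynamic F : X x A x S -> X with additive noise."""
    return 0.5 * x + a + xi


def step(x, a, rng):
    """Draw xi_n ~ theta (here standard normal) and apply Eq. (6)."""
    return F(x, a, rng.gauss(0.0, 1.0))


# Monte Carlo estimate of Q(B | x, a) = theta{s in S : F(x, a, s) in B}.
rng = random.Random(0)
x, a, B = 1.0, 0.0, (0.0, 1.0)   # B is the interval (0, 1)
hits = sum(B[0] < step(x, a, rng) < B[1] for _ in range(10_000))
print(f"Q(B | x, a) ≈ {hits / 10_000:.3f}")
```

For this particular choice of $F$, the exact value is $\Pr(-0.5 < \xi < 0.5) \approx 0.383$, so the estimate should land nearby.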

**Control policies**. The actions applied by the controller are chosen by means of rules known as control policies, defined as follows. Let $\mathbb{H}\_0 := X$ and $\mathbb{H}\_n := \mathbb{K}^n \times X$, $n \ge 1$, be the spaces of admissible histories up to time $n$. A generic element of $\mathbb{H}\_n$ is written as $h\_n = (x\_0, a\_0, \dots, x\_{n-1}, a\_{n-1}, x\_n)$.

**Definition 2.1** *A control policy (randomized, history-dependent) is a sequence* $\pi = \{\pi\_n\}$ *of stochastic kernels on* $A$ *given* $\mathbb{H}\_n$ *such that* $\pi\_n(A(x\_n) \mid h\_n) = 1$, *for all* $h\_n \in \mathbb{H}\_n$, $n \in \mathbb{N}\_0$.

We denote by Π the set of all control policies.

Let $\mathbb{F}$ be the set of measurable selectors; that is, $\mathbb{F}$ is the set of measurable functions $f : X \to A$ such that $f(x) \in A(x)$ for all $x \in X$.

**Definition 2.2** *A control policy* $\pi = \{\pi\_n\}$ *is said to be:*

**a.** *deterministic if there exists a sequence of measurable functions* $g\_n : \mathbb{H}\_n \to A$ *such that*


$$\pi\_n(C \mid h\_n) = \mathbf{1}\_C \left[ g\_n(h\_n) \right], \ \forall h\_n \in \mathbb{H}\_n,\ n \in \mathbb{N}\_0,\ C \in \mathcal{B}(A);\tag{9}$$

**b.** *a Markov control policy if there exists a sequence of functions* $f\_n \in \mathbb{F}$ *such that*

$$\pi\_n(C \mid h\_n) = \mathbf{1}\_C\left[f\_n(x\_n)\right], \ \forall h\_n \in \mathbb{H}\_n,\ n \in \mathbb{N}\_0,\ C \in \mathcal{B}(A). \tag{10}$$

*In addition*


**c.** *A Markov control policy* $\{f\_n\}$ *is stationary if there exists* $f \in \mathbb{F}$ *such that* $f\_n = f$ *for all* $n \in \mathbb{N}\_0$.

See, for example, [1–3, 14–16] for further information on these policies.

Observe that a Markov policy is identified with the sequence $\{f\_n\}$, and we denote $\pi = \{f\_n\}$. In this case, the control applied at time $n$ is $a\_n = f\_n(x\_n) \in A(x\_n)$. In particular, a stationary policy is identified with a function $f \in \mathbb{F}$, and following a standard convention we denote by $\mathbb{F}$ the set of all stationary control policies.

To ease the notation, for each $x \in X$ and $f \in \mathbb{F}$, we write

$$\begin{aligned} c(x, f) &:= c(x, f(x)), \\ Q(\cdot \mid x, f) &:= Q(\cdot \mid x, f(x)), \end{aligned} \tag{11}$$

and


$$\alpha(x, f) := \alpha(x, f(x)).\tag{12}$$

**The underlying probability space**. Let $(\Omega, \mathcal{F})$ be the canonical measurable space consisting of the sample space $\Omega = \mathbb{H}\_\infty := (X \times A) \times (X \times A) \times \cdots$ and its product $\sigma$-algebra $\mathcal{F}$. Then, under standard arguments (see, e.g., [1, 14]), for each $\pi \in \Pi$ and initial state $x \in X$, there exists a probability measure $P\_x^\pi$ on $(\Omega, \mathcal{F})$ such that, for all $h\_n \in \mathbb{H}\_n$, $C \in \mathcal{B}(A)$, $n \in \mathbb{N}\_0$, and $B \in \mathcal{B}(X)$,

$$\begin{aligned} P\_x^\pi \left[ x\_0 = x \right] &= 1; \\ P\_x^\pi \left[ a\_n \in C \mid h\_n \right] &= \pi\_n(C \mid h\_n); \end{aligned} \tag{13}$$

and the Markov-like property is satisfied

$$P\_x^\pi\left[x\_{n+1}\in B\mid h\_n, a\_n\right] = Q(B\mid x\_n, a\_n). \tag{14}$$

The stochastic process $\left(\Omega, \mathcal{F}, P\_x^\pi, \{x\_n\}\right)$ is called a Markov decision process.

**Optimality criterion**. We assume that the costs are discounted at a multiplicative discount rate. That is, a cost $c$ incurred at stage $n$ is equivalent to a cost $c\,\Gamma\_n$ at time 0, where

$$\Gamma\_n := \prod\_{k=0}^{n-1} \alpha(x\_k, a\_k) \ \text{if } n \ge 1, \quad \text{and} \quad \Gamma\_0 = 1. \tag{15}$$

In this sense, when using a policy $\pi \in \Pi$, given the initial state $x\_0 = x$, we define the total expected discounted cost (with state-action-dependent discount factors) as

$$V(\pi, x) := E\_x^\pi \left[ \sum\_{n=0}^{\infty} \Gamma\_n\, c(x\_n, a\_n) \right],\tag{16}$$

where $E\_x^\pi$ denotes the expectation operator with respect to the probability measure $P\_x^\pi$ induced by the policy $\pi$, given $x\_0 = x$.
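On a concrete toy model, $V(\pi, x)$ in Eq. (16) can be estimated by Monte Carlo with a truncated horizon; truncation is harmless here because the placeholder discount factors are bounded away from 1. Every ingredient below (dynamics, cost, discount factor, policy) is invented for illustration:

```python
import random


def alpha(x, a): return 0.6 + 0.2 * a      # discount in (0, 1) for a in {0, 1}
def cost(x, a):  return abs(x) + a         # nonnegative one-stage cost
def f(x):        return 0 if x > 0 else 1  # a stationary policy f in F
def F(x, a, xi): return 0.5 * x + xi       # Eq. (6)-style dynamics


def discounted_return(x0, horizon, rng):
    """One sample path of sum_n Gamma_n c(x_n, a_n), truncated at `horizon`."""
    x, gamma, total = x0, 1.0, 0.0
    for _ in range(horizon):
        a = f(x)
        total += gamma * cost(x, a)
        gamma *= alpha(x, a)               # Gamma_n grows multiplicatively
        x = F(x, a, rng.gauss(0.0, 0.3))
    return total


rng = random.Random(1)
estimate = sum(discounted_return(1.0, 60, rng) for _ in range(2_000)) / 2_000
print(f"V(f, 1.0) ≈ {estimate:.3f}")
```

Since the per-stage discounts never exceed 0.8, the tail beyond 60 stages contributes at most on the order of $0.8^{60}$ times the stage costs, so the truncated estimate is essentially unbiased.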

The optimal control problem associated with the control model $\mathcal{M}$ is then to find an optimal policy $\pi^{\ast} \in \Pi$ such that $V(\pi^{\ast}, x) = V(x)$ for all $x \in X$, where

$$V(x) := \inf\_{\pi \in \Pi} V(\pi, x) \tag{17}$$

is the optimal value function (see [10]).

#### **3. The value iteration algorithm**

In this section we give conditions on the model that imply: (i) the convergence of the value iteration algorithm; (ii) that the value function is a solution of the corresponding optimality equation; and (iii) the existence of stationary optimal policies. In order to guarantee that $V(x)$ is finite for each initial state $x$, we suppose the following.

**Assumption 3.1.** *There exists* $\pi\_0 \in \Pi$ *such that for all* $x \in X$, $V(\pi\_0, x) < \infty$.

At the end of Section 4 we give sufficient conditions for Assumption 3.1. We also require continuity and (inf-) compactness conditions to ensure the existence of "measurable minimizers." The following definition was introduced in [13].

**Definition 3.2.** *A function* $u : \mathbb{K} \to \mathbb{R}$ *is said to be $\mathbb{K}$-inf-compact on $\mathbb{K}$ if for each compact subset $K$ of $X$ and $r \in \mathbb{R}$, the set*


$$\{(x, a) \in Gr\_K(A) : u(x, a) \le r\} \tag{18}$$

*is a compact subset of* $K \times A$, *where* $Gr\_K(A) := \{(x, a) : x \in K,\ a \in A(x)\}$.

**Assumption 3.3.** *(a) The one-stage cost $c$ and the discount factor $\alpha$ are $\mathbb{K}$-inf-compact functions on $\mathbb{K}$. In addition, $c$ is nonnegative*.

(b) *The transition law* $Q$ *is weakly continuous; that is, the mapping*

$$
(x, a) \mapsto \int\_X u(y)\, Q(dy \mid x, a) \tag{19}
$$

*is continuous for each bounded and continuous function* $u$ *on* $X$.

For each measurable function $u$ on $X$, $x \in X$, and $f \in \mathbb{F}$, we define the operators

$$Tu(x) := \inf\_{a \in A(x)} \left\{ c(x, a) + \alpha(x, a) \int\_X u(y)\, Q(dy \mid x, a) \right\} \tag{20}$$

and


$$T_f u(\mathbf{x}) := c(\mathbf{x}, f) + \alpha(\mathbf{x}, f) \int_X u(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, f). \tag{21}$$

A consequence of Assumption 3.3 is the following.

**Lemma 3.4.** *Let u be a function in L⁺(X). If Assumption 3.3 holds, then the function v : 𝕂 → ℝ defined by*

$$v(\mathbf{x}, a) := c(\mathbf{x}, a) + \alpha(\mathbf{x}, a) \int_X u(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, a) \tag{22}$$

*is K-inf-compact on* 𝕂.

**Proof**. First note that, by the K-inf-compactness hypothesis, c( ⋅ , ⋅ ) and α( ⋅ , ⋅ ) are l.s.c. on Gr_K(A) for each compact subset K of X. Then, since α and u are nonnegative functions, from Assumption 3.3 we have that v( ⋅ , ⋅ ) is l.s.c. on Gr_K(A). Thus, for each r ∈ ℝ, the set

$$\{(\mathbf{x}, a) \in Gr_K(A) : v(\mathbf{x}, a) \le r\} \tag{23}$$

is a closed subset of the compact set {(x, a) ∈ Gr_K(A) : c(x, a) ≤ r}. Then, v is K-inf-compact on 𝕂.

Observe that the operator T is monotone in the sense that if u ≤ v then Tu ≤ Tv. In addition, from Assumption 3.3 and ([13], Theorem 3.3), we have that T maps L⁺(X) into itself. Furthermore, there exists f̃ ∈ 𝔽 such that

$$Tu(\mathbf{x}) = T_{\tilde{f}} u(\mathbf{x}), \quad \mathbf{x} \in X. \tag{24}$$

To state our first result we define the sequence {vₙ} ⊂ L⁺(X) of value iteration functions as:

$$\begin{aligned} \nu\_0 &\equiv 0; \\ \nu\_n(\mathbf{x}) &= T \nu\_{n-1}(\mathbf{x}), \quad \mathbf{x} \in X. \end{aligned} \tag{25}$$

Since T is monotone, note that {vₙ} is a nondecreasing sequence.
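On a finite model the value iteration scheme (25) can be sketched directly. The sketch below is illustrative only: the two-state, two-action numbers are hypothetical, and NumPy is assumed.

```python
import numpy as np

def value_iteration(c, alpha, Q, n_iter=200):
    """Iterate v_n = T v_{n-1} for a finite MDP whose discount factor
    alpha(x, a) depends on the state-action pair.
    c, alpha: arrays of shape (S, A); Q: shape (S, A, S) transition kernel."""
    v = np.zeros(c.shape[0])                      # v_0 = 0
    for _ in range(n_iter):
        # Tv(x) = min_a { c(x,a) + alpha(x,a) * sum_y v(y) Q(y|x,a) }
        v = np.min(c + alpha * (Q @ v), axis=1)
    return v

# Hypothetical toy data: 2 states, 2 actions.
c = np.array([[1.0, 2.0], [0.5, 1.5]])            # one-stage costs c(x, a)
alpha = np.array([[0.9, 0.5], [0.8, 0.6]])        # discounts alpha(x, a)
Q = np.array([[[0.7, 0.3], [0.4, 0.6]],           # Q(. | x, a)
              [[0.2, 0.8], [0.5, 0.5]]])
V = value_iteration(c, alpha, Q)
```

Because each iterate is computed from the previous one only, the iterates are exactly the nondecreasing sequence of (25), and the returned vector satisfies the optimality equation (26) up to the truncation error of stopping after `n_iter` steps.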

**Theorem 3.5.** *Suppose that Assumptions 3.1 and 3.3 hold. Then*


**a.** *vₙ ↑ V.*

**b.** *V is the minimal solution in L⁺(X) of the Optimality Equation, i.e.,*

$$V(\mathbf{x}) = TV(\mathbf{x}) = \inf_{a \in A(\mathbf{x})} \left\{ c(\mathbf{x}, a) + \alpha(\mathbf{x}, a) \int_X V(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, a) \right\}. \tag{26}$$

**c.** *There exists a stationary policy* f∗ ∈ 𝔽 *such that, for all* x ∈ X, V(x) = T_{f∗}V(x), *that is,*

$$V(\mathbf{x}) = c(\mathbf{x}, f^*) + \alpha(\mathbf{x}, f^*) \int_X V(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, f^*), \tag{27}$$

*and* f∗ *is an optimal policy*.

**Proof**. Since {vₙ} is nondecreasing, there exists v ∈ L⁺(X) such that vₙ ↑ v. Hence, from the Monotone Convergence Theorem, ([13], Lemmas 2.2, 2.3), and ([1], Lemma 4.2.4), we obtain, for each x ∈ X, vₙ(x) = Tvₙ₋₁(x) ↑ Tv(x) as n → ∞, which in turn implies

$$T\mathbf{v} = \mathbf{v}.\tag{28}$$

Therefore, to get (a)–(b) we need to prove that v = V. To this end, observe that for all n ∈ ℕ and π ∈ Π

$$v_n(\mathbf{x}) \le \int_A c(\mathbf{x}, a)\,\pi(da \mid \mathbf{x}) + \int_A \alpha(\mathbf{x}, a) \int_X v_{n-1}(\mathbf{x}_1)\, Q(d\mathbf{x}_1 \mid \mathbf{x}, a)\,\pi(da \mid \mathbf{x}). \tag{29}$$

Then, iterating (29) we obtain

$$\nu\_n(\mathbf{x}) \le V\_n(\boldsymbol{\pi}, \mathbf{x}), \ n \in \mathbb{N}, \tag{30}$$

where


$$V_n(\pi, \mathbf{x}) = E_{\mathbf{x}}^{\pi} \left[ \sum_{t=0}^{n-1} \Gamma_t\, c(\mathbf{x}_t, a_t) \right], \tag{31}$$

is the n-stage discounted cost. Then, letting n → ∞ we get v(x) ≤ V(π, x) for all x ∈ X and π ∈ Π. Thus,

$$\nu(\mathbf{x}) \le V(\mathbf{x}), \ \mathbf{x} \in X. \tag{32}$$

On the other hand, from (28) and (24), let f ∈ 𝔽 be such that v(x) = T_f v(x), x ∈ X. Iterating this equation, we have (see (31))

$$\begin{split} v(\mathbf{x}) &= E_{\mathbf{x}}^{f}\left[c(\mathbf{x}, f) + \sum_{t=1}^{n-1} \prod_{k=0}^{t-1} \alpha(\mathbf{x}_k, f)\, c(\mathbf{x}_t, f)\right] + E_{\mathbf{x}}^{f}\left[\prod_{k=0}^{n-1} \alpha(\mathbf{x}_k, f)\, v(\mathbf{x}_n)\right] \\ &\ge V_n(f, \mathbf{x}). \end{split} \tag{33}$$

Hence, letting n → ∞,

$$\mathbf{v}(\mathbf{x}) \ge V(f, \mathbf{x}) \ge V(\mathbf{x}), \ \mathbf{x} \in X. \tag{34}$$

Combining (32) and (34) we get v = V.

Now, let u ∈ L⁺(X) be an arbitrary solution of the optimality equation, that is, u = Tu. Then, applying the arguments in the proof of (34) with u instead of v, we conclude that u ≥ V. That is, V is minimal in L⁺(X).

Part (c) follows from (b) and ([13], Theorem 3.3). Indeed, there exists a stationary policy f∗ ∈ 𝔽 such that V(x) = T_{f∗}V(x), x ∈ X. Then, iteration of this equation yields V(x) = V(f∗, x), which implies that f∗ is optimal.

#### **4. Policy iteration algorithm**

Theorem 3.5 establishes an approximation algorithm for the value function V by means of the sequence of value iteration functions {vₙ}. In this case the sequence increases to V and is defined recursively. We now present the well-known policy iteration algorithm, which provides a decreasing approximation to V within the set of control policies.

To define the algorithm, first observe that from the Markov property (14) and applying properties of conditional expectation, for any stationary policy f ∈ 𝔽 and x ∈ X, the corresponding cost V(f, x) satisfies

$$\begin{split} V(f, \mathbf{x}) &= c(\mathbf{x}, f) + \alpha(\mathbf{x}, f)\, E_{\mathbf{x}}^{f}\left[\sum_{t=1}^{\infty} \prod_{k=1}^{t-1} \alpha(\mathbf{x}_k, f)\, c(\mathbf{x}_t, f)\right] \\ &= c(\mathbf{x}, f) + \alpha(\mathbf{x}, f) \int_X E^{f}\left[c(\mathbf{x}_1, f) + \sum_{t=2}^{\infty} \prod_{k=1}^{t-1} \alpha(\mathbf{x}_k, f)\, c(\mathbf{x}_t, f) \,\Big|\, \mathbf{x}_1 = \mathbf{y}\right] Q(d\mathbf{y} \mid \mathbf{x}, f) \\ &= c(\mathbf{x}, f) + \alpha(\mathbf{x}, f) \int_X V(f, \mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, f) = T_f V(f, \mathbf{x}), \quad \mathbf{x} \in X. \end{split} \tag{35}$$

Let f₀ ∈ 𝔽 be a stationary policy with a finite-valued cost w₀( ⋅ ) := V(f₀, ⋅ ). Then, from (35),

$$\begin{split} w_0(\mathbf{x}) &= c(\mathbf{x}, f_0) + \alpha(\mathbf{x}, f_0) \int_X w_0(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, f_0) \\ &= T_{f_0} w_0(\mathbf{x}), \quad \mathbf{x} \in X. \end{split} \tag{36}$$

Now, let f₁ ∈ 𝔽 be such that

$$T w\_0(x) = T\_{f\_1} w\_0(x),\tag{37}$$

and define w₁( ⋅ ) := V(f₁, ⋅ ).

In general, we define a sequence {wₙ} in L⁺(X) as follows. Given fₙ ∈ 𝔽, compute wₙ( ⋅ ) := V(fₙ, ⋅ ). Next, let fₙ₊₁ ∈ 𝔽 be such that


$$T\_{f\_{n+1}}w\_n(x) = Tw\_n(x), \quad x \in X,\tag{38}$$

that is,


$$\begin{aligned} T_{f_{n+1}}w_n(\mathbf{x}) &= c(\mathbf{x}, f_{n+1}) + \alpha(\mathbf{x}, f_{n+1}) \int_X w_n(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, f_{n+1}) \\ &= \min_{a \in A(\mathbf{x})} \left\{ c(\mathbf{x}, a) + \alpha(\mathbf{x}, a) \int_X w_n(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, a) \right\} \\ &= T w_n(\mathbf{x}), \quad \mathbf{x} \in X. \end{aligned} \tag{39}$$

Then we define wₙ₊₁( ⋅ ) := V(fₙ₊₁, ⋅ ).

**Theorem 4.1.** *Under Assumptions 3.1 and 3.3, there exists a measurable nonnegative function* w *such that* wₙ ↓ w *and* w ≥ V. *Moreover, if* w *satisfies*

$$\lim\_{n \to \infty} E\_x^{\pi} \left[ \Gamma\_n w(x\_n) \right] = 0 \quad \forall \pi \in \Pi, \ x \in X,\tag{40}$$

*then* w = V.

To prove Theorem 4.1 we need the following result.

**Lemma 4.2.** *Under Assumption 3.3, if* u : X → ℝ *is a measurable function such that* Tu *is well defined,* u ≤ Tu, *and*

$$\lim_{n \to \infty} E_{\mathbf{x}}^{\pi} \left[ \Gamma_n u(\mathbf{x}_n) \right] = 0 \quad \forall \pi \in \Pi,\ \mathbf{x} \in X, \tag{41}$$

*then* u ≤ V.

**Proof**. From the Markov property (14), for each π ∈ Π and n ∈ ℕ₀,

$$E_{\mathbf{x}}^{\pi}\left[\Gamma_{n+1}\, u(\mathbf{x}_{n+1}) \mid h_n, a_n\right] = \Gamma_{n+1} \int_X u(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}_n, a_n) \tag{42}$$

$$= \Gamma_n \left[ c(\mathbf{x}_n, a_n) + \alpha(\mathbf{x}_n, a_n) \int_X u(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}_n, a_n) - c(\mathbf{x}_n, a_n) \right] \tag{43}$$

$$\ge \Gamma_n \inf_{a \in A(\mathbf{x}_n)} \left[ c(\mathbf{x}_n, a) + \alpha(\mathbf{x}_n, a) \int_X u(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}_n, a) \right] - \Gamma_n\, c(\mathbf{x}_n, a_n) \tag{44}$$

$$= \Gamma_n\, T u(\mathbf{x}_n) - \Gamma_n\, c(\mathbf{x}_n, a_n) \ge \Gamma_n\, u(\mathbf{x}_n) - \Gamma_n\, c(\mathbf{x}_n, a_n), \tag{45}$$

which, in turn implies

$$\Gamma_n\, c(\mathbf{x}_n, a_n) \ge E_{\mathbf{x}}^{\pi} \left[ \Gamma_n\, u(\mathbf{x}_n) - \Gamma_{n+1}\, u(\mathbf{x}_{n+1}) \mid h_n, a_n \right]. \tag{46}$$

Therefore, for all k ∈ ℕ (see (31)),

$$V_k(\pi, \mathbf{x}) = E_{\mathbf{x}}^{\pi} \sum_{n=0}^{k-1} \Gamma_n\, c(\mathbf{x}_n, a_n) \ge u(\mathbf{x}) - E_{\mathbf{x}}^{\pi}\left[\Gamma_k\, u(\mathbf{x}_k)\right]. \tag{47}$$

Finally, letting k → ∞, (41) yields V(π, x) ≥ u(x), and since π is arbitrary we obtain V(x) ≥ u(x).

**Proof of Theorem 4.1**. According to Lemma 4.2, it is sufficient to show the existence of a function w such that wₙ ↓ w and Tw = w. To this end, from (36)–(38),

$$\begin{split} w\_0(x) &\geq \min\_{a \in A(x)} \left\{ c(x, a) + \alpha(x, a) \int\_X w\_0(y) Q(dy|x, a) \right\} = T\_{f\_1} w\_0(x) \\ &= \quad c(x, f\_1) + \alpha(x, f\_1) \int\_X w\_0(y) Q(dy|x, f\_1). \end{split} \tag{48}$$

Iterating this inequality, a straightforward calculation as in (34) shows that

$$w\_0(x) \ge V(f\_1, x) = w\_1(x), \quad x \in X. \tag{49}$$

In general, similar arguments yield

$$w\_n \ge Tw\_n \ge w\_{n+1}, \quad n \in \mathbb{N}.\tag{50}$$

Therefore, there exists a nonnegative measurable function w such that wₙ ↓ w. In addition, since wₙ ≥ V for all n ∈ ℕ₀, we have w ≥ V. Next, letting n → ∞ in (50) and applying ([17], Lemma 3.3), we obtain Tw = w which, by Lemma 4.2, yields w ≤ V when (40) holds, and hence w = V.
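On a finite toy model (hypothetical numbers, NumPy assumed), the scheme (36)–(38) can be sketched as follows; each policy evaluation wₙ = V(fₙ, ⋅ ) reduces to a linear system because fₙ is fixed.

```python
import numpy as np

def policy_cost(f, c, alpha, Q):
    """w(x) = V(f, x): solve w = c_f + alpha_f * (Q_f w) for a fixed
    stationary policy f (one action index per state), as in (35)."""
    S = c.shape[0]
    idx = np.arange(S)
    c_f, a_f, Q_f = c[idx, f], alpha[idx, f], Q[idx, f]
    return np.linalg.solve(np.eye(S) - a_f[:, None] * Q_f, c_f)

def policy_iteration(c, alpha, Q, f0):
    f = np.asarray(f0)
    while True:
        w = policy_cost(f, c, alpha, Q)                   # w_n = V(f_n, .)
        f_next = np.argmin(c + alpha * (Q @ w), axis=1)   # T_{f_{n+1}} w_n = T w_n
        if np.array_equal(f_next, f):                     # w_n = T w_n: stop
            return f, w
        f = f_next

# Hypothetical toy data: 2 states, 2 actions.
c = np.array([[1.0, 2.0], [0.5, 1.5]])
alpha = np.array([[0.9, 0.5], [0.8, 0.6]])
Q = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.5, 0.5]]])
f_star, w = policy_iteration(c, alpha, Q, f0=[0, 0])
```

On a finite model the nonincreasing sequence wₙ reaches a fixed point of T in finitely many steps (ties aside), at which point w = V and `f_star` is an optimal stationary policy.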

#### **4.1. Sufficient conditions for Assumption 3.1 and (40)**

An obvious sufficient condition for Assumption 3.1 and (40) is the following:

**C1** (a) There exists ᾱ ∈ (0, 1) such that for all (x, a) ∈ 𝕂, α(x, a) < ᾱ.

(b) For some constant c̄, 0 ≤ c(x, a) ≤ c̄ for all (x, a) ∈ 𝕂.

Indeed, under condition C1, V(π, x) ≤ c̄/(1 − ᾱ) for all x ∈ X and π ∈ Π, and {wₙ} is a bounded sequence, which in turn implies the boundedness of the function w. This fact, together with Γₙ ≤ ᾱⁿ, clearly yields (40).
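Written out, the bound behind this claim is a geometric-series estimate: since Γₙ = α(x₀, a₀)⋯α(xₙ₋₁, aₙ₋₁) ≤ ᾱⁿ,

```latex
V(\pi,\mathbf{x}) = E_{\mathbf{x}}^{\pi}\left[\sum_{n=0}^{\infty}\Gamma_n\, c(\mathbf{x}_n,a_n)\right]
\le \bar{c}\sum_{n=0}^{\infty}\bar{\alpha}^{\,n} = \frac{\bar{c}}{1-\bar{\alpha}},
\qquad
E_{\mathbf{x}}^{\pi}\left[\Gamma_n\, w(\mathbf{x}_n)\right] \le \bar{\alpha}^{\,n}\sup_{\mathbf{y}\in X} w(\mathbf{y}) \longrightarrow 0,
```

so both Assumption 3.1 and condition (40) hold.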

Other less obvious sufficient conditions are the following (see, e.g., [15, 16, 2]).

**C2** (a) Condition C1 (a).

(b) There exist a measurable function W : X → (1, ∞) and constants M > 0, β ∈ (1, 1/ᾱ), such that for all (x, a) ∈ 𝕂,

$$\sup_{a \in A(\mathbf{x})} c(\mathbf{x}, a) \le M\, W(\mathbf{x}) \tag{51}$$

and


$$\int_X W(\mathbf{y})\, Q(d\mathbf{y} \mid \mathbf{x}, a) \le \beta\, W(\mathbf{x}). \tag{52}$$

First note that, by condition C2 and the Markov property (14), for any policy π and initial state x₀ = x,

$$E\_x^{\pi} \left[ W(\mathbf{x}\_{n+1}) \mid h\_n, a\_n \right] = \int\_X W(\mathbf{y}) Q(\mathbf{dy} \mid \mathbf{x}\_n, a\_n) \le \beta W(\mathbf{x}\_n), \ \forall n \in \mathbb{N}\_0. \tag{53}$$

Then, using properties of conditional expectation,

$$E\_x^{\pi} \left[ W(\mathbf{x}\_{n+1}) \right] \le \beta E\_x^{\pi} \left[ W(\mathbf{x}\_n) \right], \quad \forall n \in \mathbb{N}\_0. \tag{54}$$

Iterating inequality (54) we get

$$E_{\mathbf{x}}^{\pi} \left[ W(\mathbf{x}_n) \right] \le \beta^n\, W(\mathbf{x}), \ \forall n \in \mathbb{N}_0. \tag{55}$$

Therefore, by condition C2, for any policy π and x ∈ X,

$$\begin{split} V(\pi, \mathbf{x}) &\le E_{\mathbf{x}}^{\pi} \sum_{n=0}^{\infty} \bar{\alpha}^{n}\, c(\mathbf{x}_n, a_n) \le \sum_{n=0}^{\infty} M \bar{\alpha}^{n} E_{\mathbf{x}}^{\pi}\left[W(\mathbf{x}_n)\right] \\ &\le \frac{M}{1 - \bar{\alpha}\beta}\, W(\mathbf{x}). \end{split} \tag{56}$$

Thus, Assumption 3.1 holds.

On the other hand, if L_W⁺(X) denotes the subclass of all functions u in L⁺(X) such that

$$\|u\|_W := \sup_{\mathbf{x} \in X} \frac{u(\mathbf{x})}{W(\mathbf{x})} < \infty, \tag{57}$$

then, because wₖ( ⋅ ) = V(fₖ, ⋅ ), from (53) and condition C2 we have that wₖ ∈ L_W⁺(X) for all k = 1, 2, … and

$$\lim\_{n \to \infty} E\_x^{\pi} \left[ \Gamma\_n \mathcal{w}\_k(\mathbf{x}\_n) \right] = 0 \quad \forall \pi \in \Pi, \mathbf{x} \in X. \tag{58}$$

Since w ≤ wₖ, (40) follows from (58).

#### **Acknowledgements**

Work supported partially by Consejo Nacional de Ciencia y Tecnología (CONACYT) under grant CB2015/254306.

#### **Author details**

Fernando Luque-Vásquez and J. Adolfo Minjárez-Sosa\*

\*Address all correspondence to: aminjare@gauss.mat.uson.mx

Department of Mathematics, University of Sonora, Hermosillo, Sonora, México

#### **References**


[1] O. Hernández-Lerma, J.B. Lasserre, Discrete-Time Markov Control Processes: Basic Optimality Criteria. Springer-Verlag, New York, NY, 1996.

[2] O. Hernández-Lerma, J.B. Lasserre, Further Topics on Discrete-Time Markov Control Processes. Springer-Verlag, New York, NY, 1999.

[3] M.L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, NY, 1994.

[4] Y. Carmon, A. Shwartz, Markov decision processes with exponentially representable discounting. Oper. Res. Lett. 37 (2009), 51–55.

[5] E.A. Feinberg, A. Shwartz, Constrained dynamic programming with two discount factors: applications and an algorithm. IEEE Trans. Autom. Control, 44 (1999), 628–631.


#### **Mathematical Modeling of Isothermal Drying and its Potential Application in the Design of the Industrial Drying Regimes of Clay Products**

Miloš Vasić, Zagorka Radojević and Robert Rekecki

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64983

#### **Abstract**

The processes of simultaneous moisture and heat transfer, which are often nonstationary, and the distinct nature and properties of the material to be dried complicate the description of the drying process. The theory of moisture migration and the modeling of the drying process have been the subject of many studies. Three theories, the diffusion, the capillary flow, and the evaporation‐condensation theories, have won general recognition for the explanation of moisture transfer in porous media. This study has several objectives. The first was to present a new method for calculating the variable effective diffusivity and to identify the different drying mechanisms and their exact transitions during isothermal drying of clay tiles. The second and main objectives were to analyze all obtained isothermal data, to create a link with the comprehensive theory of moisture migration during drying, and to set up the non‐isothermal drying process. The procedure was based on the principle of controlling mass transport during the drying process. The proposed regimes consisted of several isothermal segments, selected and specified in accordance with the nature of the clay raw material and the moisture migration theory.

**Keywords:** drying regime, effective diffusivity, clay tile, non‐isothermal drying, shrinking

#### **1. Introduction**

Drying represents a very important and complex process in the production of clay tiles, which involves simultaneous heat and mass transfer between the body and the surrounding

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

atmosphere. The whole process consists of several periods characterized by different mechanisms of internal moisture transfer. Until recently, three theories, the diffusion theory, the capillary flow theory, and the "evaporation‐condensation theory," have won general recognition for an explanation of moisture transfer in porous media. During drying, several mechanisms control the overall internal moisture transport from the drying material up to its surface. In order to describe the complete internal transport with the same equations as pure diffusion, and to take the correction for all secondary types of mass transfer into account, it is suitable to simply replace the pure diffusion coefficient with an effective diffusion coefficient. This procedure was successfully applied in reference [1]. Within the same reference, it is stated that "the effective moisture diffusivity represents an overall mass transport property of moisture which includes molecular diffusion, the Knudsen diffusion, the non‐Fickian or stress‐driven diffusion, capillary motions, liquid diffusion through solid pores, vapor diffusion in air‐filled pores, vaporization‐condensation sequence flow, and hydrodynamic flow mass transfer mechanisms."

Determination of the effective diffusion coefficient is essential for a credible description of the mass transfer process, described by Fick's equation [2]. Description and modeling of the drying process based on the calculation of a constant effective moisture diffusivity have been the subject of many studies [3–7]. The plot of effective moisture diffusivity vs. time or moisture content (Deff‐t or Deff‐MR curve) is a good indicator with which to evaluate and present the overall mass transport property of moisture during isothermal drying. Determination of time‐dependent effective moisture diffusivity along with the detection of Deff‐MR curves has been reported in several studies [8–11]. The fact that capillary flow is the predominant mechanism during the constant drying rate period, while evaporation‐condensation and vapor diffusion predominate in the falling rate period, has won general recognition for the explanation of moisture transfer in porous media. The comprehensive theory of moisture migration during drying, which represents a method useful for tracing and quantifying all possible mechanisms of moisture transport and their transitions during the isothermal drying process, was recently reported [12].

This study has several objectives. The first was to briefly present the theory of moisture migration along with the method for calculation of the variable effective diffusivity. The next was to calculate the variable effective diffusivity, to divide the drying curve into segments, and to identify all possible mechanisms of moisture transport within a clay roofing tile for several different experiments in which the drying air parameters were constant. The main objectives of this study were to analyze all obtained isothermal data, to create a link with the nature of the raw clay material and the comprehensive theory of moisture migration during drying, and finally to design the non‐isothermal industrial drying regime.

#### **2. Methods and materials**

After using appropriate initial and boundary conditions, along with reasonable assumptions, Crank presented the analytical solution of Fick's second diffusion law for several standard geometries, such as the tile, cylinder, and sphere [13]. Hence, on the basis of the lumping approximation, which assumes that the effective diffusivity is an overall mass transport property, the Crank solution for tile geometry can be expressed as Eq. (1):

$$MR = \frac{X - X_{eq}}{X_0 - X_{eq}} = \frac{8}{\pi^2} \sum_{n=0}^{\infty} \frac{1}{\left(2n + 1\right)^2} \exp\left(-\frac{\left(2n + 1\right)^2 \pi^2 D_{\mathrm{eff}}\, t}{4\, l^2}\right) \tag{1}$$

In order to estimate the effective diffusivity from Eq. (1), many researchers used a simplifying method [14, 15]. They assumed that for sufficiently long drying times the infinite summation series in Eq. (1) converges rapidly and in most cases may be accurately approximated by its first term. This method is commonly known as the "simplified slope" model. Two programs for determination of the effective diffusion coefficient, based on mathematical calculation of Fick's second law and the Crank diffusion equation, were recently presented for clay tiles [2, 6, 7].
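As a sketch of the first-term approximation (all variable names and numbers below are hypothetical, NumPy assumed): keeping only the n = 0 term of Eq. (1) gives ln MR ≈ ln(8/π²) − π²D₀t/(4l²), so D₀ follows from the slope of ln MR versus t.

```python
import numpy as np

def simplified_slope_D0(t, MR, l):
    """Constant effective diffusivity from the first Crank-series term:
    ln MR ~ ln(8/pi^2) - (pi^2 * D0 / (4 * l^2)) * t, with l the half
    thickness of the tile, so D0 = -slope * 4 l^2 / pi^2."""
    slope, _ = np.polyfit(t, np.log(MR), 1)   # slope of ln(MR) vs t
    return -4.0 * l**2 * slope / np.pi**2

# Synthetic check: drying curve generated with a known diffusivity.
l = 5e-3                        # half thickness, m (hypothetical)
D_true = 2e-9                   # m^2/s (hypothetical)
t = np.linspace(2e4, 2e5, 50)   # sufficiently long drying times, s
MR = (8 / np.pi**2) * np.exp(-np.pi**2 * D_true * t / (4 * l**2))
D0 = simplified_slope_D0(t, MR, l)
```

With real data, the fit would be restricted to the long-time portion of the curve, where the first-term approximation is valid.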

#### **2.1. Method for estimation of the time-dependent effective diffusivity**

Zagrouba was one of the first researchers to report the "slope" method as a possible solution for estimating the time‐dependent effective diffusivity at various moisture contents for clay materials [11]. The Fourier diffusion number (*F*0) has to be calculated from Eq. (2), while the constant diffusion coefficient *D*0 is obtained from the "simplified slope" method. The time‐dependent effective diffusivity *D*eff is calculated from Eq. (3). This method has been widely applied in several studies such as [8, 9, 16]:

$$F\_0 = \frac{D\_0 t}{l\_0^2} \tag{2}$$

$$D_{\rm eff} = \frac{\left(\frac{\partial MR}{\partial t}\right)_{\rm exp}}{\left(\frac{\partial MR}{\partial F_0}\right)_{\rm th}}\, l_0^2 \tag{3}$$

#### *2.1.1. Modified slope method*


A better and more accurate solution, called the "modified slope" method, for calculation and estimation of the time‐dependent effective diffusivity based on the "slope" method, was presented in [12] and recently applied in two studies [17, 18]. The essence of this new model is reflected in replacing the constant term *l*0 in Eqs. (2) and (3) with an interchangeable *l*(*t*) term, and in using the model that includes shrinkage, presented in the study [7], for calculation of the constant diffusion coefficient *D*0. The interchangeable *l*(*t*) term must be registered experimentally. The final calculation formula is presented in the form of Eq. (4):

$$D\_{\mathit{eff}(t)} = \frac{\left(\frac{\partial MR}{\partial t}\right)\_{\text{exp}}}{\left(\frac{\partial MR}{\partial F\_0}\right)\_{th}} l\_{(t)}^2 \tag{4}$$
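For illustration, the calculation behind Eq. (4) can be sketched numerically. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes an infinite‐slab geometry for the theoretical curve (Crank's series solution), the function names (`mr_slab`, `effective_diffusivity`) are ours, and in practice the experimental MR(*t*) and half‐thickness *l*(*t*) arrays would come from the recorded drying and shrinkage data.

```python
import numpy as np

def mr_slab(fo, n_terms=50):
    """Crank's series solution for the moisture ratio of an infinite slab
    as a function of the Fourier number Fo = D*t/l^2."""
    n = np.arange(n_terms)
    k = (2 * n + 1) ** 2 * np.pi ** 2 / 4.0
    fo = np.atleast_1d(fo)
    return (8.0 / np.pi ** 2) * np.sum(
        np.exp(-np.outer(fo, k)) / (2 * n + 1) ** 2, axis=1)

def effective_diffusivity(t, mr_exp, l_t):
    """Time-dependent Deff per Eq. (4):
    Deff(t) = (dMR/dt)_exp / (dMR/dFo)_th * l(t)^2."""
    dmr_dt = np.gradient(mr_exp, t)           # experimental slope
    fo_grid = np.linspace(1e-4, 5.0, 4000)    # theoretical curve grid
    mr_grid = mr_slab(fo_grid)
    # mr_grid decreases with Fo; np.interp needs ascending x values,
    # so both arrays are reversed for the inversion MR -> Fo
    fo_of_mr = np.interp(mr_exp, mr_grid[::-1], fo_grid[::-1])
    dmr_dfo = np.gradient(mr_grid, fo_grid)   # theoretical slope
    dmr_dfo_at = np.interp(fo_of_mr, fo_grid, dmr_dfo)
    return dmr_dt / dmr_dfo_at * l_t ** 2
```

As a sanity check, feeding the procedure synthetic data generated with a constant diffusivity and no shrinkage recovers that constant; with recorded *l*(*t*) data it yields the Deff‐t and Deff‐MR curves discussed below.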

#### *2.1.2. Moisture migration theory*

Typical curves, representing the dependence of the effective moisture diffusivity on the moisture content or drying time, obtained using the calculation method presented in the study [12], are shown in **Figure 1**.

**Figure 1.** Typical time‐dependent effective moisture diffusivity curves (Deff‐MR and Deff‐t).

It is important to highlight the significance of the characteristic pattern shown in **Figure 1**. It indicates all possible mechanisms of moisture transport along with their transitions from one to another during the constant and the falling drying period for isothermal experiments.

At the beginning of the drying process, effective moisture diffusivity values are equal to zero until characteristic point A is reached. This drying period is commonly known as the "initial heating segment." It is a relatively short period. The quantity of evaporated water is small, and shrinkage of the green body (clay tile) is not detected. The clay tile surface heats up from its starting temperature to the wet‐bulb temperature. Moisture diffusivity values are practically zero, suggesting that the overall mass transport is negligible.

After the initial heating period is over, the effective moisture diffusivity values increase as the moisture ratio decreases, until characteristic point E is reached. This period is commonly known as the "constant drying rate period." Throughout this period, the surface of the green body is constantly covered with a continuous film of water. The surface temperature is constant, at a value corresponding to the wet‐bulb temperature of the air. Shrinkage of the green body is characteristic of this drying period. The end of the constant rate period is marked by a maximum in capillary pressure. Cracking of the green body is most likely to occur at this point of the drying process [19–21].


From point A to point B, liquid is transported through the biggest capillaries. This transport is caused by a gradient of the capillary potential and is commonly known as the "capillary pumping flow." Throughout this period, the quantity of evaporated water and the detected sample shrinkage are relatively small. The "hydrodynamic flow" (caused by viscosity) is negligible, so the overall mass transport is governed only by the capillary pumping flow mechanism.

From point B up to point D, liquid is transported simultaneously by two mechanisms. The main mechanism is capillary pumping flow, caused by the gradient of the capillary potential originating from still saturated capillaries; the second is hydrodynamic liquid flow in the pores, which arises from the difference in total pressure caused by friction. Capillary pumping flow up to point C is caused by the still present macro capillaries, while the same flow up to point D is caused by the presence of meso capillaries. Throughout this period, the quantity of evaporated water and the detected sample shrinkage are considerably higher than in the previously presented drying segments. Point D is commonly known as the "upper critical point," which indicates the beginning of the transition from the "funicular" to the "pendular" state. It is important to highlight that in the "funicular" state continuous threads of moisture are present in the pores, while in the "pendular" state this is not the case. From this point on, the clay tile surface is not fully covered by a water film. "Dry" patches appear on the surface for the first time. The drying front starts to recede.

From point D up to point E, liquid and vapor are transported simultaneously. Liquid is transported by three mechanisms. The main one is capillary pumping flow: capillaries in the funicular state generate the pressure gradient which secures the capillary pumping flow. The difference in total pressure caused by friction provides hydrodynamic liquid flow in the pores, which represents the second transport mechanism. The concentration gradient of the liquid in the pores is the driving force for liquid diffusion, which represents the third transport mechanism. The difference in total pressure caused by friction also secures hydrodynamic vapor flow in the pores. Deviation from a constant drying rate is first registered at point E, which is commonly named the "critical" point. A partially wet surface is able to provide a constant or a falling drying rate depending on the fraction of wet surface and the boundary layer thickness. The influence of a partially wet surface on the transition from a constant rate to a falling rate of drying is described in the study [22].

From point E up to point F, the fraction of the wet surface decreases until the "last" wet patches disappear from the surface. This point is commonly known as the "lower" critical point, which indicates the end of the transition from the "funicular" to the "pendular" state. In other words, the moisture content decreases and the gas bubbles attain the dimensions of the pores, breaking the continuous threads of moisture in the pores. Moisture is transported up to the surface by creeping along the capillary when the liquid is in the funicular state, or by the successive evaporation‐condensation mechanism between liquid bridges. When the "pendular" state is reached, there is no further contraction of the drying body, and consequently the possibility that the drying body will crack is extremely small.

A temperature increase is registered during the drying process. The temperature increases slowly from point D up to point E, and then a moderate temperature increase is registered up to point F. After point F is reached, the temperature of the system rises rapidly up to point I. From point I up to point K, the temperature rises very slowly and practically levels off at the so‐called pseudo‐wet‐bulb temperature. The temperature increases again just before the end of the drying process, until it reaches the final temperature.

With further drying from point F to point G, the moisture is transported up to the surface by the successive evaporation‐condensation mechanism between liquid bridges. Simultaneously with the evaporation and condensation mechanisms, liquid starts to evaporate within the pore space at a growing rate, causing the vapor pressure to increase. These two mechanisms move the "pendular" water up to the surface.

The G–H segment consists of two parts. Locally produced vapor accumulates within the pores, causing a local increase in the effective diffusivity within the first G–H part. At a certain moment, the local vapor pressure exceeds the critical value, and the vapor is practically "blown" away up to the surface. During the vapor release, some liquid bridges of "pendular" water are also transported. This kind of moisture transport is accompanied by a local decrease of the effective diffusivity value, which is characteristic of the second G–H part. The movement of the remaining "pendular" water in the H–I segment is mostly caused by the vapor pressure existing within the pore space.

After the "pendular" water is removed, evaporation occurs only inside the body; the temperature of the surface approaches the so‐called pseudo‐wet‐bulb temperature and, at the end of drying, reaches the ambient temperature. This I–L segment is commonly known as the "diffusion period." It is divided into three parts: the first, "I–J," represents pure molecular diffusion, while the second, "J–K," and third, "K–L," represent transitional and Knudsen diffusion, respectively.
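The segment‐by‐segment description above can be condensed into a small lookup table. The sketch below is purely illustrative: the segment labels follow the characteristic points of Figure 1, and the names `SEGMENT_MECHANISMS` and `dominant_mechanisms` are ours.

```python
# Dominant moisture-transport mechanisms per drying segment,
# restating the A-L description in the text (labels after Figure 1).
SEGMENT_MECHANISMS = {
    "0-A": ["initial heating, negligible mass transport"],
    "A-B": ["capillary pumping flow (largest capillaries)"],
    "B-C": ["capillary pumping flow (macro capillaries)",
            "hydrodynamic liquid flow"],
    "C-D": ["capillary pumping flow (meso capillaries)",
            "hydrodynamic liquid flow"],
    "D-E": ["capillary pumping flow", "hydrodynamic liquid flow",
            "liquid diffusion", "hydrodynamic vapor flow"],
    "E-F": ["funicular-to-pendular transition (wet patches recede)"],
    "F-G": ["evaporation-condensation between liquid bridges",
            "in-pore evaporation (rising vapor pressure)"],
    "G-H": ["local vapor accumulation, then vapor release to the surface"],
    "H-I": ["vapor-pressure-driven transport of remaining pendular water"],
    "I-J": ["pure molecular diffusion"],
    "J-K": ["transitional diffusion"],
    "K-L": ["Knudsen diffusion"],
}

def dominant_mechanisms(segment):
    """Return the transport mechanisms active in a given drying segment."""
    return SEGMENT_MECHANISMS[segment]
```

A mapping like this makes it straightforward to annotate an experimentally obtained Deff‐MR curve once its characteristic points have been located.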

#### **2.2. Experimental**

The raw material used in this study was obtained from the largest roofing tile manufacturer in Serbia. "Kanjiza's" clay raw deposit is formed from two layers. The first layer is commonly known as the "yellow clay." It contains a relatively small amount of clay minerals (under 23 wt.%), but it is rich in quartz, carbonates (above 20 wt.%), and feldspar minerals. The second layer, commonly known as the "blue clay," predominantly contains clay minerals: illite, smectite, chlorite, and kaolinite. The industrial raw material mixture, and the one used in this study, consists of 80 and 20 wt.% of blue and yellow clay, respectively [23].

Initial characterization of the raw material included determination of the particle size distribution (PSD), standard silicate chemical analysis (SSA), qualitative and semiquantitative XRD analysis, and thermogravimetric analysis (TGA). Standardized procedures, described respectively in the SRPS U.B1.018:2005 and SRPS B.D8.210:1982 norms, were used for the PSD and SSA determination. The qualitative and semiquantitative XRD analysis and the TGA were reported in the study [24].

After initial characterization, the raw material was homogenized and prepared for the forming process. During the homogenization process, the raw material was moisturized and milled using laboratory differential mills.

Laboratory roofing tile samples of 120 × 50 × 14 mm were formed in a laboratory extruder "Hendle" type 4, under a vacuum of 0.8 bar. Formed samples were packed into plastic bags which were afterward sealed and put into a glass container with a lid. Glass containers with samples were kept in an air‐conditioned room in which the temperature and relative humidity were maintained at 25°C and 65%, respectively. This procedure minimizes moisture content fluctuations within the stored samples.

A series of isothermal drying curves was recorded using a laboratory recirculation dryer in which the drying parameters (humidity, temperature, and velocity) could be programmed, controlled, and monitored. The dryer design allowed regulation of the wet air parameters within the ranges 0–125°C, 20–100%, and 0–3.5 m/s, with accuracies of ±0.2°C, ±0.2%, and ±0.1% for temperature, humidity, and velocity, respectively. The mass of the samples and their linear shrinkage were continually monitored and recorded during the experiments, with accuracies of 0.01 g and 0.2 mm, respectively. The experimental conditions presented in **Table 1** were used in the present study. Each experiment was repeated two times.


**Table 1.** Experimental conditions.


The modified slope method was used to calculate the functional dependence of the effective diffusivity vs. moisture content (Deff‐MR), to divide the obtained curves into segments, and to identify all possible mechanisms of moisture transport within each drying segment. Roofing tile samples were afterward dried to constant mass. Dried samples were heated in an oxygen atmosphere at a heating rate of 1.4°C/min from room temperature up to 610°C, and further at a heating rate of 2.5°C/min up to 1000°C. Samples were kept at 1000°C for 2 hours. Flexural strength was determined on dried (DSFS) and fired (FSFS) samples using the procedure described in the EN 538 norm.
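For orientation, the firing schedule above is easy to express as a timetable. A minimal sketch, assuming a room temperature of 25°C (the value used for sample storage); the function name is ours:

```python
def firing_schedule(t_room=25.0):
    """Duration in minutes of each stage of the firing cycle described above:
    1.4 C/min up to 610 C, 2.5 C/min up to 1000 C, then a 2-hour soak."""
    stage1 = (610.0 - t_room) / 1.4   # slow ramp to 610 C
    stage2 = (1000.0 - 610.0) / 2.5   # faster ramp to peak temperature
    soak = 120.0                      # 2 hours at 1000 C
    return stage1, stage2, soak

total_min = sum(firing_schedule())
```

Under these assumptions the three stages last roughly 418, 156, and 120 minutes, i.e., about 11.6 hours from the start of heating to the end of the soak.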

The obtained data were analyzed and used to set up several non‐isothermal drying regimes. The drying air parameters maintained in each proposed drying regime are presented in **Table 2**. The duration of the approximately isothermal drying segments was detected from the isothermal Deff‐MR curves. Tiles were then dried to constant mass and fired using the same heating rates as previously mentioned. DSFS and FSFS were determined. The twist (TWC), longitudinal camber (LOC), and transverse camber (TRC) coefficients were determined on fired samples using the procedure described in the EN 1024 norm.


| Experiment | Segment I | Segment II | Segment III | Segment IV | Segment V |
|---|---|---|---|---|---|
| 7 | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | 70°C/40% |
| 8 | Exp. 2 | Exp. 3 | Exp. 5 | Exp. 6 | 70°C/40% |

**Table 2.** Experimental conditions—proposed drying regimes.

#### **3. Results and discussion**

Results of the several analyses used for the initial characterization of the raw material are presented in **Table 3**. The mass content of SiO2, Al2O3, CaO, and MgO obtained by the SSA analysis indicates the presence of free quartz, feldspars, clay minerals, and carbonate minerals in the analyzed raw material. Results of the qualitative mineralogical XRD analysis, reported in the study [24], confirmed the presence of quartz, feldspars (orthoclase), illite, muscovite, kaolinite, montmorillonite, chlorite, calcite, and dolomite in the analyzed raw material. The semiquantitative XRD mineralogical analysis quantified the presence of the previously mentioned minerals. The mass content of clay, silt, and sand obtained by the PSD analysis indicates that the analyzed raw material is classified as a clay loam suitable for clay roofing tile production.

All possible mechanisms of moisture transport and their transitions from one to another during drying for the isothermal experiments are identified and shown in **Figure 2**.

Mathematical Modeling of Isothermal Drying and its Potential Application in the Design of the Industrial Drying... http://dx.doi.org/10.5772/64983 79


**Table 3.** Initial characterization of the raw material.


**Figure 2.** Estimated Deff‐MR curves for isothermal experiments.

The procedure for setting up a non‐isothermal drying regime that is consistent with the theory of moisture migration during drying was based on the principle of controlling mass transport during the drying process and required dividing the drying process into five segments. In each of these segments, approximately isothermal drying conditions were maintained (see **Table 2**).

The main functions of the first segment are to restrain the moisture transport (evaporation) through the boundary layer between the material surface and the bulk air, and to heat the ceramic body to the temperature of the drying air. That is the reason why high values of the drying air humidity were selected in the first segment. In order to fulfill the previously mentioned requirements, this drying segment ends when characteristic point C is reached. During the second drying segment, the external transport (surface evaporation) and the internal transport (of liquid water from the ceramic body up to the surface) have to be increased and simultaneously harmonized in such a way that the drying surface remains fully covered by a water film. That is the reason why, in most cases, the drying air humidity in this segment is reduced, although its absolute value is still relatively high. This will increase the evaporation driving force and consequently speed up the drying process. The drying air temperature in this segment may slightly increase compared to the previous segment, which will moderately increase the capillary transport as well as the drying rate. The second segment starts and ends when characteristic points C and D (the "upper critical point") are, respectively, reached.

The third and fourth segments together represent a transitional drying period in which the sample gradually shifts from the "funicular" to the "pendular" state. A higher fraction of wet surface and a thicker boundary layer favor and produce a constant drying rate, while a smaller wet fraction of the surface and a thinner boundary layer favor and produce a falling drying rate. The main function of the third segment is to provide conditions under which the partially wet surface still provides a constant rate of drying. That is the reason why the humidity and temperature of the drying air within the third segment have to be carefully selected. Further reduction of the drying air humidity (see **Table 2**) will increase the evaporation driving force (external surface evaporation) and consequently speed up the drying process. The drying air temperature in this segment may slightly increase compared to the previous segment, which will increase the capillary transport as well as the drying rate. The third segment starts and ends when characteristic points D and E are, respectively, reached.

Within the fourth segment, the fraction of the wet surface decreases until the "last" wet patches disappear from the surface. At the end of the fourth segment, the system has reached the "pendular" state, and there is no further contraction of the drying body. The main function of this segment is to simultaneously harmonize the liquid transport originating from the pores which are near or just below the "dry" patches on the surface and are still in the funicular state with the liquid flow originating from the surface "wet" patches. That is the reason why the drying air humidity in this segment is not reduced (see **Table 2**).

A further increase of the drying air temperature has a positive influence, as it enhances liquid transport. The fourth segment starts and ends when characteristic points E and F (the "lower critical point") are, respectively, reached. The main function of the fifth segment is to maximally facilitate the internal moisture transport up to the surface. That is the reason why the drying air humidity in the fifth segment is further reduced, while the drying air temperature is maximally increased [25].

Characteristic data for isothermal experiments from point A up to point F are presented in **Table 4**.


**Table 4.** Characteristic data for isothermal experiments from point A up to point F.


The duration of the approximately isothermal drying segments was not specified by experience or by a trial‐and‐error method; it was detected from the appropriate isothermal Deff–MR curves (see **Figure 1** and **Table 4**). The general procedure is explained here using experiment 7. The duration of the first segment was the same as the duration of the drying process in experiment 1 from the beginning up to characteristic point C. The duration of the second segment was the same as the duration of the drying process in experiment 2 from characteristic point C up to characteristic point D. The duration of the third segment was the same as the duration of the drying process in experiment 3 from characteristic point D up to characteristic point E. The duration of the fourth segment was the same as the duration of the drying process in experiment 4 from characteristic point E up to characteristic point F. The duration of the fifth segment was limited to 90 minutes. This procedure was used to specify the duration of the drying segments in each proposed drying regime. The calculated results are presented in **Table 5**.
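The composition rule described above can be written out explicitly. In the sketch below, the characteristic‐point times per isothermal experiment are hypothetical placeholders (the real values come from Table 4); only the arithmetic of assembling a regime is illustrated, and the function name is ours.

```python
def regime_segment_durations(char_times, recipe, final_segment=90.0):
    """Segment durations (min) of a non-isothermal regime composed from
    isothermal experiments, per the procedure described above.

    char_times: {exp_id: {"C": t_C, "D": t_D, "E": t_E, "F": t_F}}
    recipe: the four isothermal experiments supplying segments 1-4.
    """
    e1, e2, e3, e4 = recipe
    return [
        char_times[e1]["C"],                        # start up to point C
        char_times[e2]["D"] - char_times[e2]["C"],  # C to D
        char_times[e3]["E"] - char_times[e3]["D"],  # D to E
        char_times[e4]["F"] - char_times[e4]["E"],  # E to F
        final_segment,                              # fifth segment, fixed
    ]

# Hypothetical characteristic-point times (minutes), for illustration only.
times = {
    1: {"C": 60.0, "D": 110.0, "E": 170.0, "F": 230.0},
    2: {"C": 50.0, "D": 95.0,  "E": 150.0, "F": 205.0},
    3: {"C": 45.0, "D": 85.0,  "E": 135.0, "F": 185.0},
    4: {"C": 40.0, "D": 75.0,  "E": 120.0, "F": 165.0},
}
durations = regime_segment_durations(times, recipe=(1, 2, 3, 4))
```

With the placeholder times above, the regime would consist of five segments of 60, 45, 50, 45, and 90 minutes.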


**Table 5.** Calculated segment duration within proposed drying regimes.

It is important to define the minimum requirements which, if satisfied, ensure that a dried clay roofing tile is able to perform its function. In other words, clay roofing tiles have to be dried without cracks. The minimal flexural strength of dried and fired samples has to be at least 0.73 and 1.2 kN, respectively (see the EN 1304 norm). The mean value of the twist coefficient (TWC) and the mean values of the longitudinal camber (LOC) and transverse camber (TRC) coefficients, calculated as described in the EN 1024 norm, shall comply with the requirements stated in **Tables 1**–**3** of the EN 1304 norm. The proposed drying regimes were tested. Clay roofing tiles were dried without cracks. The flexural strengths of dried and fired clay tiles (DSFS and FSFS) are presented in **Table 6**.


| Experiment | DSFS (kN) | FSFS (kN) |
|---|---|---|
| 1 | 0.99 | 2.73 |
| 2 | 0.97 | 2.65 |
| 3 | 0.92 | 2.67 |
| 4 | 0.82 | 2.65 |
| 5 | 0.80 | 2.12 |
| 6 | 0.75 | 2.08 |
| 7 | 0.93 | 2.65 |
| 8 | 0.81 | 2.42 |

**Table 6.** Mechanical properties of dried and fired samples.


| Experiment | Twist coefficient C (%) | Longitudinal camber R (%) | Transverse camber R (%) |
|---|---|---|---|
| 1 | 0.29 | 0.33 | 0.33 |
| 2 | 0.45 | 0.50 | 0.50 |
| 3 | 0.90 | 0.68 | 0.68 |
| 4 | 1.23 | 0.87 | 0.87 |
| 5 | 0.72 | 0.79 | 0.79 |
| 6 | 1.18 | 1.05 | 1.05 |
| 7 | 0.48 | 0.51 | 0.51 |
| 8 | 0.71 | 0.69 | 0.69 |

**Table 7.** Twist and camber coefficients.

The mean TWC, LOC, and TRC values for the isothermal and non‐isothermal drying regimes are presented in **Table 7**. Detailed analysis of the data presented in **Table 7** reveals that the mean TWC, LOC, and TRC values increase with the increase of the drying air temperature, and that, under the same drying air temperature, the TWC, LOC, and TRC values presented in different experimental groups also increase with the decrease of the air relative humidity (see **Tables 1** and **7**). The maximally allowed deviation for TWC, LOC, and TRC is 2%. This criterion is defined in **Tables 1**–**3** of the EN 1304 standard.

The TWC, LOC, and TRC values can be used as a good indirect indicator of stress generation. In other words, higher TWC, LOC, and TRC values are correlated with higher stress generation. Drying air with a higher temperature and a lower relative humidity leads to more rapid generation of stress in the samples during drying, which results in lower shape regularity (higher coefficients) and lower mechanical properties of the dried samples.

The shortest total drying time (TDT) for the isothermal and non‐isothermal drying regimes was registered in experiments 6 and 8, respectively (see **Tables 4** and **5**). The difference between the TDT values of these two experiments is relatively small. The lowest DSFS and FSFS values, along with the highest mean TWC, LOC, and TRC values among the non‐isothermal drying regimes, were registered in experiment 8. It is important to point out that the dried and fired clay roofing tiles in each proposed non‐isothermal experiment satisfied the previously mentioned flexural strength and shape regularity (twist and camber) criteria. That is the reason why, in this study, the lowest TDT value was used as the final criterion for selecting the drying regime of experiment 8 as optimal (see **Table 5**, experiment 8).
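The selection logic applied above (strength and shape‐regularity criteria, then minimum total drying time) can be sketched as a simple filter. The threshold values are those quoted in the text; the DSFS/FSFS and coefficient values for experiments 7 and 8 follow our reading of the Tables 6 and 7 data; the TDT figures are hypothetical placeholders (Table 5 values are not reproduced here); and both function names are ours.

```python
def meets_requirements(dsfs, fsfs, twc, loc, trc):
    """Acceptance check as described in the text: DSFS >= 0.73 kN,
    FSFS >= 1.2 kN, and twist/camber coefficients <= 2 %."""
    return (dsfs >= 0.73 and fsfs >= 1.2
            and twc <= 2.0 and loc <= 2.0 and trc <= 2.0)

def pick_optimal(candidates):
    """Among regimes passing the check, pick the lowest total drying time."""
    ok = {k: v for k, v in candidates.items()
          if meets_requirements(*v["props"])}
    return min(ok, key=lambda k: ok[k]["tdt"])

# props = (DSFS, FSFS, TWC, LOC, TRC); tdt values are placeholders.
candidates = {
    "exp7": {"props": (0.93, 2.65, 0.48, 0.51, 0.51), "tdt": 320.0},
    "exp8": {"props": (0.81, 2.42, 0.71, 0.69, 0.69), "tdt": 290.0},
}
best = pick_optimal(candidates)
```

Since both regimes pass the acceptance check, the one with the lower total drying time is selected, mirroring the choice of experiment 8 made in the text.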

#### **4. Conclusion**

It is important to define the minimum requirements which, if satisfied, ensure that a dried clay roofing tile is able to perform its function. In other words, dried clay roofing tiles have to be dried without cracks. The minimal flexural strength of dried and fired samples has to be at least 0.73 and 1.2 kN, respectively (see the EN 1304 norm). The mean value of the twist coefficient (TWC) and the mean values of the longitudinal camber (LOC) and transverse camber (TRC) coefficients, calculated as described in the EN 1024 norm, shall comply with the requirements stated in **Tables 1**–**3** of the EN 1304 norm. The proposed drying regimes were tested. Clay roofing tiles were dried without cracks. Flexural strength values of the dried and fired clay tiles (DSFS and FSFS) are presented in **Table 6**.

| Experiment | DSFS (kN) | FSFS (kN) |
|---|---|---|
| 1 | 0.99 | 2.73 |
| 2 | 0.97 | 2.65 |
| 3 | 0.92 | 2.67 |
| 4 | 0.82 | 2.65 |
| 5 | 0.80 | 2.12 |
| 6 | 0.75 | 2.08 |
| 7 | 0.93 | 2.65 |
| 8 | 0.81 | 2.42 |

**Table 6.** Mechanical properties of dried and fired samples.

| Experiment | Twist coefficient C (%) | Longitudinal camber R (%) | Transverse camber R (%) |
|---|---|---|---|
| 1 | 0.29 | 0.33 | 0.33 |
| 2 | 0.45 | 0.50 | 0.50 |
| 3 | 0.90 | 0.68 | 0.68 |
| 4 | 1.23 | 0.87 | 0.87 |
| 5 | 0.72 | 0.79 | 0.79 |
| 6 | 1.18 | 1.05 | 1.05 |
| 7 | 0.48 | 0.51 | 0.51 |
| 8 | 0.71 | 0.69 | 0.69 |

**Table 7.** Twist and camber coefficients.

Setting up a non‐isothermal drying regime that is consistent with the theory of moisture migration during drying required dividing the drying process into five segments. For the first time, the specification of the approximately isothermal segments was chosen in accordance with the theory of moisture migration during drying and with the nature and properties of the clay raw materials. The duration of the approximately isothermal drying segments was not set by experience or by a trial‐and‐error method; it was detected from the appropriate isothermal Deff‐MR curves. The proposed drying regimes were tested. The dried clay roofing tiles satisfied all requirements related to shape regularity and mechanical properties as defined in the EN 1304 norm. Finally, experiment 8 was chosen as the optimal drying regime. Semi‐industrial trials have shown that the proposed drying regimes obtained from Deff‐MR curves can be implemented in a real industrial system. Namely, the design of the optimal drying curve, along with a reduction of the drying time and a higher utilization of the dryer, is possible without the fear of generating a higher scrap rate. The next step is to apply the presented procedure and to find a way to distinguish the influences of shape factor, forming history, and drying parameters on the quality of the dried tiles.
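The Deff‐MR curves on which the segment durations are based link the effective diffusion coefficient to the moisture ratio of the drying body. A common first-term thin‐slab approximation from the drying‐kinetics literature estimates a constant Deff from the slope of ln(MR) versus time; the sketch below uses that textbook shortcut with illustrative dimensions, not the chapter's own (more elaborate, variable‐Deff) procedure:

```python
import math

# First term of the infinite-plate solution of Fick's second law:
#   MR(t) ~ (8 / pi^2) * exp(-pi^2 * Deff * t / (4 * L^2)),
# where L is the half-thickness. The slope k of ln(MR) vs. t then gives
#   Deff = -4 * L^2 * k / pi^2.


def estimate_deff(times, mr, half_thickness):
    """Least-squares slope of ln(MR) vs. t, converted to Deff (m^2/s)."""
    ln_mr = [math.log(m) for m in mr]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ln_mr) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ln_mr))
             / sum((t - t_mean) ** 2 for t in times))
    return -4.0 * half_thickness ** 2 * slope / math.pi ** 2


# Synthetic check: generate MR data from a known Deff and recover it.
L = 0.007           # m, half of a 14 mm tile thickness (illustrative)
true_deff = 2.0e-9  # m^2/s, illustrative order of magnitude
times = [600.0 * i for i in range(1, 21)]  # s
mr = [(8 / math.pi ** 2)
      * math.exp(-math.pi ** 2 * true_deff * t / (4 * L ** 2))
      for t in times]
print(estimate_deff(times, mr, L))  # ~2.0e-9
```

Because real isothermal curves are only piecewise close to this log-linear form, fitting each segment separately — as the chapter's segment-wise reading of the Deff‐MR curves suggests — is what makes the segment durations detectable rather than guessed.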

#### **Acknowledgements**

This paper was realized under the project ИИИ 45008, which was financed by the Ministry of Education and Science of Serbia as well as by the company "Potisje Kanjiža".

#### **Author details**

Miloš Vasić1\*, Zagorka Radojević1 and Robert Rekecki2

\*Address all correspondence to: milos.vasic@institutims.rs

1 Institute for the Testing of Materials, Belgrade, Serbia

2 Potisije Kanjiža, Kanjiža, Serbia